Datasets:

{ "dump": "CC-MAIN-2021-17", "language_score": 0.9419358968734741, "language": "en", "url": "https://tickertape.tdameritrade.com/investing/cpi-inflation-measures-15503", "token_count": 1236, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.031982421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ec23b434-6d6e-4762-9247-50c2bff2c41c>" }
Investors and the Fed closely watch monthly Consumer Price Index (CPI) data, but there's another inflation measure that the Fed values even more. Inflation has been stubbornly low in recent years, but that hasn't always been the case. Inflation hit a whopping 14% in 1980, a period that some economic historians call the "Great Inflation." At that time, inflation soared due to energy shortages, wage and price controls, recessions, and fiscal imbalances. Today, the Federal Reserve has its binoculars zoomed in on the economy, looking for signs of an inflationary pickup from current subdued levels. Low prices may sound good, but they can also signal weak demand, which in turn can translate into sluggish growth. Deflation, the opposite of inflation, is now a concern in many large economies, including Europe and Japan. Can inflation hit the Federal Reserve's 2% target this year? The jury is still out on that one. But heightened focus on the inflation outlook may keep Wall Street hypersensitive to monthly inflation reports. Here's what you need to know about the data.

Report name: The Consumer Price Index (CPI)
Released by: The Bureau of Labor Statistics
Release date: Usually the third Tuesday of the month
Release time: 8:30 a.m. Eastern time

Best trait: The CPI report is a monthly measure of how consumer prices change for a large range of goods and services. This is a significant measure for many, as it dictates the annual cost-of-living adjustment for Social Security each year and is also used in a number of labor contracts that are tied to changes in the CPI, says Patrick O'Hare, chief market analyst at Briefing.com. "This is one of the most important inflation measures as far as the market is concerned," O'Hare says.

Good tip: While many investors focus on CPI, the central bankers who line the halls of the Federal Reserve favor another inflation gauge known as the PCE, or personal consumption expenditures. Why? The reason may well be a critical weakness in the CPI. Read on.

One weakness: The CPI doesn't account for any substitution bias or effect, O'Hare explains. What does that mean? Basically, "the Bureau of Labor Statistics' assumption is that households buy the same basket of goods and services from month to month," O'Hare says. Price-conscious consumers, however, tend to look for the best value. "Say beef prices get high; consumers might buy more chicken instead. The CPI does not capture the substitution effect, while the Fed's preferred inflation indicator, the PCE, does. The PCE is seen as a more realistic type of inflation gauge."

Tradability rating: (a) Tends to be ignored. (b) Depends on overall trading climate. (c) Don't miss this one. Both CPI and PCE inflation reports are very tradable. "Traders always pay close attention because they offer hard data; it is not just a survey of inflation expectations. They will create trading opportunities across currencies, bonds, and stocks," says O'Hare.

Current read: For now, the U.S. remains mired in a low-inflation environment. The latest reading for CPI revealed a 0.9% 12-month change through March 2016. Core PCE, which strips out food and energy, was up 0.1% in March from February, and up 1.6% year over year. Those readings are far from the Fed's 2% target, which the Fed is monitoring in relation to additional rate hikes it has signaled will occur this year. But beware: "The Fed has said it doesn't need to hit 2%; just make progress toward that target," O'Hare concludes.
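O'Hare's substitution example can be made concrete with a toy calculation. The sketch below is illustrative only: the prices and quantities are invented, and the "updated basket" line is a rough stand-in for substitution-aware indexes like the PCE (which in reality uses a chained formula):

```python
# Toy illustration of substitution bias: a fixed-basket index (CPI-style)
# vs. one that lets quantities shift toward the cheaper good (PCE-style).
# All prices and quantities below are made up for demonstration.

base_prices = {"beef": 5.00, "chicken": 3.00}
new_prices  = {"beef": 7.00, "chicken": 3.10}   # beef prices spike

base_qty = {"beef": 10, "chicken": 10}          # basket in the base period
new_qty  = {"beef": 6,  "chicken": 14}          # shoppers substitute chicken

def cost(prices, qty):
    return sum(prices[g] * qty[g] for g in prices)

# CPI-style: price the base-period basket at both periods (no substitution).
fixed_basket = cost(new_prices, base_qty) / cost(base_prices, base_qty)

# PCE-style (roughly): let the quantity mix update with consumer behavior.
updated_basket = cost(new_prices, new_qty) / cost(base_prices, new_qty)

print(f"Fixed-basket inflation:   {fixed_basket - 1:.1%}")    # ~26.2%
print(f"Updated-basket inflation: {updated_basket - 1:.1%}")  # ~18.6%, lower
```

The fixed basket keeps pricing the beef nobody is buying anymore, so it reports higher inflation than a measure that tracks what households actually purchase.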
Mark your calendar: The BLS will release the May inflation data on June 16. Check back next week for Part 5 of this economic reports series as we take a look at the retail sales report. Want to see just how much the CPI skews from central bankers' preferred Personal Consumption Expenditures report? Of course you do.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.97866290807724, "language": "en", "url": "https://www.aol.com/2012/04/17/jobs-and-the-battle-of-the-sexes/", "token_count": 1026, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.404296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4686e4c3-9b80-4c02-8932-e1f108191e33>" }
It wasn't long ago that we were talking about the "man-cession," or the idea that men were being hit by the recession much harder than women. "For a recession to have had such a disproportionate effect on one gender has never before happened in the modern period," economist Mark Perry said in 2010, referring to men. But last week, presidential candidate Mitt Romney noted that 92% of jobs lost during Barack Obama's presidency have been lost by women. Who's right? Both, it turns out. It all depends on what date you use as a starting point.

Men lost jobs much faster than women early on in the recession. Then, as male job losses leveled off and actually turned into gains, female jobs kept falling and only recently began inching up. Here's how it looks since the recession began in December 2007 (the vertical line marks when President Obama entered office):

Source: Bureau of Labor Statistics.

From December 2007 through March 2012, male employment is down 4.7%, while female employment is down 2.8%. But since January 2009, male employment is down just 0.1%, while female employment is down about 1%. Males, therefore, still have fared worse than their better halves during the recession. But their prospects for recovery in recent months and years have turned decidedly better than females'. As I noted earlier this month, 88% of new jobs since the recession ended in mid-2009 have gone to men, and the share of men claiming the economy is improving is now 41%, compared with 26% for females.

What gives? Recessions almost always work this way. Male jobs fell five times as hard as women's during the 2001 recession, but then bounced back far faster. During the 1991 recession, male jobs fell modestly while female employment actually rose, but male jobs then rebounded sharply. The reason is usually the type of jobs being cut throughout the lifecycle of a recession. "I think that the recession has happened in stages," said Stanford labor economist Myra Strober last year. "The first stage hit manufacturing hard, and that's where men have more jobs than women do, and now the recession has moved to state and local government where women have a higher percentage of jobs."

Nearly 60% of all job losses during the recession came from the manufacturing and construction sectors -- both heavily male-dominated fields. They're also the kind of fields where employment is now rebounding the fastest. Manufacturing employment has increased by 400,000 since 2009, and oil and gas employment has shot up to the highest level in two decades. Both disproportionately benefit men. Meanwhile, government jobs declined for most of the last two years and only recently began stabilizing. That, as Strober points out, hurts women more than men.

In the long term, however, females are not only catching up, but surpassing males in the job market. Female nonfarm payroll employment briefly surpassed male employment in 2009 for the first time ever. That shift has been staggering: as recently as 2000, male workers outnumbered females by more than 5 million. In 1964, males had the lead by 21 million. Much of the rise in female employment is due to social changes as the marriage rate declines and married households take on dual-income roles. But part is due to education. "Women were earning about 166 associate's degrees and 135 bachelor's degrees for every 100 earned by men in 2007," The Wall Street Journal reported, citing data from the Department of Education. Last year, women surpassed men in graduate degrees as well.
For adults over age 25, 10.6 million women now have a master's degree or higher, versus 10.5 million men. That's really the key to employment, both today and going forward. The unemployment rate for those with a college degree is 4.2%. For those with only a high-school diploma, it's 8%. For those who didn't complete high school, it's 12.6%. And not only do educated workers have an easier time obtaining jobs, but they earn considerably more than those without a degree. According to economist Tyler Cowen, college grads earn 83% more on average than high school graduates. That gap will likely grow as the economy continues shifting from low-skill industry-based jobs to advanced information-based fields. And women, it seems, will benefit the most. Long live the man-cession.
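The dueling claims above come down to base-period arithmetic, which a few lines of code make explicit. The indexed levels below are back-fitted to the percentages quoted in the article, not actual BLS series:

```python
# "It all depends on what date you use as a starting point."
# Hypothetical employment levels (indexed to 100 in Dec 2007) for three dates.

levels = {
    "male":   {"2007-12": 100.0, "2009-01": 95.4, "2012-03": 95.3},
    "female": {"2007-12": 100.0, "2009-01": 98.2, "2012-03": 97.2},
}

def pct_change(series, start, end):
    """Percent change between two dates in a level series."""
    return (series[end] - series[start]) / series[start]

for group, s in levels.items():
    since_recession    = pct_change(s, "2007-12", "2012-03")
    since_inauguration = pct_change(s, "2009-01", "2012-03")
    print(f"{group:>6}: {since_recession:+.1%} since Dec 2007, "
          f"{since_inauguration:+.1%} since Jan 2009")
```

Measured from December 2007, men fared far worse (-4.7% vs. -2.8%); measured from January 2009, women did (-1.0% vs. -0.1%). Same data, opposite headlines.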
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9251999258995056, "language": "en", "url": "https://www.buyersvalley.com/how-to-calculate-solar-panel-cost-into-your-budget/", "token_count": 684, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.02294921875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f3aedd29-83ce-4196-947a-83cbd0c30102>" }
On average, you can save $1,500 a year on energy costs by going solar. Depending on where you live, you may even get tax breaks from your local government for using solar power. Using solar power isn't just good for your wallet, it's also good for the environment. One simulation estimated that if everyone in the world switched to solar power, the average temperature would decrease by 2°C and solar radiation in desert areas would decrease by 10%. To decide if solar power is an economical option for you, consider the solar panel cost for your area.

Questions to Ask Yourself Before Going Solar
Unfortunately, solar power isn't for everyone. Today, solar panels are still an expensive investment, and you may not see returns for a long time. Also, not every home is situated well for collecting solar energy with panels.

Initial Costs vs. Future Savings
When figuring out how much a residential solar panel system will cost you, look at your current energy bill. You'll need a system big enough to power your current energy consumption. On average, solar systems cost between $3 and $5 per watt, and the average solar system costs $15,000 to $25,000. If you save $1,500 a year on energy bills with solar, it will take roughly 16 years to reach the point where you're saving money.

Do You Have Enough Sun?
Another factor to consider is whether or not your house gets enough direct sunlight for solar panels. You may have to set up solar panels in a sunny part of your yard. If your property gets less sunlight, you have to compensate with more solar panels, which makes the system more expensive.

Solar Panel Cost
The upfront costs of going solar are still worth it for people with the cash to invest, and solar panels also increase your home value. If you plan on staying in your home for a long time, you'll eventually see the savings. For many people, just being more energy efficient is motivation enough to go solar. When you begin your search for "solar companies near me," you'll run into many different price points. If a solar company's price seems too good to be true, it probably is. Before getting an estimate, learn the trade standards and typical costs so you know what price to expect for good quality. Lower-quality solar panels aren't worth your money because they'll break down sooner and won't create as much energy.

Many states offer incentives for going solar, which cut down your price tag significantly. For example, California offers a 25% tax break to go toward your solar system. California also offers a rebate based on your system's capacity, which can range from a flat $500 up to $0.95 per watt. Make sure to check your local rebates when you're researching installing solar panels for your home.

There are pros and cons to going solar, and solar panel costs vary state by state and home by home. Hopefully, this article helped you determine how to calculate the costs of solar panels for your home.
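As a sanity check on the numbers above, here is a small payback-period sketch; the function name and the way the rebate and tax break are applied are my own simplifying assumptions:

```python
# Rough payback-period estimate for a solar installation, using the
# article's figures. Real quotes, rebates, and energy prices will vary.

def payback_years(system_cost, annual_savings, rebate=0.0, tax_credit_rate=0.0):
    """Years until cumulative energy savings cover the net up-front cost."""
    net_cost = (system_cost - rebate) * (1 - tax_credit_rate)
    return net_cost / annual_savings

# $24,000 system, $1,500/year savings -> about 16 years (the article's example)
print(payback_years(24_000, 1_500))                         # 16.0

# Same system with a 25% tax break (the California example) -> 12 years
print(payback_years(24_000, 1_500, tax_credit_rate=0.25))   # 12.0
```

A 25% incentive cuts the break-even point from 16 years to 12 on the same system, which is why checking local rebates matters so much.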
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9577927589416504, "language": "en", "url": "https://www.economist.com/finance-and-economics/2014/12/30/barbarians-at-the-farm-gate", "token_count": 1233, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.263671875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c6cd45c8-9de7-4f23-bedf-12951b946fdc>" }
IN THE next 40 years, humans will need to produce more food than they did in the previous 10,000 put together. But with sprawling cities gobbling up arable land, agricultural productivity gains decreasing, and demand for biofuels increasing, supply is not keeping up with demand. Clever farmers, scientists and entrepreneurs are bursting with ideas. But they need money to make this jump. Financiers more often found buying and selling companies have cottoned on to the opportunity. Farm gates have traditionally been closed to capital markets: nine in ten farms are held by families. But demography is forcing a shift: the average age of farmers in Europe, America and New Zealand is now in the late fifties. They often have no successor, because offspring do not want to farm or cannot afford to buy out family members. In addition, adopting new technologies and farming at ever-greater scale require the sort of capital few farmers have, even after years of bumper crop prices. Institutional investors such as pension funds see farmland as fertile ground to plough, either doing their own deals or farming them out to specialist funds. Some act as landlords by buying land and leasing it out. Others buy plots of low-value land, such as pastures, and upgrade them to higher-yielding orchards. Investors who are keen on even bigger risks and rewards flock to places such as Brazil, Ukraine and Zambia, where farming techniques are often still underdeveloped and potential productivity gains immense. Farmland has been a great investment over the past 20 years, certainly in America, where annual returns of 12% caused some to dub it “gold with a coupon”. In America and Britain, where tax incentives have distorted the market, it outperformed most major asset classes over the past decade, and with low volatility to boot (see chart). Those going against the grain warn of a land-price bubble. Believers argue that increasing demand and shrinking supply—as well as urbanisation, poor soil management and pressure on water systems that are threats to farmland—mean the investment case is on solid ground. It is not just the asset appreciation and yields that attract outside capital, says Bruce Sherrick of the University of Illinois at Urbana-Champaign: as important is the diversification to portfolios that farmland offers. It is uncorrelated with paper assets such as stocks and bonds, has proven relatively resistant to inflation, and is less sensitive to economic shocks (people continue to eat even during downturns) and to interest-rate hikes. Moreover, in the aftermath of the financial crisis investors are reassured by assets they can touch and sniff. Some are already getting their boots dirty. In 2009 Hassad, part of Qatar’s sovereign-wealth fund, asked Bydand Global Agriculture to buy nearly 50 farms in Australia and merge them into a single investment portfolio. Terrapin Palisades, a private-equity firm, bought a dairy company and some vineyards and tomato fields in California, and converted all to grow almonds, whose price has soared as the Chinese have gone nuts for them. Such conversions require up-front capital and the ability to survive without returns for years. The private-equity approach can take the form of simple improvements, such as changing irrigation from antiquated dykes and canal networks to automatic spray systems: these are the equivalent of picking low-hanging fruit. Pricey robots can boost milk per cow by 10-15%. Using “big-data” analytics to plant and cultivate seeds can push crop yields up 5%. 
“This is an industry where the gap between the top and bottom quartile is greater than anywhere else,” says Detlef Schoen of Aquila Capital, an alternative-investment firm. And yet the 36 agriculture-focused funds, with $15 billion under management, pale in comparison to the 144 funds focused on infrastructure ($89 billion) and 473 targeting real estate ($163 billion), according to Preqin, a data provider. TIAA-CREF, an American financial group, is a market leader with $5 billion in farmland, from Australia to Brazil, and its own agricultural academic centre at the University of Illinois. Canadian pension funds and Britain’s Wellcome Trust are among those bolstering their farming savvy. Most investors are put off by the sector’s peculiar risks and complexities. Weather, commodity prices, soil health, water access, dietary fads and animal health are not the forte of the average pension-fund investment officer. Political risks abound: cash-strapped governments in Europe and America may (belatedly) get around to cutting farm subsidies. In poor countries, land titles may give outsiders dubious protection—if those countries even allow foreign ownership of land in the first place. Some liken the sector to real estate and infrastructure 20 years ago. It lacks indices, consultant reports and track records. But unlike skyscrapers or pipelines, farming offers few of the multi-billion-dollar deals that are needed to entice mega-investors. For more money to flow in, financiers and farmers will have to learn a lot more about each other. Money managers need to get their hands dirty and find out more about crops. Only a handful have the expertise needed; farmers gleefully share stories of Wall Street types wondering how chicks are planted. And farmers can do more to attract capital, for example by seeking out financial deals where investors’ incentives are aligned with their own, such as through joint ventures. Investors need to separate the wheat from the chaff, too. Farm investing requires patience; it is ill-suited to flipping and trading. But those willing to climb over the barriers could reap big rewards. The investment thesis is as simple as they come, as Mark Twain realised long ago: “Buy land, they’re not making it any more.” This article appeared in the Finance & economics section of the print edition under the headline "Barbarians at the farm gate"
{ "dump": "CC-MAIN-2021-17", "language_score": 0.933433473110199, "language": "en", "url": "https://www.locusassignments.com/solution/unit-16-mcki-assignment-solution-dyson", "token_count": 3001, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.130859375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b6eedeb7-dbe5-4288-bf46-af25e494a538>" }
Technological advancement has led many organisations to operate in different ways to gain a competitive advantage over other organisations. In recent years, many new organisations have emerged and are capturing the market with innovative thinking and design that retains consumers. With time and technology moving on, however, many organisations are losing their grasp on the market because they are unable to retain their patrons. Under such conditions, organisations are proposing different ways to alter their information and communication patterns so as to attract new consumers and retain old ones. In this assignment, the learner discusses different strategies, with the help of relevant theories, for implementing new decision-making processes that can help an organisation sustain itself in the market. The learner explains the business processes of Dyson, which recently opened its retail store in the UK.

Any organisation has to make several decisions before releasing a product to the market. It is necessary to understand the various forms of marketing strategy that can help in the development of the business (Argenti, 2015). Dyson therefore has to understand the potential market, understand the needs of consumers, and make decisions accordingly to ensure success in sales. For this, the organisation has to go through various levels of decision making, segregated into stages such as the following.

Figure 1 – Decision Making Stages (Source: Argenti, 2015)

Market Expansion: Expanding to new markets like India and China can help Dyson increase its business market. Presently, Dyson has expanded its market to the UK, which is not making any positive impact on the organisation's sales performance. As there are many competitors with a wide variety of services, Dyson is not able to retain consumers in the UK market.

Innovation: With the advent of technology and engineering, the organisation can develop innovative new products that help Dyson gain consumer attention and retain consumers. These levels of the decision-making process help an organisation operate through the various stages of business and take decisions wisely. It is crucially important to understand the nature of the product and its demand in the specified location of business.

Overseas Production: Moving production to Asian countries can be beneficial for the organisation, as it will lower labour costs while maintaining product quality. Presently, Dyson has been operating for a while and its operational costs are considered high for the quality delivered, which reflects in the overall pricing of the products and fails to attract new consumers. Relocating production to Asian countries like China or Japan can therefore help lower product prices and attract consumers. These decisions put forward by Dyson's management help it regain control of the market, as the company has good brand value in the global marketplace.

| Characteristics of the decision | Internal or external | Explicit or tacit | Time taken to collect | Cost of collecting | Orientation: current or future |
| Market expansion | | | 6 months to 1 year | 3–4 million | |
| Innovation | | | 6 months to 1 year | 1–2 million | |
| Overseas production | | | 6 months to 1 year | 1–2 million | |

Table 1 – Effective decision making

Strategic decision making in an organisation is considered very important.
The organisation has to weigh various factors to understand its competitive edge in the market. As Shwom and Snyder (2015) note, in order to evaluate an organisation's status in the market it is essential to identify its strengths, weaknesses, opportunities and threats. A SWOT analysis comes in handy for any organisation because it presents the business in a simplified form.

| Source of information | Advantage | Disadvantage |
| Newspapers and surveys | Information is extracted by the business organisation itself | The process is very costly for extracting up-to-date information |
| Internet and government publications | Fetching the information is reasonable in cost because the data is already present | The information may be out of date |
| Financial performance and asset management records | The information is needed at all management levels, as it is collected internally | Information comes from all departments of the organisation and sometimes does not produce the required data |
| | Helps senior managers identify potential sources and understand flaws in the business | Information is generally derived from external sources and does not provide appropriate data |
| Health and safety statistics | Contain an ordered structure with all the information available | Information must be traceable and documented |
| Voice calls, memos | The information discussed is crucial for the organisation | Has no relevant structure or format |

Table 2 – Internal and external sources of information

The information gathered is essential for the organisation and should be assessed regularly over time. Dyson's management holds up-to-date information, which has helped it take the respective decisions, and the decision-making process is unbiased. In order to improve information and knowledge, the manager has to improve the organisation's information system and develop an appropriate flowchart for storing the necessary information. A Management Information System (MIS) is useful here, as it helps the organisation store relevant data and information; the information is stored for each term, quarterly or yearly, and then evaluated to predict business prospects accordingly. Decision Support Systems (DSS) are another means of storing information: they comprise a database with an information-input procedure for statistical analysis, processing that runs statistical simulation analysis to generate results, and finally responses to the statistical test results (Bovee and Thill, 2000, p.96). MIS and DSS are effective for storing information and improving the manager's knowledge, helping the manager view the present and predict the future. These reports are handy when communicating with the internal stakeholders of the organisation. When the manager stores this information, the data can be used later to measure the success of the business, and it helps the manager communicate with stakeholders and provide them with proper information.

To make the decision-making process effective, the manager has to ensure information is delivered to the shareholders of the company. The manager can involve the organisation's stakeholders by producing periodic reports, holding press conferences and using other modes of communication with top-level management.
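To make that input-processing-response flow concrete, here is a minimal DSS-style sketch. It is purely illustrative: the quarterly sales figures, the 5% growth threshold and the expand/hold rule are invented for demonstration, not taken from the assignment brief.

```python
# Minimal sketch of the input -> statistical processing -> response flow
# attributed to a decision support system (DSS) above.

from statistics import mean, stdev

def dss_report(quarterly_sales):
    """Summarise quarterly sales and flag whether growth supports expansion."""
    # Processing step: quarter-on-quarter growth rates.
    growth = [(b - a) / a for a, b in zip(quarterly_sales, quarterly_sales[1:])]
    summary = {
        "average_sales": mean(quarterly_sales),
        "sales_volatility": stdev(quarterly_sales),
        "average_growth": mean(growth),
    }
    # Response step: assumed decision rule, recommend expansion above 5% growth.
    summary["recommendation"] = (
        "expand" if summary["average_growth"] > 0.05 else "hold"
    )
    return summary

print(dss_report([120_000, 126_000, 135_000, 144_000]))
```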
It is essential for the manager to understand that without appropriate data and information, communication will be meaningless; the use of an information system is therefore crucial, as it holds all the necessary information and data that can be presented to the stakeholders (Guffey and Loewy, 2010, p.127). With the necessary information in hand, the manager can seek the involvement of stakeholders in the decision-making process. The shareholders and investors are an integral part of Dyson's decision making because of their shares in the organisation. As the company has promised them good returns, it is the organisation's responsibility to provide proper information. Communicating with stakeholders is useful for the organisation and also helps maintain long-term relationships. Publications, annual reports and seminars likewise keep the shareholders of the organisation informed (Coombs, 2015, p. 143), providing clarity and maintaining transparency.

I want to take this opportunity to thank you for your continued participation in our organisation. It has been a genuine pleasure to have you all as esteemed pillars of the organisation. As you might expect, we have identified certain flaws in the organisation's internal communication. We are continuously trying to improve our communication patterns and would appreciate your presence on the board for the decision-making process. For example, we have received suggestions to improve our web portal services to include more information and bring transparency to the organisation. It is therefore my request that you join us in this improvement phase to improve our business communication.

I would like to inform you of the current affairs of the organisation and request your sincere participation in the decision-making process, to improve our communication channels and bring transparency across the organisation's management. We recently reviewed a number of requests to provide training on the organisation's groupware, to make information and knowledge available to the employees. It would therefore be much appreciated if you joined us in this improvement stage for a better tomorrow.

Thanks and Regards,

In order to involve the stakeholders in the decision-making process, it is necessary for the organisation to provide them with appropriate information through newsletters, mail and other formal channels. Since they are an integral part of the organisation, the manager's responsibility is to communicate this information to stakeholders so that they are informed of the organisation's current affairs and can be involved in decision making (Lauring and Klitmoller, 2015, p. 48). In a press conference, the stakeholders gather to discuss core issues and present solutions to the esteemed members of the organisation; there, the organisation discusses the various problems it faces in the market and communicates to arrive at a proper solution. Newsletters are another way of communicating with the organisation's stakeholders, providing information on the organisation's status in the business market (Castelli, 2016, p. 219).
Meetings among top-level management are also important for discussing the operational costs and other resources that are essential to business operations. These are some of the ways in which an organisation can draw the attention of its stakeholders and involve them in the decision-making process. In order to improve stakeholder participation in its decision making, Dyson has put forward two proposals. The first and foremost action taken by the organisation is to arrange quarterly meetings with the board to discuss the organisation's weak points in the marketplace; this helps the company receive innovative ideas and suggestions that can benefit its future (Dale, 2016, p. 588). Secondly, the organisation has decided to increase the profit percentage paid out, which will keep the stakeholders satisfied and motivate them to take part in the decision-making process. With these two proposals, the organisation can increase communication amongst stakeholders and expect their participation in building a better future for the business. It is essential to understand the demands of the market, and with stakeholder involvement in decision making, the decisions can be expected to turn out positively (Hwang and Chung, 2016, p. 236).

As the communications officer of Tesco, I find that the current communication processes and the existing approaches to the storage of information and knowledge are out of date. The management-level systems of the organisation monitor management activities by controlling information and administrative activities through the decision-making process. The decision support system (DSS) and management information system (MIS) need to be updated so that collected information can be stored and circulated to the respective stakeholders of the organisation. Usability is a major concern for a management information system: despite having an appropriate system for storing data, the information is not circulated among employees, which drains what the company spends on information systems (Jablonsky, 2015). Under such circumstances, action is required to bridge the gap between levels of management. Apart from that, the information captured by the system requires security, which demands regulation. The organisation can therefore propose new ways to improve the communication process by using intranets, web portals and groupware. These are some of the best ways to protect data, and with the help of web portals new information can be collected and stored on the servers. It is important to understand that data collection is an important part of the organisation's research and development (Stoichev, 2014, p. 97). Another improvement is to install an intranet facility within the organisation so that employees are informed about necessary changes; an intranet serves as an effective mode of communication within the organisation and can spread information and knowledge within seconds (Kneale, 2007, p. 33). The DSS and MIS are aligned with the system's security requirements and therefore provide the protection needed to improve the current communication process.
To improve the information and knowledge facility, the organisation can install and test new hardware to configure the network infrastructure. In addition, implementing web portals, an intranet and groupware will be helpful. The organisation must train users on the new system and then transfer data from the previous system to the new database, thereby securing the old data while adding new data and information (Messner, 2007, p. 126). However, this requires careful planning to ensure a smooth flow of data and information. For this, the organisation has to ensure proper testing and implementation of IT resources, as summarised in a flowchart. [Flowchart omitted from source.] With the help of this flowchart, Tesco can make proper improvements to the integration of communication systems and improve access to systems of information and knowledge, thereby minimising errors.

In order to improve communication as a manager of Dyson, it is essential to understand the complications and then develop appropriate actions that can enhance the professional practice of the organisation (Tapinos et al., p. 373). For this, the manager of Dyson should undergo a communication programme to enhance his or her communication skills. With the help of SMART objectives, the manager can trace the development of this professional course:

| SMART element | Detail |
| Specific | Undertake a soft-skills development programme and a public-speaking course to improve communication skills |
| Measurable | Measured by the degree of communication with other personnel and the ease of reaching solutions |
| Achievable | Achievable with the help of proper training |
| Relevant | Highly relevant, as communication is an integral part of business |

Table 3 – PDP (Source: Designed by Learner)

The learner has drawn on relevant secondary research journals for this assignment. After analysing the discussions and identifying the loopholes, the learner has presented some promising solutions that can help the organisation improve its business communication, information and knowledge. To improve its business functions, Dyson must adopt these proposed strategies. The organisation should also develop short-term courses to lower communication barriers and work as a team to increase transparency among personnel.

References
Argenti, P.A. (2015). Corporate Communication. McGraw-Hill Higher Education.
Bovee, C. and Thill, J. (2000). Business Communication Today, 1st ed. Upper Saddle River, N.J.: Prentice Hall.
Guffey, M. and Loewy, D. (2010). Essentials of Business Communication, 1st ed. Mason, OH: South-Western Cengage Learning.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9546993374824524, "language": "en", "url": "https://www.money-zine.com/definitions/investing-dictionary/goodwill/", "token_count": 383, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.09814453125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:18d53b2b-a632-4131-87d0-c0cf989095c1>" }
The financial accounting term goodwill refers to the present value of earnings that are in excess of normal profitability for a particular industry. Goodwill is commonly recorded when a business is acquired and the price paid is in excess of the book value of the company.

Goodwill = Cost - (Tangible Assets + Identifiable Intangible Assets - Liabilities)

Investors are often willing to pay a premium to acquire a company if they're able to demonstrate they can produce profits in excess of what the industry would suggest are "normal," and those excess profits can be reasonably expected to continue into the future. Above-average earnings may be a result of a monopoly, customer loyalty or manufacturing efficiency. Competitive market forces typically limit the ability of any company to generate these excess earnings for more than three to five years. Therefore, the amount of goodwill paid will be less than five times the calculated excess earnings each year. That being said, assigning a goodwill premium to a company is typically a very subjective exercise.

Goodwill is an intangible asset, and as such appears on the company's balance sheet. Amortization is the process of allocating the cost of goodwill to the accounting periods over which it can be expected to provide economic benefit.

Company A wishes to acquire Company B. The total of all tangible and identifiable intangible assets is $10,000,000. Company B's balance sheet indicates liabilities of $6,000,000. Using industry benchmarks, Company A's management team has determined Company B generates excess profits of $100,000 annually. Company A's management team is willing to pay goodwill in the amount of four times the excess profits. The proposed cost to acquire Company B would be:

= ($10,000,000 - $6,000,000) + ($100,000 x 4)
= $4,000,000 + $400,000
= $4,400,000
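The formula and the worked example translate directly into code. A minimal sketch (the function names are my own):

```python
# Minimal translation of the article's goodwill formula and example.

def goodwill(cost, identifiable_assets, liabilities):
    """Goodwill = Cost - (identifiable assets - liabilities).

    identifiable_assets = tangible + identifiable intangible assets.
    """
    return cost - (identifiable_assets - liabilities)

def offer_price(identifiable_assets, liabilities, excess_profit, multiple):
    """Net identifiable assets plus a goodwill premium on excess profits."""
    return (identifiable_assets - liabilities) + excess_profit * multiple

# Company B: $10M assets, $6M liabilities, $100k excess profits, 4x multiple
price = offer_price(10_000_000, 6_000_000, 100_000, 4)
print(f"Offer price: ${price:,}")                                 # $4,400,000
print(f"Goodwill:    ${goodwill(price, 10_000_000, 6_000_000):,}") # $400,000
```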
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9643548727035522, "language": "en", "url": "https://www.pigprogress.net/Home/General/2008/5/USDA-Effect-of-biofuels-on-feed-prices-is-low-PP001586W/", "token_count": 741, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.12158203125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6beb70e3-ed1a-447c-a9ac-463dc4e13aea>" }
USDA: Effect of biofuels on feed prices is low

The role of ethanol production in the exploding feed prices is much smaller than has often been claimed lately, according to the US Department of Agriculture (USDA). In a report by the department's Economic Research Service (ERS), factors that contributed to the recent increase in food and commodity prices were examined. Many different causes were considered for the recent rise in food and feed prices. The report was explained in detail at a press conference at which Agriculture Secretary Schafer was also present.

Asked about the effect of the increase in ethanol production on corn prices, the USDA's chief economist Joe Glauber said: "I think there's no question in looking at the overall effect on corn prices, I think it's fair to say the increase in biofuel production has had some effect, but again, what I'd consider a relatively small effect and one that in looking at it it's important to take into account a lot of other things that are going on outside of the biofuel [sector]."

Glauber identified other causes as more important for the global increase in feed prices, emphasising global economic growth: "In fact if you were to look at countries like India and China where the GDP there has been increasing on the order of 5 to 10% annually, that has expanded demand, particularly demand for meat products, which has contributed to both a growth in livestock exports in the case of this country and also demand for protein meals, soybean meal, other sorts of things. And that has continued and is projected to continue."

Glauber also mentioned the weather situation: "In particular, droughts that have affected Oceania; Australia is suffering now or is just beginning to come out of a drought that really affected the last two crops quite adversely. We also had problems in the Canadian crop last year, problems in the Ukraine, problems in the European Union. All contributed to a very low wheat [crop]."

He also identified the strong export restrictions on rice and wheat markets put in place by many countries, "to the extent that a lot less wheat made it on to the world market than we had originally [expected]."

He continued: "The other major factor on food prices of course has been the energy costs, and the impact that they've had on food marketing and transportation costs. No question, biofuels also has been a very, very important part of this picture. As I mentioned, in the US we've seen increases over the last two or three years, and as we increase capacity and add more capacity to the ethanol processing manufacturing sector, we're going to see - again we're calling for about a 33% increase in corn use in ethanol this year."

For the study of the influence of ethanol production on feed prices, the USDA used data from Iowa State University and the University of Missouri. Glauber closed off by saying that "it's also important to realise that when these corn prices pass through to retail prices that again is a much smaller effect," and he noted that estimates show the total global increase in corn-based ethanol production accounts for only about 3% of the recent increase in global food prices.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9492076635360718, "language": "en", "url": "http://brandstudio.kyivpost.com/feogi/hydrogen-energy-future/", "token_count": 526, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.07080078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:41b2c3ed-bd3a-4e02-a694-9c1768a4560c>" }
Hydrogen – future of energy or passing fad?

Hydrogen is not a new idea for energy experts. As Mykola Kadenskyi, network head at the GTS Operator of Ukraine, noted during the recent Energy Thursday webinar, hydrogen is nothing new for countries like Ukraine. "Ukrainian nuclear scientists have long been familiar with hydrogen technology because it is used to cool generators," Kadenskyi explained. "Of course, the volumes are small but the technological process itself is clear."

What has changed is growing concern about the looming climate crisis. "Green" hydrogen is produced through electrolysis of water, with oxygen as the only by-product (by contrast, so-called "blue" hydrogen involves splitting natural gas into hydrogen and carbon dioxide, which is then stored). As a result, it has piqued the interest of political actors prioritizing environmental goals.

According to global consultancy McKinsey, this fuel has several uses. Firstly, it serves as a catalyst for a renewables-based energy system transition, making it easier to store energy, transport it across regions, and create a buffer in the system (shortages can be covered by using up hydrogen stocks). Furthermore, hydrogen is seen as a potential alternative for the fossil fuel dependent transport industry (i.e., to power vehicles), for industrial uses (both for heating and as a feedstock for chemical reactions), and as an input for heating networks (diluting the natural gas currently used).

But hydrogen also has its sceptics. Most importantly, experts are concerned about other technologies supplanting hydrogen as a preferred energy source – notably batteries, which are becoming increasingly effective at storing electrical energy. Energy transformations require huge investments, with plans being made in five-year installments or even decades. Imagine the fallout if a country bets on batteries, builds thousands of charging and storage stations, consumers buy millions of vehicles, and then the world moves to hydrogen.

But batteries and hydrogen are not necessarily at odds, argues McKinsey. "The relative strengths and weaknesses of these technologies, however, suggest that they should play complementary roles," reads a recent report, which suggests that lighter batteries are more efficient over shorter distances (personal vehicles), while hydrogen delivers better at long ranges (industrial vehicles or transport). Moreover, batteries would not supplant the role of hydrogen in industry or heating.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9427991509437561, "language": "en", "url": "https://ageconsearch.umn.edu/record/206853", "token_count": 453, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.08447265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5ab0020e-6fe1-4c2d-9249-6b02c5fcc81f>" }
This report develops an analytical framework that assesses the macroeconomic, environmental and distributional consequences of energy subsidy reforms. The framework is applied to the case of Indonesia to study the consequences in this country of a gradual phase out of all energy consumption subsidies between 2012 and 2020. The energy subsidy estimates used as inputs to this modelling analysis are those calculated by the International Energy Agency, using a synthetic indicator known as "price gaps". The analysis relies on simulations made with an extended version of the OECD's ENV-Linkages model. The phase out of energy consumption subsidies was simulated under three stylised redistribution schemes: direct payment on a per household basis, support to labour incomes, and subsidies on food products. The modelling results in this report indicate that if Indonesia were to remove its fossil fuel and electricity consumption subsidies, it would record real GDP gains of 0.4% to 0.7% in 2020, depending on the redistribution scheme envisaged. Redistribution through direct payment on a per household basis performs best in terms of GDP gains. The aggregate gains for consumers in terms of welfare are higher, ranging from 0.8% to 1.6% in 2020. Both GDP and welfare gains arise from a more efficient allocation of resources across sectors resulting from phasing out energy subsidies. Meanwhile, a redistribution scheme through food subsidies tends to create other inefficiencies. The simulations show that the redistribution scheme ultimately matters in determining the overall distributional performance of the reform. Cash transfers, and to a lesser extent food subsidies, can make the reform more attractive for poorer households and reduce poverty. Mechanisms that compensate households via payments proportional to labour income are, on the contrary, more beneficial to higher income households and increase poverty. This is because households with informal labour earnings, which are not eligible for these payments, are more represented among the poor. The analysis also shows that phasing out energy subsidies is projected to reduce Indonesian CO2 emissions from fuel combustion by 10.8% to 12.6% and GHG emissions by 7.9% to 8.3% in 2020 across the various scenarios, relative to the baseline. These emission reductions exclude emissions from deforestation, which are large but highly uncertain and for which the model cannot make reliable projections.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9366142153739929, "language": "en", "url": "https://paycheck.com.co/", "token_count": 1851, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0bd516b7-beaf-462b-8483-e4bc4541b5c6>" }
A paycheck calculator, or salary paycheck calculator, is a tool used to calculate the take-home salary per paycheck. A paycheck calculator provides complete salary information for the user. It handles both salaried and hourly jobs by taking into account federal, state, and local taxes. A large number of websites and payroll agencies provide paycheck or salary calculators that present the final figures for the user. The calculator requires the user to key in some information (state, job title, pay period, gross salary, taxes, etc.) to arrive at the final paycheck amount. Employees, employers and payroll agencies generally use paycheck calculators to assess a salary offer or to calculate income after deductions for federal, state and local taxes.

Almost every state in the United States sets its own pay frequency for employees by law. Each state establishes a minimum frequency, referred to as a payday requirement. Some states require semimonthly paydays, some biweekly (every other week), and others monthly or weekly. Illinois and California pay employees semimonthly, whereas workers in Kansas must be handed paychecks at least once a month. Paycheck frequency in Michigan is decided by occupation.

What is a Paycheck?
A paycheck is a physical document issued by the employer to its employees for the services rendered over a specific period. It used to be a paper document (a cheque) handed physically to employees along with other details. In recent times, however, electronic deposits to employee bank accounts have replaced physical cheques. Even with electronic transfers, employees continue to receive the detailed calculation of the final payment through pay stubs (pay slips) delivered electronically.

Factors affecting the paycheck
Deductions play an important role in the paycheck tax calculator. It is mandatory to deduct federal income tax and FICA tax withholding when calculating employee paychecks. There is no way to escape these taxes unless your earnings for the period are low and you are exempt from certain taxes.

Pay frequencies differ from place to place. In the United States, some employees receive monthly paychecks (12 paychecks a year), others are paid semimonthly on set dates (24 paychecks a year), and others biweekly (26 a year). The frequency of paychecks affects the amount of each paycheck: more paychecks in a year means a smaller amount per paycheck, assuming the same salary.

Local factors affecting the paycheck
In the United States, each state has its own tax laws, so you will see different paycheck amounts depending on the city or state where you live. Apart from federal income tax, employers withhold part of your paycheck to cover state and local taxes.

The Trump Tax Plan
US President Donald Trump signed a new tax plan into law in December 2017. The IRS introduced new tax guidelines starting in February 2018, and taxpayers began to see the changes in their paychecks. However, the IRS has not yet revised the W-4 form to reflect the new tax plan; employers are advised to use the withholdings on the current form.

Income tax brackets under the new plan (single filers):
| Taxable income | Rate |
| $0 – $9,525 | 10.0% |
| $9,525 – $38,700 | 12.0% |
| $38,700 – $82,500 | 22.0% |
| $82,500 – $157,500 | 24.0% |
| $157,500 – $200,000 | 32.0% |
| $200,000 – $500,000 | 35.0% |
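As an illustration of how such a calculator might apply the bracket table above, here is a hedged Python sketch. All names are my own; the 37% top bracket above $500,000 (omitted from the table) and the 6.2% Social Security and 1.45% Medicare employee rates are from the 2018 rules, and real withholding also depends on W-4 allowances, pre-tax deductions, and the Social Security wage cap.

```python
# Illustrative per-paycheck withholding: progressive brackets plus FICA.
# A sketch only; real payroll withholding follows IRS tables and W-4 data.

BRACKETS_2018_SINGLE = [          # (upper bound of bracket, marginal rate)
    (9_525, 0.10), (38_700, 0.12), (82_500, 0.22), (157_500, 0.24),
    (200_000, 0.32), (500_000, 0.35), (float("inf"), 0.37),
]
SOCIAL_SECURITY, MEDICARE = 0.062, 0.0145   # employee FICA shares (2018)

def annual_income_tax(taxable_income, brackets=BRACKETS_2018_SINGLE):
    """Apply each marginal rate to the slice of income inside its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if taxable_income <= lower:
            break
        tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

def net_paycheck(annual_salary, periods_per_year=24):  # 24 = semimonthly
    gross = annual_salary / periods_per_year
    federal = annual_income_tax(annual_salary) / periods_per_year
    fica = gross * (SOCIAL_SECURITY + MEDICARE)
    return gross - federal - fica

print(f"${net_paycheck(60_000):,.2f} net per semimonthly paycheck")
```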
The IRS is working on a revision of the W-4 form for employers and employees, so that in the future it will better reflect the changes to the tax code. Taxpayers need not fill out a new W-4 form for the time being.

Withholding Taxes on Salary Pay
When a paycheck is issued to an employee, the employer must withhold tax at the legally required percentage set by the government as income tax. The federal tax authorities collect withholding tax from employees throughout the year by deducting it directly from each monthly paycheck. The employer has to withhold this money based on the information the employee provides on the W-4 form. Certain people are exempt from federal income tax withholding. To claim the exemption, you must satisfy both of the following criteria (a paycheck tax calculator can help you determine this):

1) In the previous year, you received a refund of all federal income tax withheld from your paycheck, because you had no tax liability.

2) You expect the same for the current year: because your income is low, you expect a refund of all federal income tax withheld, with zero tax liability for the current year. You can state this on the W-4 form.

When you file your income tax return, the IRS can check the withheld tax reported on your return against the deductions made by your employer.

Federal Top Income Tax Rate
At withholding time, employees face a trade-off between higher paychecks and lower federal tax withholding. The more allowances and deductions an employee claims on the W-4, the higher the paycheck amount; an employee with few allowances and benefits has more tax withheld and receives smaller paychecks. You can see the difference by entering the required information into a paycheck tax calculator.

If you paid a large tax bill last year and do not want to repeat it this year, you can request a certain amount of additional withholding from each paycheck; you can specify this amount on the W-4 form.

Withholding Taxes by FICA
Apart from income tax withholding, the other main federal component of your paycheck withholding is FICA taxes (Federal Insurance Contributions Act). Your FICA taxes are your contribution to the Social Security and Medicare programs that you will have access to as you grow older; think of them as savings you pay into the system.

2017 – 2018 Income Tax Slabs (single filers):
| Taxable income | Rate |
| $0 – $9,325 | 10.00% |
| $9,325 – $37,950 | 15.00% |
| $37,950 – $91,900 | 25.00% |
| $91,900 – $191,650 | 28.00% |
| $191,650 – $416,700 | 33.00% |
| $416,700 – $418,400 | 35.00% |

Paycheck as a Communication Tool
In earlier days, employees received their paychecks in physical form; reminders and newsletters were attached to the paycheck and handed to the employee in an envelope. Nowadays, pay stubs and all such communications are sent to employees electronically. The paycheck stub also shows the employee how much vacation time, sick time, or paid time off (PTO) was accrued during the period.
Companies also outsource payroll services to third-party vendors such as ADP to process employee paychecks, and the vendor sends the paycheck stub or pay slip electronically. Employees have access to their records online on these vendors' websites.

A paycheck stub, also known as a pay slip, is a statement showing the details of the employee's earnings, deductions, and the tax paid or refunded during each pay period. Payroll deductions depend on the individual employee and the employer's benefits offerings. On an electronic paycheck stub, the employee can check:
- Start date and end date of the payroll period.
- Gross pay: the total amount payable to the employee by the employer before deductions.
- Net pay: the amount payable to the employee after deductions.
- Federal taxes: the taxes to be deducted under federal law.
- State taxes withheld: the taxes deducted separately by each state.
- Local taxes: any local taxes imposed by particular jurisdictions. Not all states charge these.
- Insurance deductions.
- Healthcare-related deductions.
- Social Security deduction.
- Retirement benefit deductions.
- Wage garnishments.

PAYCHECK CALCULATOR / PAYCHECK TAX CALCULATOR PROVIDERS ONLINE
Many corporations have a large number of employees, and it is difficult to manage payroll and paycheck-related queries with a limited workforce. Paycheck calculators and paycheck tax calculators serve as a platform between employers and employees, providing employee self-service (ESS) portals that can be used by both.
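The stub fields listed above map naturally onto a small record type. A toy sketch (every figure below is a placeholder, not real payroll data):

```python
# A toy pay-stub record mirroring the fields listed above.

from dataclasses import dataclass, field

@dataclass
class PayStub:
    period_start: str
    period_end: str
    gross_pay: float
    deductions: dict = field(default_factory=dict)  # name -> amount

    @property
    def net_pay(self) -> float:
        # Net pay = gross pay minus every listed deduction.
        return self.gross_pay - sum(self.deductions.values())

stub = PayStub(
    period_start="2018-06-01", period_end="2018-06-15", gross_pay=2_500.00,
    deductions={
        "federal_tax": 380.81, "state_tax": 95.00, "local_tax": 20.00,
        "social_security": 155.00, "medicare": 36.25,
        "health_insurance": 110.00, "retirement_401k": 125.00,
    },
)
print(f"Net pay: ${stub.net_pay:,.2f}")   # Net pay: $1,577.94
```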
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9391263127326965, "language": "en", "url": "https://ulyssesmaclaren.com/2017/11/17/the-5-technologies-that-will-change-everything-in-the-next-decade/", "token_count": 483, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:01318358-aafb-4678-bec0-efb380aedfa9>" }
Here’s my take on the main technology areas that will make a massive difference to our lives in the next decade. This is a huge one and encompasses something as simple as Amazon’s recommendation engine, up to self driving cars, and eventually expert systems such as IBM’s Jenkins that could potentially replace doctors and lawyers and any other information based career. The most famous blockchain technology is Bitcoin, but cryptocurrency is only one use for this technology, taking out the requirement for trusted third parties such as banks. Fundamentally however, it’s just a general ledger system and could be used to track ownership of anything, including property, votes, assets, contracts, licencing, or even identity. Theoretically, this goes a long way to removing the need for currency, government, and banks, unless they can find other ways to stay relevant than being the “trusted third party”. Here’s how a blockchain transaction works: Computers have become approximately twice as powerful every 18 months, following Moore’s Law, but we are now approaching the physic limits of how small transistors can get and how fast this technology can be. The next big jump will be dropping binary transistors and adopting QuBits instead. With this potential for quantum computers to become exponentially more powerful than transistor computers are today, this will enable much stronger cryptography (as well as also making current cryptography obsolete), machine learning algorithms to be run much faster (enabling AI), Bitcoin mining, and of course just doing everything we currently do much faster. Brain Machine Interfaces This technology will start with restoring lost function to disabled people, such as paraplegics, the blind and the deaf, will continue into solving Alzheimer’s, dementia, and mood disorders, and will eventually be the new way we interact with computers and potentially even each other. Elon Musk has bet big on this and thinks it’s one of the best ways to make sure the AI revolution doesn’t leave humanity behind. Mapping the human Genome to fully understand the source code behind life will allow us to, first of all, remove all genetic and hereditary disorders and diseases, and then move on to specify exactly what traits we would like to optimise for in our offspring. Future humans will be smarter, stronger, and more robust.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9325233101844788, "language": "en", "url": "https://www.investopedia.com/terms/a/activity-cost-driver.asp", "token_count": 611, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.030517578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:44869c03-0549-4fcc-8dfb-4ba21fbac0e4>" }
What Is an Activity Cost Driver?
An activity cost driver is an accounting term for a factor that affects the cost of specific business activities. In activity-based costing (ABC), an activity cost driver influences the costs of labor, maintenance, or other variable costs. Cost drivers are essential in ABC, a branch of managerial accounting that allocates the indirect costs, or overheads, of an activity.

How Activity Cost Drivers Work
A cost driver directly influences a business activity, and there may be multiple cost drivers associated with a single activity. For example, direct labor hours drive most activities in product manufacturing: if the cost of labor is high, this increases the cost of producing all of the company's products or services. If the cost of warehousing is high, this also increases the expenses incurred for product manufacturing or providing services.
An activity cost driver, also known as a causal factor, causes the cost of an activity to increase or decrease. Examples are a change in the cost of warehousing or a change in the level of production. More technical cost drivers include machine hours, the number of engineering change orders, the number of customer contacts, the number of product returns, the machine setups required for production, or the number of inspections. If a business owner can identify the cost drivers, the business owner can more accurately estimate the true cost of production for the business.

Key takeaways:
- Activity-based costing (ABC) is an accounting method that allocates both direct and indirect costs to business activities.
- A cost driver simplifies the allocation of manufacturing overheads, such as the costs of factory space and electricity.
- Management selects cost drivers based on the associated variables of the expense incurred.

When a factory machine requires periodic maintenance, the cost of the maintenance is allocated to the products produced by the machine. For example, suppose the cost driver selected is machine hours, and after every 1,000 machine-hours there is a maintenance expense of $500. Every machine hour then results in a 50 cent (500 / 1,000) maintenance cost allocated to the product being manufactured, based on the machine-hours cost driver. (A short code sketch of this allocation appears at the end of this article.)

Distribution of Overhead Costs
A cost driver simplifies the allocation of manufacturing overhead. The correct allocation of manufacturing overhead is important for determining the true cost of a product, which internal management uses to set the prices of the products it produces. For this reason, the selection of accurate cost drivers has a direct impact on the profitability and operations of an entity. Activity-based costing (ABC) is a more accurate way of allocating both direct and indirect costs: it calculates the true cost of each product by identifying the amount of resources consumed by a business activity, such as electricity or man-hours.

Special Considerations: The Subjectivity of Cost Drivers
Management selects cost drivers as the basis for manufacturing overhead allocation. There are no industry standards stipulating or mandating cost driver selection; company management selects cost drivers based on the variables of the expenses incurred during production.
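As a rough illustration of the machine-hours example above, here is a minimal sketch of the allocation arithmetic. The $500 per 1,000 machine-hours figure comes from the article; the product names and hour counts are made up for illustration.

# Minimal sketch of activity-based overhead allocation with machine-hours as the driver.
maintenance_cost = 500.0   # maintenance expense incurred per 1,000 hours of machine time
driver_volume = 1_000.0    # machine-hours over which that cost is incurred

rate_per_hour = maintenance_cost / driver_volume     # $0.50 of overhead per machine-hour

# Machine-hours consumed by each product during the period (illustrative numbers).
products = {"widget_a": 300, "widget_b": 700}

# Allocate the maintenance overhead to products in proportion to hours consumed.
allocated = {name: hours * rate_per_hour for name, hours in products.items()}
print(allocated)   # {'widget_a': 150.0, 'widget_b': 350.0}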
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9316443800926208, "language": "en", "url": "https://economistsview.typepad.com/economistsview/2016/10/what-is-the-new-normal-for-us-growth.html", "token_count": 2387, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.08349609375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4d5f9325-ceac-4107-afee-c048aeb884d2>" }
What Is the New Normal for U.S. Growth?

Economic growth during the recovery has been slower on average than its trend from before the Great Recession, prompting policymakers to ask if there is a "new normal" for U.S. GDP growth. This Economic Letter argues that the new normal pace for GDP growth, in real (inflation-adjusted) terms, might plausibly fall in the range of 1½ to 1¾%. This estimate is based on trends in demographics, education, and productivity. The aging and retirement of the baby boom generation is expected to hold down employment growth relative to population growth. Further, educational attainment has plateaued, reducing the contribution of labor quality to productivity growth. The slower forecast for overall GDP growth assumes that, apart from these effects, productivity growth is relatively normal, if modest—in line with its pace for most of the period since 1973.

Subdued growth in the labor force

In thinking about prospects for economic growth, it is necessary to distinguish between the labor force and the larger population. Both are expected to grow at a relatively subdued pace; however, because of the aging of the population, the labor force is likely to grow even more slowly than the overall population. Figure 1 shows that growth in the labor force has varied substantially over time and has often diverged from overall population growth. In the 1950s and 1960s, population (yellow line) grew more rapidly than the working-age population ages 15 to 64 (blue line) or the labor force (red line). In contrast, in the 1970s and 1980s, the labor force grew much more rapidly than the population as the baby boom generation reached working age and as female labor force participation rose. Those drivers of labor force growth largely subsided by the early 1990s. Since then, the labor force, working-age population, and overall population have all seen slower growth rates. Labor force participation fell sharply during the Great Recession, which held down labor force growth. But labor force growth has since rebounded to roughly the pace of the working-age population.

[Figure 1: Slowing growth in working-age population and labor force. Source: Bureau of Labor Statistics, Bureau of Economic Analysis, Census Bureau, Congressional Budget Office (labor force projections).]

Future labor force growth is likely to remain low for a couple of reasons. First, as shown in Figure 1, the population is now growing relatively slowly, and census projections expect that slow pace to continue. Second, these projections also suggest the working-age population will grow more slowly than the overall population, reflecting the aging of baby boomers. Of course, some of those older individuals will continue to work. Hence, the Congressional Budget Office (CBO) projects the labor force will grow about ½% per year (red dashed line) over the next decade—a little faster than the working-age population, but substantially slower than in the second half of the 20th century. I use their estimate as a basis for my assumption that hours worked will also grow at about ½% per year so that hours per worker do not change much.

Recent slow growth for productivity

Figure 2 shows growth in GDP per hour since 1947 broken into periods to reflect variation in productivity growth. This measure of productivity growth was very fast from 1947 to 1973 but much slower from 1973 to 1995. It returned to a fast pace from 1995 to 2004, but has slowed again since 2004. During the fast-growth periods, productivity growth averaged 2½ to 2¾%.
During the slower periods, growth was only 1 to 1¼% and dropped dramatically lower in 2010–2015 (Fernald 2016 discusses this period).

[Figure 2: Variation in productivity growth by trend period. Source: Bureau of Labor Statistics, Bureau of Economic Analysis.]

Figure 2 is consistent with the view that the history of productivity growth has shifted between normal periods and exceptional ones (Gordon 2016, Fernald 2015, and David and Wright 2003). Unusually influential innovations—such as the steam engine, electric dynamo, internal combustion engine, and microprocessor—typically lead to a host of complementary innovations that boost productivity growth broadly for a time. For example, productivity growth was exceptional before 1973, reflecting gains associated with such developments as electricity, the telephone, the internal combustion engine, and the Interstate Highway System (Fernald 1999). Those exceptional gains ran their course by the early 1970s, and productivity growth receded to a normal, modest pace. Starting around 1995, productivity growth was again exceptional for eight or nine years. Considerable research highlighted how businesses throughout the economy used information technology (IT) to transform what and how they produced. After 2004, the low-hanging fruit of IT had been plucked. Productivity growth returned to a more normal, modest, and incremental pace—similar to that in 1973–95.

The past and future of GDP growth

GDP growth is the sum of growth in worker hours and GDP per hour. The blue line in Figure 3 shows how GDP growth fluctuated on average for each period mentioned in Figure 2. Before 2005, GDP growth since World War II was typically 3 to 4%. The dashed lines in the figure show two projections for future GDP. The higher estimate assumes productivity growth will return to its 1973–95 pace in the long run, while hours grow at the ½% per year pace projected by the CBO. In this scenario, GDP growth would average about 1¾%.

[Figure 3: GDP scenarios with low labor force growth. Note: Annual percent change averaged over periods from Figure 2. Source: Bureau of Labor Statistics, Bureau of Economic Analysis, and author's calculations.]

But productivity growth could easily be lower than in the 1973–95 period for two main reasons. First, productivity has grown a little more slowly from 2004–15 than in the 1973–95 period—and much more slowly since 2010 (Figure 2). Second, and perhaps more importantly, future educational attainment will add less to productivity growth. In recent decades, educational attainment of younger individuals has plateaued. This reduces productivity growth via increases in labor quality, which measures the combined contribution of education and experience. Labor quality has added about 0.4 percentage points to annual productivity growth since 1973. However, by early next decade, labor quality will contribute only about 0.10 to 0.20 percentage points to annual productivity growth (Bosler et al. 2016). On its own, then, reduced labor quality growth suggests marking down productivity and GDP projections by at least two-tenths of a percentage point and possibly more. The lower dashed line in Figure 3 shows future GDP growth assuming that productivity growth net of labor quality grows at its 1973–95 pace, while labor quality grows at the slower pace of 0.2%. By this projection, GDP growth would be only a little above 1½%. At first glance, a pace of 1½ to 1¾% seems very low relative to history.
But the main reason for the slow pace is demographics: Growth in the 1973–95 period would have been equally slow had hours grown only ½% per year. The red line shows how fast GDP would have grown in that scenario, holding productivity growth at its actual historical pace by period but using the slower pace of growth for hours that the CBO expects in the future. For example, in the 1973–95 period, GDP grew at nearly a 3% pace. But if hours had grown only ½% per year, then GDP growth would have been about 1¾%. The major source of uncertainty about the future concerns productivity growth rather than demographics. Historically, changes in trend productivity growth have been unpredictable and large. Looking ahead, another wave of the IT revolution from machine learning and robots could boost productivity growth. Or, as Fernald and Jones (2014) suggest, the rise of China, India, and other countries as centers of frontier research might lead to more innovation. In such a case, as Fernald (2016) discusses, the forecast here could reflect an extended pause before the next wave of transformative productivity growth. But, until such a development occurs, the most likely outcome is a continuation of slow productivity growth. Once the economy recovers fully from the Great Recession, GDP growth is likely to be well below historical norms, plausibly in the range of 1½ to 1¾% per year. The preferred point estimate in Fernald (2016), who examines these issues in even more detail, is for 1.6% GDP growth. This forecast is consistent with productivity growth net of labor quality returning over the coming decade to its average pace from 1973–95, which is a bit faster than its pace since 2004. In the past we have seen long periods with comparably modest productivity growth. But we have not experienced such modest productivity growth combined with the types of changes in demographics and labor quality that researchers are expecting. This slower pace of growth has numerous implications. For workers, it means slow growth in average wages and living standards. For businesses, it implies relatively modest growth in sales. For policymakers, it suggests a low “speed limit” for the economy and relatively modest growth in tax revenue. It also suggests a lower equilibrium or neutral rate of interest (Williams 2016). Boosting productivity growth above this modest pace will depend primarily on whether the private sector can find new and improved ways of doing business. Still, policy changes may help. For example, policies to improve education and lifelong learning can help raise labor quality and, thereby, labor productivity. Improving infrastructure can complement private activities. Finally, providing more public funding for research and development can make new innovations more likely in the future (Jones and Williams, 1998). John Fernald is a senior research advisor in the Economic Research Department of the Federal Reserve Bank of San Francisco. Bosler, Canyon, Mary C. Daly, John G. Fernald, and Bart Hobijn. 2016. “The Outlook for U.S. Labor-Quality Growth.” FRB San Francisco Working Paper 2016-14. David, Paul, and Gavin Wright. 2003. “General Purpose Technologies and Productivity Surges: Historical Reflections on the Future of the ICT Revolution.” In The Economic Future in Historical Perspective, eds. Paul A. David and Mark Thomas. Oxford: Oxford University Press. Fernald, John G. 1999. “Roads to Prosperity? Assessing the Link between Public Capital and Productivity.” American Economic Review 89(3), pp. 619–638. 
Fernald, John G. 2016. “Reassessing Longer-Run U.S. Growth: How Low?” FRB San Francisco Working Paper 2016-18. Fernald, John G., and Charles I. Jones. 2014. “The Future of U.S. Economic Growth.” American Economic Review Papers and Proceedings 104(5, May), pp. 44–49. Gordon, Robert. 2016. The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War. Princeton, NJ: Princeton University Press. Jones, Charles I., and John C. Williams. 1998. “Measuring the Social Return to R&D.” Quarterly Journal of Economics 113(4), pp. 1119–1135. Williams, John C. 2016. “Monetary Policy in a Low R-star World.” FRBSF Economic Letter 2016-23 (August 15). Opinions expressed in FRBSF Economic Letter do not necessarily reflect the views of the management of the Federal Reserve Bank of San Francisco or of the Board of Governors of the Federal Reserve System.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9319860935211182, "language": "en", "url": "https://seattleducation.com/2016/10/20/washington-states-digital-promise-school-districts-part-1/", "token_count": 1032, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1494140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ee0a6560-febf-4531-96f2-6902dbc018ad>" }
Back in 2011, Congress created a non-profit which would allow education startups and software companies easier access to America's public schools. The initiative was called Digital Promise. From the "White House to Launch 'Digital Promise' Initiative" press release:

Transforming the market for learning technologies. With more than 14,000 school districts, and an outdated procurement system, it's difficult for entrepreneurs to break into the market, and it's also tough to prove that their products can deliver meaningful results. Meanwhile, the amount we invest in R&D in K-12 education is estimated at just 0.2% of total spending on K-12 education, compared to 10-20% of revenues spent on R&D in many knowledge-intensive industries such as software development and biotech. Digital Promise will work with school districts to create "smart demand" that drives private sector investment in innovation.

But how would the Digital Promise Initiative allow education hucksters – sorry, "entrepreneurs" – to break into the market and get around school districts' outdated procurement systems? Simple: create another layer of well-funded, unaccountable bureaucracy and then encourage individual superintendents to commit their districts to this system. No need to involve school boards or notify parents. This additional layer of bureaucracy is called the League of Innovative Schools. Here's part of the League of Innovative Schools Membership Charter:

The League is…
- A network of superintendents and district leaders leveraging technology to improve student outcomes
- A national coalition of public school districts partnering with entrepreneurs, researchers, and leading thinkers
- A testbed for new approaches to teaching and learning
- A representation of the diversity of public education in the U.S.

The League is action-oriented. League members:
- Collaborate with colleagues to enhance learning for ALL students
- Share successful strategies and adopt innovative teaching and learning practices
- Solve challenges facing K-12 schools through learning technology and education research
- Commit to equity of access to technology for all students

Upon joining the League, members commit to:
- Attend biannual League meetings, which feature classroom visits, collaborative problem-solving, and relationship-building with peers and partners
- Join working groups on a broad range of topics relevant to the changing needs of school districts
- Engage with entrepreneurs to advance product development and meet district needs
- Support research that expands what we know about teaching and learning
- Participate in the League's professional learning community by connecting with other members online, in person, and at each other's school districts

In short, the commitment outlined in the charter allows our public schools to be the testing ground for new education products and our kids to be the unpaid software testers. No permission needed. It also drops the pretense of public education being anything other than a talent and product development pipeline for corporate America. The League of Innovative Schools is a resource grab wrapped in the progressive jargon of innovation, 21st-century skills, and personalized learning for all. As you can imagine, everyone wants in on the action.
The philanthropic supporters of Digital Promise include The Gates Foundation, Carnegie Corporation of New York, Chevron, the Ewing Marion Kauffman Foundation, The Grable Foundation, The Joyce Foundation, The Michael and Susan Dell Foundation, The Overdeck Family Foundation, PriceWaterhouseCoopers, Startup:Education, Verizon, and The William and Flora Hewlett Foundation. If that list isn't business-friendly enough, here are the actual corporate sponsors:

[Image: Digital Promise corporate sponsor logos]

There's only one small problem with this masterpiece of technocratic subterfuge: getting people to buy the snake oil. Even tech-happy EdSurge admits to this weakness:

But in a recent study of 450 educators, including district leaders, school leaders, teachers, private businesses and other groups from 46 U.S. states, the District of Columbia and multiple foreign countries, it became clear there is one thing everyone could agree on: The biggest challenge to personalized learning is getting others to buy into it.

Education entrepreneurs have another unstated concern: the value of online teaching software is questionable at best. The success of the Digital Promise Initiative rests on districts quickly switching to online learning platforms before parents or school communities have a chance to question the benefit of this drastic change. Currently, Washington State has three Digital Promise school districts: Highline, Kent and Vancouver.

Parents, this is the time to speak up and make some noise. Ask the uncomfortable questions the education speculators don't want to answer. Teachers, what's happening in your schools? Are you being asked to incorporate personalized, online learning in your classroom? What are your stories? Please share them with us at [email protected]. Edupreneurs think they can use our public schools as product development laboratories, our kids as guinea pigs and our teachers as market research assistants. This is unacceptable. The time to push back is now.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9760788679122925, "language": "en", "url": "https://www.businessforscotland.com/oil-an-epic-example-of-westminsters-economic-mismanagement/", "token_count": 1474, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.27734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:dbb60b8f-b8d7-4710-82c0-23179e8448bb>" }
Norway and the UK have managed their oil and gas industries very differently, resulting in a substantial contrast of fortunes in their economies. A key argument against independence is the claim that Scotland would be too small a nation to maximise the benefit of its oil industry. However, when comparing the UK's and Norway's oil and gas production and the revenues generated from the industry, we can see that Norway (a smaller independent northern European nation) has generated £386bn more than the UK Government in tax revenues since the production of oil and gas began.

If Norway had produced significantly more oil and gas than the UK, this might be understandable. However, this is not the case: the UK has (to date) produced 1% more oil and gas than Norway overall. A comparative loss of £386bn in tax revenue amounts to a monstrous failure in resource governance by the UK Government. With comparable geologies, original production costs, oil grades, prices and resources, Norway – a country very similar to Scotland in population and geography – has collected a huge revenue windfall, whilst Scotland has not.

We have already explained the cumulative deficit, or the share of the rest of the UK's national debt that has been loaded onto Scotland's accounts. However, it is worth noting that even with debt loading and the Croydon Principle, Scotland's indicative deficit in GERS was lower than the UK's for 33 of the 38 years for which we have figures. So what changed? The price of oil crashed in 2015: Brent crude averaged $52/b in 2015 after having averaged $98/b in 2014. If the claims that Scotland was dependent on oil and gas were true, Scotland's economy would have crashed with it. However, Scotland's economy did not even enter a recession. Growth for the year was essentially flat at -0.04%, and the onshore economy actually grew 1.6%, according to the GDP figures in the 2015/16 GERS report, as the benefit of lower oil prices boosted the onshore economy. Hardly the armageddon predicted during the independence referendum if oil prices dropped, never mind crashed the way they did. Contrast that with the collapse of the financial markets in 2007, on which the UK economy was twice as dependent as Scotland was on oil. That led to six consecutive quarters of negative growth – in other words, a recession – from March 2008 to July 2009.

We now know that the UK Government's natural resource management track record is dire. Even so, the GERS reports showed a smaller deficit for Scotland than for the UK until the oil crash, and the economy stayed pretty much the same from 2014/15 to 2015/16. So why did oil and gas revenues drop to a deficit of £290m in 2016/17, before recovering to £1.25bn in 2018/19? The answer is that the UK Government stopped collecting tax from the big oil companies. In response to falling oil prices, the UK Government cut Petroleum Revenue Tax (PRT) in 2015 from 50% to 35%, and in 2016 it was further reduced to 0%. The supplementary charge was also cut from 62% to 50% to 10% over the same period. This meant that the UK earned less in tax revenue from its oil and gas industry. Despite oil prices rising again and stabilising at between $60 and $70, and increased production and cost-cutting helping to lower some UK production costs to around $15 per barrel, the zero PRT rate means that revenues will not increase in line with oil company profits.

This also means that the illustrative Scottish deficit figure will not shrink, as tax revenues are negligible. It has led to Shell, for example, receiving tax rebates from the UK Government of £105.48m in 2018. Those show up in Scotland's GERS report as a loss on North Sea operations of £105.48m. At the same time, Shell paid Norway $3,154m in taxes. That's good for Shell, a company that remained in profit during the oil price drop, recording profits of $3.84bn in 2015, then $3.5bn, $15.8bn and a massive $21.4bn in 2018. Shell also paid out the world's highest shareholder dividend in 2015. It was the oil workers of Aberdeen and the North Sea services companies (more often Scottish-owned) that took the hit when the price crashed; the UK Government protected the big oil companies and their shareholders. From their last sets of accounts, the profits declared by the four biggest oil majors operating in the North Sea were as follows: Shell $21.4bn; BP $12.7bn; Exxon $20.8bn; ConocoPhillips $6.3bn. The North Sea giants, Shell and BP, are now more profitable than they were before the oil price crash and have received billions in tax cuts and credits between them. Maybe it's now time to start phasing the taxes back in and investing the profits in renewable energy projects? This, of course, would return Scotland's finances to the default setting of "significantly better than the UK's".

So the UK Government's decision to significantly reduce tax on oil companies has had a major impact on Scotland's national accounts, leading them to show a larger fiscal deficit than the rest of the UK. Since 2015, £1.822bn has been lost from PRT alone. This support to large oil companies (whether necessary, advisable or otherwise) was managed through tax rebates, which have effectively wiped out Scotland's North Sea revenues. Around 60% of the cost of the PRT cut is deducted from Scotland's accounts in GERS, which cost Scotland around £340m in 2017/18. To make this clearer: when the UK Government lowers a set of revenues that is almost completely attributed to Scotland as a region of the UK, there is a major reduction in the tax revenues assigned to Scotland in GERS. On the other hand, if the UK Government had maintained tax levels but then offered grants from the Treasury retrospectively for decommissioning and exploration, only a population percentage (8.4%) of the cost of that grant support would have been deducted, and the illustrative deficit in GERS would have been smaller than the UK's – not larger. (A simple sketch of this attribution arithmetic appears at the end of this piece.)

The UK's oil wealth was shared on a population basis across the UK, meaning Scotland received approximately 9% of that wealth. However, the cost of decommissioning is being met solely by Scotland on a geographic basis. This starved Scotland of investment when it needed it most in the 1970s and 1980s, and now creates a false deficit which is used as the key argument against Scottish independence. If Scotland had kept a geographic share of its oil wealth, it would be £508 billion better off than GERS suggests it is now.
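Here is a minimal sketch of the attribution arithmetic described above. The 60% geographic share, the 8.4% population share, and the roughly £340m cost come from the article; the £566m total support figure is back-derived so that 60% of it matches £340m, and everything is deliberately simplified for illustration.

# Sketch: how the same amount of support to the oil sector shows up in Scotland's
# notional GERS accounts, depending on how it is delivered. Figures are illustrative.
support = 566.0           # £m of support (chosen so that 60% of it is ~£340m)
geographic_share = 0.60   # share of North Sea revenue attributed to Scotland
population_share = 0.084  # Scotland's share of the UK population

cost_if_tax_cut = support * geographic_share   # ~£340m added to Scotland's deficit
cost_if_grant = support * population_share     # ~£48m added to Scotland's deficit
print(round(cost_if_tax_cut), round(cost_if_grant))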
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9621511697769165, "language": "en", "url": "https://www.daviddarling.info/encyclopedia/S/St_Petersburg_Paradox.html", "token_count": 726, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.322265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:75fca6cd-673a-4d4a-ad65-43adfe6ae2d6>" }
St. Petersburg paradox

The St. Petersburg paradox is a strange state of affairs that arises from a game proposed, in 1713, by Nikolaus (I) Bernoulli. It is named after the fact that a treatise on the paradox was written by Nikolaus' cousin, Daniel, and published (1738) in the Commentaries of the Imperial Academy of Science of St. Petersburg.

The game goes as follows. You toss a coin. If it shows heads you get $2. Otherwise, if it shows tails, you toss again. If the coin now shows heads you get $4, and so on: whenever you toss tails the prize is doubled, so if heads appears for the first time on the nth toss you get $2^n. The only catch is that you have to pay to play the game. How much should you be willing to pay?

Classical decision theory says that you should be willing to pay any amount up to the expected prize, the value of which is obtained by multiplying all the possible prizes by the probability that they are obtained and adding the resulting numbers. The chance of winning $2 is 1/2 (heads on the first toss); the chance of winning $4 is 1/4 (tails followed by heads); the chance of winning $8 is 1/8 (tails followed by tails followed by heads); and so on. Since the expected payoff of each possible consequence is $1 ($2 × 1/2, $4 × 1/4, etc.) and there are an infinite number of them, the total expected payoff is an infinite sum of money. A rational gambler would enter a game if and only if the price of entry was less than the expected value. In the St. Petersburg game, any finite price of entry is smaller than the expected value of the game. Thus, the rational gambler would play no matter how large the entry price was!

But there's clearly something wrong with this. Most people would offer between $5 and $20, on the grounds that the chance of winning more than $4 is only 25% and the odds of winning a fortune are very small. And therein lies the paradox: if the expected payoff is infinite, why is no one willing to pay a huge amount to play? (A short simulation at the end of this entry makes the divergence concrete.)

The classical solution to this mystery, provided by Daniel Bernoulli and another Swiss mathematician, Gabriel Cramer, goes beyond probability theory to touch areas of psychology and economics. Bernoulli and Cramer pointed out that a given amount of money isn't always of the same use to its owner. For example, to a millionaire $1 is nothing, whereas to a beggar it can mean not going hungry. In a similar way, the utility of $2 million is not twice the utility of $1 million. Thus, the important quantity in the St. Petersburg game is the expected utility of the game (the utility of the prize multiplied by its probability), which is far less than the expected prize. This explanation forms the theoretical basis of the insurance business.

The existence of a utility function means that most people prefer, for example, having $98 in cash to gambling in a lottery where they could win $70 or $130, each with a chance of 50%, even though the lottery has the higher expected prize of $100. The difference of $2 is the premium most of us would be willing to pay for insurance. That many people pay for insurance to avoid any risk, yet at the same time spend money on lottery tickets in order to take a risk of a different kind, is another paradox, which is still waiting to be explained.
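The following minimal simulation sketch illustrates the paradox described above: the expected-value sum diverges, yet the average payout over any finite number of simulated plays stays modest, which is why intuition balks at a large entry fee. The second part computes Bernoulli's log-utility resolution; the code and the choice of log utility (Cramer himself used a square-root utility) are illustrative only.

import math
import random

def play_once():
    # One round of the game: the prize doubles for every tail before the first head.
    prize = 2
    while random.random() < 0.5:   # tail with probability 1/2
        prize *= 2
    return prize

n = 100_000
average_payout = sum(play_once() for _ in range(n)) / n
print(average_payout)   # typically a modest double-digit figure, despite infinite expectation

# Bernoulli-style expected log-utility: sum of log(2^k) / 2^k converges to 2*log(2) ~ 1.386.
expected_log_utility = sum(math.log(2 ** k) / 2 ** k for k in range(1, 60))
print(math.exp(expected_log_utility))   # certainty equivalent of roughly $4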
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9580740928649902, "language": "en", "url": "https://www.simonstapleton.com/wordpress/2018/06/29/9-smart-ways-teachers-can-increase-their-earning-potential/", "token_count": 1383, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.047119140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:fb767335-7b81-4d94-a320-add11fdabe47>" }
Estimated reading time: 5 mins Teaching is not known for being a lucrative career. There are numerous teachers who feel overworked, undervalued, stressed out and underpaid. It is not uncommon for teachers to start feeling as if they are stuck in dead-end jobs — but is this really the case? It’s possible that teachers may be overlooking some excellent opportunities to increase their incomes and advance their careers. Let’s explore 9 smart ways you can increase your earning potential as a teacher. 1. Invest More in Your Education There is an obvious and well-documented correlation between teachers’ educational achievements and their earnings, according to the National Center for Education Statistics. On average, NCES analysts estimate that public school teachers are able to increase their salaries by 11.31 percent if they hold a master’s degree; private school teachers are able to earn 8.20 percent higher salaries. The main takeaway: If you hold a certificate III in education support, the next step is to earn your bachelor’s degree. If you already hold a bachelor’s degree, consider earning your master’s degree or your Ph.D. 2. Teach Subjects That Are Known to Be More Lucrative When you teach college, some subjects offer more lucrative teaching opportunities than others do. Some of the highest paying topics to teach include law, engineering and economics. Some of the least lucrative subjects include criminal justice, social work and education. This might or might not hold true for public elementary, high school and middle school teachers. Some school districts don’t make any distinction for subject matter in their payment schedules, while some public school have adopted a market-based compensation model. Your choice of college majors can also influence the teaching salary you are able to negotiate, according to the NCES. On average, public schools pay 2.37 percent higher salaries to teachers who majored in mathematics; 1.63 percent more to teachers who majored in business; and 3.02 percent more to teachers who majored in specialized vocational subjects. 3. Teach at a More Prestigious Academic Institution There’s a significant gap between earnings for instructors who work at community colleges and professors who teach at highly respected, top-tier universities. Professors at public state community colleges, on average, earn median yearly salaries of $56,030. In contrast, university professors earn median yearly salaries of $79,340. This is a significant difference. This advice doesn’t, however, always hold true for elementary, middle and high school teachers. The private school system is sometimes seen as more prestigious, but private schools tend to pay teachers poorly in comparison to the public school system. 4. Advance to a Position in School Administration School administrators are almost always required to have teaching experience — so if you’ve already proven yourself in the classroom, school administration could be a viable, and much better paying, career path for you. To qualify, you’ll need to have at least a master’s degree, which you can obtain by returning to school during the summer months when you are not teaching. You could first become a vice principal, then get promoted to school principal, and then seek work as a school district administrator. 5. 
Gain Expertise in Educational Technology As schools and academic institutions shift their teaching methods to incorporate more online learning into their curriculum, there’s rising demand for educators who have experience in the latest technologies. Education technology expertise can open up new and better paying opportunities in numerous ways, whether you want to pursue work as a training and development specialist (yearly median pay of $60,360), an online learning specialist (yearly median pay of $61,102) an instructional coordinator (average yearly pay of $63,750) or eventually work in an administrative position (average yearly pay of $94,390 for school principals). 6. Move to an Area That Pays Teachers Better Salaries Location is a factor that influences teachers’ salaries dramatically. Study the salary data provided by Edweek.org and the Bureau of Labor Statistics to see if you could find another nearby school district that would pay you a higher salary for doing the same job. For example, simply moving from West Virginia to Virginia could possibly earn you an average estimated pay raise of $18,016, although it might also mean you’d have to absorb an increase in the cost of living if you end up living in one of the expensive northern Virginia suburbs. With any given move, you’d have to do your homework to determine whether you’d be gaining more than you’d lose in terms of earning power. In some cases, it might even make sense to consider taking a small pay cut to move out of an expensive city and into a less costly area. There are instances where such a move would enable you to shrink your expenses and increase your standard of living despite the smaller paycheck. 7. Tutor on the Side — Or Make It Your New Full-Time Business For motivated and entrepreneurial individuals, private tutoring can be much more lucrative than teaching is; and nobody is better qualified for the job than a teacher. Tutoring makes an outstanding side hustle for teachers who want to earn extra income during their summer vacations, evenings or weekends. 8. Line Up Public Speaking Engagements in Your Area of Expertise You can earn substantial amounts of income by booking public speaking engagements at conferences, conventions, trade shows, club meetings, professional organizations or other gatherings. It is not uncommon for engaging speakers to earn fees ranging from $1,000 – $10,000 US. There are speakers who make more than six figures from their speaking gigs every year — enough to quit their full-time teaching jobs to just pursue public speaking as a career. One caveat: Do be careful to faithfully adhere to your academic institution’s policies regarding conflicts of interest. 9. Start a Blog – Just Like Me! Blogging offers you numerous opportunities to earn extra income while sharing your subject matter expertise. There are many lucrative angles you could blog about — some of which relate to education and some of which do not. You could start a blog focused on education technology, homeschooling, traveling to your hometown or any of your hobbies or personal interests. There are many ways to make money from your blog including the possibility of charging brands to write sponsored posts about their products, getting paid a sales commission via affiliate marketing or earning advertising revenue. There are numerous other ways teachers could earn side income or increase their salaries, but these are 9 of the smartest and most obvious approaches an educator could use to earn more. 
I hope these ideas are useful to you as you take action to increase your earnings.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9810694456100464, "language": "en", "url": "https://www.washingtonpost.com/news/wonk/wp/2013/10/21/everything-you-need-to-know-about-jpmorgans-13-billion-settlement/?utm_term=.bffa22867135", "token_count": 3107, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.400390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:21587803-c8c9-4408-813d-5ce2b60b5c16>" }
What is JPMorgan Chase? It's the largest bank in the United States, with $2.4 trillion in assets. It is active in a wide variety of financial services businesses, including both what you might think of as ordinary consumer banking--taking deposits and offering car loans, mortgages, and credit cards--and more exotic Wall Street deal-making like helping large companies issue stocks or bonds. It has 255,000 employees, about the population of Orlando. Its history dates back to 1799, with the Bank of the Manhattan Co., founded by Aaron Burr (the guy who killed America's first treasury secretary in a duel), which helped finance the Erie Canal; that is one of more than 1,000 banks that have merged over the generations to become the colossus that is now JPMorgan Chase. The most important of those predecessor firms is J.P. Morgan & Co., founded by the great Gilded Age financier in 1871, which played a key role financing the American rail system, the Brooklyn Bridge, and the Panama Canal. Morgan was a titan of American finance, helping guide the young republic through a series of financial crises at a time there was no central bank. Think Tim Geithner circa 2008, but with less hair and a groovy mustache. J.P. Morgan merged with Chase Manhattan in 2000, leading to the current JPMorgan Chase. It is run by Jamie Dimon, who is arguably the most successful banker of his generation. Dimon also has better hair than J. Pierpont Morgan. So what is this settlement about? In the years before the 2008 crisis, large banks were in the business of "mortgage securitization." They would take home loans made by retail banks and mortgage brokers all over the country, and sell them to others. The government-sponsored mortgage finance companies Fannie Mae and Freddie Mac bought some of these mortgages. And the banks also packaged some of them into complex, privately issued "residential mortgage backed securities" that were bought by investors around the globe. But a lot of the loans that the banks sold were bad. Many were subprime, meaning to people with weak credit, small down payments, or both. Many more were "Alt-A", a category of loan quality a little better than subprime but worse than prime loans. The companies buying the mortgages knew that they were investing in lower-quality credit risks. What they may not have known is just how bad lending standards had become, that many of the people taking out mortgage loans didn't make as much money as they said they did, for example, or that there were other red flags to suggest that they wouldn't be able to handle their mortgages. As a result, the people who ended up owning the loans--both Fannie Mae and Freddie Mac, and the private investors who purchased mortgage backed securities--ended losing money as borrowers were unable to make their mortgage payments. What did this have to do with the financial crisis? It was losses on mortgage securities like those involved in this case that triggered a loss of confidence in the U.S. banking and financial system. Securities that had been rated AAA that were based on faulty underlying mortgages turned out to be junk, and losses on them prompted huge losses for big banks and other investors, causing a crisis of confidence in the global banking and financial system. That in turn necessitated a $700 billion bailout of the U.S. banking system, and a bailout of Fannie Mae and Freddie Mac that has totaled $188 billion. 
There were a lot of causes of the 2008 crisis, but the packaging of bad mortgages into mortgage backed securities was the Patient Zero. So what did JPMorgan Chase do wrong in all of this? They bought Bear Stearns. Okay, that's overstating it a bit. They also got into this mess by buying Washington Mutual. And JPMorgan was responsible for some of the alleged misdeeds on its own. Here's what we're talking about. One of the firms most heavily involved in this businesses of packaging and reselling subprime mortgage-backed securities was Bear Stearns, then the fifth-largest U.S. investment bank. One of the most active retail mortgage lenders was Washington Mutual. In March 2008, Bear Stearns was on the verge of failure. In a last-minute deal to prevent the firm from collapsing, facilitated with a $29 billion loan from the Federal Reserve, JPMorgan swooped in and bought the company. Later in 2008, it bought up Washington Mutual out of FDIC receivership. In the process, JPMorgan took on all of Bear Stearns' and Washington Mutual's outstanding legal exposures. By some estimates 70 to 80 percent of the dealmaking at the heart of the Justice Department settlement was by the acquired companies rather than the pre-2008 version of JPMorgan. But legally, that doesn't matter; the JPMorgan put itself on the hook for those misdeeds when it acquired the two firms. OK, but what are they accused of actually doing? There are two suits that have actually been filed that would be settled as part of the $13 billion Justice Department deal, one by the regulator of Fannie Mae and Freddie Mac, the other by the New York Attorney General. There are civil charges pending by the Justice Department that have not been filed, but presumably would be if the negotiations over a settlement break down. Here's the gist of the accusations: * Bear Stearns said it was doing "due diligence" to make sure the mortgages it was packaging were sound. But the process was shoddy. "Rather than carefully reviewing loans for compliance with underwriting guidelines, Defendants instead implemented and managed a fundamentally flawed due diligence process that often, and improperly, gave way to originator's demands," says the lawsuit by New York attorney general Eric Schneiderman. The workers who were supposed to be vetting the loans were pushed to process as many as possible and not to look at them very carefully. "Have 1594 loans to do in 5 days," wrote one team leader in an e-mail, according to the suit. "Sound like fun? NOT!" * Washington Mutual, Bear Stearns, and pre-2008 JPMorgan itself "were negligent in allowing into the Securitizations a substantial number of mortgage loans that, as reported to them by third-party due diligence firms, did not conform to the underwriting standards" that had been stated, and that those poorly vetted mortgages were then offloaded to Fannie Mae and Freddie Mac and, by extension, American taxpayers. The Federal Housing Finance Agency suit is full of details of the alleged wrongdoing, but here is the best example of inadequate due diligence. From the suit: "Fay Chapman, WaMu’s Chief Legal Officer from 1997 to 2007, relayed that, on one occasion, '[s]omeone in Florida made a second-mortgage loan to O.J. Simpson, and I just about blew my top, because there was this huge judgment against him from his wife’s parents.' When she asked how they could possibly close it, 'they said there was a letter in the file from O.J. 
Simpson saying ‘the judgment is no good, because I didn’t do it.'" In other words, the government has alleged that JPMorgan and the companies it later acquired were offloading bad mortgages on other parties (mortgage backed securities investors, and U.S. taxpayers) through their lax practices. The Justice Department was said to be on the verge of launching a new civil suit along the same lines before negotiations over a settlement heated up. So was JPMorgan the only firm doing this stuff? No! There have been similar charges against many other banks; the FHFA filed suit against 17 banks at the same time as the JPMorgan action mentioned above. Indeed, like JPMorgan, Bank of America has been particularly weighed down by legal exposure by firms it acquired during the crisis, in its case Countrywide Financial and Merrill Lynch. Ironically, the firms that kept their noses (relatively) clean in the pre-crisis years were the ones that were in strong enough financial position to pick off competitors as they failed in 2007 and 2008, and in the process exposed their own shareholders to enormous potential losses. So did the government force them to buy these companies that are now dragging them down? Did Treasury Secretary Hank Paulson and New York Fed President Tim Geithner and other federal officials encourage these emergency acquisitions? Absolutely. They even helped broker them in some cases by helping encourage communication among the parties, and in the case of Bear Stearns actively encouraged the deal by putting Fed money up to facilitate the transaction. But the government didn't have any ability to force anybody to buy these failed banks. That was evident when Lehman Brothers went bankrupt in September 2008, because they couldn't find anyone with the ability and will to buy it. Jamie Dimon knew when he was buying Bear Stearns and WaMu the risks his bank was taking on. He was advised by some of the most talented, and highly compensated, lawyers on earth. It may have turned out to be a bad bet, with more legal exposure than he and JPMorgan lawyers were expecting, but those are the judgments they are paid to make. This all happened a really long time ago. Whatever happened to the statute of limitations? There is only a six-year statue of limitations in federal law for securities and commodities fraud, tax crimes, or violations of securities laws. If those were the charges, then prosecutors would probably be out of luck, given that many of the bad mortgage securities were issued in the 2005 to 2007 time frame. But there's a different set of financial violations that carry a 10-year statute of limitations. Under legislation enacted in 1989 to help deal with the savings & loan crisis, prosecutors have a 10-year statute of limitations on crimes that involve defrauding banks. They are using that time now. (They would have a lot more time if the charge was major art theft, with a nice 20-year window for prosecutors to do their work). So is anyone going to jail? Maybe! In negotiations with the Justice Department over the settlement, Dimon has reportedly pushed for the terms to include absolving bank employees of criminal charges related to mortgage securitization being weighed by a U.S. attorney in Sacramento, California. The Justice Department, led by Attorney General Eric Holder, has reportedly rejected that possibility, and this will be solely a civil settlement. Any criminal charges that materialize from that investigation or others could still go forward. 
And under terms of the settlement, JPMorgan reportedly will agree to cooperate with the investigation. But why now, more than five years after the financial crisis and six or more years after the bad lending practices took place? The version of this that is generous to the Justice Department goes like this: It took a while after the crisis to figure out where legal culpability might lie. Once they zeroed in on mortgage securitization as a key area of potential fraud, it was a massive job to ascertain who might have broken which laws. They had to examine thousands of transactions worth trillions of dollars, by dozens of banks and other financial intermediaries. As much as we might want to believe this is a "Law and Order" world where the most complex of cases can be promptly tied up within an hour (less when time is allowed for commercials, introductory theme song, and final-scene-wistful-scotch-drinking), that's not how the law really works. Especially with complex securities litigation, it takes time to build these cases and ensure they are nailed down. The version that is less generous to the Justice Department is this: In the aftermath of the financial crisis, they were too timid and chicken to go after the big banks. But now some time has passed, the financial system is less on the brink, and new leadership is in charge at the criminal division. Eric Holder wants some legacy cases to show he has gone aggressively after those culpable for the financial crisis, and this will be one of those cases. This is going to be a $13 billion settlement. But how much is that for JPMorgan? It's a lot of money even for a bank the size of JPMorgan, though certainly nothing approaching a death blow. The bank earned $32 billion in operating income in 2012, so the settlement would be equivalent to about five months worth of income for the company. It is clear that JPMorgan lawyers had hoped for a much cheaper price for settling the cases; earlier settlement offers were as low a $1 billion and $3 billion. Put another way, the reserve that JPMorgan set aside for the settlement last quarter caused the company to record its first quarterly loss since 2004. It managed to remain profitable throughout the financial crisis, but not through the legal losses that followed. So who gets the money? Of the $13 billion, $9 billion is to go to fines that would ultimately end up in government coffers, essentially helping repay taxpayers in part for their $188 billion bailout of Fannie Mae and Freddie Mac that was necessitated in part because of bad mortgages the companies bought from JPMorgan. The other $4 billion is to go to help homeowners struggling with their mortgages. The exact contours of how that money will be used will be a matter of some focus once more details of the settlement materialize. You may notice who is not included in this list: The private investors who bought residential mortgage backed securities stuffed with bad loans. So are they going to admit wrongdoing? It looks that way! In recent civil settlements with financial firms, prosecutors have insisted that the firms cop to whatever bad behavior they stood accused of. A past practice was to not require any admission of guilt, which often eased the pathway to a settlement. Now, JPMorgan lawyers and federal prosecutors are reportedly hammering out a "statement of facts" in which the company will concede some misdeeds. So what does this mean for Jamie Dimon and JPMorgan? 
First things first: it doesn't resolve a number of unrelated legal matters the firm is facing, ranging from a probe of energy trading practices to investigations into its "London Whale" trading scandal to an investigation into whether the firm bribed Chinese officials by hiring their children. The company has said it will ramp up its hiring of compliance staff and spending on technology to try to prevent its sprawling, multi-trillion dollar business from having so many legal issues in the future. Still, JPMorgan shareholders appear to be relatively happy to have the legal exposure potentially behind them despite the record-high settlement. Its shares have bounced around between roughly $50 and $54 since word of a potential settlement. And when they last had the opportunity to voice their view of Dimon's performance, in a shareholder referendum this past spring, some 98 percent endorsed his continued leadership of JPMorgan. Is JPMorgan too big to manage? Its shareholders, from all appearances, don't think so, even with the $13 billion soon to be heading out the door to settle these old legal problems. Is there a song, preferably by The Clash, that characterizes JPMorgan's recent entanglements with the Justice Department?
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9080880880355835, "language": "en", "url": "http://blog.socratesk.com/", "token_count": 6990, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.020751953125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f3614d39-cbf7-4c8b-a15c-53a285de4809>" }
In this blog you will find evidence of my journey towards becoming a Data Scientist, including the challenges I faced and addressed, Data Science competitions, tools, and tips and tricks. Contact me at @ksocrates or email me.

Socrates, one of the greatest Greek philosophers, once said, "The unexamined life is not worth living." The quote can be adapted to machine learning as well: "The unexamined ML model is not worth production."

An important aspect of the predictive modeling pipeline is measuring the performance of the model developed, since that determines how well the model fits its purpose. Performance is measured by running the model on an unseen dataset and comparing the output with the actual results. No single metric can measure the performance of every kind of model; the techniques used for regression models cannot be applied to classification or clustering models. Let us look at the evaluation metrics used for linear regression and why they matter for a business problem.

a. MSE - Mean Squared Error (L2 loss)
Mean Squared Error (MSE), also known as Least Squares Error, is the simplest and most commonly used evaluation metric for linear regression. To compute MSE, take the difference between the actual and predicted values of each observation, square the differences, and then find the mean. MSE reflects both the variance and the bias of the predicted values. If outliers represent anomalies that are important for the business and should be detected, use MSE: it is sensitive to outliers, but it gives a more stable, closed-form solution. The unit of MSE is the square of the unit of the predicted value.

b. RMSE - Root Mean Squared Error
Root Mean Squared Error (RMSE) is the next simplest evaluation metric for linear regression. To compute RMSE, first compute MSE as stated above, then take the square root of the result. The square root brings the error back to the same scale, and therefore the same unit, as the predicted value. Since RMSE is the square root of MSE (a variance), it represents the standard deviation of the error. Note that any outliers present in the data will magnify the error term.

c. RMSLE - Root Mean Squared Logarithmic Error
Root Mean Squared Logarithmic Error is calculated by taking the log of the actual and predicted values before computing the error; after taking logs, the computation is the same as RMSE. RMSLE is used when overestimation (predictions above the actual values) is preferable to underestimation. Further, RMSLE considers only the relative error between actual and predicted values, not the scale of the error, so it is useful when both values are large numbers and huge absolute differences should not be penalized.

d. MAE - Mean Absolute Error (L1 loss)
Mean Absolute Error (MAE), also known as Least Absolute Deviation, is the average of the absolute differences between actual and predicted values. In other words, calculate the difference between the actual and predicted values of each observation, take the absolute values, sum them up, and determine the average. The MAE loss function tries to reduce the absolute differences between actual and predicted values. If outliers represent corrupted data, choose MAE as the loss function: unlike MSE, it is insensitive and more robust to outliers. All four metrics can be computed with scikit-learn, as in the sketch below.
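None of the post's original sklearn snippets survived extraction, so the following is a minimal sketch of how the four metrics might be computed. The toy arrays are hypothetical, and RMSE is derived by taking the square root of MSE.

import numpy as np
from sklearn.metrics import (mean_squared_error,
                             mean_squared_log_error,
                             mean_absolute_error)

# Hypothetical actual and predicted values
actual = np.array([3.0, 5.0, 7.5, 10.0])
predicted = np.array([2.5, 5.5, 7.0, 11.0])

mse = mean_squared_error(actual, predicted)                  # a. MSE (L2 loss)
rmse = np.sqrt(mse)                                          # b. RMSE, same unit as the target
rmsle = np.sqrt(mean_squared_log_error(actual, predicted))   # c. RMSLE (values must be non-negative)
mae = mean_absolute_error(actual, predicted)                 # d. MAE (L1 loss)

print(f"MSE: {mse:.4f}, RMSE: {rmse:.4f}, RMSLE: {rmsle:.4f}, MAE: {mae:.4f}")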
Cross-Validation (CV) is a model evaluation technique for estimating the performance of a machine learning model. After building a model, validating it on the training data only fetches the residual errors; one should never use the same data to measure the quality of a model. It must be validated against unseen data to arrive at the actual performance. The following three CV techniques are widely used in the industry:

1. Hold-out
2. K-Fold
3. Leave-One-Out (LOO)

Let us take a look at each of them in detail.

1. Hold-out: In this technique, the given dataset is split into train and test datasets at random, with a split ratio usually between 70:30 and 90:10. The model is trained on the train set and validated on the test set, and its performance is computed from the predicted outcomes and the actual values of the test dataset. Hold-out CV is widely used when the dataset is large. The drawback is that the randomness of the split may lead to overfitting: think of a scenario where the dataset is small and, after splitting, the train set contains people from one particular state or gender while the test set contains a different state or gender.

2. K-Fold: In this technique, the given dataset is split into an equal number (K) of folds; the split may be either random or stratified. The model is iteratively trained on K-1 folds and tested on the fold that was left out of training. Though there is no fixed formula, the number of folds usually varies from 5 to 10 depending on the size of the dataset.

For example, with 5 folds: in the first iteration, folds 1, 2, 3, and 4 are used for training a model (Model 1) and fold 5 is used for prediction. In the second iteration, folds 1, 2, 3, and 5 are used for training (Model 2) and fold 4 for prediction. The process is carried out iteratively for all 5 folds. From the predicted values and actual targets of each fold, the accuracy of each model is computed and then averaged for the mean accuracy.

In a random split, as the name implies, the dataset is split at random. The drawback with this approach is that the target classes may be imbalanced across folds: one or more folds may contain very few, or only one, of the target classes. A stratified split, by contrast, ensures the same distribution of target classes in each fold.

The advantage of K-Fold is that each data point is tested once as unseen data and participates K-1 times in training, which eliminates the randomness bias of the hold-out method. It works best for small and medium-size datasets. The main drawback is that training and testing time increases by K-1 times compared to hold-out; for instance, 5-fold CV takes at least 4 times longer. A minimal stratified K-Fold sketch follows below.
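The post's K-Fold snippet is likewise missing; a minimal scikit-learn sketch might look like the following. The iris data, logistic regression model, and K=5 are illustrative assumptions, not the author's original choices.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Stratified split keeps the target class distribution equal in each fold.
# For Leave-One-Out, swap in cv=LeaveOneOut() from sklearn.model_selection.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=skf)

print("Accuracy per fold:", scores)
print("Mean accuracy:", scores.mean())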
3. Leave-One-Out (LOO): Leave-One-Out is a special case of the K-Fold technique in which K becomes N. The dataset is split into N folds, where N is the total number of data points in the dataset. The model is trained N separate times using N-1 data points, and the prediction is made on the single left-out data point each time. This method works best for small datasets. The K-Fold sketch above can be reused for LOO by swapping the splitter for sklearn's LeaveOneOut; the original post's exact list of updates is lost.

One of the widely used techniques for identifying the important features in a given dataset is Backward Elimination, which is discussed in the post "How to identify the features that are important for a Machine Learning model?". With that technique, a model has to be developed each time the importance of the features is assessed, and the model must be retrained every time the least important feature is eliminated, until all insignificant features are removed. This is computationally intensive and time-consuming. Think of a situation where a dataset has hundreds of features and tens of thousands of observations and we want to identify the important ones.

Permutation Importance, or Mean Decrease Accuracy (MDA): In this technique, a model is generated only once to compute the importance of all the features. Because of this, the permutation importance algorithm is much faster than the other techniques, and more reliable. The following steps are involved behind the scenes:

1. A model is created with all the features in the dataset (this is the only model created).
2. The values in a single feature are randomly shuffled, and predictions are made using the resulting dataset.
3. The predicted values are compared with the actual values to compute the performance degradation due to the shuffling. (A feature is "important" if shuffling it decreases the performance, because the model relied on that feature for the prediction.)
4. The feature's values are unshuffled to bring it back to the original state.
5. Steps 2 to 4 are carried out with the next feature in the dataset, until the importance of all the features has been computed.

(Figure: random shuffle of the first feature.)

Let us take the sample dataset 50-Startups.csv, run a simple Random Forest regressor, and compute permutation importance; a sketch follows below.
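The snippet referenced here is missing from the extracted text; a sketch using the eli5 library, whose weight table matches the "±" format described below, might look like this. The column names (RDSpend, Administration, MarketingSpend, Profit) are assumed from the post's discussion of 50-Startups.csv.

import pandas as pd
import eli5
from eli5.sklearn import PermutationImportance
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Assumed columns: RDSpend, Administration, MarketingSpend, State, Profit
df = pd.read_csv("50-Startups.csv")
X = df[["RDSpend", "Administration", "MarketingSpend"]]  # numeric features only
y = df["Profit"]

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=1)

model = RandomForestRegressor(n_estimators=100, random_state=1).fit(X_train, y_train)

# Shuffle one column at a time on the validation set and measure the drop in score
perm = PermutationImportance(model, random_state=1).fit(X_val, y_val)
print(eli5.format_as_text(
    eli5.explain_weights(perm, feature_names=X_val.columns.tolist())))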
The sketch prints a table with the list of features and weights in descending order. The number before the ± denotes how much the model performance decreased when that particular feature alone was randomly shuffled; the number after the ± denotes how much the performance varied from one reshuffling to the next. In the post's results, the performance of the model degraded significantly when RDSpend was shuffled, making it the most important feature, and degraded to some extent when MarketingSpend was shuffled, making it the second most important. It is evident that RDSpend and MarketingSpend have the most influence in determining the profit of a startup company. Shuffling the other features has very little impact on performance, and hence they can be eliminated.

Note: It is highly recommended to use the permutation importance technique only after eliminating highly correlated features, because shuffling one correlated feature may not reveal its importance while the model still has access to a similar feature. If all of the correlated features are dropped from the dataset for their apparently low importance, the final model may perform badly even though one of them was highly significant.

If a computer vision (CV) application deals with detecting or tracking a specific object, it is necessary to determine the range of HSV (Hue, Saturation, Value) or RGB (Red, Green, Blue) values of that object. This range must be specified in the code to detect the object; if the correct range is not specified, the CV algorithm may pick up noise as well as the actual object, leading to false detection and tracking. In the OpenCV snippet sketched at the end of this post, a tennis ball is detected and tracked as it moves in front of a webcam; to identify the ball alone, and no other objects or noise, a correct range of HSV numbers must be supplied.

The exact HSV or RGB range can be determined programmatically using OpenCV. Grab the Python script ColorPicker.py from my Git repository (the source code is taken from Adrian Rosebrock's repository), copy it to a local machine, and issue one of the commands below in a command-line interface, based on your requirement:

To determine an HSV range from an image: python ColorPicker.py --filter HSV --image /path/image.png
To determine an RGB range from an image: python ColorPicker.py --filter RGB --image /path/image.png
To determine an HSV range from webcam video: python ColorPicker.py --filter HSV --webcam
To determine an RGB range from webcam video: python ColorPicker.py --filter RGB --webcam

The script launches three windows:
Original: shows the original video/image
Trackbars: shows sliders to adjust the HSV/RGB min and max range
Thresh: shows the video/image thresholded to the selected HSV/RGB range

Adjust the min and max sliders in the Trackbars window until only the desired object appears in white in the Thresh window. Take the corresponding HSV/RGB range and use it in your detection code, as in the sketch below.
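The ball-detection snippet mentioned at the start of this post did not survive extraction; a minimal sketch of the HSV masking step might look like the following. The HSV bounds are placeholders, since they are precisely what ColorPicker.py is meant to determine.

import cv2
import numpy as np

# Placeholder HSV range for a tennis ball; find the real values with ColorPicker.py
lower_hsv = np.array([29, 86, 6])
upper_hsv = np.array([64, 255, 255])

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)  # keep only the chosen colour range
    mask = cv2.erode(mask, None, iterations=2)     # remove small noise blobs
    mask = cv2.dilate(mask, None, iterations=2)
    cv2.imshow("Original", frame)
    cv2.imshow("Thresh", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()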
To improve the performance of a machine learning model, one of the aspects that data scientists focus on is tuning and fine-tuning the hyper-parameters of the model, besides working on feature handling and model ensembling. Parameter tuning plays a vital role in achieving higher accuracy. In the worked example (shared as a PDF in the original post), the initial accuracy of an XGBoost model is 73.26% with random parameters; after tuning 6 different parameters, the accuracy increased by 1.16% to 74.42%. Though the increase in accuracy is marginal, owing to the very small dataset, the example shows how one can tune hyper-parameters using GridSearchCV and improve performance.

To improve the performance of a machine learning model, feature engineering, feature extraction, and feature selection are the other important aspects, besides model ensembling and parameter tuning. Data scientists spend most of their time working on features rather than on developing models, and this post, which contains examples and corresponding Python code, is aimed at reducing the time spent handling features.

Feature engineering is more of an art than a science and requires domain expertise. It is a process through which new features are created, even though the original dataset could have been used as such. The new features help arrive at a better model than one trained from the original dataset alone; most of the time, the new features help improve the accuracy of the model and help minimize the cost function. In feature extraction, the existing features are converted and/or transformed from their raw form into more useful representations so that the ML algorithm can handle them better.

Let us dive into some of the frequently used feature engineering techniques for categorical features, which are widely adopted across the industry (a short sketch of the first few encodings appears at the end of this post):

A. Categorical features:
One-hot encoding: represent each categorical variable as a binary vector
Label encoding: assign each categorical variable (label) a unique numerical ID
Label count encoding: replace categorical variables (labels) with their total count
Label rank encoding: rank categorical variables (labels) by their count (more count, higher number)
Target encoding: encode categorical variables by their ratio of the target (label)
NaN encoding: assign an explicit encoding to NaN values in a category
Expansion encoding: create multiple categorical variables from a single variable
Consolidation encoding: map different categorical variables to the same variable

Performing feature engineering on numerical data is different from that on categorical data. With numerical data, the techniques involve rounding, binning, scaling, missing-value imputation, interactions between features, and so on.

Feature selection helps identify the most significant features in a given dataset, the ones that will be helpful in generating a better model. Besides the raw dataset, a data scientist should examine the engineered and extracted features as well to identify their importance. For example, to predict whether a startup company will be profitable or not, administrative expense may not be a significant feature compared with marketing and R&D expenses. To determine this, follow the article "How to identify the features that are important for a Machine Learning model?", which explains how features can be selected statistically and through an ML model.
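As a hedged illustration of the first three categorical encodings (not code from the original post), one might write:

import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Hypothetical categorical column
df = pd.DataFrame({"city": ["NY", "SF", "NY", "LA", "SF", "NY"]})

# One-hot encoding: one binary column per label
one_hot = pd.get_dummies(df["city"], prefix="city")

# Label encoding: a unique numerical ID per label
df["city_label"] = LabelEncoder().fit_transform(df["city"])

# Label count encoding: replace each label with its total count
df["city_count"] = df["city"].map(df["city"].value_counts())

print(pd.concat([df, one_hot], axis=1))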
Human vision is one of the most complex functions of the brain. Computer vision (CV), a sub-discipline of artificial intelligence (AI), attempts to replicate some visual functions of the brain, one of them being the ability to recognize faces; the first step is to detect the face. In the demo video feed, I used the OpenCV library to detect the faces and eyes of myself and of the pictures behind me. Face detection in video has a wide range of applications. In video surveillance, for example, automatic face detection can flag someone entering the camera's view and send out an alert, instead of having a person constantly watching the video for human activity. I used Heroku, a cloud-based Platform-as-a-Service (PaaS) provider that enables developers to build, run, and operate applications instantly, and the code base used to develop the app can be found in my GitHub location. If you would like step-by-step details of how to create the app, load the files from GitHub, and build and deploy it on Heroku, leave a note in the comments section of the post and I will get back to you as soon as possible.

In one of the recent meet-ups, I was asked: which is more important for generating a good machine learning model, a good data scientist or data? That is an interesting question, right? A data scientist can be hired, trained, or outsourced by any enterprise at any time, but how about the data? The data can only be captured and collected by that enterprise alone, through its core business processes, over a period of time. Data collection takes time; it requires infrastructure and software components to be in place. Of course, external sources, either publicly available or third-party data, can be leveraged as supplemental sources for improving model efficiency, but the core secret sauce, the data itself, has to come from the enterprise.

The next challenge is to identify whether the captured data is good or bad. In other words, are all the captured features important for generating a good machine learning model? Good domain knowledge may help answer this question partially, but how can it be identified and proven mathematically? This challenge can be approached either using statistical methods or using machine learning models themselves.

Statistical method: Here we do not create an actual machine learning model using any algorithm; we use the given dataset to analyze how the features are correlated with each other. Chi-squared and adjusted R-squared are the two most commonly used metrics. Though there are many methods, viz. all-in, backward elimination, forward selection, bi-directional elimination, and score comparison, backward elimination (a stepwise-regression technique) is the most widely used in industry. It can be performed using Gretl, an open-source statistical software package provided by SourceForge; the Gretl User Guide is a good resource to start with for this exercise. Follow the steps below to perform backward elimination (a rough Python equivalent appears at the end of this section):

1. Select a significance level, SL (say, 0.05).
2. Fit a model using Gretl with all available features (predictors). The dataset used for this analysis, 50-Startups.csv, can be found here.
3. Identify the feature with the highest P-value.
4. If the P-value of that feature is higher than SL, remove the feature and refit the model with the remaining features.

Note: Even if there are multiple features whose P-values are higher than the selected SL, remove only the one with the highest P-value and fit the model again, because the removal of one feature changes the constants, coefficients, and P-values of the other features. Further, as we selected an arbitrary value for SL (0.05), it is necessary to compare the models before and after removing the selected feature on adjusted R-squared (or chi-squared) as well.

Repeat steps 3 and 4 to the point where the selected features yield the highest adjusted R-squared and/or the P-values of all remaining features are below the selected SL. The features remaining at that point are the ones that are important for building a good machine learning model.

In the post's results (shown in a picture in the original), Model 4 contains only features whose P-values are below 0.05, but its adjusted R-squared is lower than that of Model 3, which retains one feature with a P-value just above the chosen SL (0.06 > 0.05). In spite of that feature, Model 3, and thereby its feature set, is selected as the best one based on its highest adjusted R-squared. From Model 3, the important features required for generating a model that can predict the target are RDSpend and MarketingSpend.
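The post performs these steps in Gretl; a rough Python equivalent using statsmodels (an assumption on my part, not the author's tooling) could look like this:

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("50-Startups.csv")  # column names assumed from the post
X = df[["RDSpend", "Administration", "MarketingSpend"]]
y = df["Profit"]

SL = 0.05
features = list(X.columns)
while features:
    model = sm.OLS(y, sm.add_constant(X[features])).fit()
    pvalues = model.pvalues.drop("const")
    worst = pvalues.idxmax()
    if pvalues[worst] <= SL:
        break
    features.remove(worst)  # eliminate the least significant feature and refit
    # In practice, also compare adjusted R-squared before and after each removal.

print("Selected features:", features)
print("Adjusted R-squared:", model.rsquared_adj)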
Machine learning model method: Here we create an actual machine learning model using one of the algorithms that output an importance matrix as part of model generation. This matrix provides details about each feature in the dataset and its percentage of importance in generating the model. Let us take a simple Random Forest regressor to arrive at the important features, using the same dataset (50-Startups.csv) we used for the statistical method; a sketch follows below.
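A minimal sketch, assuming the same column names as before:

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("50-Startups.csv")  # column names assumed from the post
X = df[["RDSpend", "Administration", "MarketingSpend"]]
y = df["Profit"]

model = RandomForestRegressor(n_estimators=100, random_state=1).fit(X, y)

# Percentage of importance of each feature, in descending order
importance = pd.Series(model.feature_importances_ * 100, index=X.columns)
print(importance.sort_values(ascending=False))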
The sketch prints a table with the percentage of importance of each feature in descending order, which clearly indicates that RDSpend and MarketingSpend are the two features that are most important for generating a model that can predict the target. So, from both the statistical and the machine learning methods, it is evident that the RDSpend and MarketingSpend features are the ones required to determine the profit of a startup company. The other features are not significant enough to be included in a model, and hence they can be rejected. Further, the selected features can be used as a feedback mechanism to the business processes that capture data, or to the process that aggregates data from different data stores for model generation. This drastically reduces the number of features that need to be captured for model generation and for real-time prediction.

To realize the true benefit of a machine learning model, it has to be deployed onto a production environment and start predicting outcomes for a business problem. Most data scientists know how to extract data from multiple sources; combine, clean, and consolidate data; perform feature engineering; extract features; train multiple models; and ensemble, validate, and test them. What many lack is how to take a trained model to production. There are multiple ways to deploy a model, but in this post we will go through the step-by-step process of creating a basic model and deploying it as a web app using the Flask micro-framework, a Python-based toolkit. These steps are executed on the Windows operating system, but they should work seamlessly on Linux, Ubuntu, and other operating systems with the relevant syntax.

First, build a simple machine learning model using the iris dataset that is bundled with the sklearn package. This can be done using Jupyter Notebook, PyCharm, PyTorch, or any other IDE you are comfortable with. Persist the trained model to disk as SVMModel.pckl.

Next, in a Windows command prompt, execute the command below to install the Flask framework and its associated dependencies and libraries:

pip install flask gevent requests pillow

Create a folder structure that can later be extended to a production-like, interactive, real-time application: the root folder flask-blog contains the server start-up class, and the sub-folder templates contains static and dynamic HTML files.

Under the flask-blog folder, create a file called server.py (a reconstructed sketch is given at the end of this post). After the file is created, go to the flask-blog folder, open a command prompt, and run the command python server.py. After the server has started successfully, open a browser window and enter the URL http://127.0.0.1:5000/. If you get the message "Hi, Welcome to Flask!!" in your browser, congratulations, your Flask server is up and running successfully! If you get an error or cannot get the server running, leave a note under the comments section of this blog and I will get back to you as early as possible.

Having started the server successfully, extend server.py to predict a new observation using the previously trained and stored SVM model. Go to the command prompt and stop the Flask server using Ctrl-C, update server.py to add a /predict route (see the sketch below), then start the server again with python server.py. Once the server is started, open a browser window and enter the URL:

http://127.0.0.1:5000/predict?sepal_length=6.0&sepal_width=2.5&petal_length=5.5&petal_width=1.6

Voila! The predicted class of iris will appear on the screen. Play around by changing the values of the features in the URL query string. You now understand how a machine learning model can be created, persisted onto disk, loaded from disk, fed features extracted from a browser request, and used to predict the class from those features. The application can be extended with a fancier UI containing form elements, dropdown boxes, a submit button, and so on.
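The server.py listings themselves are missing from the extracted text; here is a reconstructed sketch consistent with the pickle filename, welcome message, and URL shown in the post. The original file may well differ.

# server.py -- a minimal sketch; route and feature names follow the post's URL example
import pickle

from flask import Flask, request

app = Flask(__name__)

# Load the model that was trained and persisted earlier
with open("SVMModel.pckl", "rb") as f:
    model = pickle.load(f)

@app.route("/")
def home():
    return "Hi, Welcome to Flask!!"

@app.route("/predict")
def predict():
    # Extract the four iris features from the query string
    features = [[
        float(request.args["sepal_length"]),
        float(request.args["sepal_width"]),
        float(request.args["petal_length"]),
        float(request.args["petal_width"]),
    ]]
    prediction = model.predict(features)[0]
    return f"Predicted iris class: {prediction}"

if __name__ == "__main__":
    app.run(debug=True)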
Most MOOCs, online courses, tutorials, and webinars talk about how to generate better, more robust, efficient, and generic models to address business problems, but not about how models deteriorate over a period of time and how to upkeep them so that they continue to deliver their purpose. Besides analyzing data and generating models, a data scientist's role extends to assessing and maintaining them after deployment to production.

There are many reasons a model may deteriorate, and it may start deteriorating slowly in 3, 6, 12, or 18 months depending upon the factors involved and the business problem it addresses. Since there is no fixed period or template to follow, it is highly recommended to assess the model once every 3 months, or at least once every 6 months.

Why do models deteriorate? Consider a model developed to segment the customers of an insurance company. This model may deteriorate due to one or more of the reasons stated below:

a. Added factors not considered originally: the company expanded its operations to another country or added one or more product lines.
b. Changes in customer behavior: customers (especially millennials) expect instant insurance quotes rather than quotes emailed to them.
c. Changes in business processes: the company moved from an agent-based system to an online system.
d. Changes in existing factors: the minimum wage of customers changed, but the salary feature in the model remains the same.
e. Changes in competition: other competitors offer more products or process claims at a faster rate, which the model does not account for.
f. Changes in the industry: mergers and acquisitions of similar companies, or new start-ups that process quotes and claims through AI.
g. Changes in regulations: new and/or updated government regulations, e.g. mandating AML/KYC for baby-boomer and Gen-X customers.
h. Changes in products: changes in the premium rates or coverage of insurance products that make customers change their coverage plans.
i. Changes in the dataset not considered originally: changes in discount codes, or the addition of new discount codes, for insurance premium calculation.

How do you maintain models? The following hierarchical process can be employed:

1. Assess: Assess the models periodically and proactively with new datasets and compare their performance against the original measures. Even if the performance deteriorates but is still within an acceptable threshold, it should still be OK. In any case, this assessment has to be carried out at least once every 6 months.

2. Retrain: If the above assessment falls below the threshold, retrain the model with a fresh sample of data, sometimes with a larger number of observations. During retraining, keep all the original and derived features of the original model; the fresh and added data may lead to a change in coefficients, with the model performing better.

3. Rebuild: If the performance of the model does not improve in spite of retraining, scrap the original model entirely and start from scratch. This means analyzing new and old features, imputing missing values, one-hot encoding and label encoding features, performing feature engineering, building diversified models, and ensembling them. Finally, deploy the new model into production and perform A/B testing (aka champion-challenger testing) to measure its performance.

I was working on a binary classification challenge for which I had to compute performance metrics for all the predictive models. Using the XGBoost, H2O, GBM, and MLR packages, I developed 5 models for which AUC (via ROCR) had to be computed. The libraries were loaded in the script in a particular order. For one of the models (GLMNet), I used the code below to predict the target feature:

glmNetPred <- predict(glmNetModel$glmnet.fit, ...)

After prediction, I ran the code below to compute the ROCR prediction object, and it executed successfully:

ROCRpred <- prediction(glmNetPred, testSetActual)

But when I executed the next line to compute the area under the curve (AUC),

AUC <- as.numeric(performance(ROCRpred, "auc")@y.values)

it gave me the following error:

Error in performance(ROCRpred, "auc") : Assertion on 'pred' failed: Must have class 'Prediction', but has class 'prediction'.

Now what? I searched the net for a solution but did not find any. On further analysis, I found that the ROCRpred object was created using the ROCR package's prediction function, but supplied to the performance function of the mlr package, which expects an object of type Prediction. A careful scan of the logs from when the packages were loaded proved the same:

Attaching package: 'mlr'
The following object is masked from 'package:ROCR': performance

What this means is that both the mlr and ROCR packages contain a performance function with the same name but different signatures: the performance function in the mlr package expects a parameter of type Prediction, whereas the same function in the ROCR package expects it to be of type prediction, and hence the error!

There are two ways to solve this issue:

a. Supply the package name explicitly while calling the function (for example, calling ROCR's performance with its namespace prefix). With this approach the package name has to be specified in each and every model file, which may reduce code readability; further, there is a chance of missing it in some places, leading to undesired results.

b. Control the order in which the packages are loaded, so that the package whose performance function you need is attached last and masks the other (or detach the conflicting package before the call).

Want to learn how to develop a googleVis motion chart and upload it to your blog or website using R? I downloaded the bike-sharing dataset from Kaggle's competition site, performed some data munging and grouping, and prepared a googleVis motion chart. On the X-axis, choose the 'Time' variable instead of the default 'Period' to match the slider below; you may slow down the animation by dragging the arrow as indicated. Note: this chart will not be visible on some mobile devices due to Flash incompatibility. Please use your regular computer to see this chart.

Before learning any tools and technologies, it is beneficial to understand their underlying architecture. The visual below represents my understanding of the SparkR package provided by Apache for handling big data in the data science field.
Though MLlib in SparkR has a limited number of algorithms, new algorithms are being adopted at a fast rate.

This is another attempt of mine to prepare an interactive histogram using Shiny apps. I have embedded both the server-side and UI-side code that produced this app on the main page itself. Take a look at it there or directly below.

Most of us know that R has good prediction algorithms and rich visual representation libraries available. I was wondering whether we could build an interactive web application using R and host it online. The Shiny library, a web application framework for R, came in handy to achieve this. Using Shiny, I developed a very simple interactive web application to predict the mileage of a car based on user inputs. To predict that, I developed a linear regression model based on the mtcars dataset, provided by Motor Trend magazine for various car makes and models, and finally hosted the application using Shinyapps. To estimate the mileage of your car, feel free to use the interactive Shiny application hosted on my Shiny site or directly below.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9589638710021973, "language": "en", "url": "http://economicsessays.com/purposes-of-different-types-of-organisation-economics-essay-2/", "token_count": 3695, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2412109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:330c2cf2-7bf8-4d2c-94d2-7d1b27ee3995>" }
The sole trader is a business owned by one person who is self-employed and who may, in some cases, employ other people on a full- or part-time basis. Normally using personal funds to start the business, the sole trader decides on the type of goods or services to be produced, where the business is to be located, what capital is required, what staff (if any) to employ, and what the target market should be. In Britain about eighty per cent of all businesses are sole traders; the reason for this predominance is the relative ease with which an individual can establish a business of this type. Examples include builders, small shops, and independent agents.

A partnership exists when two or more individuals establish a business which they own. The partners have unlimited personal liability, both jointly and severally, while the liability of limited partners is limited to their investment in the partnership. Under the law, partnerships are limited to 20 or fewer partners. Partnerships usually have written contracts between the partners, though this is not strictly necessary; such a contract states the type of partnership, how much capital each party has contributed, and how profits and losses will be shared. Typical examples of partnerships are doctors, dentists, and solicitors. They can benefit from shared expertise but, like the sole trader, have unlimited liability.

Limited companies are companies registered at Companies House (www.companieshouse.gov.uk). A limited company is a legal entity, a legal person, with its own legal rights and obligations, separate and distinct from those of its members. All property registered to the company belongs to the company and is not treated as belonging to the company's shareholders and directors. The benefit of a limited company is that it offers limited liability to its members: the company, as a separate legal entity, is liable for its debts, and the members and directors are not personally liable unless they have acted wrongly in some way.

There are two types of limited company: public limited companies (PLCs) and private limited companies (Limited, Ltd). The vast majority of trading companies are private companies limited by shares, and many private companies are very small. There is no minimum capital requirement for a private company, and its capital is commonly less than £100. A private company may not offer shares to the public; examples include shops, pubs, and construction companies. A PLC is appropriate for larger businesses where shares are intended to be available to the general public. A public company must have a minimum share capital of £50,000, of which at least one quarter, plus any share premium, must be paid up before the company can obtain its trading certificate from Companies House and start trading. This is the only type of company which may raise capital by offering shares to the public; examples include supermarket chains, delivery companies, and airlines.

Consumer co-operative societies are organisations owned by consumers whose aim is to fulfil the needs and aspirations of their members. They operate in the market system independently of the state, as mutual-aid organisations oriented to service rather than profit. Consumer co-operatives often take the form of retail outlets owned and operated by their consumers, such as food co-operatives, and they also operate in health care, insurance, housing, utilities, and personal finance.

Workers' co-operatives are organisations in which ownership and control of the assets are in the hands of the people who work in them.
They have the objectives of creating and maintaining sustainable jobs and generating wealth, improving the quality of life of the worker-members, dignifying human work, allowing workers democratic self-management, and promoting community and local development. The main principles of these organisations, namely democracy, open membership, social responsibility, mutual co-operation, and trust, help differentiate co-operatives from other forms of business organisation.

Public corporations are legal entities created by government to undertake commercial activities on behalf of an owner government. In the public sector the state owns assets in various forms, which it uses to provide a range of goods and services felt to be of benefit to its citizens. These state corporations are an important part of the public sector of the economy, and they contribute significantly to national output, employment, and investment. Examples include hospitals, municipal water companies, and rail services.

Describe the extent to which an organisation meets the objectives of different stakeholders.

The main objective of a for-profit organisation is to make profit. Aims and objectives establish where the business would like to be in the future, helping it control its plans, motivate staff, and give everyone a sense of direction. Any decision made within the organisation should be in line with its aims and objectives. Objectives are influenced by various stakeholders, as well as by the nature of the business, and different stakeholder groups will have different objectives that satisfy their interests. Objectives can be corporate, affecting the whole business; departmental, covering a certain area of the business; or individual, used in performance appraisals of employees.

Employees: wage levels; working conditions; job security; personal development.
Managers: job security; status; personal power; organisational profitability; growth of the organisation.
Shareholders: market value of the investment; dividends; security of the investment; liquidity of the investment.
Creditors: security of the loan; interest on the loan; liquidity of the investment.
Suppliers: security of contracts; regular payment; growth of the organisation; market development.
Society: safe products; environmental sensitivity; equal opportunities; avoidance of discrimination.

Explain the responsibilities of an organisation and the strategies employed to meet them.

Every company, business, and department has a duty and remit to provide a service, and an organisation must operate within the boundaries of the law. Reputation and trust are everything: a consumer cannot have trust or faith in your ability to deliver if you cannot prove and guarantee your legitimacy. An organisation must also maintain strict financial control. Recruitment is vitally important: organisations need reliable workers who have enthusiasm but also intelligence, workers who are able to be creative but also to take advice and critique from management. Organisations are also responsible for the health and safety of their employees, and need to provide a safe working environment and equipment.

Explain how economic systems attempt to allocate resources effectively.

There are three kinds of economic system commonly adopted by different countries: the free market, the centrally planned economy, and the mixed economy.

Free market economic system: Government intervention is kept at a minimum level or absent; all economic resources belong to the private sector and are allocated through the market.
The price mechanism determines how much of a good or service will be supplied according to market demand, and most decisions are based on the market mechanism: supply, demand, and ability to pay play the vital roles in market decision-making. Looking at the free market system, however, raises various unresolved questions, such as who will produce the goods, services, and infrastructure needed to meet the needs of every member of the public. The UK is an example of a largely free-market economy.

Centrally planned economic system: The government allocates the economic resources and makes all the plans regarding economic activity; private sectors are kept far away from any involvement in economic accumulation. These kinds of economies were once found in Asian, Central European, Eastern European, and Latin American nations, but are now found in countries such as Cuba, Iraq, Iran, and North Korea. In these systems, unemployment problems are rarely faced, since the government plans all economic activity and allocates resources based on the needs of its people and the inputs of different industries.

Mixed economic system: This system is a mixture of the other systems; where both capitalist and socialist elements are included, the economic system is known as mixed. A mixed economy splits the available economic resources between the private sector and the government. Private firms are encouraged to get involved and participate in utilizing the resources, which helps generate economic gains for the whole nation. Countries from the USA, the UK, Russia, and China to Cambodia, Peru, and Vietnam have adopted this economic system. When one sector fails to meet the public's needs, the other can step in, which helps maintain the economic balance, not only within a particular country but across nations.

Assess the impact of fiscal and monetary policy on business organisations and their activities.

Fiscal policy decisions have a widespread effect on the everyday decisions and behaviour of individual households and businesses. Basically, fiscal policy means how government taxes us and how it spends the money. Increased taxation makes goods and services more expensive, reducing demand for them and reducing employment. Lower taxes mean more disposable income for consumers and more cash for businesses to invest in jobs and equipment. Stimulus-spending programmes, which are short-term in nature and often involve infrastructure projects, can also help drive business demand by creating short-term jobs. Raising income or consumption taxes usually means less disposable income, which, over time, can decelerate business activity.

Monetary policy works through interest rates: changes in short-term interest rates influence long-term rates, such as mortgage rates. Low interest rates mean lower interest expense for businesses and higher disposable income for consumers; this combination means higher business profits. Lower mortgage rates may spur more home-buying activity, which is usually good for the construction industry, and more refinancing of existing mortgages, which may also enable consumers to consider other purchases. High interest rates can have the opposite impact on businesses: higher interest expenses, lower sales, and lower profits. Interest-rate changes can also affect stock prices, which can impact consumer spending.

Evaluate the impact of competition policy and other regulatory mechanisms on the activities of a selected organisation: Apple Inc.
Apple Inc. was founded on April 1, 1976 by Steve Jobs, Steve Wozniak, and Ronald Wayne in California, U.S., producing and selling computers. The company grew very quickly because it was a pioneer in that industry. Apple Computer was predominantly a manufacturer of personal computers; its core product lines are the Macintosh computer line, the iPod music player, the iPhone, and the iPad. The company is now also known for its iOS product range, which began with the iPhone, iPod, and iPad, and for iTunes, its online music store. Apple is now the largest technology company in the world, with a stock market value of $500 billion and 2011 revenue of $127.8 billion in sales.

Recently, the European Commission accused Apple Inc. of violating European competition rules in the music industry. Apple uses iTunes to sell individual songs, and the iTunes service prevents users in one Member State of the European Union from buying songs from the iTunes webpage of another Member State. For example, a consumer who lives in the Czech Republic and wants to buy a particular song from the Slovak iTunes webpage is not allowed to do so. That means the price of the song is charged according to where the consumer lives, and the cost of songs varies between Member States. The European Commission sent Apple a so-called "statement of objections", accusing it of unfair agreements with record labels containing territorial sales restrictions, which violate European Union competition rules. Apple tried to defend itself by arguing that this policy was the outcome of the demands of the music record industry. Moreover, music bought from iTunes could only be played on Apple's iPod music player, because other portable music players did not support songs bought from iTunes. iPod users in the United Kingdom had to pay more for a song bought from the UK iTunes online store than users in continental Europe. The European Commission investigated the issue and threatened Apple with a fine of GBP 330,000,000.

Explain how market structures determine the pricing and output decisions of businesses.

Market structure describes the number of firms in a market producing identical (homogeneous) products.

Monopolistic competition: a large number of firms, each holding a small proportion of the market share and offering slightly differentiated products. Each firm takes the prices of its competitors as given and ignores the impact of its own prices on other firms. The number of firms and their output determine supply and demand. Examples: Coke and Pepsi; toothpaste; shaving brands such as Gillette and Dove.

Oligopoly: a small number of firms control the market, and prices of products or services are then usually high. Industries that are examples of oligopolies include steel, aluminium, film, television, mobile phones, gas, and electricity. A duopoly is a special case of oligopoly in which two companies compete in a market.

Monopsony: one buyer faces many sellers. Oligopsony: a market where many sellers may be present but meet only a few buyers.

Monopoly: there is only one provider of a product or service; an example was Microsoft in the U.S. A firm is a natural monopoly if it is able to serve the entire market demand at a lower cost than any combination of two or more smaller, more specialized firms.
Perfect competition: a theoretical market structure that features no barriers to entry, an unlimited number of producers and consumers, and a perfectly elastic demand curve.

Illustrate the way in which market forces shape organisational responses, using a range of examples.

Supply and demand are the forces that make market economies work: they determine the quantity of each good produced and the price at which it is sold. A market is a group of buyers and sellers of a particular good or service; the buyers as a group determine the demand for the product, and the sellers as a group determine its supply. For example, if oil prices rise, then the price of delivery services rises and the price of goods rises as well. When summer ends and the tourist season is finished, the prices of hotel rooms go down. If the grape harvest is bad one year, then the price of wine will be higher the next year.

Judge how the business and cultural environments shape the behaviour of a selected company: Apple Inc.

The approach Apple works with, and the secret of its success, is based on simple, creative design and technology that is easy to use on a daily basis. The success of Apple is embedded in Steve Jobs's strength-based approach to company strategy, whose pillars are built upon a core of capabilities, the seizing of opportunities, and an organizational culture that enables the attainment of Apple's goals. Jobs was one of the company's masterminds, with brilliant ideas that he realized in products that changed people's lives. Before the iPhone was released and smartphones came into our lives, mobile phones were becoming more complicated and difficult to use. When Apple released the iPhone, it changed the mobile phone market forever. It was a sensation, because it was absolutely different and a completely new technology, with a touch screen and without the many buttons that were usual. The iPhone is now part of many people's lives, and also of fashion; it continues to grow in popularity worldwide, and Apple is now the third-largest mobile phone company in the world.

Discuss the significance of international trade to UK business organisations.

Some of the key commodities in which the UK trades are manufactured goods, beverages, fuels, and chemicals. According to a World Trade Organization (WTO) report published in 2008, the UK has retained its position as a leading commercial services exporter; moreover, with the UK recording $263 billion in the commercial services sector, the country continues to be the world's second-largest provider of these services.
There are also programs that provide incentives for companies to locate in economically depressed urban areas that are known as “Assisted Areas.” In 1998, the total value of these programs was US$315 million. There are 7 free trade zones in the United Kingdom (Birmingham, Humberside, Liverpool, Prestwick, Sheerness, Southampton, and Tilbury). These zones allow goods to be stored for shipment without tariffs or import duties. Analyse the impact of global factors on UK business organizations. International trade and the UK economy: UK businesses will see international trade growth accelerate from 2014 as the global economy ends a period of growth contraction, according to HSBC. There are fundamental changes taking place in world trade, UK exports to China and to India grew by 21% and 37% respectively in 2011 and HSBC estimates that it processed around one third of these by value. Market opportunities: Evaluating markets and future trends can be a major challenge for any business. New market opportunities spring from a range of possible sources and vary in their size, importance, and risk. New demographic or vertical industry segments New geographic regions Alternate offerings of service models, supplies, and other annuities World Trade Organisation (WTO) is the only international agency overseeing the rules of international trade. It polices free trade agreements, settles trade disputes between governments and organises trade negotiations. 4.3 Evaluate the impact of policies of the European Union on UK business organisations. The United Kingdom is a member of the European Union but isn’t part of the single currency, the Euro. Free trade – The EU is a trade bloc which means there are no quotas or tariffs for companies exporting goods and services within the EU. European legislation is meant to make it easier for UK businesses to trade across the EU’s 27 states. The internal market – the single market means UK citizens are free to move, live, study and trade anywhere within the EU.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.946122407913208, "language": "en", "url": "https://destinationscreditunion.blog/2011/04/29/money-mattersseven-steps-to-a-successful-budget/", "token_count": 391, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1455078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:edd88ad6-d000-4e5a-850f-7747b521e101>" }
Brought to you by Accel Members Financial Counseling – through financial knowledge and expertise we enable people to enjoy a better quality of life.

Budgeting can be a simple and straightforward process. It can also be a rewarding experience for all family members. But it takes interest and commitment. Here are seven steps to help you create a successful budget.

1. Discuss Values – Determine what is most important to the people involved in your budget. By understanding these values, you can make decisions that will provide you with the most satisfaction.
2. Set Goals – Begin setting goals by discussing with family members what each one may want to do with their money. Have each member list the goal and a deadline. Work on the most important goals first, and put money aside in your budget for your priority goals.
3. Determine Income – Figure out your take-home pay. The money that makes up your income can come from sources such as salary, allowances, Social Security, or child support. Do not include overtime pay.
4. Determine Expenses – Consider fixed, variable, and periodic expenses. Fixed expenses consistently stay the same every month, variable expenses change from month to month, and periodic expenses are not due every month.
5. Create a Plan – Design a spending plan so that your income will allow you and your family to have what you want and need. If you find that your income does not cover your expenses, re-evaluate your plan and decide which categories can be changed.
6. Keep Track of Expenses – Keep a record of expenses to see where your money is being spent. By comparing your estimated expenses with what you are actually spending, you can evaluate whether or not your plan is working.
7. Evaluate Your Plan – Periodically evaluate your spending plan. Is the plan still helping you meet your needs and achieve your goals?

A budget is the cornerstone of your family's financial plan and a guide to help you achieve your goals.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9651302695274353, "language": "en", "url": "https://medicalxpress.com/news/2013-10-scientists-money-doesnt-happier.html", "token_count": 594, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.11279296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:8d908070-3cfb-4066-93d6-89eea97db847>" }
Scientists explain why having more money doesn't make us happier (Medical Xpress)—Living standards have risen significantly in the developed world over the past 50 years, so why aren't we happier than our grandparents? A new University of Stirling study, which uses survey data from about 50 000 people in the UK and Germany, has the answer: the psychological benefits from income rises are wiped out by much smaller income losses. The research, led by Dr Christopher Boyce of the Behavioural Science Centre at Stirling Management School, has significant implications for policymakers under pressure to establish economies that help maintain higher well-being. Findings suggest that fiscal and monetary policy that focuses on economic stability, rather than high growth at the risk of instability, is more likely to enhance national happiness and wellbeing levels. A strategy that runs the risk of small, temporary cuts to our spending, on the other hand, will probably lead to wider spread dissatisfaction than previously believed. This is because people experience the pain of losing money more intensely than the joys of earning more. The "Money, Well-being and Loss Aversion" research project based its conclusions on data gathered on about 20 000 people in the UK and 30 000 in Germany for up to nine years. The study may help explain why bonus structures and remuneration schemes that are based on commissions can easily backfire, with staff morale taking a larger dip than expected in leaner times when there are lower – or no – bonuses. The research also helps explain why there is risk aversion among investors: temporary falls in income have a much larger impact on our feeling of contentment than income gains of the same magnitude. "Findings show that we have been over-estimating the positive wellbeing effects of income increases. Income losses have a much greater influence on wellbeing than equivalent income gains," says Dr Boyce. "Over the past 50 years we have experienced long-term economic growth, but there has not been accompanying increases in our long-term wellbeing. We undertook this study to help understand why our happiness levels have not improved with rises in income," says the behavioural scientist. Previous studies on money and wellbeing have examined income changes, but have not differentiated between the income changes that arose from losses, as opposed to gains. "Both individuals and societal well-being may be best served by small and stable income increases even if such stability impairs long-term income growth," says Dr Boyce. "Findings suggest that when we are thinking about trying to increase individuals' and societies' well-being it would be preferable to focus on economic stability as opposed to higher economic growth that risks greater volatility," he adds. The research is published in the latest edition of leading academic journal Psychological Science and is available online from today. Scientists analysed data gathered in the German Socio-Economic Panel Study from 2001 to 2009 and British Household Panel Study between 1998 and 2007.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9718666672706604, "language": "en", "url": "https://www.fool.com/investing/general/2014/08/24/with-nearly-35-million-barrels-of-oil-per-day-offl.aspx", "token_count": 826, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.43359375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:3faa35b2-5528-4c1c-b36e-eac15ec154bb>" }
Since 2009, oil prices have enjoyed a prolonged period of remarkable stability characterized by annualized volatility significantly below its long-term average. And though volatility has ticked up markedly in recent months due to the escalating conflicts in Russia and Iraq, oil prices haven't risen much above $115 a barrel and have actually declined sharply in recent weeks.

What's really remarkable about this recent decline is that it occurred despite the fact that nearly 3.5 million barrels per day of global oil supplies -- representing about 4% of daily output -- have gone offline since the beginning of 2011. A chart from the U.S. Energy Information Administration shows the severity of recent years' supply disruptions. [Chart not reproduced here.]

So, why haven't oil prices spiked? It's mainly because these supply disruptions in places like Libya, Iran, and Nigeria have been offset by the rapid growth in U.S. crude oil production over the same time period. Thanks to a combination of horizontal drilling and hydraulic fracturing, U.S. oil production surged from roughly 5.6 million barrels per day (bpd) in 2011 to nearly 7.5 million bpd in 2013. Combined with output increases from Saudi Arabia and other large producers, this nearly 2 million bpd increase has helped negate the impact of production losses from other key oil-producing regions, lending unparalleled stability to global oil prices. Indeed, Brent price volatility in 2013 was the lowest it has been since the oil markets were deregulated in the early 1970s.

But why have prices started to fall recently? While that explains much of the stability in oil prices over the past few years, you might still be wondering why prices have fallen in recent weeks, especially given the stampede of scary geopolitical developments. This, I believe, is due to a few key factors.

First, U.S. and EU economic sanctions against Russia are unlikely to have a meaningful impact on its level of exports, according to the International Energy Agency (IEA). While the sanctions, which are intended to cut off Kremlin-controlled firms' access to Western capital, could impact long-term production growth, they probably won't impact supplies in the near and medium term, the agency said.

The same goes for Iraq, OPEC's second-largest oil producer. The sectarian violence perpetrated by Sunni militants has largely been confined to the northern part of the country, far from its main oil-producing regions in the south and in Kurdistan. As long as the violence doesn't spread to these regions, Iraq's near-term oil production should be safe. At least that's what the markets perceive. And with the U.S. having commenced airstrikes against the rebels, investors are even more confident that Iraqi production and exports won't be meaningfully affected. Indeed, the price of Brent actually fell after announcements that the U.S. began bombing rebel-controlled territories in northern Iraq because it signaled to the markets that the U.S. is determined to keep oil prices stable.

With geopolitical risk to supplies perceived as low, the focus is back on fundamentals, which are bearish. OPEC supplies are as robust as ever. They hit a five-month high of 30.44 million bpd in July, led by increased production from Saudi Arabia, the cartel's largest producer, and Libya, where two major crude export terminals were recently reopened. An upbeat production forecast for the U.S. provides additional support from the supply side.
Meanwhile, on the demand side, markets are concerned about slowing global economic growth that could constrain demand for petroleum. Hedge funds also recently reduced their bullish bets on Brent to the lowest level in six months, and the benchmark is now trading near its lowest level in 14 months.

The bottom line

In a nutshell, global oil supplies are plentiful, demand is looking like it could be pretty weak, and the risk of supply disruptions is perceived as low. Barring a major supply disruption, investors can expect oil prices to remain under pressure in the near term. The longer term, however, could be an entirely different story if the security situation in Iraq deteriorates further.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9774933457374573, "language": "en", "url": "https://www.hackmath.net/en/math-problem/5372", "token_count": 803, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0108642578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:d0d342a9-d98d-4492-b19b-47902d23493f>" }
Tomas spent 60% of his savings on his weekly vacation. He had 32 € left. How many euros did he have before the vacation?

Related math problems and questions:

- Interest on savings – The bank offers 1.6% interest. How many euros must be deposited at the beginning to receive €15 in interest?
- Bdf tablet – Detective Harry Thomson has come across a surprising mystery. As part of the weekend's action, the tablet he looked for dropped by 30%. He recommended this bargain to his friend. However, when the friend came to the shop on Monday, the tablet was 30% more expensive …
- Eva lent 1/3 of her savings to her brother and spent 1/2 of her savings in the store, leaving 7 euros. How much did she save?
- Solutions, mixtures – How many liters of 70% solution must we add to 5 liters of 30% solution to give us a 60% solution?
- Peter spent a quarter of his pocket savings. He now has 9 euros in his wallet. How many euros did he spend?
- 7 roses – Peter buys 7 roses. When he pays for them, he has 4 euros left. If he bought 5 roses, he would have 40 euros left. How many euros did Peter have before buying?
- Budget plan – In the construction of a building, the planned budget was exceeded by 13%, which was 32,500 euros. How many euros did the building cost?
- The recommended – The recommended price of the novel "Laughing Sun" is 285 SKK. The bookseller bought 60 copies of the novel at the wholesale store and paid 82% of the recommended price (18% is his profit for selling books). At the recommended price, he sold 55 copies of …
- Bookshelf – A bookshelf with an original price of €200 became cheaper twice. After the second discount of 15%, the price was €149.60. Determine by what percentage it became cheaper the first time.
- Mr. Smith withdrew €1,500 of interest from his bank savings. How big was his initial deposit if the annual interest rate is 1.5%?
- The farmer – The farmer calculated that his supply of fodder for his 20 cows was enough for 60 days. He decided to sell 2 cows and a third of the feed. How long will the feed last for the rest of the herd?
- Sale off 2 – A pair of blue jeans went on sale. After a 30% reduction, the pants cost $35. How much did the jeans cost before the price reduction?
- Cost reduction – Two MP3 players whose prices were originally equal were discounted, the first by 20%, the second by 35%. After the price reduction, the difference in their prices was 750 CZK. What was the original price of each of the two players?
- The Smiths paid a deposit for a vacation of two-sevenths of the total price of the vacation. They then paid a further €550. How much did their vacation cost?
- Bike cost – The father gave his son €100 to buy a bicycle, which was 40% of the total price of the bicycle. How much did the bike cost?
- Price saleoff – Shoes cost y euros. At first, they were discounted by 12%, and then by 50% of the new amount. After this double discount, the cost was exactly 22 euros. Determine the original price of the shoes.
- VAT on books – The cost of a book in the store is 12.5 euros. How many euros is the VAT on this book? VAT is 10%.
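The main problem is solved by noting that the €32 left over is the unspent 40% of the savings. A quick check in Python:

    spent_share = 0.60
    remaining = 32  # euros left, i.e. 40% of the original savings
    savings = remaining / (1 - spent_share)
    print(savings)  # 80.0 -> Tomas had 80 € before the vacation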
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9448540806770325, "language": "en", "url": "http://www.pellegrinoandassociates.com/2014/08/", "token_count": 559, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.38671875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b7189a95-55d1-49f3-ae4b-8438b1961fde>" }
Innovation plays a material role in the U.S. economy. In fact, the direct and indirect impacts of innovation account for more than 40% of U.S. economic growth and employment. However, without intellectual property (IP), innovation may not reach its economic potential. IP protects innovation and drives a significant portion of the market value of companies. It comes in the form of patents, copyrights, trademarks, and trade secrets.

The value placed on IP, especially patents, continues to rise. In fact, worldwide patent licensing revenue may reach $500 billion by 2015, according to Ernst & Young. And as headlines show, companies continue to apply for and acquire patents to build portfolios for defensive measures and to increase market value.

Despite the continued interest in patents, the United States faces major competition from foreign countries. In 2013, countries such as Japan and Taiwan ranked among the top geographic origins of patent grants. This is a significant indicator that the United States may be at risk in the innovation landscape.

Part of the issue may stem from the fact that much of the public does not understand what IP is, what it does, and how important it is. This is not great news for up-and-coming entrepreneurs and innovators. It is really no wonder, since IP is generally not taught in schools. IP strategist Ben Goodger reports that few business schools and universities teach courses focused on the importance of IP as a crucial economic and financial asset. Therefore, many people simply stumble through the nuances of IP. This is not ideal, as the lack of IP protection can make the difference between success and failure. The question therefore arises as to when IP should be introduced.

Today, plenty of adults do not understand IP. Perhaps IP should be taught at the grade school level. At least one organization focused on young children finds value in IP education. In collaboration with the Intellectual Property Owners (IPO) Education Foundation and the USPTO, the Girl Scout Council of the Nation's Capital now offers an intellectual property patch. In an effort to encourage girls to pursue STEM (science, technology, engineering, and math) careers, the organization offers an IP patch to familiarize scouts with the patent process and innovation.

By introducing the value of IP at a young age, we can better prepare our children for the business world. These children are growing up in a technologically advanced society, which makes copyright infringement and trade secret theft much easier. However, some of these offenses stem from a lack of knowledge; many people simply do not realize they are infringing on copyrighted work. Educating the public about IP can only result in good things, increasing our chances of introducing new inventions, reducing copyright infringement, stifling trade secret theft, and getting the most value from all forms of IP.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9683908820152283, "language": "en", "url": "https://latestfashion.date/insurance-patent/", "token_count": 349, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.302734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:461a813d-51a2-47a2-9c3b-f1386fba99d9>" }
An insurance patent is a patent obtained for an invention or improvement relating to an insurance product or process. Patents are issued by the government, and other parties may use a patented invention under a license or contract with the patent owner. A party that exceeds the limits of that contract is liable to the patent owner for all the profits gained through the patent, and is also responsible for any harm or damage to the patent owner's rights, whether intentional or accidental.

An insurance patent covers only the inventive and technological aspects of a new insurance product. Its legal standing is not fully settled; in many countries, the patentability of insurance methods remains contested. In the United States, courts have been receptive to inventors of insurance products and methods of doing business. Such patents can provide broader and more reliable coverage of inventions and improvements to basic insurance processes, such as methods of calculating premiums, costs, and underwriting.

Insurance patents have become a major controversy in the insurance industry. Some see them as a positive development, arguing that patent protection allows an insurance company to invest more in developing a product, while others view them as a negative development.

The first insurance patent was issued in 1982. In the United States, roughly one or two insurance patents have been granted every year, relating to different insurance products; such patents are typically upheld by the courts and then put to use. Before 1998, roughly 150 such applications were filed per year; the pattern shifted after the Court of Appeals for the Federal Circuit issued its State Street Bank decision on the patentability of new methods of doing business.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.96479332447052, "language": "en", "url": "https://www.ajklawoffice.com/post/c-corp-versus-s-corp", "token_count": 749, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.251953125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e2de05a8-ae36-42ec-8d31-b4d8b03fdc15>" }
C-Corp Versus S-Corp

C-corporations and S-corporations have a lot in common. For instance, owners of both a C-corp and an S-corp are called shareholders, and they elect directors to oversee business operations. In turn, those directors hire officers whose job it is to manage the day-to-day operations of the business. Both a C-corp and an S-corp are formed by preparing and filing a document called the Articles of Incorporation, and filing registration documents with the Secretary of State.

Setting up a corporation provides its owner(s) with limited personal liability. A corporation is legally a separate entity from its owners, and as a separate legal entity, only the assets of the corporation, not the owners', are subject to corporate debts.

While, as stated above, there are some similarities between C-corporations and S-corporations, there are also various differences between them. All corporations start their life as C corporations, meaning you must file and form a C-corp with the Secretary of State and then elect to convert the duly formed C-corp into an S-corp. A C corporation may be converted to an S corporation by filing IRS Form 2553. This is because an S corporation is not a separate type of legal entity under state law; it is a tax status elected with the IRS.

Perhaps the biggest reason people and businesses elect S-corp status is tax treatment. When it comes to taxes, there's a big difference in how a C corp and an S corp are taxed. For federal tax purposes, C-corps are hit with something called "double taxation," which means the C-corp's profits are taxed and reported on the C-corp's tax return. Then, any after-tax profits that are distributed to its shareholders as dividends are taxed again and reported by the shareholders on their personal tax returns. This "double taxation" can be avoided by opting for S-corp status with the IRS. An S-corp is treated similarly to a sole proprietorship or a partnership, meaning the profits and losses are "passed through" the S-corp to the shareholder(s) and are only taxed to the shareholders and reported on their personal tax returns. This is commonly referred to as "pass-through taxation."

Ownership & Qualifications

A C corp provides more flexibility when it comes to selling the corporation's stock. According to the IRS, a corporation that elects S-corp status may not:

- Have more than 100 shareholders
- Issue more than one class of stock
- Have shareholders who are not U.S. citizens or residents
- Be owned by a C corporation, other S corporations, LLCs, partnerships, or various trusts

Generally, S-corps are preferred by small businesses that fit within the legal requirements for an S corp (as discussed above). Certain types of corporations find more advantages with a C corp. For example, having more than one class of stock can help a business raise capital from investors without giving them voting rights. However, whether a C corp or an S corp would be best for your business depends on careful analysis of various factors as they relate to your particular situation.

This post is intended to provide you with a brief and simple overview of the similarities and differences between C-corporations and S-corporations. As with any other area of law, filing corporate documents and electing S-corp status can get complicated and difficult to understand. Should you have any questions or concerns, please do not hesitate to contact us.
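To make the "double taxation" point concrete, here is a rough Python comparison. The 21% corporate rate and 24% personal rate are illustrative assumptions only (actual dividend, state, and self-employment taxes are more involved); this is not tax advice.

    profit = 100_000.00
    corp_rate = 0.21      # assumed corporate income tax rate
    personal_rate = 0.24  # assumed shareholder income tax rate

    # C corporation: profit is taxed, then the distributed dividend is taxed again.
    after_corp_tax = profit * (1 - corp_rate)
    c_corp_net = after_corp_tax * (1 - personal_rate)

    # S corporation: profit passes through and is taxed once on the owner's return.
    s_corp_net = profit * (1 - personal_rate)

    print(f"C corp, net to owner: {c_corp_net:,.2f}")  # 60,040.00
    print(f"S corp, net to owner: {s_corp_net:,.2f}")  # 76,000.00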
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9894473552703857, "language": "en", "url": "https://www.hedgethink.com/history-funds-hedge-funds-part-3-expansion-mutual-fund-industry-regulation/", "token_count": 806, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0576171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4207c573-1863-4c53-8ed5-6a6ad2d9f92f>" }
Until now, we have seen how earlier versions of mutual funds came into being, but we have barely scratched the surface; there is a long way to go before we can see the rise of modern-day mutual funds. They have grown into a vast industry and a force to be reckoned with as more and more of them have cropped up over the years. To get a better sense of how quickly this growth took place, let us look at a few numbers to help put things into perspective.

At the end of 1929, it is estimated that there were roughly 19 open-ended funds, while the number of closed-end funds had risen dramatically to about 700. This is the point on the timeline at which governments began to take notice of what was happening in the financial arena, and rules and regulations began being drafted accordingly. The need for intervention arose out of the downward spiral of the stock market at that time, which was a major crisis of its era. Its effect on the young industry was a sudden fall in the number of closed-end funds, which were unable to survive the existing conditions. Open-ended funds, however, sailed smoothly through these tough times.

A regulating body called the Securities and Exchange Commission (SEC) was formed during this phase with the hope of gaining better control over how the mutual fund industry worked. From here onwards, any and every new mutual fund had to be registered with the SEC before the public could get involved in any way. The Securities Act of 1933 and the Securities Exchange Act of 1934, under which the SEC was formed, further extended the new rules and regulations. The laws that followed required that the nature and details of investments be communicated to the public openly and clearly. It also meant that those entrusted with running a fund had to keep their investors up to speed with all recent happenings.

Soon after this came the Revenue Act of 1936, which laid down the criteria to be used for the taxation of mutual funds. Four years later, the Investment Company Act presented a set of instructions to be followed when outlining the structure of mutual funds. Thus, one can see how the environment was rapidly changing: the traditional ways in which mutual funds had previously worked were being abandoned, and fresh, more streamlined practices were being put into place so that the movements within the mutual fund industry could be monitored more closely and thoroughly. Over the course of the next few years, the growth of the industry was rather stunted and hardly any notable changes took place.

However, as the famous saying goes, change itself seems to be the only constant. This proved to be very true as soon as the 1950s arrived; the new decade brought with it much prosperity for the industry, allowing it to thrive once again. The industry matured and expanded at an exponential rate as the financial markets started to stabilize. During the 1960s alone, about 100 mutual funds were launched. It is estimated that by the time the 1970s came around, a staggering total of 48 billion dollars' worth of assets was tied to mutual funds. Two of the prominent new types of funds that surfaced during this time were the first index fund and the no-load fund. Both of these had a positive effect on the industry and helped it grow even larger.
Between then and now, the industry has seen a few highs and lows due to changing conditions, such as the advent of technology and the worldwide recession of 2008. Nonetheless, it has endured throughout all of this. Not only that, but it has also managed to retain its upward growth pattern and continues to do so. Currently, there are more than 10,000 mutual funds in the United States of America alone, and their collective worth can easily be expected to amount to trillions of dollars.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9486780762672424, "language": "en", "url": "https://forum.effectivealtruism.org/tag/allocating-risk-mitigation-work-over-time", "token_count": 320, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.041015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:3ab4762a-1d38-433d-88c6-5fba3083022b>" }
Some risks, like climate change, are likely to have only moderate effects in the near term but potentially large ones in the long term. Others, like asteroid risk, are roughly evenly distributed over time. Even within risk areas, the mechanism that creates the risk can differ across time periods. For example, if climate change does create large impacts in the next decade, it is likely to be because of unforeseen feedback effects, rather than because of the accumulation of warming from industrial emissions. In all of these cases, the work that one should do to mitigate risk will differ depending on which time frames one is considering. A balanced portfolio of risk-mitigation strategies therefore needs to allocate resources against these different timelines.

Two separate considerations favor a focus on near-term risk. First, we are short-sighted about the nature of future risks and what can be done about them, so our efforts are better targeted when acting on near-term risks. Second, we are the only ones who can act now, whereas future risks can be addressed by others later. On the other hand, risks that emerge in the future might be larger. Moreover, working now on building capacity to address risks in the future might help scale up the total amount of effort going to risk reduction.

The optimal strategy for a risk community is likely to be a mixture of work targeting different time horizons. There are some basic quantitative models suggesting how to address this balance (Cotton-Barratt 2015).

Cotton-Barratt, Owen. 2015. Allocating risk mitigation across time.
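As a purely illustrative sketch of the trade-off (not a reproduction of the Cotton-Barratt model), one can discount work aimed at far-off risks for the decay of foresight while crediting it for growth in the field's capacity. All parameters below are invented for illustration.

    # Toy model: relative value of one unit of mitigation work aimed t years ahead.
    FORESIGHT_HALF_LIFE = 10  # years until half our targeting accuracy is lost (assumed)
    CAPACITY_GROWTH = 0.05    # assumed annual growth in the field's capacity

    def value_of_work(years_ahead):
        foresight = 0.5 ** (years_ahead / FORESIGHT_HALF_LIFE)
        capacity = (1 + CAPACITY_GROWTH) ** years_ahead
        return foresight * capacity

    for t in (0, 5, 10, 20, 40):
        print(t, round(value_of_work(t), 2))
    # 0 1.0, 5 0.9, 10 0.81, 20 0.66, 40 0.44

With these particular numbers, near-term work dominates; the balance flips toward future-focused capacity building if capacity growth outpaces foresight decay.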
{ "dump": "CC-MAIN-2021-17", "language_score": 0.957017719745636, "language": "en", "url": "https://gopublicschoolsoakland.org/2015/03/contract-matters-funding-a-bigger-raise-for-teachers/", "token_count": 970, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06591796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:20721236-5d52-42cf-ab3c-8916e8b68e50>" }
It is widely understood that we need a raise for teachers. As a community, we have underinvested in teachers for years, and the current contract negotiations between the Oakland Unified School District (OUSD) and the Oakland Education Association (OEA) provide an opportunity to change that. The issue remains how much of a raise, and when. For a great primer on teacher salaries, check out "Who Stays in Teaching and Why" (see Ch. 4, pp. 37-44).

Since OUSD cannot simply create more money to spend on teacher salaries (and California is 46th in the country in school funding!), there are three ways for the district to fund a larger raise:

1. Prioritize new state funding: OUSD's state funding is slated to increase every year from now until the 2020-21 school year. Of this increase, how much should be applied to teacher raises?

- We will be receiving more money in the coming years due to (1) an improving economy, and (2) the passage of the Local Control Funding Formula (LCFF). LCFF provides more funds for districts like OUSD with high levels of student need. For the funds we receive due to our student population, we must "increase or improve" programs, services, and outcomes for high-needs students, but there will also be new funding that we can use more broadly. How much of that should fund teacher salary increases?
- This increase in state funding also has some strings attached. In June 2014, the state passed a law that raises district contributions into the state pension system. When the law was passed, the district contributed 8.25% of compensation per year, but by 2020 this will increase to 19.1%. This will result in over $15 million per year of state-mandated costs to the district.

2. Reprioritization (or cuts): Of existing programs and initiatives, are there cuts that we should make to fund teacher raises? What follows are examples, not suggestions.

- Measure G: Measure G states that, among other things, its $20 million is to be used "to attract and retain highly qualified teachers." In the 2013-14 school year, there was about $3 million in Measure G funding not used to address retention issues or to reduce class sizes (see the Measure G report). If used in its entirety, this could fund a 2% raise, but that would mean eliminating the art, music, and library positions and programs it currently funds.
- Reduce administrative salaries: Many people have asked about top administrative salaries. According to 2013-14 data, cutting the top 30 OUSD salaries by 33% would create a 1% raise for teachers. We will share more information on this, but wanted to convey a sense of how much money is actually at stake in this area.

3. Increasing efficiency: Are there places in the district where we can be more efficient than we are now and shift the savings to teacher salaries? Once again, what follows are examples, not suggestions.

- Reduce personnel: The district could cut the number of district accountants. But if we cut too many, the district will lose far more money in additional audit findings and see more mistakes like the ones that caused the school sites to have to redo their entire site budgets.
- Comparing OUSD to other districts: Do we know how OUSD compares in terms of infrastructure, salary levels, etc.? If not, how does the central office know if it is doing a good job?

We noted above that we were giving examples, rather than suggestions.
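The figures above imply a rough order of magnitude for the teacher payroll. This back-of-the-envelope Python sketch is our own inference from the numbers quoted, not district data.

    # If ~$3 million of Measure G funds a 2% raise, the implied teacher payroll is:
    measure_g_unused = 3_000_000
    raise_funded = 0.02
    implied_payroll = measure_g_unused / raise_funded
    print(f"Implied teacher payroll: ${implied_payroll:,.0f}")  # $150,000,000

    # Cross-check with the administrative-salary claim (a 33% cut = a 1% raise):
    one_percent_raise = implied_payroll * 0.01
    implied_top30_total = one_percent_raise / 0.33
    print(f"Implied top-30 salary total: ${implied_top30_total:,.0f}")  # ~$4,545,455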
Ideally, these priorities should be determined by a budget process that makes it easy to identify our priorities, and by criteria that help us know whether certain programs are working. We do not currently have that budget process. In lieu of such a process, we must turn to other methods. Given the urgency of the situation and the public's thirst for information, we ask that the Board of Education convene a special board study session to:

- Discuss the steps that have already been taken to fund a raise
- Share 2-3 scenarios that propose different tradeoffs to fund larger raises
- Discuss other obstacles, opportunities, and considerations (such as raises for all other employees) and long-term projections and risks

Ultimately, we need to think long term. Our salaries have to be more than just competitive because of the combination of a looming teacher shortage and the fact that Oakland teachers face more challenging conditions. To do so, we need to have a broader community discussion about the steps we've taken so far and where we're willing to act to go further for our teachers.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9425431489944458, "language": "en", "url": "https://gradeup.co/strategic-petroleum-reserve-in-india-i", "token_count": 1185, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.396484375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:a91f7577-fe4a-44a6-882b-3419cab93d7b>" }
Context: Keeping in mind ever-fluctuating global crude oil availability and prices, the Government of India is expanding its strategic oil reserves. These reserves boost the energy security of the nation, acting as a buffer against volatility in crude oil prices. The government plans to build underground caverns at two locations, in Odisha and Karnataka, which together can hold up to 6.5 million tonnes. Along with the proposed new locations, India already has three underground storage facilities with a total capacity of 5.33 million tonnes.

What is a Strategic Petroleum Reserve?

A Strategic Petroleum Reserve (SPR) is a national supply of emergency crude oil established primarily to reduce the impact of disruptions in supplies. The strategic reserves are in addition to the stocks held by the oil companies, and the crude oil is stored in underground rock caverns. In India, Indian Strategic Petroleum Reserves Limited (ISPRL), a special purpose vehicle, is responsible for maintaining the country's strategic petroleum reserves. ISPRL is a wholly owned subsidiary of the Oil Industry Development Board (OIDB), which functions under the administrative control of the Ministry of Petroleum and Natural Gas.

At present, ISPRL maintains an emergency fuel store of 5.33 million tonnes (36.92 million barrels). This is enough to provide crude oil for ten days of consumption. The Strategic Petroleum Reserve programme in India is being developed in several phases. In Phase I, strategic oil storages were built at Mangalore (1.55 MMT, Karnataka), Padur (2.5 MMT, Karnataka), and Visakhapatnam (1.33 MMT, Andhra Pradesh), for a combined storage capacity of 5.33 million tonnes. In Phase II, two more underground reserves will be developed at Chandikhol (Odisha) and Udupi (Karnataka), adding an extra storage capacity of 6.5 million tonnes, which amounts to 12 days of national consumption. These will be built as a public-private partnership (PPP).

Why underground rock caverns? Rock caverns are large man-made spaces in rock and are considered the safest means of storing hydrocarbons.

Why Does India Need Strategic Oil Reserves?

India is not an oil-abundant nation. With its vast population, rapid development, and improving standards of living, its energy demand is increasing exponentially. This has made India one of the largest importers and consumers of crude oil and natural gas. As a result, India is vulnerable to global oil shocks, which can arise from economic, political, or natural causes.

- In 1990, the Gulf War led to an energy crisis in India. At that time, India's oil reserves were adequate for only three days. A similar threat exists at present owing to the volatile geopolitical conditions of the Middle East.
- Despite its pledge to generate 40% of its power from non-fossil-fuel resources by 2030, India's dependence on fossil fuels is unlikely to fall anytime soon. On top of that, India currently imports 82% of its oil needs and aims to bring that down to 67% by 2022.
- India remains the third-largest consumer of oil in the world. Thus, India constantly faces strategic risk, energy insecurity, and financial drain due to its crude oil imports.

To address this strategic risk and energy insecurity, the Atal Bihari Vajpayee government introduced the concept of Strategic Petroleum Reserves in 1998.
Indian refining companies also maintain crude oil storage sufficient for 65 days. So, after the completion of Phase II, India will have a total of 87 days (65 + 22) of strategic crude oil buffer.

Government Initiatives to Improve Strategic Oil Reserves

- Attracting investment and technology worth Rs 50 lakh crore over the next 20 years to improve hydrocarbon extraction.
- Special budgetary provisions to construct strategic oil reserves at various locations in Phase II of the SPR programme.

SPR Programmes at the Global Level

Crude oil is considered a valuable resource all over the world, so countries as well as private industries maintain healthy stockpiles of it. This stockpiling is done to guard against future oil shocks arising from the volatile Middle East region, which sees regular conflicts. The world's oil production is influenced by groups such as OPEC (Organization of the Petroleum Exporting Countries) and the GCC (Gulf Cooperation Council), which together account for over 44% of the world's total crude oil production and approximately 21% of natural gas production. Disruption in oil supply can also be caused by natural disasters. Strategic oil reserves are thus a defence against any shortfall in future oil production.

Since oil reserves are unevenly distributed across countries, it is wise to maintain a stockpile, especially when prices are determined by the supply and demand of the market. As the present era is built upon globalization and industrialization, any disruption in the oil market severely impacts the socio-economic and political environment of nations.

The USA, China, and Japan are the nations with the biggest strategic petroleum reserves.

- The US is the world's largest holder of crude oil reserves, with a maximum capacity of 726.6 million barrels. It maintains its strategic reserves in underground caverns along the Gulf Coast.
- China has the second-largest reserve, with a capacity of about 470 million barrels.
- Japan has the third-largest petroleum reserve, at 324 million barrels. In 2007, Japan announced that it would share its oil reserves with other countries.
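The "days of consumption" figures above can be sanity-checked with a quick conversion. The barrels-per-tonne factor is derived from the article's own numbers, and the implied daily demand is our inference, not an official statistic.

    tonnes = 5.33e6    # Phase I capacity (from the article)
    barrels = 36.92e6  # the article's barrel equivalent
    bbl_per_tonne = barrels / tonnes
    print(round(bbl_per_tonne, 2))  # ~6.93 barrels per tonne of crude

    days_of_cover = 10  # the article's estimate for Phase I
    implied_daily_demand = barrels / days_of_cover
    print(f"{implied_daily_demand / 1e6:.2f} million barrels/day")  # ~3.69

    # Phase II adds 6.5 MMT; at the same conversion ratio that is roughly:
    phase2_barrels = 6.5e6 * bbl_per_tonne
    print(f"{phase2_barrels / 1e6:.1f} million barrels")  # ~45.0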
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9616928100585938, "language": "en", "url": "https://honey.nine.com.au/mums/spending-advice-from-billionaires/b17dae6f-f0d8-4d3a-8658-e24058809cc9", "token_count": 957, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2021484375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:1906efbc-aecc-4142-a48c-6d6392ebac4c>" }
Self-made billionaires are arguably the best people to offer advice when it comes to money matters. They've often climbed the ranks of success through hard work and perseverance, and they know what it takes to turn a regular income into mind-boggling wealth. From Amazon's Jeff Bezos to Australia's tech darlings, the Atlassian founders, we've rounded up the key pieces of financial advice they might impart to their own children, and it is more down to earth than you might think. Here's what we can all learn from them.

Warren Buffett: "The biggest mistake is not learning the habits of saving properly early, because saving is a habit."

Regarded as the greatest investor of all time, Warren Buffett (net worth AU$117 billion) is famously frugal with his vast fortune: he still lives in the modest house in Omaha, Nebraska he bought for $31,500 in 1958, before he amassed his wealth, and has pledged to leave the bulk of his money to charity instead of his children. While Buffett is renowned for his many inspirational sayings about money management, this quote is a sensible piece of advice that can be applied to incomes of all sizes, and it will transform your children's relationships with money. Creating a savings habit doesn't come easily to everyone, but it's an important step toward your kids becoming financially secure.

Jim Koch: "Being wealthy means living below your means."

American entrepreneur and co-founder of the Boston Beer Company, Jim Koch (net worth AU$1.55 billion), keeps it simple with this piece of advice. Spending more than you earn is guaranteed to land you in debt, and the longer you carry debt, the harder it becomes to bounce back into the black. It's also a lesson that is particularly relevant for kids of an income-earning age. For many teens, that first pay-cheque is an excuse to live large for a day, but instilling in them the value of frugality will see their money put toward better things.

Mark Cuban: "Pay off your credit cards."

Asked in an interview last year to name the best investment anyone can make, American businessman and investor Mark Cuban (net worth AU$5.8 billion) volunteered paying off credit card debt. His reasoning was that the money you save on the high interest rate usually charged on credit cards is worth more than the return you could get from investing your money (see the worked comparison below). This is pertinent advice for many parents of twenty-something kids who are considering their first credit cards; a recent report by the Australian Securities and Investments Commission (ASIC) revealed that one in six Australians is struggling with credit card debt, while as a nation we owe a total of $45 billion on credit cards.

Jeff Bezos: "Focus on the long term."

The world's richest man, Jeff Bezos (net worth AU$185 billion), knows a thing or two about the long game: it took Amazon nine years to make a profit. In interviews, he credits his long-term thinking for his success, and it's something he articulated as far back as 1997 in his letter to Amazon shareholders titled "It's all about the long term". But you don't have to be an entrepreneur or an investor for this advice to be useful. For the rest of us, particularly those with families, it can be a timely reminder to think twice about instant gratification and short-term splurges, and to redirect money toward longer-term goals, such as saving for a home deposit.
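Cuban's point about credit card debt can be made concrete with a small comparison. The 20% card rate and 7% investment return below are illustrative assumptions, not actual market figures.

    balance = 5_000.00
    card_apr = 0.20       # assumed credit card interest rate
    invest_return = 0.07  # assumed one-year investment return

    # Option A: invest the cash while carrying the card debt for a year.
    option_a = balance * (1 + invest_return) - balance * (1 + card_apr)
    # Option B: pay the card off, a guaranteed "return" equal to the card rate.
    option_b = 0.0

    print(f"Invest while in debt: {option_a:,.2f}")  # -650.00
    print(f"Pay off the card:     {option_b:,.2f}")  # 0.00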
Mike Cannon-Brookes: "Family comes before work."

Australia's own self-made billionaire, Atlassian co-founder Mike Cannon-Brookes, sets the example he hopes his four young children will follow. Known for his extraordinary work ethic and involvement in multiple businesses and causes, Cannon-Brookes (net worth AU$9.7 billion) has expressed his position on family versus work in a number of interviews over the years. The message to his kids, and to the rest of us, is to invest in our relationships and experiences instead of getting hung up on material possessions.

Their bank balances may be a little higher than the rest of ours, but following the advice of these billionaires can help us all improve our family's financial outlook.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9589213132858276, "language": "en", "url": "https://latinamericanpost.com/36486-how-coronavirus-can-affect-farming", "token_count": 1019, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.115234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e05d494c-6ce3-4cbd-b3c8-362755abc995>" }
The coronavirus pandemic is having a significant impact on agriculture, disrupting its processes.
Ways to Support Farming in Pandemic In the current situation where the COVID-19 pandemic or the fear of its spread is having a negative impact on the agricultural sector, appropriate emergency measures should be taken to support agri-food enterprises to stimulate agricultural production and ensure that workers continue to receive decent wages in line with existing agreements and laws. In this context, particular attention should be paid to the hundreds of millions of agricultural workers who, while playing a crucial role in ensuring the continuity of the food supply chain, are often the most vulnerable. To maintain the highest possible efficiency by optimizing resources during the pandemic, farmers can also consider utilizing modern available technologies to help. The easiest way to do that is to use online tools for fields monitoring, which help to manage the land remotely while significantly reducing costs and optimizing the use of resources. One of such tools is EOS Crop Monitoring. The platform allows for easy and fast decision-making, offering all the needed field data in one screen. The data is retrieved from satellites, automatically analyzed by the tool to provide the most accurate information. In times such as the pandemic, it’s a great opportunity not only to manage the field remotely but also to optimize field activities management and resources use, including labor. Ultimately, it helps farmers to adapt to the pandemic-related threats as easy as possible. Addressing the impact of today's health crisis on the agri-food sector, national and international policy regulations must be built on the WHO’s core principles for responding to the COVID-19 pandemic, which include four main pillars: stimulating the economy and employment, supporting small businesses, protecting workers in the workplace, and relying on social dialogue for decision-making. This is what will help farming to stay on track during the pandemic.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9647242426872253, "language": "en", "url": "https://searchcio.techtarget.com/definition/dot-com-bubble", "token_count": 465, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.28515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:d67a6b11-8e40-4630-8c08-f6a0a602ac3f>" }
The dot-com bubble, also referred to as the Internet bubble, refers to the period between 1995 and 2000, when investors pumped money into Internet-based startups in the hopes that these fledgling companies would soon turn a profit. The speculative investments in dot-coms (so named for the ".com" domain used by companies doing business on the Internet) drove up equity markets. The technology-centric NASDAQ Composite Index rose from less than 1,000 in 1995 to a peak of 5,408.60 on March 10, 2000.

In the rush to cash in on the Internet boom, many investors ignored traditional investment metrics, such as the ratio of a company's current share price to its per-share earnings (the P/E ratio). Instead they subscribed to a business model that favored building brand awareness and market share quickly, even if that required offering services or products at discount prices or for free.

Low interest rates in 1998 helped drive up the amount of capital invested in dot-coms. Advances in technology infrastructure and a growing understanding of the Internet enabled people in developed countries to easily get online. These factors, combined with the seemingly overnight fortunes made by some of the startup founders whose companies went public, fueled the exuberance. (Some technology industry analysts argue the bubble actually began in the early 1990s, when the concept of an "information superhighway" was popularized.)

The dot-com bubble started to collapse in 1999. In 2000, companies such as Pets.com declared bankruptcy, and by 2001 the bubble had burst, taking many dot-coms -- or "dot-bombs," as investors started calling them -- with it. The trillions of dollars in market value lost during the crash of the stock market between 2000 and 2002, coupled with the financial damage inflicted by the 9/11 terrorist attacks, led to widespread layoffs in the technology field.

The "new economy" defined by the Internet boom, however, also produced some notable successes. Among the estimated 48% of dot-com companies that survived through 2004 are current Internet giants Amazon, eBay, and Google. Some business historians fear there is another tech bubble, citing Facebook's $19 billion purchase of the messaging service WhatsApp announced in February 2014 and other high-priced acquisitions by technology giants such as Google.
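For readers unfamiliar with the metric, here is a quick illustration of the P/E ratio mentioned above; the share price and earnings figures are made up.

    price_per_share = 50.00
    earnings_per_share = 2.50  # annual earnings attributable to one share
    pe_ratio = price_per_share / earnings_per_share
    print(pe_ratio)  # 20.0 -> investors pay $20 for every $1 of annual earnings

    # Bubble-era dot-coms often had little or no earnings, which made the P/E
    # ratio meaningless or astronomically high, one reason investors ignored it.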
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9625148177146912, "language": "en", "url": "https://thebottomlinegroup.com/what-is-overhead-cost/", "token_count": 1187, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1328125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4371b533-0512-429c-96a7-f5cd92157d5d>" }
A business typically incurs two main categories of expenses: overhead and operating expenses. Operating expenses are those that a business incurs as a result of its normal operations. Overhead expenses, on the other hand, are what it costs to run the business.

What is overhead cost?

Overhead costs are costs that are not related directly to production activity; they are indirect costs that must be paid even if there is no production. Examples of overhead costs include rent payments, utility payments, insurance payments, salary payments to office staff, office supplies, and the like. Overhead cost is the cost of indirect labor, indirect material, and other operating expenses that are associated with the typical day-to-day operation of the business but cannot be charged directly to a specific product, service, or cost center. In short, it is the cost the company incurs on material, labor, or services that cannot be economically identified with a specific saleable unit of goods or services. Overhead cost is indirect in nature and needs to be shared out among the cost units as precisely as possible.

Overhead cost covers all expenses other than labor that are required to operate a business. These expenses can be classified as either fixed or variable.

Regardless of the sales volume the company generates, fixed costs must be paid every month. Fixed expenses include payments for rent or mortgage, depreciation of fixed assets such as office equipment and company cars, salaries and other payroll costs, liability and other insurance, utility costs, membership dues, subscriptions (which may be affected by sales volume), and accounting and legal costs. These expenses do not change, regardless of whether the company's revenue goes up or down.

Most of a business's variable expenses are semi-variable costs that fluctuate from month to month based on sales and other factors, including the change of seasons, promotional efforts, and variations in the prices of services and supplies. Falling under this category are expenses for office supplies, telephone, printing, mailing, packaging, promotion, and advertising. Typically, the more business the company is engaged in, the greater the use of these items. When a company estimates its variable expenses, it must use an average figure based on an estimate of the yearly total.

Semi-variable overhead costs

Semi-variable overhead costs exist regardless of how the business is going, but the cost fluctuates slightly. These overhead costs could have a base rate that must be paid by the company and a variable rate that is determined by actual usage. Semi-variable overhead costs include some utilities, hourly wages including overtime, vehicle usage, and the salaries and commissions of salespeople.

An overhead cost may be treated differently depending on the business. An overhead expense in one company could be a direct production cost for another company. A good example is a marketing agency, which will typically classify rent as an overhead cost, while a production facility will typically classify such rent as a direct cost. Some types of expenses could be classified as either direct or indirect costs for your business, depending on the situation. Wages paid to a seamstress at a dress shop might be considered a direct cost because her output increases the revenue of your business. On the other hand, wages paid to an in-house accountant would be classified as an overhead cost.
Understanding overhead costs

Knowing your overhead costs will help you set prices that result in profits. Overhead cost is typically factored into the total cost of running your business, letting you know how much money your business must bring in. You can also use overhead costs in determining your net profit, or bottom line: take your gross profit and deduct all expenses, including overhead, to determine your net profit. Your net profit will tell you whether your business is making money or whether the expenses of operating your business exceed the revenue, in which case you are losing money.

What is cost in business?

Cost in business expresses the amount of money spent on the production or creation of goods or services. It does not include a mark-up for profit. From a seller's point of view, cost is the amount of money spent on producing a product or good. The seller breaks even when the goods are sold at the same price they cost to produce; the seller does not lose money on the sale, but does not make a profit either. From the point of view of a buyer, the cost of a product is the price: the amount charged by the seller, including the cost of making the product and the mark-up added to produce a profit.

Cost in accounting

In accounting, cost refers to the monetary value of expenditures for supplies, services, raw materials, labor, equipment, products, and more. Cost is the amount that is recorded in bookkeeping records as an expense. When developing a business plan for a new company, organizers will make cost estimates to assess whether the revenues and benefits of the proposed business will more than cover the costs. This process is referred to as cost-benefit analysis. Underestimating the costs will result in a cost overrun once the business starts its operations. This simply means that the costs are higher than the income, and the company will bleed money.

The cost-plus model is used by many companies to determine the sales price of a product. Under cost-plus pricing, Price = Cost + x% of Cost, where x is the percentage of built-in overhead or profit margin added to the cost.

Costs are important to a business because they drain away whatever profits the business makes. They are the difference between a good and a poor profit margin. Costs are also the main reason why a business suffers from cash flow problems. Costs change as the output or activity of the business changes.
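A minimal Python sketch of the two calculations described above, net profit and cost-plus pricing; all figures are invented for illustration.

    revenue = 20_000.00
    cost_of_goods_sold = 8_000.00
    overhead = 6_500.00  # rent, insurance, office salaries, and the like

    gross_profit = revenue - cost_of_goods_sold
    net_profit = gross_profit - overhead
    print(f"Net profit: {net_profit:,.2f}")  # 5,500.00 (positive, so profitable)

    # Cost-plus pricing: Price = Cost + x% of Cost
    unit_cost = 40.00
    markup = 0.25  # x = 25%, intended to cover overhead plus a profit margin
    price = unit_cost * (1 + markup)
    print(f"Cost-plus price: {price:,.2f}")  # 50.00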
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9544346332550049, "language": "en", "url": "https://wenr.wes.org/2021/01/education-in-germany-2/print/", "token_count": 16970, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0019683837890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:3a00c0d2-b4ae-46f3-8604-969bad790b4f>" }
Stefan Trines, Quality Assurance Director and Editor at Large, WES

Introduction: Recent Trends in German Education

One of the effects of the COVID-19 pandemic in Germany is a greater push for the digitalization of education. Compared with other countries, Germany has been somewhat slow in adopting computer-based learning. The German federal government has promoted "digital competencies" as a central concept in education for several years; as recently as 2019, it committed €5 billion (US$5.8 billion) for the modernization of its internet infrastructure and the increased supply of digital devices in Germany's 43,000 schools. However, the abrupt shift in March of 2020 to online education for close to 11 million school students in Europe's largest economy exposed Germany's lack of readiness for digital learning. The crisis resulted in cascading calls across the political spectrum to rapidly advance the digitalization of schools.

In higher education, similarly, digitalization is now increasingly viewed as a means of academic modernization, as well as a way of boosting the already surging mobility of international students to Germany. While the COVID-19 emergency led to a sharp drop in the number of international students in the country (an estimated 80,000 of them left during the early stages of the pandemic), Germany has emerged as a growing international education hub in recent years. It draws increasing numbers of students from countries like China and India, notably to its English-taught master's programs. To further advance this inflow of international students, the German Academic Exchange Service (DAAD), the country's designated funding organization for international exchange, recently adopted a strategy of "internationalization through digitalization" that explicitly seeks to promote academic mobility via digital learning platforms and webinars on topics like the writing of research proposals, online language tests, virtual recruitment events, and online portals to match students with academic institutions.

In addition to such digitalization drives, the government continues to systematically promote Germany as an international science hub with large-scale funding projects, such as the so-called Excellence Strategy, which aims at ensuring that German universities are internationally oriented research institutions of global stature. As a rapidly aging society, Germany is in urgent need of immigrants to bridge its mounting shortage of skilled workers. Attracting international researchers and students to its universities is therefore viewed as a critical effort to ensure the inflow of highly skilled immigrants and sustain economic growth.

Another recent concern for German policymakers was the country's slide in the latest 2018 OECD PISA study. While German students continue to perform above the OECD average, their test scores in reading and natural sciences fell below levels last seen in 2009 and 2006, respectively. Some German officials castigated these results as a "stagnation in mediocrity" and lamented the high number of 15-year-old students (21 percent) who are unable to adequately read or perform mathematical calculations. However, other observers attribute the mediocre PISA results not so much to shortcomings of the German education system per se, but to the rapid influx of foreign-educated refugees and immigrants, and the difficulties related to integrating these newcomers due to language barriers and academic incompatibilities.
Between 2015 and 2016 alone, Germany took in approximately 1.3 million refugees—an influx of historic proportions that resulted in the largest population increase in Germany in years, with most of the new arrivals being young people in need of education. Given that most refugees don’t speak German on arrival and that Germany has a relatively short history as an immigration country, it has thus far managed to incorporate these migrants into educational settings better than expected, despite significant structural and sociocultural barriers. That said, integration problems persist, and the social inclusion of immigrants will likely remain a political issue for the foreseeable future. (For more on this topic, see our related articles on the state of refugee integration in Germany in 2017 and 2019.)

International Student Mobility

Until the coronavirus pandemic wreaked havoc on education worldwide, international student mobility to Germany was booming. According to official statistics, the number of international students in the country surged by 46 percent between 2013 and the 2019/20 winter semester, from 282,201 to 411,600. In 2019, 11.7 percent of all students in Germany were international students—a high percentage compared with those of other top destinations like the United States, where just 5.5 percent of all students in 2019 were international, according to the Institute of International Education (IIE). Note, however, that Germany, unlike the IIE, includes in its international student numbers foreign nationals on immigration or refugee status who attend higher education institutions (HEIs). Even so, 78 percent of all foreign students in Germany—319,000—are international students in the traditional sense, enrolled on temporary student visas. There were no data for the post-pandemic 2020 summer semester available as of this writing, but it’s clear that the pandemic led to a sharp drop in the number of international students in Germany, as it did in other host countries. According to Uni-assist, Germany’s main credential evaluation agency, for instance, the number of international applications for the winter semester 2020 was down by 20 percent compared with that of the previous year. Interviews of students from India, the second-largest sending country of international students to Germany, reflect that student mobility is currently hampered by concerns about diminished employment prospects after graduation, logistical hurdles, as well as apprehensions about educational quality, given that German universities have since the pandemic switched to blended learning, combining face-to-face instruction with online courses. That said, interest in studying in Germany among Indian students remains strong overall, and there’s little doubt that Germany will continue to be a dynamic international education hub in the future. The country has a well-articulated and generously funded internationalization strategy that deliberately seeks to attract the world’s “smartest minds.” As noted, Germany has a growing need to import skilled workers. The country already lacks more than 300,000 tech workers—an acute labor shortage that is expected to depress economic growth by 2035. Against this backdrop, international graduates make for excellent immigrants. They are relatively young and already familiar with the country; they have German academic qualifications and often speak at least some German.
Inbound student mobility has consequently been incentivized with changes in immigration policies in recent years that make it easier to work in Germany after graduation and obtain long-term residency. Not only can international students stay and work in the country for 18 months after graduation, but those with adequate employment contracts and German language skills have pathways to long-term work visas and permanent residency. Language barriers are Germany’s biggest handicap vis-à-vis English-speaking destination countries. However, these barriers are becoming increasingly irrelevant because of the growing availability of English-taught master and doctoral programs. While bachelor programs are still almost exclusively taught in German, international applicants can now choose from more than 1,300 English-taught master programs. What makes these programs highly attractive is the fact that German public universities charge only minimal tuition fees, even for international students—a distinct advantage considering the often-exorbitant costs of study in other international study destinations. No less than 20 percent of all students enrolled in master’s programs in Germany are now international students. In PhD programs, international students even account for 25 percent of all enrollments—a trend that is actively promoted by the German government: 49 percent of all international doctoral students in Germany received public scholarship funding in 2018. Another draw is Germany’s formidable reputation for world-class programs in engineering and other technical subjects. Fully 42 percent of international students in Germany are enrolled in engineering programs compared with only about 20 percent of international students in the United States. While a majority of international students in the U.S. are enrolled at the undergraduate level, more than 50 percent of degree-seeking students in Germany study in master or doctoral programs. It should be noted, however, that there are variations by type of institution and country of origin. The number of undergraduate students of African and Middle Eastern origin is significantly higher, for instance. China has been the largest sending country of international students in Germany over the past decade; it currently accounts for 13 percent of international enrollments in the country. But while the number of Chinese students continues to rise, a more recent development has been the rapid inflow of students from India, which in 2015 overtook Russia as the second most important sending country. Between 2016 and 2019 alone, the number of Indian students in Germany doubled and now amounts to 7 percent of all international students. Given that the most typical international student from India is a graduate student in a STEM discipline with limited financial means, Germany is in many ways an ideal destination for Indian students. The growing availability of English-taught graduate programs and recent changes in immigration laws have made the country all the more attractive for these students. That said, Syrian students accounted for an even larger spike in new enrollments, with numbers surging by 275 percent over just three years, primarily driven by the massive wave of Syrian refugees who fled to Germany in 2015/16.
Although the integration of these predominantly young refugees into the higher education system proved difficult initially, given their lack of German language skills and other factors, attendance rates are now picking up as many of the refugees are progressing through the preparatory programs and language training courses that are required for international students who lack an adequate command of the German language. The number of refugees enrolled at German universities has more than tripled, from 9,000 in 2015 to an estimated 30,000 in 2020. Student inflows from a variety of other countries, including European countries like Austria, Italy, and France, or Middle Eastern countries with long-standing migration ties to Germany, such as Turkey, have been rising robustly as well. However, much larger growth rates were recently seen for countries like Nigeria, Sri Lanka, or Ghana, illustrating that Germany’s international student population is diversifying. If current trends continue, it’s not inconceivable that Germany will eventually replace the United Kingdom as the most important international education hub in Europe, particularly considering the U.K.’s Brexit from the European Union.

Outbound Student Mobility

Germany is not only one of the top five host countries of international degree students worldwide, but simultaneously the third-largest sending country of international degree-seeking students globally after China and India, according to UNESCO data. The latest available German government statistics show that 140,000 Germans studied abroad in 2017, about 90 percent of them in degree programs. This means that there are now twice as many German international students enrolled abroad as at the beginning of the century—a swift increase that was driven, in large part, by the Bologna reforms and the European internationalization paradigm of recent years. Changes like the use of a common European credit system and the splitting of Germany’s long single-tier university degrees into the Bologna two-cycle bachelor and master structure have facilitated international academic articulation and made it easier for Germans to study in other countries. What’s more, German government authorities have made it an official policy goal that 50 percent of all German students acquire some form of study abroad experience. Most German international students—82 percent—currently study in other European countries, with Austria, the U.K., the Netherlands, and Switzerland being the top destinations. The next most popular world regions are North America, accounting for 8 percent of overseas enrollments, as well as the Asia-Pacific region, accounting for 7.9 percent of enrollments. The U.S. and China were the fifth and sixth most common host countries of German mobile students in 2017.

Enrollment Trends in the U.S. and Canada

In the U.S., Germany has historically been among the top 20 sending countries of international students after German students started to head Stateside in large numbers beginning in the 1980s and 1990s. But while the number of U.S.-bound students from other sending countries like China and India has surged exponentially since the start of the new millennium, student inflows from Germany have since leveled off and remained mostly flat over the past two decades, ranging from 9,800 in the 1999/2000 academic year to 10,193 in 2014/15, according to IIE. IIE data show that 9,242 Germans studied in the U.S. in 2019/20—a number that does not yet reflect the fallout from COVID-19.
Most Germans were enrolled at the undergraduate level, with business and social sciences being the most popular disciplines. Of note, there was a measurable drop in enrollments by German students in the early to mid-2000s in the wake of the highly unpopular Iraq war. There has also been an apparent downturn in new enrollments during the Trump administration, suggesting that political factors may play a role in affecting student flows from Germany. Opinion polls by the Pew Research Center show that the share of Germans holding favorable views of the U.S. plummeted from 57 percent in 2016 to merely 26 percent in 2020, mirroring a similar decrease in favorable views during the Bush administration. This shift in attitudes correlates with a significant drop in the number of U.S. student visas held by German nationals over the past four years. Current data provided by the U.S. Department of Homeland Security indicate that the number of active F and M student visas held by Germans declined from 8,787 to 5,866 between December 2018 and September 2020 alone, although much of that decrease is likely attributable to the coronavirus crisis. It remains to be seen how enrollment trends will develop under the Biden administration once the pandemic subsides. As in the U.S., the number of German students in Canada has been largely stagnant over the past two decades when compared with the rising interest among Germans in pursuing education in European and Asian countries. The overall number of German students in Canada is, in fact, small, amounting to just 2,955 students in 2019, down from a peak of 3,145 in 2008 (according to government statistics).

Transnational Education (TNE)

German HEIs are relative newcomers to providing cross-border higher education. The exporting of academic programs has traditionally been the domain of Anglo-Saxon countries like the United Kingdom, the United States, and Australia, which still hold most of the market share. A 2016 study by the British Council found that British HEIs in 2015 offered at least 2,260 TNE programs in 181 countries, with overall TNE enrollments totaling 666,000 students. German universities, by contrast, offered 291 programs in 36 countries, enrolling some 33,000 students, according to the DAAD. Note that these statistics only include government-sponsored initiatives but represent the majority of German TNE ventures. Aside from these quantitative differences, German TNE is qualitatively distinct in that it is part of a long-term, government-subsidized internationalization strategy, while initiatives in other TNE hubs are often privately led and commercially oriented. Transnational partnerships are not only viewed as beneficial for the global competitiveness of German universities, but also as a tool of development aid, designed to support academic capacity building in other countries. More commercially oriented modes of TNE, such as distance education, validation, and franchising models, remain uncommon in Germany. In fact, the best practices for TNE set forth by the Rectors’ Conference, Germany’s university association, stipulate that TNE ventures must be not-for-profit, and that fees can only be charged to cover operating costs.
Whereas other TNE qualifications are not necessarily recognized in the countries where students enroll, the “academic qualifications offered by German higher education projects abroad” must be “recognized by both the host country and the participating German universities.” Through their TNE projects, German universities typically seek to contribute to the modernization of the education system in the host countries. A common model is to partner with universities that remain independent institutions in the home system while being closely associated with and supported by their German “mentor universities.” The largest of these German-backed universities is the German University in Cairo, which enrolled 12,673 students in 2020. Other sizable German-backed universities are located in Oman, Turkey, China, Vietnam, and other countries.

The German Education System

Germany did not exist as a modern nation state until 1871, but education in the German realm has a long tradition. The Kingdom of Prussia is said to be the first country in the world to introduce free and compulsory state-run elementary education, in the early 18th century. The first German university, the University of Heidelberg, was established much earlier, in 1386. However, it is the University of Berlin, founded in 1810, that is often considered to have had the biggest historical impact, at least in hindsight. While some historians argue that its influence has largely been glorified, others regard it as the first modern research university in the world and the model university of the 19th century. The university was established based on the “Humboldtian model of higher education,” developed by the Prussian philosopher and education minister Wilhelm von Humboldt. Core elements of this model include the integration of teaching and research—which were hitherto largely separated—and an independent academia free from state intervention. Its holistic approach to education, allowing students to freely choose their own course of study, stands in contrast to the more rigid and hierarchical university models prevalent across most of the world in current times. Although some analysts contend that the Humboldtian model was largely constructed after Humboldt’s death, there’s no question that the model itself has been a central paradigm in German education since the 19th century and has conceptually influenced higher education in much of continental Europe. That said, the massification of higher education in Germany in the 20th century and other pressures of economic modernization have placed strains on German universities that make it increasingly difficult to maintain the traditional Humboldtian model in practice. While Humboldt’s ideals are still held in high esteem and aspects like university autonomy remain important principles, the Bologna Process, most particularly, has resulted in German universities adopting more “schoolified” and rationalized programs. The creation of a common higher education area in Europe in 1999, codified in the Bologna declarations, brought radical changes to German higher education, such as the previously unavailable Anglo-Saxon-style bachelor and master programs and the new European ECTS credit system. Whereas students in the 1990s were still able to freely pursue their academic interests and take a broad variety of courses in a more relaxed system without overly stringent semester limits, they are now boxed into more structured programs.
Credential evaluators and international admissions officers will find the new programs considerably easier to assess than those of the previous, less formal system.

Administration of the Education System

When analyzing German education, it’s important to understand that the country has a federal system of government that grants its member states a high degree of autonomy in education policy—a structure that’s not unlike the federal system of the United States. The German Federal Ministry of Education and Research in Berlin (BMBF) has an important role in areas like funding, financial aid, and the regulation of vocational education and entry requirements in the professions. But most other aspects of education fall under the direct authority of the education ministries of the 16 individual states, called Bundesländer in German. These states vary considerably in size; they range from the smaller city-states of Berlin, Hamburg, and Bremen to the large state of North Rhine-Westphalia, which has a population of about 18 million. Berlin and Hamburg are simultaneously Germany’s biggest cities, with 3.8 million and 1.9 million inhabitants, respectively. Given their autonomy, there can be considerable variation in education from state to state. The length of the secondary school cycle, for instance, varies between 12 and 13 years, depending on the jurisdiction. There are also differences between curricula, types of schools, and so on. However, a coordinating body, the Standing Conference of the Ministers of Education and Culture, facilitates the harmonization of education policies between states. In higher education, a federal law called the Hochschulrahmengesetz (Higher Education Framework Act) provides an overarching legal framework. In addition, the Conference of University Rectors, which represents most universities, coordinates the development of common norms and standards. What that means in practice is that education laws are similar or consistent in many areas: Academic degrees and vocational and professional qualifications are mutually recognized between the states, so that the system runs smoothly, by and large. Schools and universities are regulated and funded by the governments of the states (in the case of public institutions). It should be noted, however, that the federal government also provides funding for HEIs, notably in research and development, as well as funding for projects of “supra-regional importance” (such as, for example, the current digitalization effort in schools). Since the state governments are increasingly hard-pressed to support universities amid rising numbers of students, the role of the federal government in higher education funding has expanded significantly in recent years. For example, the government currently subsidizes the states with €19,000 per student to create up to 760,000 additional university seats nationwide over a four-year period. Universities have a high degree of autonomy and can independently award academic degrees within federal guidelines. That said, final graduation examinations in professional fields like medicine or law are conducted by government authorities of the individual states. The same holds generally true for vocational education, even though the final examinations in this sector are often conducted by government-authorized private industry associations, such as regional Chambers of Industry and Commerce (Industrie- und Handelskammern).
Vocational schools fall under the purview of the states, but the federal government oversees on-the-job practical training, which is an integral part of most vocational programs. Important regulations in this sector are codified in a federal law on vocational education (Berufsbildungsgesetz).

Academic Calendar and Language of Instruction

The school year in Germany runs from August to July. While there are some variations between states, it’s generally divided into two terms of 38 to 42 weeks, with a winter break in February or March and a longer summer break from July to August or September. At universities, the academic year is split into a winter semester that runs from October to March and a summer semester from April to September. Although each semester is formally six months in duration, classes end several weeks early, with the remaining time being dedicated to writing papers and exam preparation, as well as a semester break. The language of instruction in schools is German. In higher education, the use of English is becoming increasingly common, as noted before. However, while some 13 percent of all master’s programs were taught in English in 2019, undergraduate programs and programs in professional disciplines are predominantly taught in German.

Early Childhood Education

Compulsory education in Germany generally begins at the age of six, but almost all children—95 percent in 2017—attend early childhood education (ECE) between the ages of three and five. This stage is intended to socialize and prepare children for formal education. Note, however, that there are some minor variations between states. For the most part, ECE institutions are called Kindergartens or “Kitas” (Kindertagesstätten) and have few, if any, compulsory curriculum guidelines. But a dwindling number of jurisdictions, such as the state of Hamburg, not only have Kitas, but also maintain an older and more formalized model of pre-school, which is attended for one year only (age five). These pre-schools are usually directly attached to elementary schools. Also of note, a few states require children who have been attested to have language learning deficits to undergo language training before enrolling in elementary school. In some jurisdictions, older, school-aged children without adequate German language skills, such as children with a migration background, may be mandated to attend ECE language courses as well, even if that means setting back children already enrolled in elementary programs. While these rules are ultimately set by the states, there were political debates in 2019 about whether to make ECE language courses mandatory nationwide for all children without sufficient language skills, a change that would predominantly affect migrant children.

Elementary Education

Elementary education is provided free of charge in public schools across Germany. While the majority of children in ECE attend private, non-profit institutions, the scope of private education in the formal school system is relatively small, though it has grown considerably in recent years. Only about 5 percent of elementary students and 9.5 percent of secondary students were enrolled in private institutions in 2017, according to the World Bank. Elementary education begins at the age of six and lasts four years (grades one to four), except in a small number of states where it lasts six years. Most pupils learn at the Grundschule (foundation school), where they largely study the same general subjects.
While there are some variations between state curricula, they usually include German, mathematics, social studies, physical education, technology, music, and religion or ethics. One noticeable difference is the age at which English is introduced. While English classes don’t begin before grade three in some jurisdictions, pupils in some states begin to study English as early as grade one. One state, Saarland, does not offer English at the Grundschule at all. Student assessment and promotion are generally school-based—there are no final graduation exams, nor is a formal final graduation certificate awarded. The system becomes much more diversified at the end of the elementary foundation cycle, when pupils are assigned to different schools based on their academic ability—a process that can be referred to as “tracking” or “streaming.” The mechanism by which pupils are tracked varies by state. Parents in most states can choose either to send their children to general secondary schools, or to enroll them in university-preparatory schools. In some states, school recommendations influence the tracking. In other states, assignments are mandatory based on grade averages. Reassignments may still occur during an academic “observation phase” in grades five and six. This tracking process is not as rigid as it once was, and students in the vocational track can cross over to the university-preparatory track at a later stage. Some states have also established more integrated “comprehensive” secondary schools in which students in the different tracks study at the same school. However, deciding which school to attend remains an important factor in the academic career of many students.

Lower-Secondary Education (Sekundarstufe I)

Germany’s secondary school system is complex. There are three main programs, which are studied in different types of schools: Hauptschule, Realschule, and Gymnasium. However, all, or at least two, of these programs may also be offered at the same type of school in some states (for example, at comprehensive schools (Gesamtschulen), integrated secondary schools, or combined Haupt- and Realschulen). In Bavaria, the Hauptschule may be called Mittelschule (middle school). Haupt- and Realschule programs are general secondary programs, completion of which satisfies compulsory education requirements, which range from nine to ten years of education, depending on the state. In addition, these programs generally prepare students for vocational upper-secondary education, although transfer into the Gymnasium, which provides university-preparatory education, is possible as well.

Hauptschule and Realschule

Hauptschule programs most commonly last five years (grades five to nine). While there are minor curricular differences between states, nationwide standards exist for several subjects, with German, mathematics, and a foreign language (predominantly English) as compulsory subjects in the entire country. In addition, students usually study natural sciences (biology, chemistry, physics, or technology), social sciences (geography, history, politics, economics), as well as physical education, and arts or music. Progression is based on internal school assessment, but the content of the final graduation examination is usually set by the governments of the states, at least in the subjects of German, mathematics, and English. Upon completion of the program, students receive the Zeugnis des Hauptschulabschlusses (certificate of completion of Hauptschule).
Realschule programs are academically more demanding and take an additional year to complete (grade 10). It’s possible for students who completed Hauptschule to seamlessly transfer into these programs, which generally comprise the same subjects. There are usually centralized state examinations at the end of the program. Students graduate with the Zeugnis des Realschulabschlusses (certificate of completion of Realschule), sometimes also called Mittlere Reife (intermediate maturity). Both the Haupt- and Realschule credentials provide access to upper-secondary vocational education, but students who only completed Hauptschule traditionally enter programs in more practical trades, whereas Realschule graduates have a wider range of options. Completion of Realschule also allows students to transfer into the university-preparatory track, although they may have to meet certain minimum grade requirements. Far more students obtain a Realschule qualification than leave school after Hauptschule. The number of students who only complete Hauptschule has drastically declined over the decades. In 1960, 72 percent of all students still attended Hauptschule, or an older type of school of the same level, the Volksschule. In 2017, by contrast, 34 percent attended the Gymnasium, 21 percent the Realschule, and only 10 percent the Hauptschule. In general, enrollments are currently shifting strongly in favor of more integrated school forms like comprehensive Gesamtschulen. Between 2007 and 2017, the number of Haupt- and Realschulen in Germany dropped by 45 percent.

University-Preparatory Upper-Secondary Education (Sekundarstufe II)

Upper-secondary education in Germany is called Sekundarstufe II (secondary stage II) and comprises a vocational and a university-preparatory track. The main institution in the university-preparatory track is the Gymnasium, a type of school designed to ensure “maturity” or readiness for higher education. Students who enroll in Gymnasiums after elementary school study largely the same subjects as those in other schools, but they are expected to learn more independently. What’s more, an elective second foreign language is mandatory beginning in grade six or seven (mostly French, Spanish, or Latin, but also Russian, Chinese, or other languages if offered by the school). While there are no centralized graduation exams at the end of the lower-secondary stage, the certificate of completion of grade 10 is usually officially equivalent to completion of Realschule or “middle maturity.” The length of the upper-secondary stage is either two years (grades 11 and 12) or three years (grades 11 to 13), depending on the state (see the section on upper-secondary school reforms below). In 13-year systems, grade 11 is an introductory stage, followed by a two-year specialization or “qualification phase.” Twelve-year systems begin with the qualification phase but offer the same curriculum compressed into two years. In some states, the introductory stage is part of the grade 10 curriculum. In the qualification phase, students can typically choose elective subjects, which they study with greater intensity. These subjects are examined at the end of the program in centralized exams. The concrete combination of subjects, and the names given to them, varies by state: Some have two main subjects studied for five hours a week (Leistungsfächer) and two or three additional examination subjects (Prüfungsfächer). Others have five equally weighted core subjects (Kernfächer) studied for four hours a week.
Yet another variation involves mandatory core subjects (German, mathematics, foreign language) and profile subjects chosen from three different subject areas: arts and languages, science, and social sciences. Progression between grades is based on internal school assessment and generally requires examinations. Students who have a failing grade (ungenügend) in a compulsory subject must repeat the year. They can have two conditionally passing grades similar to the U.S. grade of D—the grade of mangelhaft—but must usually repeat the year if they earn these grades in three subjects. The grading scale in the upper-secondary stage at Gymnasiums is a 15-point scale that is different from the grading scale used at other stages and types of schools. Both scales are shown below:

- Standard 6-point scale (used at other stages and types of schools): 1 = sehr gut (very good), 2 = gut (good), 3 = befriedigend (satisfactory), 4 = ausreichend (sufficient), 5 = mangelhaft (deficient), 6 = ungenügend (insufficient)
- Upper-secondary 15-point scale: 15 to 13 points correspond to a grade of 1, 12 to 10 points to a 2, 9 to 7 points to a 3, 6 to 4 points to a 4, 3 to 1 points to a 5, and 0 points to a 6

To graduate, students must pass a rigorous written and oral final examination, which is overseen by the ministries of education of the states, almost all of which mandate standard content for one uniform examination taken by all students. To further standardize the exams, several states use the same questions in German, mathematics, English, and French. These questions are developed by the Institute for Educational Quality Improvement (IQB), a joint institution of the states responsible for monitoring the quality of German schools. The exam is called the Abitur—a name that derives from the Latin verb abire, which can be roughly translated as “to leave.” Students are usually examined in four or five concentration or core subjects. In some states, students may also contribute “special learning achievements,” such as a paper or project, toward their final grade average, which is calculated based on the Abitur exam grades and the regular class grades earned in the final four semesters. The overall grade average is expressed in a range of 1 to 4, with 4.0 being the minimum average required for graduation. Upon successful completion of the exam, students receive the Zeugnis der allgemeinen Hochschulreife (certificate of general university maturity), a credential that legally entitles graduates to study at a German university. Since higher education in Germany is also mostly free, this may sound like an egalitarian educational utopia. In reality, however, admission to universities can be highly competitive. The final Abitur grade determines how quickly students get admitted into popular programs that have a limited number of available seats (numerus clausus). In medicine, for example, students with lower grades had to wait seven years for admission, on average, as of 2019, because more than 40,000 students applied for 10,000 available seats. (See also the section on university admissions below.)

The Push for the “Turbo Abitur”: A Reform with Mixed Results

Germany has some of the oldest students in the OECD, partially because of the exceptionally long secondary education cycle in many parts of the country. Abitur programs in West Germany had traditionally been 13 years in length, while education in former East Germany lasted 12 years. However, three out of five East German states adopted a 13-year system after reunification, so that by 2000 most states had long programs. To align these systems with the 12-year paradigm found in most of the world, most German states between 2001 and 2009 began to shorten their Abitur programs by one year to enable students to enter universities and the workforce at a younger age. The drive was called the G8 reforms, referring to a 4+8 system, as opposed to the 13-year G9 system (4+9).
To preserve quality standards, the states pledged to maintain the old curricula, but to compress them in the new G8 “Turbo Abitur.” Yet these reforms soon ran into resistance in various states. The new programs were often more rigid and offered fewer elective subjects, and they required students to spend considerably more time in the classroom per week—changes that proved unpopular with many students and parents. Political opposition mounted, with critics lamenting the “lost childhood” of Germany’s students and a loss of educational quality in supposedly overloaded programs. While many education experts disagreed with these notions, the G8 reforms became a political issue and several states reversed course. A number of western states have since returned to the G9 across the board. Others now have hybrid systems that allow schools to choose between G8 and G9, while others kept the G8 structure. This has resulted in a rather chaotic patchwork of different systems in Germany. To provide an overview, the most common models in the 16 different states are shown below. In states that are reverting to G9, the G8 programs are gradually being phased out, with current students still being able to graduate under the old regulations. (Also note that some G9 states may allow gifted students to graduate after 12 years, but that is not the standard pattern.)

- Baden-Württemberg: Implemented G8 but reintroduced G9 at 44 model schools in 2012
- Bavaria: Decided to switch to G8 in 2004 but returned to G9 in 2018
- Brandenburg: G8 at gymnasiums, but students can attend 13-year programs at integrated schools
- Berlin: G8 at gymnasiums, but students can attend 13-year programs at integrated schools
- Bremen: G8 at gymnasiums, but students can attend G9 programs at other schools (Oberschulen)
- Hamburg: G8 at gymnasiums, but students can attend 13-year programs at integrated schools
- Hesse: Gymnasiums can choose between G8 and G9
- Lower Saxony: Returned to G9 in 2015 after initially implementing G8
- Mecklenburg-Vorpommern: G8 (switched from G8 to G9 after reunification, but has since reverted to G8)
- North Rhine-Westphalia: Reverted to G9 in 2019 after initially implementing G8—individual schools may be allowed to continue G8 programs upon special application
- Rhineland-Palatinate: Kept G9, but allows select schools to offer G8 as whole-day programs
- Saarland: G8 at gymnasiums, but students can attend 13-year programs at integrated schools
- Saxony: G8 (kept its 12-year system after reunification)
- Saxony-Anhalt: G8
- Schleswig-Holstein: Returned to G9 in 2019 after initially implementing G8
- Thuringia: G8—already had a 12-year system before the reforms; Abitur programs at “vocational gymnasiums” are 13 years in length

Vocational Education

Germany is known for its high-quality vocational education system, which has been emulated by several countries worldwide, partially because it’s considered effective in limiting youth unemployment: In 2020, Germany had the lowest youth unemployment rate in the OECD after Japan. The German system comprises a variety of different vocational programs at the upper-secondary level. Some of these are similar to programs in the university-preparatory track in that students receive full-time classroom instruction. However, the most common form of vocational education has a strong focus on practical training.
Depending on the state, between 79 percent and 97 percent of vocational students in 2018 learned in the so-called “dual system,” which combines theoretical classroom instruction with practical training in a real-life work environment. Overall, 47 percent of all upper-secondary students were enrolled in vocational programs in 2018—a high ratio by OECD standards. Students generally enter the dual system after lower-secondary education. The system is characterized by so-called “sandwich programs,” which means that students attend a vocational school on a part-time basis, either in coherent blocks of weeks, or for two or three days each week (at least 12 hours a week, depending on the state). The remainder of the students’ time is devoted to practical training at a workplace. Companies participating in these dual programs are obligated to provide training in accordance with national regulations, and to pay students a modest salary. German law does not stipulate formal academic entry requirements for dual-track programs, but companies can select applicants and set their own requirements. In practice, completion of Hauptschule is therefore often the minimum requirement for programs in crafts and trades, whereas the Realschule certificate or an equivalent Sekundarstufe I qualification is typically required for programs in white-collar vocations, such as business, banking, or hotel management. However, it’s possible to enter vocational programs without a formal academic qualification—more than 50,000 students entered programs in fields like sales or machine operations without having a Hauptschule certificate in 2017. On the other hand, a sizable number of Abitur graduates pursue vocational education as well. About two-thirds of the curricula at vocational schools consist of theoretical instruction in the chosen field, whereas the other third is made up of general education subjects, such as German, social studies, or English. Programs last two to three and a half years, depending on the specialization. Upon completion of the school component, students receive a certificate of completion of vocational school (Abschlusszeugnis der Berufsschule). Students typically also need to pass a final examination, which may test vocational competencies in addition to theoretical subjects. These exams are conducted by state examination bodies, or state-authorized industry associations like physicians’ associations, lawyers’ associations, Chambers of Crafts (Handwerkskammern), or Chambers of Industry and Commerce (Industrie- und Handelskammern, or IHK). There are 79 regional IHKs across Germany, which conduct examinations in about 250 vocations. The final credential awarded is called the IHK-Prüfungszeugnis (IHK examination certificate). Credentials awarded in the dual system are formal, government-recognized qualifications. In 2019, there were 325 officially recognized vocations, with titles that include carpenter, tax specialist, dental technician, and film and video editor. The most popular field of study among men in 2019 was automotive technology; most women studied office management. In some regulated vocations, such as allied health fields, an officially recognized qualification is required to work in the field. These occupations are typically regulated at the state level and aren’t part of the dual system. Another difference between the dual system and state-regulated programs is that the latter typically have formal academic admission requirements.
Programs in regulated fields such as social work are primarily school-based programs supplemented by internships. In terms of access to higher education, graduates in the vocational track are generally not eligible for admission into university programs, although they may sometimes be admitted based on special entrance examinations, or completion of a probationary study period, depending on the state. In recent years, regulations have generally been eased to allow more students in the vocational track to enter universities. It should also be noted that students in many vocational programs may concurrently earn a maturity certificate that provides access to a subset of HEIs—the Universities of Applied Sciences. This credential is called the Zeugnis der Fachhochschulreife (University of Applied Sciences Maturity Certificate). It can be earned at a variety of schools, including vocational schools and gymnasiums in some states. In the latter case, students who don’t meet all the requirements for the Abitur may opt for the Fachhochschulreife. However, these students must also complete a practical internship to earn the final qualification. Programs in the dual system satisfy the mandatory practical training requirement, but students may have to take additional courses in general subjects to meet the academic prerequisites. Another exit qualification that may be awarded in the vocational track is called the Subject-Specific Maturity Certificate (Zeugnis der fachgebundenen Hochschulreife). Earning this certificate requires less foreign language study. It offers access to Universities of Applied Sciences and a specific set of subjects at universities, such as social science subjects.

Continuing Vocational Education (Berufliche Fort- und Weiterbildungen)

Post-secondary vocational education in Germany is generally less standardized than the upper-secondary vocational programs. One traditional pathway leads to the qualification of “master craftsman” or “master craftswoman” (Meister or Meisterin) in fields like agriculture, engineering technology, or masonry, for instance. Master craftsmen or women can run their own businesses in regulated vocations and train apprentices (journeymen or journeywomen). While many students in this track attend vocational schools, they can also prepare on their own for final examinations that test theoretical knowledge and practical skills, as well as business subjects, law, and vocational pedagogy. The exams are conducted by Chambers of Crafts or IHKs. Preparatory programs may last between one and three years, with many candidates studying part-time while working. A comparable qualification in business-related fields is the Fachwirt (which can be roughly translated as business management specialist). Successful completion of the Meister or Fachwirt examination opens access to university programs in most states. In addition to these traditional programs, there is a multitude of other part-time education programs, which may be as short as three months or as long as four years, offered by a variety of providers and companies. Students may enroll in these programs to obtain advanced knowledge in their field, improve computer skills, or train in another field. The German government actively promotes further education and lifelong learning, particularly with regard to digital competencies. Retraining programs for unemployed individuals may be paid for by the state, but unlike secondary programs, post-secondary vocational education is usually not tuition free.
Bachelor Professional and Master Professional: Controversial New Qualifications

In January 2020, Germany enacted legislation to further standardize vocational education by introducing new qualification titles. Graduates of initial upper-secondary vocational programs are now categorized under the umbrella term Geprüfter Berufsspezialist (examined vocational specialist), whereas holders of a Meister or Fachwirt title can now also be awarded the degree of “Bachelor Professional.” In addition, holders of some higher-level qualifications like the Geprüfter Betriebswirt (examined business administrator), a title earned upon completion of a post-Meister program, may concurrently be awarded the degree of “Master Professional.” The new titles are pegged at the same levels as academic bachelor and master degrees awarded by universities in the German qualifications framework. The goal of these reforms is the strengthening of vocational education. Demographic trends and the increased popularity of university education have contributed to a growing skilled worker shortage in Germany. The federal government has therefore argued that the new qualifications—and their placement at the bachelor and master levels—will make the vocational track more attractive to young Germans, as well as enhance the competitiveness of vocational qualifications outside of Germany. However, these reforms have been sharply criticized, particularly by universities and organizations like the German Association of Engineers. The German Rectors’ Conference denounced the new degree names as confusing terms that obscure the distinctions between academic and practically oriented vocational education, both of which require different sets of competencies. The president of the conference argued that the new names give the false impression that vocational programs are of an academic nature, particularly in other countries, where the degrees of bachelor and master are predominantly reserved for university qualifications. Indeed, the new degree names are likely to confuse some international credential evaluators. It’s the position of World Education Services that the vocational qualifications of Bachelor Professional and Master Professional are not directly comparable with academic degrees awarded by universities in the U.S. and Canadian contexts.

International Schools and Other Special Types of Schools in Germany

International schools are not as prevalent in Germany as in some other countries, and factors like the closure of schools catering to the children of U.S. military personnel due to troop pullouts have affected this sector. According to some tallies, Germany in 2009 had the 7th highest number of international schools worldwide, but was only 19th by this measure in 2019, although most of this shift is owed to the rapid growth of international schools in countries like China, India, and the United Arab Emirates. In 2019, there were reportedly 177 international schools in Germany, teaching English-language curricula, such as International Baccalaureate (IB), British, or U.S. programs, to some 95,000 students, about 75 percent of them expatriate children and a quarter of them German. Most of these schools are expensive private schools with a comparatively small student body. However, there are also some public schools that offer IB programs in addition to regular German programs, enabling students to earn an IB Diploma free of charge. There were 85 IB schools in Germany in 2020.
The IB is officially recognized as a university entrance qualification in Germany, as long as students study a certain combination of subjects. Another type of international school program offered in Germany is the European Baccalaureate, a multilingual program offered by the European Schools that is recognized as a university entrance qualification in all EU member states. There are also several French or bilingual French-German schools that are formally recognized.

Waldorf and Montessori Schools

Germany has the highest number of Waldorf schools in the world (more than 250). Also called Rudolf Steiner schools after the founder of the Waldorf education model, these schools are independent private institutions. They follow a less structured and more holistic pedagogical approach that places greater emphasis on practical and artistic learning than public schools do. While these schools are not supervised by government authorities, they are recognized by the state as special schools. They teach their own curricula, but simultaneously prepare students for official Sekundarstufe I qualifications or the Abitur. However, depending on the state, students must sit for external governmental examinations. In the case of the Abitur, external examinations are required for graduation for students in Waldorf schools in almost all states. Montessori institutions are another type of independent private school in Germany. There are about 1,000 of them, most of them early childhood education institutions, but there are also various Montessori schools at the secondary level. These schools are officially allowed to operate, but students need to sit for graduation examinations at public schools to obtain an official German qualification. There are also schools that train Montessori teachers. These institutions typically offer shorter diploma courses in conjunction with an official German teaching qualification.

Higher Education

Until the 1960s, university education in Germany was a privilege of small upper-class segments of society; women were not allowed to matriculate at universities at all until the late 19th century. The two World Wars and purges at universities during the regime of the National Socialists (1933 to 1945) were detrimental to the development and expansion of the German higher education system. After World War II, factors like swift economic development, rising incomes, the abolishment of tuition fees, and the introduction of federal financial aid eventually led to a rapid expansion of university education. Between 1947 and 2010, the number of tertiary students jumped from just 80,644 to 2.2 million. Over the past decade, the number of students grew by an additional 32 percent, to 2.9 million in the 2019/20 academic year. It should be noted that these numbers include students from the German Democratic Republic—about 284,000 in 1989—that were incorporated after reunification, as well as the growing number of international students. Despite this marked increase in enrollments, however, tertiary education participation rates in Germany are extraordinarily low for an industrialized country. Only 33 percent of the country’s 25- to 34-year-olds had attained tertiary education qualifications in 2019, compared with 49 percent in the Netherlands, 50 percent in the U.S., 52 percent in the U.K., and 70 percent in South Korea.
This low ratio is attributable, at least in part, to Germany’s long-standing separation between academic and vocational education, with the latter absorbing many students who in other countries might pursue tertiary education. Structural differences in labor market access in certain fields also play a role: Graduates from secondary-level German vocational programs—such as nursing, for instance—can legally work as entry-level professionals. In other countries, employment in these fields typically requires a tertiary degree. Finally, it should be noted that participation in tertiary education in Germany remains socially imbalanced in general, despite the fact that public institutions charge no tuition and that universities reserve admissions quotas for students from low-income households. Consider that households with at least one parent holding a tertiary degree made up only 28 percent of the German population in 2016, but that children from these households constituted no less than 53 percent of university students. By contrast, only 30 percent of children from households where at least one parent had completed vocational education—53 percent of the population—attended university. Another related issue in German tertiary education is that some 30 percent of students—particularly those from low-income households—do not complete their programs of study, owing to factors like a lack of academic preparation or motivation, and funding problems. It doesn’t help that classrooms are often overcrowded, leading to a deterioration of teaching quality. The rapid growth of university enrollments has financially overburdened German universities, the overwhelming majority of which are government funded and don’t charge tuition fees. Changes to this funding structure have been debated for some time. The OECD in 2016 went as far as calling the German model unsustainable, but solutions, especially those focused on tuition-based funding models, have been elusive. The introduction of tuition fees by seven states in the 2000s turned into one of the more controversial topics in recent German higher education politics. Although the so-called Uni-Maut levied by public universities—€500 (US$591) per semester on average—was modest by international standards, intense political opposition quickly led to the abolition of fees in all states by 2014.

Types of Higher Education Institutions (Hochschulen)

The German higher education system has not only grown over the past decades, it has also diversified. The most important change was the introduction of the Universities of Applied Sciences (Fachhochschulen) alongside the traditional research universities in the late 1960s. In total, there are currently 424 university-level institutions, in addition to a number of HEIs that are not classified as tertiary institutions, such as Berufsakademien, sometimes referred to as Universities of Cooperative Education. The latter are more vocationally oriented institutions than Hochschulen. Universities are mostly large multi-disciplinary institutions that focus on basic research and offer the full range of academic programs, from bachelor degrees to doctorates. However, several universities, especially smaller private institutions, are more narrowly specialized in specific disciplines, such as technical fields, business, or psychology. There were 107 institutions classified as universities in Germany in 2019/20.
In addition, there were 52 universities of arts (Kunsthochschulen) offering programs in artistic fields like music, fine arts, or theater, as well as six pedagogical universities and 16 theological universities that focus on religious education but may also offer programs in disciplines like philosophy, social work, or nursing. The university with the largest enrollment is the FernUniversität in Hagen, a public distance education provider with about 75,000 students who learn at various regional study centers across Germany, as well as in Austria, Hungary, and Switzerland. Other large public universities include the University of Cologne with 54,000 students, the University of Munich (49,000 students), and the Technical University of Aachen (45,900 students). Overall, more than 60 percent of German students attend universities. Universities of Applied Sciences (Fachhochschulen, or FHs) are a group of 213 institutions that offer programs in a limited range of subjects, such as engineering, business, or computer science. Their programs are more practically oriented, with curricula that focus on applied research and usually include industrial internships. FHs are generally not allowed to award doctoral degrees, although there have been a few exceptions to this limitation in recent years. A further distinction lies in the admission requirements: Whereas the Abitur is required for unqualified access to universities, programs at FHs can be entered with a University of Applied Sciences Maturity Certificate earned in the vocational track (Fachhochschulreife). A special type of FH is the Verwaltungsfachhochschule (university of applied sciences in public administration), which trains civil servants for state and federal government. There are 30 of these schools, offering education in general administration as well as in specific fields like policing, taxation, or public finance. Private higher education in Germany has been growing rapidly in the recent past but remains relatively insignificant in a system dominated by public providers. There are presently 117 private HEIs in Germany, including 25 universities and 91 FHs, the vast majority of them founded since the beginning of this century. However, these institutions only enrolled 246,739 students, or 8.6 percent of all tertiary students, in the 2018/19 academic year. Private HEIs tend to be smaller institutions focusing on business and technical majors, as well as professional fields like medicine. Many of the theological universities are private as well. Except for some institutions like the FOM University of Applied Sciences, Germany’s largest private HEI with 55,000 students, the vast majority of these private universities enroll fewer than 2,000 students. Depending on the institution, it’s sometimes easier to get admitted into private universities than into public ones, but the costs of study are considerable, if lower than in countries like the United States. Private universities charge tuition fees that range from €2,000 (US$2,362) to €20,000 (US$23,620) per year, but fees may in rare cases be as high as €43,000 for select master programs. Despite these steep costs, private HEIs have become an increasingly popular alternative to the more crowded and comparatively underfunded public universities. Students at these institutions tend to complete their studies faster and drop out at lower rates than public university students.
Another draw is that many private universities offer flexible part-time programs that appeal to working adults.

Quality Assurance and Accreditation

Germany's HEIs are recognized and regulated by the ministries of education of the states. To become "state-recognized" and have the same standing as public HEIs, private institutions must also be accredited by the Science Council, an advisory body to the federal and state governments. While Science Council accreditation is voluntary, private institutions without state recognition are not allowed to call themselves Hochschulen or issue formal academic qualifications. Accreditation is granted for three to ten years based on the evaluation of an institution's teaching facilities and staff, quality assurance mechanisms, finances, and mission statement.

Quality assurance mechanisms in Germany have undergone significant changes since the introduction of the Bologna reforms at the end of the 20th century. The German states early on implemented a system of program accreditation for the new bachelor and master programs by external non-governmental accreditation agencies, a key concept of the reforms. However, Germany's federal constitutional court in 2016 ruled it unconstitutional to transfer quality assurance functions to private organizations. Because of this ruling, accreditation is now granted directly by the Accreditation Council, a public institution of the states and the federal government. Under the current system, codified in a 2017 treaty, independent accreditation agencies still evaluate academic institutions and programs, but the Accreditation Council renders the final accreditation decision as an administrative act. There are 10 accreditation agencies authorized by the Accreditation Council to operate in Germany. Note that agencies from other countries that are registered in the European Quality Assurance Register may also be allowed to evaluate institutions and programs in Germany; two of the agencies authorized by the Accreditation Council are headquartered in Austria and Switzerland.

The accreditation of bachelor and master degree programs is generally mandatory, while state-examined programs in professional disciplines are exempted from this requirement. Accreditation is granted for a period of eight years, at the end of which institutions need to apply for re-accreditation. The process is based on the review of self-assessments and on-site inspections by a panel of evaluators, consisting of professors, professionals in the field, and one student representative, who assess the concept, structure, and curricula of programs. However, it should be noted that the deadlines for universities to obtain accreditation of their programs vary by state, and that not all programs have been accredited thus far. Because program accreditation can be burdensome and expensive for larger universities, growing numbers of institutions apply for "system accreditation," an alternative option that allows HEIs to forgo the external review of each individual program by creating internal, institution-wide quality assurance mechanisms that satisfy the requirements of the accreditation agency. As of 2020, 94 HEIs had obtained system accreditation, a considerable increase over previous years. Another option for institutions is to apply for partial system accreditation, that is, the accreditation of several programs within the same discipline (a process also referred to as "bundled accreditation").
A database of accredited programs and institutions is available on the website of the Accreditation Council.

The Excellence Initiative/Excellence Strategy

In 2005, Germany launched the Excellence Initiative, a well-funded federal project to nurture a group of top-tier, globally competitive research universities. To foster competition between HEIs, institutions were financially incentivized to develop "future concepts" for research, research-oriented graduate schools, and "excellence clusters" (regional research networks). Universities that performed best in these categories were then classified as "universities of excellence" and received special funding. Critics contended that the project divided German universities into winners and losers and shifted funding priorities disproportionately toward research, thereby harming higher education in the country at large. The OECD noted in 2019 that while Germany is among the top spenders on research and development within the organization, spending per tertiary student is below the OECD average and has stagnated amid increased enrollments. Despite these criticisms, the German government considered the initiative a success and continues the project, with some modifications, under the name "Excellence Strategy." Proponents point to factors like increased research output, an uptick in independent private research funding for top institutions, and the growing attractiveness of these institutions to foreign researchers.

German Universities in International Rankings

Although German HEIs trail universities in countries like the U.S. and the U.K. in international university rankings, they are consistently well represented in the most common rankings. For instance, seven German universities are among the top 100 in the most recent 2021 Times Higher Education (THE) global ranking (compared with 11 British, seven Dutch, and five French institutions). The ranked German institutions are all public universities, a plurality of them universities of excellence. The three institutions rated highest by THE are the University of Munich (ranked 32nd), the Technical University of Munich (41st), and the University of Heidelberg (42nd). The same universities are ranked highest in both the latest QS and Shanghai rankings, which feature three and four German universities among the top 100, respectively. Unsurprisingly, German universities also perform strongly in subject-specific rankings in fields like mechanical engineering, such as the EU-funded U-Multirank project.

Admission into public universities in Germany is generally based on the final Abitur grade, which determines how quickly students get admitted into their program of choice. Although all Abitur holders are eligible for admission, those with lower grades must often wait longer to enter. Universities consider the number of semesters that have passed since applicants graduated from upper-secondary school, with each semester in waiting increasing the chances of admission. In addition to students who meet the minimum grade threshold in a given academic year, a certain number of students are admitted based on waiting periods, whose length varies by field of study. While programs with enough seats admit students immediately, applicants in popular fields like medicine or law may have to wait for several years.
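To make the grade-plus-waiting-period mechanism concrete, here is a toy sketch in Python. It is a simplification for illustration only: the 50/50 quota split, the function names, and the sample applicants are invented, and real German admission procedures vary considerably by program and state.

```python
# Toy sketch of the admission logic described above: part of the seats go
# to applicants with the best Abitur grades (lower is better), the rest to
# those who have waited longest. Real procedures vary by program and state.

def admit(applicants, seats, grade_quota=0.5):
    """applicants: list of (name, abitur_grade, waiting_semesters) tuples."""
    by_grade = sorted(applicants, key=lambda a: a[1])   # best grade first
    by_wait = sorted(applicants, key=lambda a: -a[2])   # longest wait first
    admitted, n_grade = [], int(seats * grade_quota)
    for pool, quota in ((by_grade, n_grade), (by_wait, seats - n_grade)):
        for a in pool:
            if quota and a not in admitted:
                admitted.append(a)
                quota -= 1
    return admitted

pool = [("A", 1.2, 0), ("B", 2.8, 6), ("C", 1.9, 1), ("D", 3.1, 9)]
print([a[0] for a in admit(pool, 2)])  # ['A', 'D'] -> one by grade, one by wait
```

The point of the sketch is only the two admission pools: one ranked by final Abitur grade and one ranked by semesters waited, which is why an applicant with a weak grade but a long wait can still be admitted.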
Additional entrance requirements are relatively uncommon for students with the Abitur, but some programs also require admissions tests or demonstrated foreign language skills. It should be noted that while the Abitur or a subject-specific maturity certificate are the most common entrance qualifications, applicants with a vocational Meister or Fachwirt qualification are eligible for admission as well. Depending on the state, applicants who completed an upper-secondary vocational program and worked for a few years after graduation may also be admitted, though usually contingent upon special entrance examinations or completion of a probationary period. To boost tertiary enrollments, most German states have eased admissions restrictions in recent years; a record 65,000 students without the Abitur were enrolled in universities in 2018. Admission requirements at private universities are often less strictly tied to the final Abitur grade and may place greater emphasis on entrance exams, interviews, and other criteria, although this varies by institution. Universities of Applied Sciences have lower admission requirements than universities and admit students with the Zeugnis der Fachhochschulreife. In a few states, this certificate can also provide access to regular universities.

Admission requirements for international undergraduate students are fairly stringent in Germany. Applicants from non-EU countries who did not complete any post-secondary study in their home countries are often required to complete a one-year preparatory program (Studienkolleg), at the end of which they must pass an equivalency examination (Feststellungsprüfung). Admission into these prep programs requires adequate German language skills and may involve entrance examinations. Even if a prep program is not required, students from all non-German-speaking countries must pass a German language test, such as the TestDaF, unless they seek entry into English-taught programs. The specific admission requirements for 130 countries can be found in a database maintained by the DAAD.

The Tertiary Degree Structure

As in several other European countries, the Bologna Process brought major changes to the German higher education system. Before the reforms, the standard courses of study at German universities were long single-tier programs with a nominal duration of 9 or 10 semesters, although it often took students much longer to graduate. These integrated programs led to the qualification of Diplom, awarded in the sciences, engineering, business, and some social science fields, or the Magister Artium, awarded mostly in the humanities. These credentials can be classified as graduate-level qualifications and provided access to doctoral programs. Universities of Applied Sciences, on the other hand, offered shorter four-year programs leading to the Diplom (FH), which usually did not allow for progression to doctoral studies. After the introduction of the reforms in 1999, almost all Diplom and Magister programs were successively split into undergraduate and graduate cycles and replaced by the new bachelor and master programs. Professional disciplines like medicine and law remain an exception to this structure: Whereas some countries with similar systems, like the Netherlands, switched to the two-cycle structure across the board, Germany maintained long single-tier programs in most professions.
That said, the vast majority of German students are now enrolled in bachelor and master programs, whereas other programs, including state-examined professional programs and non-Bologna-compliant programs in artistic fields, make up a comparatively small share: In 2018, 49.6 percent of students were enrolled in bachelor programs, 28.3 percent in master programs, 5.6 percent in doctoral programs, and 16.5 percent in other types of programs. Short-cycle tertiary programs below the bachelor's level are very uncommon in Germany; less than 1 percent of students enroll in these types of programs, compared with an average of 17 percent in other OECD countries.

Credit System and Grading Scale

Before the Bologna reforms, universities did not use credit systems but quantified course and program requirements in weekly hours per semester (Semesterwochenstunden), with Diplom or Magister programs typically requiring a total of 140 to 170 semester hours to graduate. Today, institutions use the European ECTS credit system, which defines one year of full-time study as 60 credit units, with one credit representing 25 to 30 hours of study. A three-year bachelor program thus requires 180 ECTS credits. In terms of assessment, German universities continue to use the traditional German grading scale. While HEIs are technically required to use the ECTS grading scale alongside the German scale, it isn't commonly used. Given that the ECTS scale is a relational, rank-based scale that measures how well students perform in comparison with other students, absolute German grades cannot be directly converted into ECTS grades. While some institutions list ECTS grades in addition to German grades on their transcripts, the ECTS ranking is mostly limited to final degree examinations, if it is used at all (see the sample document issued by the University of Duisburg-Essen linked at the end of this article). The grading scale is largely consistent across public universities, although private universities and some programs, such as law programs, use alternative scales. It ranges from 1 to 5 and differs from most numerical grading scales in that the lowest number represents the highest grade. At most institutions, a final grade average of 4.0 is required for graduation, but some universities may graduate students with a final grade of 4.3. Of note, there has been a trend toward grade inflation in recent years: Between 2000 and 2011 alone, the number of good and very good grades awarded by German universities in final graduation exams increased by 9 percent, although there are significant variations in grade distributions between academic disciplines.

Bachelor programs are offered by both universities and FHs and are either three years (180 ECTS credits), three and a half years (210 ECTS), or four years (240 ECTS) in length. The curricula are specialized within the major; there are usually no general education subjects or minor specializations as found in the United States. The programs are divided into subject modules, each comprising several related courses. At the end of the program, students write a thesis, typically worth 6 to 12 ECTS credits. A study abroad period or industry internship may be required, depending on the program. The degree names that have been approved by German authorities are Bachelor of Arts, Bachelor of Science, Bachelor of Engineering, Bachelor of Laws, Bachelor of Fine Arts, Bachelor of Music, and Bachelor of Education.
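A small illustrative sketch of the ECTS credit arithmetic and the inverted German grading scale described above. The hour range and the grade bands below are approximations drawn from the description, not official conversion rules.

```python
# Illustrative sketch: ECTS credits to study hours, and the inverted
# German 1-5 grading scale (1 is best). Bands are approximate, not official.

def ects_to_hours(credits: int) -> tuple[int, int]:
    """Return the (minimum, maximum) study hours implied by ECTS credits,
    using the 25-30 hours per credit range described above."""
    return credits * 25, credits * 30

def describe_german_grade(grade: float) -> str:
    """Map a German grade to a rough descriptor; lower numbers are better."""
    if grade <= 1.5:
        return "very good (sehr gut)"
    elif grade <= 2.5:
        return "good (gut)"
    elif grade <= 3.5:
        return "satisfactory (befriedigend)"
    elif grade <= 4.0:
        return "sufficient (ausreichend), the usual passing threshold"
    else:
        return "insufficient (nicht ausreichend), a failing grade"

low, high = ects_to_hours(180)  # a three-year bachelor program
print(f"180 ECTS implies roughly {low:,} to {high:,} hours of study")
print(describe_german_grade(1.3))  # very good (sehr gut)
```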
Master degrees have the same names as bachelor degrees (Master of Arts, Master of Science, Master of Engineering, and so on). The length of the programs varies between one year (60 ECTS), one and a half years (90 ECTS), and two years (120 ECTS), but a combined credit load of at least 300 ECTS across both cycles is required in the case of consecutive programs, in which the master program builds directly on the bachelor program. Admission generally requires a bachelor degree in a related discipline with sufficiently high grades, but students with bachelor degrees in unrelated disciplines may sometimes be admitted via entrance examinations. Some programs may also require work experience for admission. Like bachelor programs, master programs are offered by both universities and FHs. They are modularized and conclude with a thesis typically worth 15 to 30 ECTS.

Doctoral degrees are almost exclusively awarded by universities and research institutes; FHs are allowed to offer these programs only in very rare exceptional cases. There are two types of doctoral programs in Germany: "individual programs" and "structured programs." Traditionally, all doctoral programs were pure research programs without coursework, attendance requirements, or hard deadlines. Candidates in these individual programs, which still predominate, independently prepare a dissertation under the supervision of a dissertation advisor (Doktorvater or Doktormutter), usually a full professor or other senior researcher. The degrees awarded upon the defense of the dissertation have different Latin names, such as Doktor Rerum Naturalium (Doctor of Natural Sciences) or Doktor Rerum Politicarum (Doctor of State Sciences). The Bologna reforms, on the other hand, call for structured third-cycle programs and have led to the introduction of more "schoolified" doctoral programs in Germany in recent years. These structured programs most commonly require at least one year of compulsory coursework and interim assessments in addition to dissertation research. Most have a set length of three or four years (180 to 240 ECTS) and are taught in English, which makes them more accessible to international students. Admission requires a master's degree or equivalent qualification (Diplom, Magister, state exam), although exceptionally qualified candidates who hold a bachelor's degree may occasionally be admitted as well. Most structured programs lead to the Doctor of Philosophy degree, or PhD (Philosophiae Doctor). While current comprehensive data on these programs are unavailable, they are clearly growing in popularity in Germany; by some estimates, 23 percent of doctoral candidates were enrolled in structured programs in 2015.

The Habilitation is the highest academic award in Germany and is usually required to become a full-fledged university professor. It is a distinctive qualification that exists in only a few other European and Latin American countries. It involves the defense of an independently prepared postdoctoral dissertation or other major academic work, or a collection of published articles, which demonstrates advanced scholarship beyond the doctoral degree. But while the Habilitation is still the most common pathway to professorship in Germany, it is viewed as an outdated model by a growing number of academics and policymakers and is increasingly being replaced with a U.S.-patterned junior professorship and tenure-track system.
As noted before, programs in licensed professions like medicine, dentistry, veterinary medicine, and law are long, single-tier programs entered after upper-secondary school. These programs are taught at universities but conclude with government-administered examinations: Instead of earning an academic degree, graduates earn a government-issued certificate of completion of the state examination. Medical programs, for instance, conclude with the award of the Certificate of Physician Examination (Zeugnis der Ärztlichen Prüfung). Splitting the medical program into bachelor and master cycles is presently not deemed feasible in Germany because of concerns about educational quality and questions regarding the employability of graduates with a first-cycle Bachelor of Medicine degree.

Medical programs are mostly taught at the medical faculties of larger universities. They last six years, divided into two years of pre-clinical studies in basic sciences and four years of clinical studies, including a one-year rotating internship at a teaching hospital during the final year. Students must sit for three state examinations at different stages of the program; passing the final one allows graduates to apply for licensure as a physician. Postgraduate education in medical specialties requires another four to seven years of clinical training, depending on the specialty. Entry-to-practice programs in dentistry and veterinary medicine last five and five and a half years, respectively, but are generally structured similarly.

Law education is divided into two stages: an initial university program with a nominal length of five years that culminates in the first state exam in law, followed by a two-year clerkship that is accompanied by theoretical seminars and concludes with the second state exam in law (Zweite Juristische Staatsprüfung). Bachelor of Laws and Master of Laws degrees are also awarded, but these qualifications are geared more toward business law or other specific fields and do not grant full access to the profession.

Teacher education in Germany has traditionally been organized like education in other professions: Students attended long, single-tier university programs that combined studies in teaching subjects (typically two subjects) with pedagogical courses and a short teaching internship. These programs prepared students for teaching at specific levels of education (elementary, lower-secondary, upper-secondary, or vocational) and concluded with the first state examination for teachers (Erste Staatsprüfung für das Lehramt) conducted by the individual states. Programs for upper-secondary school teachers lasted longest (usually nine or ten semesters). This course of study was then followed by a more comprehensive in-service teaching internship, supplemented by methodology seminars, over a period of about two years, the so-called preparatory service (Vorbereitungsdienst), which concluded with a second state exam. Graduates are entitled to teach a specific combination of subjects, but they can also become qualified in additional subjects by completing further studies and sitting for supplementary examinations (Zusatzprüfung or Ergänzungsprüfung). Most states, however, have now switched to the Bologna structure, splitting the first stage of education into a three-year Bachelor of Education program followed by a two-year Master of Education program.
The most common model is a 300 ECTS bachelor and master combination for teachers at all levels, followed by an 18-month preparatory service (see an overview of the requirements in the different states here). That said, aspiring teachers still need to sit for state examinations, and some states have kept the old structure altogether. Despite these differences, graduates can work as teachers in all states; a formal recognition agreement to ensure mobility between jurisdictions was signed in 2013.

Germany suffers from a shortage of teachers, particularly in the eastern part of the country. While there is currently an ample supply of upper-secondary teachers, especially in western states, the country will face a shortfall of between 10,000 and 26,000 elementary teachers by 2025, depending on the estimate. Growing nationwide teacher shortages are also expected in lower-secondary education, as well as in vocational schools, where some estimates forecast a gap of 60,000 teachers by 2030. The shortages are driven by a variety of factors, including a rising number of pupils due to immigration and rising birth rates, as well as declining enrollments in teacher training programs, a trend that makes it difficult to replace retiring teachers in adequate numbers despite Germany having some of the highest teacher salaries in the OECD.

WES Documentation Requirements

Secondary Education
- Final graduation certificate – sent directly by the institution attended
- English language translations of all documents not issued in English

Vocational Education
- Official academic transcript (Jahreszeugnisse) – sent directly by the vocational school attended
- Final examination certificate (IHK Prüfungszeugnis) – sent directly by the examining authority
- English language translations of all documents not issued in English

Higher Education (Bachelor, Master, Doktor)
- Degree certificate – submitted by the applicant
- Academic transcript – sent by the institution attended
- For completed doctoral programs, a written statement indicating degree conferral – sent by the institution attended
- English language translations of all documents not issued in English

Higher Education (Diplom, Magister, Diplom FH, Staatsexamen)
- Degree certificate – submitted by the applicant
- Academic transcript – sent by the institution attended
- Final and intermediate examination certificates (Diplomprüfungszeugnis, Hauptprüfungszeugnis, Vordiplom, Zwischenprüfungszeugnis) – sent by the institution attended
- For state-examined programs, all examination certificates – sent by the examining authority
- English language translations of all documents not issued in English

Click here for a PDF file of the academic documents referred to below.
- Abschlusszeugnis der Realschule (Certificate of Completion of Realschule)
- Zeugnis der allgemeinen Hochschulreife (Certificate of General University Maturity)
- Prüfungszeugnis, IHK (IHK Examination Certificate)
- Abschlusszeugnis der Berufsschule (Certificate of Completion of Vocational School)
- Bachelor of Arts
- Zeugnis der Ärztlichen Prüfung (Certificate of Physician Examination)
- Master of Science
- Doctor of Philosophy

In some states, a Realschule qualification that meets the requirements for further education is called an erweitertes (extended) or qualifizierendes (qualifying) Realschule certificate.
Note that Rhineland-Palatinate has had a special system since 1975, the Mainzer Studienstufe, in which schooling lasts 12.7 years so that students can sit for the Abitur exams a few months early, facilitating seamless university admission.

The views and opinions expressed in this article are those of the author(s) and do not necessarily reflect the official policy or position of World Education Services (WES).
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9433177709579468, "language": "en", "url": "https://wsb.wisc.edu/news/press-releases/2017/07/31/new-research-offers-better-approach-to-help-companies-complete-clean-air-clean-water-projects", "token_count": 728, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1884765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6dbf818d-4df7-40af-9fc9-4bf8bcee24a9>" }
Better coordination between government regulatory agencies and free Technical Assistance Programs can successfully encourage companies to make environmental improvements

Many companies know that undertaking environmental improvements can save money, and there are numerous state and regional technical assistance programs (TAPs), such as the Wisconsin Manufacturing Extension Partnership, that offer expert assistance and support. But only 30-40 percent of TAP ideas get implemented. Doubling that rate would result in significant air and water quality improvements, reduced landfill use and energy consumption, and cost savings, all without new regulations or increased penalties for non-compliant businesses. So why doesn't this happen? New research from the Wisconsin School of Business at the University of Wisconsin–Madison reveals that better coordination and improved timing of punitive measures can raise implementation rates. Enno Siemsen, Wisconsin School of Business professor of operations and information management, along with Suvrat S. Dhanorkar of Pennsylvania State University and Kevin W. Linderman of the University of Minnesota, says the key is securing managerial attention to make sure environmental improvements remain a priority. "If a company is hit with environmental sanctions by a regulator and then TAP comes in and provides a roadmap for fixing the problem, managers are going to pay attention because they can make this fix a priority and show cost savings and be a good corporate citizen," says Siemsen. "But if TAP comes in with recommendations first and regulators step in months later with sanctions that may cover a range of areas not related to the TAP project, you don't get the same focused attention." Siemsen adds, "At a time when the U.S. has pulled out of the Paris climate accords and federal regulators such as the EPA are prioritizing cutting red tape, we are going to see more environmental initiatives coming from cities and states. Our findings give those local regulators insights into how they can effectively support companies seeking to implement environmental improvements." The study reviewed the activities of two state-level environmental assistance agencies in Minnesota: the Minnesota Technical Assistance Program (MTAP) and the Minnesota Pollution Control Agency (MPCA). Siemsen and his colleagues looked at more than 1,000 projects receiving support from the agencies across 200 facilities that also received periodic punitive fines or disciplinary actions from the Environmental Protection Agency (EPA) or other regulators. Their findings indicated that the timing and order of punitive tactics mattered for project completion: Instances where a punitive regulatory action was followed by project recommendations from a TAP helped secure the managerial attention necessary to ensure completion of the environmental improvements. While TAPs and regulatory agencies communicate, there is not always formal coordination between them, and Siemsen says that simply aligning their activities would lead to better outcomes in terms of implementing desired changes. He notes that firms that took part in "Touchbase Tuesdays" with their MTAP contacts (check-ins in which an MTAP employee followed up with the company's manager on the progress of improvements) saw increases of 10 to 20 percent in their project completion rates. "With multiple projects in the pipeline at a company at any given time, managerial attention can be in short supply," says Siemsen.
“With better coordination, TAPs and regulators can identify problems, provide meaningful solutions, and encourage companies to take steps that will save them money and enhance the environmental sustainability of their operations.” The paper, “Promoting Change from the Outside: Directing Managerial Attention in the Implementation of Environmental Improvements”, was published in Management Science.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9311855435371399, "language": "en", "url": "https://www.accordrealestategroup.com/brooklyn-blog/energy-waste-commercial-buildings", "token_count": 721, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0615234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:615302fa-e04e-4e72-a9f0-a4c8d910b354>" }
Office buildings account for 17% of the energy used by all U.S. commercial buildings, and they waste the most energy. On average, 30% of the energy used in commercial buildings, worth $28M per hour, is wasted, according to the U.S. Environmental Protection Agency. Here are the facts: 20% of U.S. energy use goes toward powering commercial buildings, yet only 15% of U.S. commercial buildings have building automation systems that control lights or heat and cool rooms. To raise awareness of the importance of energy conservation in commercial buildings, the Building Owners and Managers Association (BOMA) International released its top 10 ways for building owners and managers to reduce energy consumption. These no- or low-cost strategies could reduce energy consumption by as much as 30 percent. Some of these energy-saving strategies are:
- Properly insulate hot-air ducts and heating and cooling ventilation systems, which are vital components of all commercial buildings.
- Check that equipment is functioning as designed; for example, a loose fan belt requires more energy to run a fan than a properly adjusted belt, according to Sam Schnell, consulting engineer with Sesco Inc.
- Coordinate efforts: janitorial and security crews can walk through the building together to turn off equipment left on by tenants.
- Keep tenants informed about energy-saving goals.
- Install power management software for computer monitors, central processing units, and hard drives; wasted electricity costs U.S. companies $1B a year.
- Switch lighting from incandescent to fluorescent lights that use less energy, especially since retrofitting new lights may be tax deductible.
- Adjust building operation hours to reflect actual tenant usage.
- Adjust ventilation in unoccupied and low-density areas.

The Building Technologies Office (BTO), an office of the U.S. Department of Energy, works to develop strategies and technologies to reduce commercial buildings' energy consumption. Unfortunately, these strategies are underutilized by the market. BTO is targeting a 20% reduction in commercial building energy use by 2020 and reaches out to building owners, builders, engineers, architects, contractors, manufacturers, and others to implement energy-saving strategies. The potential to reduce energy consumption in new and existing buildings is enormous; Jennifer Hermes writes that recovering wasted energy expense is a $750B opportunity. The U.S. Department of Energy estimates that the $60B of annual waste, capitalized at an 8% rate, represents $750B of lost asset value (see the sketch below): increased NOI translates into increased asset value. The message from the DOE is that when owners take charge by setting goals, energy cost savings happen. Owners should conduct a hands-on assessment using benchmark data from Energy Star or Zero Touch, and to achieve the best return on investment they should compare a full array of alternative solutions. Another suggestion is to use cloud-based software to prepare simple, professional recommendations; the U.S. DOE software guide provides a detailed description of new cloud-based software that makes reducing commercial energy bills simple.
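The capitalization arithmetic behind the DOE's $750B figure is simple enough to sketch. The snippet below restates the article's numbers; the helper function and the single-building example are illustrative assumptions, and actual cap rates vary by market and property type.

```python
# Sketch of the capitalization arithmetic cited above: recurring savings
# (added net operating income) divided by a capitalization rate gives added
# asset value. The 8% rate and $60B figure restate the article's example.

def value_from_noi(annual_noi: float, cap_rate: float) -> float:
    """Capitalize an annual net operating income stream into asset value."""
    return annual_noi / cap_rate

# DOE example: $60B of annual wasted energy expense at an 8% cap rate
print(f"${value_from_noi(60e9, 0.08) / 1e9:.0f}B of lost asset value")  # $750B

# A hypothetical single building: cutting $50,000/year of energy waste
print(f"${value_from_noi(50_000, 0.08):,.0f} of added building value")  # $625,000
```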
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9657490253448486, "language": "en", "url": "https://www.investopedia.com/terms/short-termloss.asp", "token_count": 615, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.04931640625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c2c113c3-9edd-477f-9022-fcc4f1458dbb>" }
What Is a Short-Term Loss?

A short-term loss is realized when an asset held for one year or less is sold at a loss. A short-term unrealized loss describes a position currently held at a net loss to the purchase price but not yet closed out (within the one-year threshold). Net short-term losses are limited to a maximum deduction of $3,000 per year, which can be used against earned or other ordinary income. Short-term losses can be contrasted with long-term losses: long-term losses result from assets held for more than 12 months and carry different tax treatment.

Key Takeaways
- A short-term loss is a deficit realized from the sale of personal or investment property that has been held for one year or less.
- The amount of the short-term loss is the difference between the basis of the capital asset (the purchase price) and the sale price received for selling it.
- Short-term losses can be used to offset short-term gains, which are taxed as regular income at rates ranging from 10% to as high as 37%.

Breaking Down Short-Term Loss

Short-term losses are determined by calculating all short-term gains and losses declared on Part II of the IRS Schedule D form. If the net figure is a loss, then any amount above $3,000 (or $1,500 for those married filing separately) must be deferred until the following year. For example, if a taxpayer has a net short-term capital loss of $10,000, then he can declare a $3,000 loss each year for three years, deducting the final $1,000 in the fourth year following the sale of the assets. Short-term losses play an essential role in calculating tax liability. Losses on an investment are first used to offset capital gains of the same type: short-term losses are first deducted against short-term capital gains, and long-term losses are deducted from long-term gains. Net losses of either type can then be deducted from the other kind of gain.

Example of Short-Term Loss

For example, if you have $1,000 of short-term loss and only $500 of short-term gain, the net $500 short-term loss can be deducted against your net long-term gain, should you have one. If you have an overall net capital loss for the year, you can deduct up to $3,000 of that loss against other kinds of income, including your salary and interest income. Investors can enjoy the benefit of any excess net capital loss being carried over to subsequent years, to be deducted from capital gains and against up to $3,000 of other kinds of income. As noted above, when using the married-filing-separately status, the annual net capital loss deduction limit is only $1,500.
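The carryover schedule from the $10,000 example above can be sketched in a few lines. This is a deliberate simplification for illustration: it ignores offsetting capital gains and filing-status differences, and it is not tax advice.

```python
# Sketch of the short-term loss carryover schedule described above.
# Simplified: assumes no capital gains to offset and the $3,000 annual
# limit for most filers ($1,500 married filing separately). Not tax advice.

ANNUAL_LIMIT = 3_000

def carryover_schedule(net_loss: float) -> list[float]:
    """Return the deduction claimed each year until the loss is used up."""
    deductions = []
    remaining = net_loss
    while remaining > 0:
        claimed = min(ANNUAL_LIMIT, remaining)
        deductions.append(claimed)
        remaining -= claimed
    return deductions

print(carryover_schedule(10_000))  # [3000, 3000, 3000, 1000] -> four tax years
```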
{ "dump": "CC-MAIN-2021-17", "language_score": 0.8928313255310059, "language": "en", "url": "https://www.myassignmentservices.com/resources/sustainable-development-assignment-sample", "token_count": 1544, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.080078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:1a22c00d-7582-405f-83a8-18e9ea35367f>" }
Rapid urbanization, a growing population, and rising affluence, combined with improper waste management systems, are contributing to the global waste crisis. Within the global waste stream, plastic waste is particularly challenging because it is non-biodegradable, and the colossal volume of production, which reached 242 million tonnes across the globe in 2016, is an issue of concern (Asia-Pacific Economic Cooperation 2020). Electronic waste encompasses discarded electronic devices and electrical materials destined for salvage, recycling, reuse, or disposal. Sustainability is described as meeting the needs of the present without hindering the needs of future generations, and it rests on the pillars of environmental, social, and economic well-being. A circular economy is a system of closed loops that aims to reduce and eliminate waste (Kirchherr et al. 2017). This assessment is an extensive analysis of e-waste and its impact on a circular economy, especially in Vietnam; it also discusses policy frameworks for dealing with e-waste (ITU 2019). Commonly discarded e-waste items include stereos, fax machines, computers, copiers, and many more. E-waste contains many hazardous materials that can have a detrimental impact on human health and the environment. Its volume is growing exponentially and has increased three times faster than general municipal waste in the Australian context (Sustainability Victoria 2020). The recycling of waste must be done in a sustainable manner so that the adverse impact on the environment is reduced. Many frameworks and projects are used to address this condition of increasing e-waste. The PACE circular economy projects focus on thematic areas that include food and agriculture, electronics, capital equipment, plastics, and fashion and textiles (Pace Projects 2019). They make use of cross-cutting social initiatives along with innovation, support the Global Battery Alliance, and work to ensure that secondary material flows are effectively managed. Factor10 is the WBCSD's circular economy project, intended to bring companies together to reorganize and restructure how business corporations dispose of materials in global trade (WBCSD 2020); it targets the circular economy and seeks to ensure that the greatest possible value is attained and waste streams are minimized. Under the Waste Electrical and Electronic Equipment (WEEE) framework, there are fourteen common categories of waste; these e-waste streams flowing into the circular economy have a total worth of 2.15 billion to European markets, and e-waste is regarded as among the fastest-growing sources of waste across the globe (European Commission 2015). Policy frameworks need to be strengthened in Vietnam so that transparency about e-waste is attained. This can be supported by making sure that old computers, printers, and similar devices are delivered to an electronics recycler or a charity so that the amount of e-waste is diminished (Basel Action Network 2019). The present e-waste policy of Vietnam is aimed at integrating e-waste collection at the source, and investigations of the policy draw on collection data organized by source.
E-waste management in Vietnam faces several challenges concerning the number of traders involved in waste collection and the people in charge of e-waste collecting, dismantling, and recycling (Baldé et al. 2017). These challenges arise partly from geography: the country's location along the coastline and its contiguity with Cambodia and China play a critical role in its e-waste collection and disposal. The Vietnamese e-waste collection policy is inefficient in terms of end processing; this can be improved by upgrading infrastructure within the territory, and Vietnam can adopt Factor10 to reinvent its business practices.

Conclusion

This assessment has brought forward a clearer picture of e-waste management systems, especially in the context of Vietnam. It can be inferred from the assessment that the present e-waste policy of Vietnam lays particular emphasis on the collecting and dismantling of e-waste and needs to upgrade its end processing, for which Factor10 offers a model for reinventing business practices, stressing the eco-efficiency of the process and ensuring that all streams are recycled. The constant rate of technological advancement is contributing to the growing problem of e-waste disposal.

References

Asia-Pacific Economic Cooperation. 2020. Circular Economy: Don't Let Waste Go to Waste. https://www.apec.org/Publications/2020/01/Circular-Economy---Dont-Let-Waste-Go-to-Waste

Baldé, C.P., Forti, V., Gray, V., Kuehr, R. and Stegmann, P. 2017. The Global E-waste Monitor 2017. United Nations University (UNU), International Telecommunication Union (ITU) & International Solid Waste Association (ISWA), Bonn/Geneva/Vienna. Electronic version.

Basel Action Network. 2019. e-Trash Transparency Project. https://www.ban.org/trash-transparency

European Commission. 2015. Science for Environmental Policy.

ITU. 2019. E-waste Policies and Regulatory Frameworks. https://www.itu.int/en/ITU-D/Climate-Change/Pages/ewaste/Ewaste_Policies_and_Regulatory_Frameworks.aspx

Kirchherr, J., Reike, D. and Hekkert, M. 2017. Conceptualizing the circular economy: An analysis of 114 definitions. Resources, Conservation and Recycling, 127, pp. 221-232.

Pace Projects. 2019. Projects. https://pacecircular.org/projects

Sustainability Victoria. 2020. Take your e-waste to a better place. https://www.sustainability.vic.gov.au/You-and-your-home/Waste-and-recycling/Household-waste/eWaste

WBCSD. 2020. Factor10. https://www.wbcsd.org/Programs/Circular-Economy/Factor-10
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9483845829963684, "language": "en", "url": "https://www.teriin.org/research-paper/stakeholders-and-corporate-social-responsibility-are-they-interlinked-and", "token_count": 283, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.01336669921875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4cf004b2-645a-4885-bd18-efa12f9ebcb9>" }
Stakeholders and Corporate Social Responsibility: Are They Interlinked and Contributing to the Sustainable Development Goals?

Freeman framed the term stakeholder in contrast to the traditional, purely economic view of the firm, defining stakeholders as "any group or individual who can affect or is affected by the achievement of the organization's objectives" (Freeman 1984). The paper provides a detailed understanding of stakeholders, an analysis of various stakeholder models, an examination of the stakeholders of CSR, and a discussion of the linkages between different stakeholders and their contribution to the SDGs. The methodology combines definitional analysis and case study research; the research aim is pursued by dividing it into several objectives and applying an interdisciplinary approach to each. The methodology documents a comprehensive theoretical analysis on the basis of the available literature (Thakur and Datta 2020). Further, expenditure on CSR has been increasing over the years in India, and the Companies Act 2013 made India the first country to make CSR spending mandatory through a law (MCA 2014). The higher fund flow would lead to an increase in social activities and social development: more companies would spend, which could help reduce poverty and unemployment. This would in turn strengthen the Sustainable Development Goals (SDGs), which aim to reduce poverty in all its forms and create new opportunities for partnerships.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9685158133506775, "language": "en", "url": "https://2012books.lardbucket.org/books/finance-banking-and-money-v1.1/s12-01-the-balance-sheet.html", "token_count": 813, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1279296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7fe78dbe-f6ee-413a-ab75-b2303b776ca2>" }
Thus far, we've studied financial markets and institutions from 30,000 feet. We're finally ready to "dive down to the deck" and learn how banks and other financial intermediaries are actually managed. We start with the balance sheet, a financial statement that takes a snapshot of what a company owns (assets) and owes (liabilities) at a given moment. The key equation here is a simple one: ASSETS (aka uses of funds) = LIABILITIES (aka sources of funds) + EQUITY (aka net worth or capital).

[Figure 9.1: Bank assets and liabilities]
[Figure 9.2: Assets and liabilities of U.S. commercial banks, March 7, 2007]
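A minimal sketch of that identity, with invented placeholder line items (these are not the figures from the tables referenced below):

```python
# Minimal sketch of the balance-sheet identity ASSETS = LIABILITIES + EQUITY.
# The line items and amounts are placeholders for illustration only.

assets = {"reserves": 50, "securities": 200, "loans": 700, "other": 50}
liabilities = {"checkable_deposits": 300, "nontransaction_deposits": 500,
               "borrowings": 120}

total_assets = sum(assets.values())            # uses of funds
total_liabilities = sum(liabilities.values())  # sources of funds
equity = total_assets - total_liabilities      # net worth, or bank capital

print(f"Assets {total_assets} = Liabilities {total_liabilities} + Equity {equity}")
# Assets 1000 = Liabilities 920 + Equity 80
```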
And the potential for large net outflows was higher than it is today because early bankers sometimes collected the liabilities of rival banks, then presented them all at once in the hopes of catching the other guy with inadequate specie reserves. Also, runs by depositors were much more frequent then. There was only one thing for a prudent early banker to do: keep his or her vaults brimming with coins.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9208208322525024, "language": "en", "url": "https://www.roads2future.com/education-hub/data-analytics/how-data-analytics-is-changing-entrepreneurial-opportunities/", "token_count": 634, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.045654296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:abc86b68-cb21-4a07-8001-7f1bcece07da>" }
Data: a distinct piece of information, usually formatted in a special way; information that is transmitted and stored. Data is essentially the plain facts and statistics collected during the operations of a business, and it can be used to measure and record a wide range of business activities, both internal and external.

Analytics: the discovery, interpretation, and communication of meaningful patterns in data, and the application of those patterns toward effective decision making. It is the connection between data and effective decision making within an organization.

Data analytics is changing business models, helping organizations keep track of customer profiles and spot new opportunities. This is helping them make smarter business moves, run more efficient operations, earn higher profits, and keep customers happier.

How is it changing business opportunities?

It helps a business understand its own nature and its environment: what is driving the business and how the market behaves. Specifically, it helps to:
- Identify strengths and weaknesses.
- Get diagnostics at the enterprise, business unit, and business process levels.
- Gain insight into a situation and take appropriate action.

Harnessing new opportunities means studying the market carefully, identifying and forecasting market opportunities, and acting accordingly. It helps in checking the pulse of your organization by incorporating data analysis into key decisions across all departments, including sales, marketing, the supply chain, customer service, and other core business functions.

Enterprise information management (EIM) helps you take advantage of social, mobile, analytics, and cloud technologies to improve the way data is managed and used across the company. It helps to:
- Streamline your business practices.
- Enhance collaboration efforts.
- Boost employee productivity in and out of the office.

Business model transformation: Companies that embrace big data analytics and transform their business models in parallel will create new opportunities for revenue streams, customers, products, and services. From forecasting demand and sourcing materials to accounting and the recruitment and training of staff, every aspect of your business can be reinvented. It helps in:
- Capitalizing on new opportunities.
- Building trust with customers who hold vital data.
- Finding ways to gain insight and implement results quickly.

Making the business data-centric: In a data-centric business, data isn't just an asset, it's currency; it is the source of your core competitiveness.
- Insight: mining, cleaning, clustering, and segmenting data to understand customers and their networks, influence, and products.
- Optimization: analyzing business functions, processes, and models, and exploring new business models to further the evolution and growth of your customer base.

Data analytics is swiftly overturning the way we do business. These capabilities are how companies become forward-thinking and gain competitive advantages in the marketplace: data analytics helps in identifying business opportunities and in forecasting future events so organizations can act accordingly.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9523965716362, "language": "en", "url": "https://www.wikitechy.com/full-form/gst-full-form", "token_count": 688, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.08447265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:baa92d10-4e67-4156-8a54-57b9de5cf1a1>" }
GST Full Form: Goods and Services Tax

GST stands for Goods and Services Tax. It is an indirect tax that was introduced to simplify the complicated indirect tax system in India. It brings all the other indirect taxes imposed by the central and state governments on the manufacture and sale of goods and services under one regime at the national level. GST is therefore a consolidated tax that is supposed to replace all other indirect taxes levied on goods and services. It is based on a uniform rate of tax and is payable only at the final point of consumption, unlike a cascade tax, which is applied at every stage of the supply chain without considering the taxes paid at earlier stages. This way of applying a tax on a tax is known as the cascading effect of taxes.

For example, a distributor sells a product priced at Rs. 100 to a retailer; after adding 12% indirect tax, the retailer pays Rs. 112. The retailer then sells the same product to a customer, adding 12% tax on the Rs. 112 price, for roughly Rs. 125 (112 × 1.12 = 125.44). In this scenario, the cascading effect of taxes has inflated the final price of the product (see the sketch at the end of this article).

Advantages of GST:
- GST is a transparent tax and reduces the number of indirect taxes.
- GST is not a cost to registered retailers, so there are no hidden taxes, and the cost of doing business is lower.
- Prices will come down, benefiting consumers, which in turn helps companies as consumption increases.
- There is no doubt that in the production and distribution of goods, services are increasingly used or consumed, and vice versa. Separate taxes for goods and services, as under the previous taxation system, required transaction values to be divided into the value of goods and the value of services for taxation, leading to greater complexity and higher administration and compliance costs. In the GST system, with all taxes integrated, the tax burden can be split equitably between manufacturing and services.
- GST is levied only at the final destination of consumption, based on the VAT principle, and not at various points from manufacturing to retail outlets. This helps remove economic distortions and brings about the development of a common national market.

Disadvantages of GST:
- Some economists say that GST in India would negatively impact the real estate market; it might add up to 8 percent to the price of new homes and reduce demand by about 12 percent.
- Some experts say that CGST (Central GST) and SGST (State GST) are nothing but new names for Central Excise/Service Tax, VAT, and CST; hence, there is no major reduction in the number of tax layers.
- Some retail products currently carry only a four percent tax. After GST, garments and clothing could become costlier.
- The aviation industry would be affected: service taxes on airfares currently range from six to nine percent; with GST, the rate will exceed fifteen percent, effectively doubling the tax rate.
- Adoption of and migration to the new GST system would involve teething troubles and learning for the whole ecosystem.
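The cascading arithmetic in the example above, contrasted with a GST-style calculation, can be sketched as follows. The rates and prices restate the article's illustration, and the simplification of zero retailer margin follows the original example.

```python
# Sketch contrasting the cascading "tax on tax" example above with a
# GST/VAT-style calculation, where each stage is credited for taxes already
# paid on inputs. Rates and prices restate the article's illustration.

RATE = 0.12
base_price = 100.0

# Cascading system: each stage taxes the full tax-inclusive price
distributor_price = base_price * (1 + RATE)            # Rs. 112.00
retail_price_cascade = distributor_price * (1 + RATE)  # Rs. 125.44 (tax on tax)

# GST-style system: with input tax credit and no retailer margin in this
# toy example, tax is effectively charged once on the final value
retail_price_gst = base_price * (1 + RATE)             # Rs. 112.00

print(f"Cascading final price: Rs. {retail_price_cascade:.2f}")
print(f"GST final price:       Rs. {retail_price_gst:.2f}")
```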
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9504644870758057, "language": "en", "url": "https://adebayothevoice.com/qa/is-ecommerce-an-industry.html", "token_count": 300, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0274658203125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:828ce809-695b-4b32-b476-ef350990e995>" }
Ecommerce, also known as electronic commerce, is a business model which involves transactions taking place on the internet. Stores that sell their products online are ecommerce stores or businesses. For example, Amazon.com is one of the most popular online stores in the ecommerce industry.

What type of industry is ecommerce?
Electronic commerce draws on technologies such as mobile commerce, electronic funds transfer, supply chain management, internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems.

Is ecommerce considered an industry?
Industry overview: the ecommerce industry comprises companies that produce and sell software to businesses and corporations of all sizes. In addition, new product releases are continuous, with many companies providing similar offerings and services.

What is the ecommerce sector?
Ecommerce, also known as electronic commerce or internet commerce, refers to the buying and selling of goods or services using the internet, and the transfer of money and data to execute these transactions.

How much is the ecommerce industry worth?
The growth of ecommerce is out of this world! In 2017, ecommerce was responsible for around $2.3 trillion in sales, and it is expected to hit $4.5 trillion in 2021 (according to a Statista report). In the US alone, ecommerce represents almost 10% of retail sales, and that number is expected to grow by nearly 15% each year!
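As a quick sanity check on the worldwide figures quoted above (USD 2.3 trillion in 2017 to a projected USD 4.5 trillion in 2021), the implied average annual growth rate can be computed directly — note this is the global figure, distinct from the ~15% yearly growth cited for US retail ecommerce:

```python
# Implied compound annual growth rate from the global figures quoted above
# (USD 2.3 trillion in 2017 to a projected USD 4.5 trillion in 2021).

sales_2017 = 2.3   # trillions of USD
sales_2021 = 4.5
years = 2021 - 2017

implied_cagr = (sales_2021 / sales_2017) ** (1 / years) - 1
print(f"Implied annual growth: {implied_cagr:.1%}")  # ~18.3% per year
```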
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9542977809906006, "language": "en", "url": "https://freeessays.page/finances-and-lifestyle-in-old-age/", "token_count": 1697, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.07373046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:26cfbf8e-09aa-4ec4-92f9-51e8c60d71e8>" }
One out of every five elderly Americans faces each day on a restricted income, with little flexibility for unforeseen medical expenses. When medical care is needed, these six million near-poor elderly people depend on Medicare for help with their medical bills. Thesis: the universal coverage of Medicare guarantees older adults access to America's health care system and offers protection from financial hardship once illness strikes. However, gaps in the scope of Medicare's and Medicaid's benefits, and the financial obligations attached to coverage, may result in burdensome financial strain.

Health Care Insurance Programs

Given the changing demographics of US society and the connection between quality of life and first-class health care, health care disparities are likely to become a pressing problem in communities across the nation. One-fourth of senior people have no supplemental Medicare or Medicaid insurance coverage for their needs, while 16% of people under age 65 have no insurance coverage at all. Additionally, preventive health screening rates are particularly low for older adults.

Americans nowadays live longer than ever before. Senior people are sicker than younger people and spend more on health care. Though older Americans represent only thirteen percent of the entire population, they spend nearly thirty percent of the money used for health care. Contrary to popular belief, Medicaid and Medicare cannot cover all medical expenses when a person is over age 65. In fact, both programs fail to cover many chronic conditions, leaving elderly people to pay for treatment themselves. The changing demographics of Medicare therefore fuel worries about financing the program's costs.

Medicare is the nationwide health insurance program for Social Security recipients who are older than 65 or disabled. It is administered by the federal Health Care Financing Administration; private insurance organizations contract with the government to make payments to medical providers. Medicare is not a welfare program. That is, individual assets and income are not considered in deciding a person's benefits or eligibility. Medicare coverage resembles the coverage that private insurance organizations offer: Medicare pays only a part of the cost of medical care, and the beneficiary assumes the cost of the deductibles and co-payments to the health care providers. Medicare still does not pay for routine eye and dental examinations, physical examinations, long-term custodial care, hearing aids, or immunizations (apart from annual flu and pneumonia shots).

The program has several coverage components. Part A covers in-patient medical care, in-patient care in a skilled nursing facility, home health care services and hospice care. Part B covers physicians' services, outpatient care, durable medical equipment and some home health services. Part A is typically paid for through federal payroll taxes; most beneficiaries do not pay a premium for this coverage. Part B is paid for through monthly premiums from beneficiaries who select this coverage, together with general revenues from the federal government. Beneficiaries may have to pay deductibles and co-payments under Parts A and B. Part D is a voluntary outpatient prescription drug option provided under individual plans that contract with Medicare. Medicare reformers had hoped for a prescription drug benefit for many years. Nevertheless, there are winners and losers in the design of this benefit.
Medicare beneficiaries have coverage that is restricted to the accepted drug list, and they pay high premiums and deductibles, which continue to rise every year. The pharmaceutical firms, on the other hand, were able to defeat efforts to reduce the cost of prescription drugs through government price negotiation and the importation of drugs from other countries.

Medicare Supplemental Insurance — also known as Medigap — may assist beneficiaries in paying for medical care which Medicare does not cover, including co-payments and deductibles. Medigap insurance fills certain gaps in Medicare coverage, paying for costs that Medicare does not finance. The more holes a Medigap plan covers, the more costly the policy is to purchase. Eligibility for Medigap policies may differ, but plans have to offer guaranteed enrollment for new Medicare beneficiaries over age 65: the plan cannot decline to enroll a beneficiary, even if the person is sick or injured.

Medicaid is quite different. Based on need, it helps pay for medical care for low-income elderly or disabled Americans. Eligibility for Medicaid is, in fact, based on the applicant's assets and income. Medicaid is funded jointly by federal and state governments and, while every state must follow basic eligibility and benefit requirements, crucial details vary among the US states. Medicaid covers much more nursing home care than Medicare, and finances both skilled and custodial care. It never limits the time a beneficiary may remain in a nursing home. Both programs can be a source of funding for long-term home care, but Medicare may cover home health care only if an individual is homebound and requires skilled therapy or nursing services.

Future of Medicare and Medicaid

The future of Medicare is uncertain. According to the Social Security and Medicare Trustees, between 2005 and 2030, spending on Medicare is projected to increase by 331 percent, while GDP grows by only 72 percent. Much of this increase in costs is due to the introduction of new technologies that are expensive to develop and also bring "added years of life," resulting in higher lifetime spending. Despite these anxieties, the prospects for Medicare may not be quite as dire as some research suggests. Keeping people healthy before they reach age 65 could reduce expenses later in life, and postponing morbidity "till age 85 or 90," when people would then succumb to pneumonia or other less costly illnesses, could also result in crucial savings. Additionally, studies have found widespread geographic variation in per capita Medicare spending with little difference in patient satisfaction or outcomes. According to Mark McClellan, the director of the Center for Medicare under George W. Bush, "Medicare spending may be 35% higher than it has to be" (Lubitz, 2005). Also, the basic source of health care inflation is not the aging of the population but the high cost of health care in the country. Bringing costs under control would do much to alleviate Medicare's financial troubles.

The downturn in the US economy in 2008 shows the significance of Medicaid as a safety net for those who would be without coverage if the program did not exist. Though the Obama administration responded to the economic crisis with the largest federal stimulus package since the Great Depression of the 1930s, states will have to find ways to preserve and extend their Medicaid programs to meet the rising demand for coverage.
As mentioned previously, the US Recovery and Reinvestment Act of 2009 will provide states with only about 40% of their projected deficits. Meanwhile, as administrations wrestle with health policy challenges that are intensified by these problems, the role of both Medicaid and Medicare in wider plans for health care reform remains unclear.

Older adults experience the impacts of health care inequalities more noticeably than any other group. They are especially at risk because they are more likely than younger people to have chronic diseases, make frequent visits to medical facilities, and live in poverty. Improving access to health care services for older adults has been a crucial public policy objective for years. Many policy initiatives call for eradicating inequalities in health care to foster a better quality of life for older people.

The coverage of Medicare and Medicaid assures older adults entry to America's health care system and offers protection from financial catastrophe when disease or illness strikes. However, gaps in the scope of Medicare's and Medicaid's benefits, and the financial obligations attached to coverage, may result in a burdensome financial situation. Medicare does not pay for long-term custodial care, hearing aids, dental and eye examinations, regular physical examinations, or immunizations (except annual flu and pneumonia shots), and Medigap coverage to fill these gaps can itself be costly. One-fourth of older adults have no supplemental Medicare or Medicaid insurance coverage for their health care needs, while 16% of people under age 65 have no health insurance coverage. Also, preventive health screening rates are mostly low for older adults. Older adults are likely to experience various barriers to obtaining quality health care services even through the most popular programs, such as Medicare and Medicaid. That is why community administrations, social service agencies and other entities will need to prepare for the growing older adult population, to guarantee that older adults' health needs are addressed and that quality medical services are readily accessible for the people who need them most.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.963652491569519, "language": "en", "url": "https://www.foodethicscouncil.org/the-eu-and-agricultural-research/", "token_count": 873, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0091552734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:728187bb-5fe9-4d96-9631-7810014cfbf6>" }
It is unlikely that the impact on UK agricultural research features in many people's discussions about whether to leave or remain in the EU. If agriculture features in debates at all, it is usually around the topic of the Common Agricultural Policy, or CAP. However, the EU is a significant funder of scientific research: the current Framework Programme for funding research and innovation, Horizon 2020, has a total budget of just over €70 billion. UK researchers have done well in the competition for EU funding, with analysis suggesting we receive a greater amount of funding than we contribute. Several prominent scientists have gone on record in favour of the UK staying in the EU. However, while it is fairly obvious why those involved in big, expensive physical sciences projects would be in favour of remaining, what about agriculture?

Historically, EU funding for agricultural research has been complementary to UK funding. During the 1990s and 2000s the UK Research Councils pursued a policy of 'scientific excellence'. This resulted in the UK having a world-class basic science base but led to the erosion of applied science and the loss of what are now called translational scientists (i.e. people who could take the results from basic plant and animal science and turn them into something useful for farmers). EU funding mitigated this, helping to preserve a cohort of applied scientists in the UK for when the realisation hit home that the UK needed applied agricultural science and national funding was again made available.

The EU has been much more willing than the UK research councils to fund certain areas of agricultural research, such as agro-ecological approaches to controlling pests and diseases, and the conservation and use of crop and animal genetic resources. The number of researchers working on these topics in other EU states is generally higher than in the UK. Promoting collaboration has been a key feature of EU research funding, and there are also several other funding schemes promoting the exchange of scientists and knowledge which allow UK researchers to connect to the expertise in other member states. Lastly, the UK science budget – although it has been protected to some extent – is not immune from the Chancellor's austerity cuts, and obtaining research funding is becoming more and more difficult. Access to EU agricultural research funding provides another source of funds.

So from the perspective of a UK agricultural researcher it is important to have access to EU funding. However, it is possible to do this even if the UK leaves the EU. Thirteen countries have 'Associated Country' status; they pay into the research pot an amount in line with their GDP and have access to funding on the same basis as EU member states. If the UK leaves the EU but remains a member of the European Free Trade Association (EFTA), it could negotiate Associated Country status and still keep access to EU research funding. Associated States have no say in setting the priorities of the Framework Programme, and the UK would not be able to argue for reform. The House of Lords Select Committee noted that "while just under 2% of the EU research budget (under FP7) is allocated to agricultural research, the CAP itself currently accounts for just over 40% of the EU's total budget" and recommended the UK government should continue to argue for adjusting the balance of funding to drive innovation in agriculture. Outside of the EU, the UK would have no voice in this debate and would have to accept what others decided.
As I said at the beginning of this blog, it is unlikely that the impact on agricultural research will feature very much in many people's decision on how to vote on 23 June (even mine!). But it appears to me that the issues around science research encapsulate the wider debate over EU membership: whether you want to go it alone, or cooperate and collaborate to try to achieve more. The world is an uncertain place. A future outside of the EU looks to me very uncertain; inside is less uncertain and, although definitely not perfect, being inside gives the UK a voice in the debate about changing things.

David Pink is a trustee of the Food Ethics Council and emeritus Professor of Crop Improvement at Harper Adams University. He was previously Professor of Crop Improvement at the University of Warwick. The views expressed in this article are the author's own, and do not necessarily reflect the views of the Food Ethics Council.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.951200544834137, "language": "en", "url": "https://accelerateinsite.com/2019/09/23/ai-disruption-in-formal-education/", "token_count": 608, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.10302734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:89230360-afc7-4a9c-a8d4-f115b45db610>" }
A major change in the goals and means of educating the digital workforce is taking place in unexpected places and unanticipated ways. "Bricks and mortar" educational institutions continue to build "bigger barns" out of a traditional mindset of on-campus, real-time, face-to-face instruction as the primary or sole means of education. The soaring expense of this model has outpaced its value in economic terms. Student debt continues to exceed the employment "payback" formal education has historically provided to graduates. A U.S. Bureau of Labor Statistics study of price increases over the 20 years from 1997 to 2017 discloses an almost 200% increase in higher education costs (second only to healthcare), in contrast to an overall inflation rate of 55%.

At the same time, digital transformation of the skills needed in a rapidly changing workplace is leaving formal education less relevant than ever. No curriculum being developed today can anticipate the skills needed for a workplace five years in the future. These are not low-level positions, but executive, professional and technical work necessary in a global economy. We are approaching a time when relevant skills-based training may be of greater value in the workplace than diplomas and even graduate credentials.

When formal education embraces this challenge, it is noticeable and commendable. As reported by Bhumika Khatri, India's Central Board of Secondary Education (CBSE) has demonstrated just such a vision and the commitment to realize it. In a partnership reached with Microsoft, CBSE models how formal education can prepare teachers, students and society for the workplace of the future. Khatri reports:

"In its partnership with Microsoft India, CBSE is looking to conduct capacity building programmes for high school teachers with an aim to integrate cloud-powered technology in K12 teaching and inculcating digital teaching skills in educators through curriculum as well as extra-curricular training. The programme for teachers of grades VIII to X will be conducted in 10 cities across the country, starting September 11.

"Further, CBSE said that the teachers will also learn about digital story-telling, creation of personalised learning experiences for diverse learners, use of Teams for virtual lessons and how to leverage artificial intelligence (AI) tools to create bots and how to demystify concepts around AI through course curriculum."

India estimates that AI applications can augment the national GDP by $957 billion. However, the talent gap is huge, and an AI-centric learning model at the secondary level is essential to lessen that gap. Higher education must learn and teach the same lessons. The national economies that follow suit will not find themselves wanting. Those that don't are facing serious consequences.

Mr. Bridgesmith has over 40 years' experience in legal professional services and numerous business ventures involving digital technologies. He has represented, trained, and consulted with organizations large and small in most industries. He is currently a Managing Partner of Accelerate InSite with a focus on AI strategies.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9515494108200073, "language": "en", "url": "https://burtonleonard.n-yorks.sch.uk/about-us/funding/", "token_count": 190, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.02587890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:432e6d4f-2c25-47d4-a95c-35cfd3fd3e57>" }
Pupil Premium Funding
The pupil premium is additional funding for publicly funded schools in England to raise the attainment of disadvantaged pupils of all abilities and to close the gaps between them and their peers.

PE and Sport Premium for Primary Schools
The government announced that it would provide additional funding to improve the provision of physical education (PE) and sport in primary schools in England.

Year 7 Literacy and Numeracy Catch-up Premium
The literacy and numeracy catch-up premium gives schools additional funding to support Year 7 pupils who did not achieve the expected standard in reading or maths at the end of Key Stage 2 (KS2).

Schools Financial Benchmarking
Compare a school or trust's income and expenditure with similar establishments in England. You can view your school or academy trust's financial data, see how it compares with others and use the information to establish relationships with other schools or multi-academy trusts.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9482472538948059, "language": "en", "url": "https://siepr.stanford.edu/research/publications/recession-graduates-effects-unlucky", "token_count": 2844, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.306640625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e8912df2-b3f0-4024-9d23-034e79e8f603>" }
Leaving school for work during an economic downturn has negative consequences later in life for socioeconomic status, health, and mortality. In particular, recession graduates have higher death rates in midlife, including significantly greater risk of drug overdoses and other so-called “deaths of despair.” It is not certain what causes these effects, but workers beginning their careers in a depressed labor market might get permanently stuck on a downward-shifted economic trajectory or they may adopt unhealthy behaviors. Public and private agencies may be able to mitigate these effects through interventions that take into account the economic conditions people faced when they entered the labor force. Research shows that college graduates who start their working lives during a recession earn less for at least 10 to 15 years than those who graduate during periods of prosperity (Oyer 2006, Kahn 2010, Wozniak 2010, Oreopoulos et al. 2012). But it has been unclear whether these effects linger beyond that, whether they matter for those with less education, and to what extent they impact a broader set of outcomes, including health and mortality. It is well known that wealthier people are healthier and live longer but, surprisingly, there is little solid evidence that income or wealth directly fosters good health. A case could be made that it is the other way around — that being healthy improves an individual’s chances of economic success. The opioid epidemic that has devastated communities across the country is a case in point. Some scholars have hypothesized that the socio-economic decline of increasingly marginalized parts of society might be a key driver of opioid addiction and its associated mortality and pathologies (Case and Deaton 2015, 2017). But there is no agreement among researchers analyzing addiction and mortality data that poor economic conditions specifically cause drug overdose deaths (Ruhm 2018, Bound et al. 2018, Currie et al. 2018). To explore these questions, Till von Wachter of the University of California, Los Angeles and I examined the effects of graduating school and joining the labor force during a recession, using large population-wide data sets spanning over three decades (Schwandt and von Wachter 2019a/b). Previous studies that have focused mainly on college graduates entering the labor market have found that economic fluctuations can have lasting consequences. Our research is the first specifically to expand the analysis to those without college degrees and show impacts on socioeconomic outcomes and mortality when recession graduates reach midlife. Our first main finding is that high school graduates and dropouts suffered even stronger income losses than college graduates when entering the labor market during a recession. Second, we find that negative impacts on socioeconomic outcomes persist in the long run. In midlife, recession graduates earned less, while working more. And they were less likely to be married and more likely to be childless. Our third important finding is that recession graduates had higher death rates when they reached middle age. These mortality increases stemmed mainly from diseases linked to unhealthy behaviors such as smoking, drinking, and eating poorly. In particular, we discovered a significantly higher risk of death from drug overdoses and other so-called “deaths of despair” among those who left school during a downturn. 
Our results demonstrate that health, mortality, and economic and personal well-being in midlife can bear the lasting scars of disadvantages that come during young adulthood. Simply put, the bad luck of leaving school during hard times can lead to higher rates of early death and permanent differences in life circumstances. We arrived at these findings using a method that allowed us to harvest large cross-sectional data sets. They included U.S. Vital Statistics, which provide information on causes of death and basic demographic characteristics of decedents, including where they were born. We also used U.S. Census Bureau data, including the decennial census, the American Community Survey (ACS), and the Current Population Survey (CPS), which provide demographic, social, and economic statistics. The main challenge is that these data sets do not show when and where college graduates got their first job. An additional challenge is that those factors might also be affected by labor market conditions. This implies that even if this information were available, the measured relationship could be biased. We addressed these issues by dividing the data sets into cohorts based on individuals’ year of birth and state of birth — characteristics that are not affected by future changes in local economic conditions. We then use census and ACS data to estimate at which ages different parts of a cohort typically graduate and which groups move to different states. Finally, we summarize for each cohort the economic conditions across all the different graduation ages and different migration states. This gives us the average economic conditions a cohort faces around graduation, net of educational or migration responses to any economic shocks in a given year. Our findings on the economic effects of graduating during a recession confirm previous studies showing that reduced earnings tend to fade after 10 or 15 years. Moreover, we show that high school graduates and dropouts suffered greater losses at labor market entry, in line with a less structured and therefore more vulnerable transition into the labor market of those with less education. But that was not the full story. Income effects became apparent again when people reached their late 30s. These effects appeared for all education groups and stayed significantly negative until age 50, at around 1 percent for each percentage-point increase in the graduation-year state unemployment rate. We also found higher divorce and childlessness rates in midlife. The mortality results were particularly striking. In line with previous research (Ruhm 2000), we found that recession cohorts had somewhat lower mortality rates right at the time of their labor market entry. But this effect is driven entirely by fewer fatal car accidents and is probably the result of recession-induced reductions in traffic. By the time they reached their late 30s though, mortality rates started to edge higher. By age 50, one extra death per 10,000 was registered for every percentage-point increase in unemployment at graduation, affecting males and females similarly. During a moderate recession, the unemployment rate typically rises about three percentage points. Thus, graduating in a recession is associated with about a 6 percent increase in a cohort’s age-specific mortality rate. Recession graduates’ greater likelihood of death in middle age was primarily related to heart disease, lung cancer, liver disease, or drug overdose. 
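The arithmetic behind the "6 percent" figure can be made explicit with a short sketch. One caveat: the baseline age-50 mortality rate of 50 per 10,000 used below is an assumed stand-in, since the brief itself does not state the baseline it works from:

```python
# Reproducing the recession-mortality arithmetic described above.
# The baseline mortality rate is an assumed stand-in, not a figure from the brief.

effect_per_pp = 1 / 10_000       # one extra death per 10,000 per percentage point
recession_shock_pp = 3           # a moderate recession raises unemployment ~3 points
baseline_rate = 50 / 10_000      # assumed age-50 mortality: 50 per 10,000 (0.5%)

extra_deaths = effect_per_pp * recession_shock_pp        # 3 per 10,000
relative_increase = extra_deaths / baseline_rate

print(f"Extra deaths: {extra_deaths * 10_000:.0f} per 10,000")
print(f"Relative increase in age-specific mortality: {relative_increase:.0%}")  # ~6%
```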
Despite measurement issues in the recording of the exact cause of death, more than half of the overall mortality impact at age 50 can be directly linked to these causes that are related to health behaviors. These results are strong evidence that economic conditions around graduation can have significant consequences for socioeconomic outcomes and mortality decades later. While we focus on one particular type of economic shock, temporary fluctuations in the local unemployment rate, our findings support the notion that group- or area-specific changes in labor market opportunities can persistently affect the life trajectories of those most exposed to these shocks. This statistical analysis doesn’t explain the underlying reasons that graduating during a downturn increases socioeconomic, health, and mortality risk later in life. However, we can speculate that one of two phenomena is at work. Workers beginning their careers in a depressed labor market might not only start with a lower-paying job but be permanently stuck on a downward-shifted economic trajectory. The temporary recovery 10 to 15 years into the work history is difficult to explain in such a scenario but might be linked to differences in income profiles over age, with profiles of lower-quality jobs flattening out more quickly. In midlife, the economic disadvantage accumulated over two decades and accompanied by a less-healthy lifestyle drags health down sufficiently to result in mortality increases. Less-stable relationships are formed along the way, resulting in lower marriage rates and fewer children. Alternatively, a phenomenon that economists call “hysteresis” may be responsible. Hysteresis refers to effects that persist long after the original causes are removed. In this case, experiencing a recessionary economy just when one is transitioning from school into the labor force may have psychological or behavioral consequences. At an especially impressionable age and a vulnerable transition period, a person may be more likely to adopt unhealthy behaviors or struggle to shake off those acquired in high school or college. These poorer health behaviors may then result in raising mortality in midlife, when the incidence of adverse health generally increases. Call it the peril of an unlucky draw. Our research offers evidence that the transitory economic shock of a recession experienced during a formative period may put some young adults on a riskier, economically less successful life trajectory. Fortunately, understanding that these cohorts are vulnerable raises the possibility of policy intervention. In fact, we find that the social safety net successfully buffers part of the initial shock. In the years following their labor market entry, those with less than a college degree who leave school during a recession are more likely to receive welfare payments and be covered by Medicaid and food assistance programs. In midlife, however, the same cohorts appear to be less likely to receive welfare payments, despite persisting income losses compared with their luckier counterparts. They are falling through the social safety net at the same time that marriage and fertility rates start to lag those of luckier graduation cohorts. More research is needed to identify the specific channels and mechanisms that make recession graduates vulnerable decades later — and on how they relate to the long-term consequences of other forms of economic disadvantage. 
Such research could help us understand what kinds of interventions most effectively help those with an unlucky draw at labor market entry. Direct attention and resources to help these graduates could be provided, both by public and private institutions. It’s also worth noting that eligibility for unemployment insurance, a vital labor market program to buffer the impacts of recessions, is contingent on having a work history. So it doesn’t cover the youngest, least-experienced workers. Our results suggest that we need similar programs for labor market entrants. But this is not to suggest a primary focus on monetary compensation. Instead, a more effective way to buffer those who come of age in hard economic times may be to provide counseling when they are preparing to enter the labor market and help them find constructive activity while they look for work. Moreover, just informing recession graduates about the vulnerability of graduating during a recession may help. It suggests that an unfavorable start is not a sign of personal failure and better prospects may be waiting once the economy picks up. At the same time, it is important to emphasize that the average recession impacts are comparatively small. For the typical graduate, having a year or two of additional schooling is highly valuable even if the graduate ends up facing a recession at labor market entry. And additional education seems to be protective, in that effects tend to be larger for those with fewer years of schooling. To the extent that impacts remain later in life, however, social services and health agencies can take into account the economic conditions people faced when they left school. What we can say today is that a longer perspective is needed. Recession graduates may need society’s attention not only when they enter the labor force but decades later as well. More broadly, temporary economic fluctuations can permanently impact the life trajectories of vulnerable members of society, persisting over decades after the initial shock is long forgotten.

Sam Zuckerman contributed editorial assistance to this Policy Brief.

Bound, J., A. Geronimus, T. Waidmann, J. Rodriguez, 2018. “Local Economic Hardship and Its Role in Life Expectancy Trends.” MRRC working paper 2018-389.
Case, A., A. Deaton, 2015. “Rising Morbidity and Mortality in Midlife among White Non-Hispanic Americans in the 21st Century.” Proceedings of the National Academy of Sciences, 112(49): 15078-83.
Case, A., A. Deaton, 2017. “Mortality and Morbidity in the 21st Century.” Brookings Papers on Economic Activity, Spring 2017: 397.
Currie, J., J. Jin, M. Schnell, 2018. “U.S. Employment and Opioids: Is There a Connection?” NBER working paper 24440.
Kahn, L., 2010. “The Long-Term Labor Market Consequences of Graduating from College in a Bad Economy.” Labour Economics, 17(2): 303-16.
Oreopoulos, P., T. von Wachter, A. Heisz, 2012. “The Short- and Long-Term Career Effects of Graduating in a Recession.” American Economic Journal: Applied Economics, 4(1): 1-29.
Oyer, P., 2006. “Initial Labor Market Conditions and Long-Term Outcomes for Economists.” The Journal of Economic Perspectives, 20(3): 143-160.
Ruhm, C., 2000. “Are Recessions Good for Your Health?” The Quarterly Journal of Economics, 115(2): 617-650.
Ruhm, C., 2018. “Deaths of Despair or Drug Problems?” NBER working paper w24188.
Schwandt, H., T. von Wachter, 2019a. “Unlucky Cohorts: Estimating the Long-Term Effects of Entering the Labor Market in a Recession in Large Cross-Sectional Data Sets.” Journal of Labor Economics, 37(S1): S161-S198.
Schwandt, H., T. von Wachter, 2019b. “Socio-Economic Decline and Death: Midlife Impacts of Graduating in a Recession.” SIEPR working paper.
Wozniak, A., 2010. “Are College Graduates More Responsive to Distant Labor Market Opportunities?” Journal of Human Resources, 45(4): 944-970.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.8916001319885254, "language": "en", "url": "https://sscportal.in/guidance-programme/cgl/tier-i/numeric-aptitude/compound-interest", "token_count": 80, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1044921875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:85351c9a-13dd-4445-b15b-91d3f1ec2c78>" }
In compound interest, the interest is added to the principal at the end of each period, and the amount thus obtained becomes the principal for the next period. The process is repeated till the end of the specified time. If P = Principal, R = Rate per cent per annum, T = Number of years, and A = Amount, then, when the interest is compounded annually:
A = P(1 + R/100)^T, and Compound Interest = A − P.
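A small runnable sketch of the annual-compounding formula (the principal, rate and time values below are illustrative, not from the passage):

```python
# Compound interest: the interest earned each year is added to the principal,
# so the next year's interest is computed on the enlarged principal.

def compound_amount(P, R, T):
    """Amount when interest is compounded annually: A = P * (1 + R/100) ** T."""
    return P * (1 + R / 100) ** T

P, R, T = 10_000, 8, 3                 # principal, rate % p.a., years (illustrative)
A = compound_amount(P, R, T)
CI = A - P                             # compound interest
SI = P * R * T / 100                   # simple interest, for comparison

print(f"Amount: {A:.2f}")              # 12597.12
print(f"Compound interest: {CI:.2f}")  # 2597.12
print(f"Simple interest:   {SI:.2f}")  # 2400.00
```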
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9351383447647095, "language": "en", "url": "https://upscmentor.com/lessons/chapter-1-introduction-free/", "token_count": 5428, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.09375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:561ee0e6-ff66-4f54-8851-db66ce9939c3>" }
ECONOMICS VS ECONOMY

Before we start studying the Indian Economy, it is very important to understand the difference between Economics and Economy. It is a common misperception to think of both as one and the same thing, when in reality they are different from each other.

Economics is the science and art of how societies use resources to produce valuable commodities and distribute them among different people. In other words, Economics is a subject concerned with the optimisation of available resources in an efficient manner. Economics is theoretical, as it contains theories, models and principles. There are two branches of economics:
- Micro Economics: The arm of economics which studies the behaviour and actions of individual economic agents, such as a person, a household, a firm, or an industry. In short, it studies selected small parts of the economy.
- Macro Economics: The arm of economics in which broad issues of the economy are studied, such as economic growth, unemployment, trade balance, poverty, the standard of living, inflation, etc. It studies the economy as a whole.

Economy is economics at play in a certain region. When a country or a geographical region is defined in the context of its economic activities, it is known as an economy or economic system. Economy is the practical application of Economics. In an economy, the supply-demand cycle might be managed by the government (State), the market, or both, giving rise to various types of economies. Therefore, we have the Chinese Economy (State driven), the US Economy (market driven or capitalist), and the Indian Economy (mixed economy).

TYPES OF ECONOMIES

CAPITALISTIC ECONOMY
- Origins in the book Wealth of Nations (1776) by the Scottish economist Adam Smith.
- He raised his voice against the heavy-handed government regulation of commerce and industry of the time, which did not allow the economy to tap its full economic potential.
- He proposed an environment of 'laissez faire', i.e. non-interference by the government in the market affairs of an economy.
- According to him, market forces themselves bring a state of equilibrium in an economy.
- In such an economy, the decisions of what to produce, how much to produce, and at what price to produce are taken by the market, with the state having no economic role. In a capitalist economy, the market determines prices through the laws of supply and demand.
- For example, say the market is left to itself to determine the price of wheat, rather than the government deciding the same. Say the cost of production of wheat is INR 18 per kg. The farmers may think of selling it at INR 60 per kg, hoping to reap big profits. However, at such high prices people will start consuming less wheat to keep their expenditure constant. So the farmers earn the same amount as before, but are now left with a lot of unsold wheat, which they either have to store somewhere or try to dispose of at a lower price. The cycle continues, and ultimately the price of wheat settles down at a market-driven price, say INR 28 per kg.
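The wheat example can be mimicked with a toy price-adjustment loop. The linear demand and supply schedules below are hypothetical, chosen only so that the market-clearing price comes out near the INR 28 per kg used above:

```python
# Toy price-adjustment ("tatonnement") loop for the wheat example.
# The demand and supply schedules are hypothetical; they are chosen only
# so that the market-clearing price comes out near INR 28 per kg.

def demand(p):   # quantity demanded falls as price rises
    return 100 - 2 * p

def supply(p):   # quantity supplied rises as price rises
    return -12 + 2 * p

price = 60.0     # farmers' initial asking price from the example
for step in range(50):
    excess = demand(price) - supply(price)
    if abs(excess) < 1e-6:           # market has cleared
        break
    price += 0.1 * excess            # excess demand pushes price up, surplus pushes it down

print(f"Market-clearing price: INR {price:.2f} per kg")  # ~28.00
```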
STATE ECONOMY
- Origins in the work of the German philosopher Karl Marx (1818-1883).
- This type of economic system first came up in the erstwhile USSR after the Bolshevik Revolution (1917) and got its ideal shape in China (1949).
- While the USSR economy was a socialistic economy, the Chinese economy is known as a communist economy. Both socialism and communism are left-wing schools opposing capitalism.
- In a socialistic economy, the state plans, and the economy is carried out by market forces strictly according to the state plan. Here everyone gets equality of opportunity, and one's share is determined according to one's contribution to the economy.
- In a communist economy, the state plans and owns the resources, and distributes equally among all sections, irrespective of their contributions.
- In such an economy, the decisions of what to produce, how much to produce, and at what price to produce are taken solely by the state.

MIXED ECONOMY
- Origins in the book The General Theory of Employment, Interest and Money by the British economist John Maynard Keynes (1883-1946).
- The Capitalistic Economy suffered a major setback in the Great Depression of 1929.
- Keynes suggested strong government intervention in the economy.
- With Keynes's policy, the concerned economies were successfully pulled out of the Great Depression.
- On similar lines, in state economies, the Polish philosopher Oscar Lange suggested the inclusion of some of the good things of the capitalistic economies. He called it market socialism.
- China took its first step towards a limited market economy through its open door policy of 1985.
- However, the efforts towards market socialism by the USSR led to its very disintegration, as the constituent states felt they could do better outside the regulatory control of the USSR.
- By the late 1980s, the world had neither a pure example of a capitalistic economy nor of a state economy.
- After independence, India opted for a mixed economy with the balance more towards the state. However, post the 1991 reforms, the balance is shifting towards the market.

SECTORS OF AN ECONOMY

PRIMARY SECTOR
- All those economic activities that involve direct use of natural resources, e.g. agriculture, dairy, forestry, fishing, mining, etc.
- An economy is called an agrarian economy if the primary sector contributes more than 50 percent of the total output of the economy.
- In the case of India, it was so at the time of independence, but now the share of the primary sector is about 14 percent of India's GDP. However, more than 47 percent of the workforce is still dependent on this sector for livelihood.

SECONDARY SECTOR
- All those economic activities that involve processing the produce of the primary sector. This sector is also known as the industrial sector, e.g. the food processing industry, bakeries, furniture, the pharmaceutical industry, iron and steel plants, etc.
- An economy is called an industrial economy if the secondary sector contributes more than 50 percent of the total output of the economy.
- This sector contributes approximately 28 percent of India's GDP. Of this, the contribution of the manufacturing sector is approximately 15 percent of India's GDP. This sector employs 22 percent of the Indian workforce.

TERTIARY SECTOR
- All those economic activities that involve the production of various services, such as education, banking, insurance, transportation, tourism, etc. This sector is also known as the service sector.
- An economy is called a service economy if the tertiary sector contributes more than 50 percent of the total output of the economy.
- This sector contributes approximately 58 percent of India's GDP and employs 31 percent of the Indian workforce.

Below is the tabular representation of various sectors and their share in percentage terms of GDP in the Indian Economy.
[Table: sector-wise share of GDP, FY 2019-20 — Agriculture, Fisheries and Forestry; Trade, Hotel, Transport, Storage, Communication and services related to broadcasting; Financial, Real Estate & Professional Services; Public Administration, Defence and Other Services]

During FY 2019-20, the share of the agriculture and allied sector (14 percent), along with that of mining, electricity and manufacturing (28.2 percent), got reduced in the Indian Economy, while the share of the service sector (57.8 percent) increased, as compared to the previous financial year.

WAYS TO CALCULATE INCOME OF AN ECONOMY

GROSS DOMESTIC PRODUCT (GDP)
- Gross Domestic Product is the value of all final goods produced and services provided within the boundaries of a country in one financial year.
- It is a quantitative concept, and its volume/size indicates the internal strength of an economy. However, it does not tell anything about the qualitative aspects of the goods and services produced.
- It is used by the IMF/World Bank in their comparative analysis of member nations.
- It is generally used to determine the growth rate of an economy. So when we say the projected growth rate of India this year is 7 percent, we mean that our GDP is expected to grow 7 percent over last year.

Normally, there are two types of GDP – Nominal and Real. Real GDP is adjusted for inflation while nominal is not, and therefore nominal GDP always appears higher than real GDP. However, it is real GDP which is used for calculating economic growth.

Basis for comparison | Nominal GDP | Real GDP
Meaning | The GDP calculated at current market prices of goods and services produced | The GDP calculated at market prices of goods and services produced in the base year
Prices used | Current year prices | Base year prices or constant prices
Use | Not used to measure economic growth | Good indicator of economic growth

NET DOMESTIC PRODUCT (NDP)
- NDP = GDP − Depreciation on capital assets. Therefore NDP is always less than GDP for a country.
- Depreciation refers to a decrease in value due to wear and tear.
- Capital assets refer to assets that are not part of the goods and services produced but are used to produce them, e.g. machinery, infrastructure, roads, etc.
- The government of the economy decides the rates of depreciation of various assets. In India, this is done by the Ministry of Commerce and Industry. For example, a residential house in India might have a rate of 1% depreciation per annum, while an electric fan might have a rate of 10% depreciation per annum. (In terms of currency, depreciation means a fall in value with respect to a foreign currency, usually the US dollar.)
- It is a qualitative concept. A higher NDP implies less depreciation of assets and hence more efficiency of an economy.
- NDP is not used in comparative economics, i.e. to compare two economies. This is because the rate of depreciation is subjective and often used to manipulate market behaviour. Where one nation may keep the rate of depreciation of luxury vehicles at 20% per annum, another might keep it at 10% to boost sales of new vehicles: by reducing the rate of depreciation, people in this segment become inclined to buy new vehicles due to the smaller difference between the prices of new and old vehicles. Similarly, one nation may keep the rate of depreciation of heavy vehicles at 10% per annum, while another keeps it at 40% to boost sales of new vehicles. This is because heavy trucks are mostly employed in the transportation business, where the businessman can claim a tax deduction based on asset depreciation. The more the depreciation, the more the tax deduction, and hence the more the profit. Hence, in this case, with an increase in depreciation rates, businesses will invest more in heavy vehicles.
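A minimal sketch tying the two ideas together — deflating nominal GDP to real GDP, and netting depreciation out of GDP to get NDP. All the rupee figures are invented for illustration; only the formulas come from the text:

```python
# Illustrative only: hypothetical numbers, real formulas.
# Real GDP strips inflation out of nominal GDP; NDP strips depreciation out of GDP.

nominal_gdp = 220.0           # hypothetical GDP at current prices
inflation_since_base = 0.10   # assumed 10% price rise since the base year
depreciation = 12.0           # hypothetical depreciation on capital assets

real_gdp = nominal_gdp / (1 + inflation_since_base)   # GDP at constant (base-year) prices
ndp = nominal_gdp - depreciation                      # NDP = GDP - depreciation

print(f"Real GDP: {real_gdp:.1f}")   # 200.0
print(f"NDP:      {ndp:.1f}")        # 208.0
```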
GROSS NATIONAL PRODUCT (GNP)
- GNP = GDP + Income from abroad.
- Income from abroad is the sum total of:
  - Private Remittances: India is the highest recipient of private remittances in the world, with USD 80 billion in 2018. Inward remittances are counted as positive while outgoing remittances are counted as negative. For India, the sum total of all remittances is positive, i.e. we have more inward remittances than outgoing.
  - Interest on external loans: Interest earned on loans given to external markets is counted as positive, while interest paid on loans borrowed from external markets is counted as negative. Since India borrows more from external markets than it lends, this has been negative in the case of India.
  - External grants: Grants received are counted as positive while grants given are counted as negative. Since India gives more external grants than it receives, this is negative in the case of India.
  - Trade Balance = Net value of exports − Net value of imports. Exports are counted as positive while imports are counted as negative. In the case of India, since the total value of exports is less than the total value of imports, this is negative.
- Income from abroad can be positive or negative. In the case of India, it is negative, primarily due to the negative trade balance and the negative interest on external loans.
- Hence, in the case of India, GNP is less than GDP.
- It is on the basis of GNP that the IMF ranks the countries of the world in terms of volumes at Purchasing Power Parity (PPP). We will discuss PPP in later chapters, but for the sake of clarity, PPP is the relative value of currencies based on their purchasing capacities.
- Although it is a quantitative concept, it is more exhaustive than the concept of GDP, since its volume/size indicates the internal as well as the external strength of an economy.

NET NATIONAL PRODUCT (NNP)
- NNP = GNP − Depreciation = GDP + Income from abroad − Depreciation = National Income of an economy.
- Per Capita Income = NNP / Total Population = (GNP − Depreciation) / Total Population.
- A higher rate of depreciation leads to lower per capita income in an economy.

COST AND PRICE OF NATIONAL INCOME

An economy needs to choose at which of the two costs and two prices it will calculate its national income. In India, this task is done by the Central Statistical Organisation (CSO).

COST: The value of goods can be determined at either of:
- Factor Cost: The input cost of producing goods and services. This is also termed the Factory Price.
- Market Cost: The wholesale market price of goods and services. This is arrived at by adding the indirect taxes to the factor cost of the product. This is also termed the Market Price.

India officially used to calculate its national income at factor cost. However, since 2015, the CSO has switched over to calculating India's national income at market cost. Thus, National Income at Factor Cost = National Income at Market Cost − Indirect Taxes.

PRICE: The value of money used to calculate the value of goods can be determined at either of:
- Constant Price: The effect of inflation is removed while calculating the value of goods and services. For example, goods might cost INR 100 in January 2018 and INR 150 in January 2019. Say there was 10 percent inflation during this period. The value of the goods at constant price in 2019, with 2018 as the base year, would then be [150/(100+10)]*100, which is nearly equal to INR 136.40.
- Current Price: The effect of inflation is included while calculating the value of goods and services. In the above case, the value of the goods at current prices would be INR 150 only.

India calculates its national income at constant prices. The base year has been revised from FY 2004-05 to FY 2011-12.

Headline Growth Rate = Growth in GDP at constant market prices with base year FY 2011-12 (FY11).

Certain developed nations calculate their national income at current prices, since inflation in those nations is marginal, and so by shifting to current prices they reduce the complexities involved in calculations based on constant prices.
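Putting the aggregates into one chain, GDP → GNP → NNP → per capita income. The magnitudes below are hypothetical, but the signs follow the discussion above (positive remittances; negative net interest, net grants and trade balance for India):

```python
# National income aggregates, with hypothetical magnitudes.
# Signs follow the discussion above: for India, remittances are positive,
# while net interest, net grants and the trade balance are negative.

gdp = 200.0
remittances = 6.0
net_interest_on_loans = -3.0
net_external_grants = -1.0
trade_balance = -10.0
depreciation = 12.0
population = 4.0   # toy population, same arbitrary units

income_from_abroad = (remittances + net_interest_on_loans
                      + net_external_grants + trade_balance)   # -8.0
gnp = gdp + income_from_abroad    # 192.0 < GDP, as the text notes for India
nnp = gnp - depreciation          # national income: 180.0
per_capita_income = nnp / population

print(f"GNP: {gnp:.1f}, NNP: {nnp:.1f}, per capita: {per_capita_income:.1f}")
```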
CONCEPT OF BASE YEAR IN PRICE

The base year is a reference year from which onwards the inflation is considered zero. In the above example, say the price of a good in year X is INR 100, and assume that annual inflation is 10 percent. The price of the good at current price in year X+1 would then be 110 percent of 100, i.e. INR 110. Similarly, the price at current price in year X+2 would be 110 percent of 110, i.e. INR 121, and the price at current price in year X+3 would be 110 percent of 121, i.e. INR 133.10.

In all the scenarios above, if we exclude the effect of inflation, we get the price of the good at constant value:
- The price of the good in year X+3 at constant value with base year X+2 would be INR 121.
- The price of the good in year X+3 at constant value with base year X+1 would be INR 110.
- The price of the good in year X+3 at constant value with base year X would be INR 100.
- The price of the good in year X+2 at constant value with base year X+1 would be INR 110.
- The price of the good in year X+2 at constant value with base year X would be INR 100.

COMPONENTS OF NATIONAL INCOME

The national income (GDP/GVA) is usually measured as the sum total of the below components:
- PRIVATE CONSUMPTION EXPENDITURE (C) – This component measures the value of consumer goods and services purchased by households and non-profit institutions during a financial year. This forms the biggest component of India's GVA and accounts for 50-60 percent of it.
- INVESTMENT EXPENDITURE (I) – This component measures the value of capital goods and infrastructure created in a given financial year.
- GOVERNMENT PURCHASE OF GOODS AND SERVICES (G) – This component measures the government's spending on goods and services in a given financial year. It basically measures the cost of government services.
- NET EXPORTS (X) – This measures the difference between the value of exports and imports for a given financial year.

National Income = C + I + G + X

TYPES OF INCOME

Before we discuss this, note one basic point: if there is inflation in an economy, the price of an item in the previous year will always be lower than the price of the same item in the current year.
- Nominal Income = The income that we get in hand. Say we get a salary of INR 100; this is our nominal income.
- Real Income = Nominal Income minus the effect of inflation, i.e. the value of nominal income in the base year. Say inflation has been 50 percent since FY 2018-19 (FY18). Real Income = [Nominal Income/(100+50)]*100 = INR 66.7.
- Disposable Income = Nominal Income − Direct Taxes.
- Real Disposable Income = Disposable Income minus the effect of inflation, i.e. disposable income in the base year.
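The base-year arithmetic above reduces to one deflation rule — divide the current price by the inflation accumulated since the base year. A sketch reproducing the INR 133.10 example and the INR 66.7 real-income figure:

```python
# Reproducing the base-year arithmetic above (10% inflation each year).

base_price = 100.0
inflation = 0.10

# Current prices in years X, X+1, X+2, X+3: each year compounds the last.
current = [base_price * (1 + inflation) ** t for t in range(4)]
print([round(p, 2) for p in current])            # [100.0, 110.0, 121.0, 133.1]

# Constant (base-year) value: divide out the inflation accumulated since the base year.
def constant_value(price, years_since_base, infl=inflation):
    return price / (1 + infl) ** years_since_base

print(round(constant_value(current[3], 1), 2))   # X+3 at base X+2 -> 121.0
print(round(constant_value(current[3], 3), 2))   # X+3 at base X   -> 100.0

# Same rule gives real income: INR 100 nominal with 50% inflation since FY18.
print(round(100 / 1.50, 1))                      # 66.7
```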
GROSS VALUE ADDED (GVA)

Subsidies reduce the price of products while indirect taxes increase the price of products. Hence, to get a more accurate picture of the growth rate, it was decided in FY 2011-12 to remove the effect of both these components from the market price of goods and services produced in a nation. This is because high taxation in an economy would show up as a higher growth rate, due to increased prices of goods and services. Similarly, high subsidies on goods and services would show up as a lower growth rate, due to reduced prices of goods and services.

Therefore, since FY 2011-12, India measures its growth also in terms of Gross Value Added (GVA), which is nothing but GDP calculated at market cost minus the effect of indirect taxes and subsidies. To understand the concept of GVA, consider the below:

Market Price of a product = Factory Price of a product + [Indirect Taxes − Subsidies]
Therefore, Market Cost = Factor Cost + [Indirect Taxes − Subsidies]
or, Factor Cost = Market Cost − [Indirect Taxes − Subsidies]
or, Factor Cost = Market Cost − Indirect Taxes + Subsidies

National Income at Factor Cost = National Income at Market Cost − Indirect Taxes + Subsidies

This is also known as the Gross Value Added (GVA) in an economy and is being increasingly used to determine the actual value of goods and services produced in an economy. Therefore, GVA = GDP at Market Cost − Indirect Taxes + Subsidies.

In India, national income is measured in nominal terms, while the growth rate is measured in real terms. Thus, India's GDP = Nominal GDP of India.

India's nominal GDP grew to approximately USD 2.9 trillion (i.e. USD 2900 billion or INR 205 lakh crore). According to the World Economic Forum, India's economy has become the 5th largest in the world, as measured using GDP at current USD prices, moving past the United Kingdom and France. India's GDP at the end of FY 2018-19 was USD 2.7 trillion. The Union Budget 2019-20 articulated the vision to make India a USD 5 trillion economy by 2024-25.

GROWTH RATE BASED ON REAL GDP

For India, the growth rate is measured in terms of real GDP, i.e. after removing the effect of inflation. Thus, to measure the growth rate of an economy, we need to know the real GDP of the current year as well as that of the previous year.

Real GDP = Nominal GDP at constant prices w.r.t. the base year, i.e. Real GDP = Nominal GDP − effects of inflation w.r.t. the base year.

India's growth rate was approximately 5 percent for FY 2019-20. This is much lower than India's growth rate of 6.8 percent for the previous financial year, i.e. FY 2018-19. On the supply side, the deceleration in the growth rate was contributed to by nearly all sectors except the agriculture and allied sector, and the public administration and defence sector, which grew at a higher rate as compared to the previous financial year.

Nevertheless, India has become an attractive destination for investment against the backdrop of a decline in the growth of the major economies of the world. The IMF, in its January 2020 update of the World Economic Outlook, projected India's real GDP to grow at 5.8 percent in 2020-21. The World Bank, in its January 2020 issue of Global Economic Prospects, also sees India's real GDP growing at 5.8 per cent in 2020-21. According to the Economic Survey, India's GDP growth is expected to be in the range of 6.0 to 6.5 per cent in 2020-21.
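A short sketch of the GVA identity and the real growth-rate calculation (the figures are hypothetical; the formulas are the ones derived above):

```python
# GVA = GDP at market cost - indirect taxes + subsidies, and real growth
# is computed on inflation-adjusted (constant-price) GDP. Figures are made up.

gdp_market_cost = 205.0
indirect_taxes = 20.0
subsidies = 8.0
gva = gdp_market_cost - indirect_taxes + subsidies    # 193.0

# Real growth rate: both years' GDP expressed at constant (base-year) prices.
real_gdp_prev = 180.0
real_gdp_curr = 189.0
growth_rate = (real_gdp_curr - real_gdp_prev) / real_gdp_prev

print(f"GVA: {gva:.1f}")
print(f"Real GDP growth: {growth_rate:.1%}")          # 5.0%
```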
It is assumed that the profits were reinvested at the end of each year of the investment's lifespan. To calculate the CAGR of an investment:

- Divide the value of the investment at the end of the period by its value at the beginning of that period.
- Raise the result to an exponent of one divided by the number of years.
- Subtract one from the result.

For example, say you invested INR 100 in a mutual fund. The table below depicts the growth of the mutual fund and the investor's profit thereof.

Beginning Balance (BB) = INR 100. Ending Balance (EB) = INR 127.8. Number of years (N) = 5.

| Year | Annual Profit or Loss Percentage | Value of investment at the end of the period (in INR) |
|------|----------------------------------|-------------------------------------------------------|
| 1    | +10%                             | 100(1.1) = 110                                        |
| 2    | –20%                             | 110(0.8) = 88                                         |
| 3    | +10%                             | 88(1.1) = 96.8                                        |
| 4    | +20%                             | 96.8(1.2) = 116.2                                     |
| 5    | +10%                             | 116.16(1.1) = 127.8                                   |

CAGR = (127.8/100)^(1/5) – 1 = 0.05

Therefore, in the above case, the CAGR for the investment is 5 percent. (A short code sketch reproducing this calculation appears after the key facts below.) CAGR is one of the most accurate ways to calculate and determine returns for anything that can rise or fall in value over time. Since it shows only the overall performance, it does not reflect the investment risk.

KEY FACTS (FY 2019-2020)

- India's GDP in PPP (Purchasing Power Parity) terms is nearly USD 11.5 trillion. This makes India the 3rd largest economy in terms of PPP. In PPP, currencies are matched in terms of their purchasing capacity for a basket of selected goods. Thus, although in nominal terms USD 1 might be equal to INR 75, in PPP terms USD 1 might be equal to INR 32, if USD 1 has the same purchasing capacity in the USA as INR 32 has in India.
- In India, besides the GDP-based growth rate, a GVA-based growth rate is also calculated simultaneously. In inflation-driven economies, the GVA-based growth rate comes out lower than the GDP-based growth rate.
- The IMF estimated global output to have grown at 2.9 percent in 2019, declining from 3.6 percent in 2018 and 3.8 percent in 2017. Global output growth in 2019 is estimated to be the slowest since the global financial crisis of 2009, arising from a geographically broad-based decline in manufacturing activity and trade.
- Among states, Maharashtra (USD 450 billion) has the highest GDP, followed by Tamil Nadu and Uttar Pradesh.
- Among states, Goa has the highest per capita income, followed by Delhi and Sikkim. Bihar has the lowest per capita income, followed by Uttar Pradesh.
- Largest trading partners of India:
  - China (bilateral trade = USD 90 billion)
  - USA (bilateral trade = USD 75 billion)
  - UAE (bilateral trade = USD 50 billion)
- Total exports = USD 305 billion. Major destinations:
  - USA (16 percent)
  - UAE (9 percent)
  - China (5.5 percent)
- Total imports = USD 465 billion. Major sources:
  - China (17 percent)
  - USA (7.5 percent)
  - UAE (6.5 percent)
- The manufacturing sector constitutes the major part of both exports (70 percent) and imports (52 percent). Fuels form the next major part (exports 14 percent, imports 30 percent).
- Total remittances = USD 79 billion.
- Total foreign reserves = USD 474 billion (as on April 2020).
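As promised above, here is a minimal Python sketch (an added illustration, not part of the original text) that reproduces the mutual-fund CAGR calculation:

```python
def cagr(beginning_balance, ending_balance, years):
    """Compound annual growth rate as a fraction (e.g. 0.05 for 5%)."""
    return (ending_balance / beginning_balance) ** (1 / years) - 1

# Rebuild the ending balance from the yearly returns in the table above.
balance = 100.0
for annual_return in [0.10, -0.20, 0.10, 0.20, 0.10]:
    balance *= 1 + annual_return
print(round(balance, 1))                # 127.8

print(round(cagr(100, balance, 5), 3))  # ~0.05, i.e. 5 percent per year
```

Note how a single smoothed rate of 5 percent per year summarizes a path that actually included a 20 percent loss, which is exactly why CAGR shows overall performance but not investment risk.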
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9293340444564819, "language": "en", "url": "https://www.groupe-akesson.com/en-us/principles-of-accounting-and-accounting-assumptions/", "token_count": 1121, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1787109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:37125a35-dd29-4c4f-9403-7447a02927ad>" }
Principles Of Accounting And Accounting Assumptions

In the modern world no business can afford to remain secretive, because a number of parties such as creditors, employees, taxation authorities, investors, the public and the government are interested in knowing about the affairs of the business. The affairs of the business can be studied mainly by consulting the final accounts and the balance sheet of the particular business. Final accounts and the balance sheet are the end products of book-keeping. Because of the importance of these statements, it became necessary for accountants to develop some principles, concepts and conventions which may be regarded as the fundamentals of accounting. Such fundamentals, having wide acceptance, give reliability and credibility to the financial statements prepared by accountants. The need for 'generally accepted accounting principles' arises for two reasons: first, to be logical and consistent in recording transactions; second, to conform to the established practices and procedures.

There is no agreement among accountants as regards the basic concepts of accounting, and there is no uniformity in generally accepted accounting principles (GAAP). The terms axioms, assumptions, conventions, concepts, generalizations, methods, rules, doctrines, techniques, postulates, standards and canons are used freely and inconsistently in the same sense.

Principles: "A general law or rule, adopted or professed as a guide to action; a settled ground or basis of conduct or practice." This definition, given by dictionaries, comes nearest to describing what most accountants mean by the word 'principle'. Care should be taken to make it clear that, as applied to accounting practice, the word 'principle' does not connote a rule from which there can be no deviation. An accounting principle is not a principle in the sense that it admits of no conflict with other principles.

Postulates: to postulate means to assume without proof, to take for granted or by general consent; a position assumed as self-evident. Postulates are assumptions, but they are not arbitrary, deliberate assumptions; rather, they are generally recognized assumptions which reflect the judgment of 'facts' or trends or events, assumptions which have been borne out in the past by facts and are supported by legal institutions, making them enforceable to some extent.

Doctrines: doctrines mean principles of belief: what the scriptures teach on any subject. The term refers to an established principle propagated by a teacher which is followed in strict faith. But in accounting practice, no such doctrine need be adhered to; the term denotes the general principles or policies to be followed.

Axioms: axioms denote statements of truth which cannot be questioned by anyone.

Standards: standards refer to the basis expected in accounting practice under particular circumstances.

In the Indian context, the Institute of Chartered Accountants of India (ICAI) constituted an Accounting Standards Board (ASB) on 21st April, 1977. The main function of the ASB is to formulate accounting standards taking into consideration the applicable laws, customs, usages and the business environment.

The International Accounting Standards Committee (IASC) as well as the Institute of Chartered Accountants of India (ICAI) treat (vide IAS-1 & AS-1) the following as the fundamental accounting assumptions:

(1) Going concern

In the ordinary course, accounting assumes that the business will continue to exist and carry on its operations for an indefinite period in the future. The entity is assumed to remain in operation sufficiently long to carry out its objects and plans. The values attached to the assets will be on the basis of their current worth. The assumption is that the fixed assets are not intended for re-sale. Therefore, it may be contended that a balance sheet prepared on the basis of a record of facts at historical costs cannot show the true or real worth of the concern at a particular date. The underlying principle is that the earning power, and not the cost, is the basis for valuing a continuing business. The business is to continue indefinitely, and the financial and accounting policies are followed to maintain the continuity of the business unit.

(2) Consistency

There should be uniformity in accounting processes and policies from one period to another. Material changes, if any, should be disclosed even though there is an improvement in technique. A change of method from one period to another will affect the results of the trading materially. Only when the accounting procedures are adhered to consistently from year to year will the results disclosed in the financial statements be uniform and comparable.

(3) Accrual

Accounting attempts to recognize non-cash events and circumstances as they occur. Accrual is concerned with expected future cash receipts and payments: it is the accounting process of recognizing assets, liabilities or income for amounts expected to be received or paid in the future. Common examples of accruals include purchases and sales of goods or services on credit, interest, rent (not yet paid), wages and salaries, and taxes. As a result, we record all expenses and incomes relating to the accounting period whether actual cash has been disbursed or received or not.

If a fundamental accounting assumption (i.e. going concern, consistency or accrual) is not followed in the preparation of financial statements, the fact should be disclosed. [AS-1 para 27]
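To illustrate the accrual assumption numerically, here is a small Python sketch (an added illustration with invented figures, not from the original text) contrasting cash-basis and accrual-basis income for the same period:

```python
# Transactions for one accounting period:
# (description, amount, cash_settled_this_period)
transactions = [
    ("credit sale of goods",         +500, False),  # income earned, cash not yet received
    ("cash sale of goods",           +200, True),
    ("rent for the period, unpaid",  -100, False),  # expense incurred, cash not yet paid
    ("wages paid in cash",           -150, True),
]

cash_basis_income    = sum(a for _, a, settled in transactions if settled)
accrual_basis_income = sum(a for _, a, _ in transactions)

print(cash_basis_income)     # 50:  counts only cash actually received or paid
print(accrual_basis_income)  # 450: counts all income earned and expenses incurred
```

Under the accrual assumption, the 450 figure is the correct income for the period, even though only 50 of it moved as cash.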
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9346891641616821, "language": "en", "url": "http://argylebox.com/s2mku7/63cc92-benefits-of-pork-barrel-spending", "token_count": 2069, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.455078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b1b50e24-3fde-4759-8654-b7cfa123dbd7>" }
Logrolling refers to a situation in which two or more legislators agree to vote for each other's legislation, which can then encourage pork-barrel spending. Without logrolling, any legislation that favors one party is not likely to get much cooperation from the other one. In one textbook example, a legislator sells the government on a project and receives $700 million in federal funds; once the project is complete, travel between the two cities increases, which creates opportunities for businesses in other sectors.

Pork barrel spending is a common US term referring to the allocation of national tax money to regionally specific projects. It is the controversial practice of a legislature directing spending in a specific manner, often to benefit the district or constituents of the member who requests it. The modern U.S. budget estimates revenues and authorizes expenditures, and pork barrel spending occurs when members of Congress spend government money on specific projects intended to benefit their home districts. These projects are funded by taxpayers' money, but rather than benefiting every citizen of the country, they benefit only a particular politician's district. The term carries negative connotations, especially when mentioned in connection with Congress, as it can imply bribery or, at the very least, the granting of special favors in return for other favors. Until Congress put a lid on it a decade ago, legislators often attempted to add "earmarks" that benefitted only the lawmaker's state to broad legislative bills.

Who does pork-barrel spending benefit? The benefits are concentrated on a single district or special interest, while the costs are widely dispersed over all taxpayers. Pork-barrel spending can thus be thought of as another case where democracy is challenged by _____ and _____: (A) special interest groups; disconnected voters, (B) concentrated benefits; widely dispersed costs, (C) conventional wisdom; lawyers and judges, or (D) higher income earners; low-educated individuals. Logrolling increases the likelihood that pork-barrel projects will be approved ("Absolutely, that's how Congress buys individual votes"), so a citizen-oriented Congress can become a pork-barrel-oriented Congress. Relatedly: gerrymandering benefits whichever political party holds the majority in the legislature whenever the district lines are redrawn.

Pork-barrel spending hurts the economy by using taxpayer funds to benefit a specific group while failing to support others simultaneously; the famous "Bridge to Nowhere" is a standard example of federal money directed to a localized project yielding only a narrow geographic benefit. Cadot et al. (2006) find that rent-seeking legislators enact more pork-barrel spending, and, counter-intuitively, term limits can actually increase pork-barrel spending rather than curb it. Pork-barrel projects can also benefit the incumbent through several channels: one way an incumbent can increase their chances of re-election is through increased fundraising, and incumbents can likewise utilize this spending to improve their electoral chances, since bringing federal funds home helps politicians keep their jobs. In the short term, cutting such a project would not return the savings to taxpayers; the money would simply be restored to the DoD's budget. Hence, the legal treatment of public spending should be more careful, at both the constitutional and legal levels.

Is pork-barrel spending always a bad thing? On one side, it can be an important tool in getting bills passed and compromising on legislation, and not all pork is bad, because many congressional districts receive benefits through it that are long overdue to them. On the other side, it directs federal money to localized projects at the expense of the country as a whole. The 2019 Congressional Pig Book Summary gives a snapshot of each appropriations bill and details the juiciest projects culled from the complete Pig Book.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9460708498954773, "language": "en", "url": "https://blog.ziploan.in/digitization-smes-in-india/", "token_count": 971, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.05517578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:8898c8cb-67b2-48bd-95d1-f3eaaa87f739>" }
SMEs in India contribute around one-third of the country's GDP (Gross Domestic Product). SMEs and MSMEs in India play a crucial role in India's development and employ over 110.9 million people. In addition, many government schemes such as Start-up India, Digital India, E-Governance, etc. are also helping the MSME sector in India. These government schemes are empowering SMEs in India by helping them improve their operational efficiency and customer reach. With the advent of technology, improved infrastructure, and online transactions, a newer market is open to SMEs in India. This is creating new opportunities for the MSME sector in India.

What is Digitization?

Digitization is the process of converting data and information into digital formats. It is like converting a paper book into an e-book. Here, all the information is converted into bits (binary digits, or computer language in simple words). Digitization makes it convenient and easy to access, share, and preserve information and data. For instance, a book or a simple piece of information can be accessed by people worldwide if digitized. In recent times, there has been a growing trend of increased digitization, and the same holds for the world of business. The main reason why the business world and SMEs in India are adopting digitization is that it makes work smoother, faster, and more efficient.

How Digitization is Changing the Business World

Digitization in the business world is seen as change and development. It has made business work flexible and is constantly reshaping it as well. The following are the ways in which digitization has changed the business world and, therefore, the MSMEs in India:

Flexibility

Digitization has enabled businessmen to work at their own convenience. With all the data and information stored on a device or computer, businessmen can easily access and work with it, so work schedules can be adjusted according to personal needs.

Innovation

Digitization is not only about converting data and information but also about using technology to find new ways of doing business. There are many new solutions and innovations in the market which can be employed by almost all businesses. Innovation with the help of digitization helps businesses and SMEs in India come up with new ideas and inventions, reach wider audiences and customers, and manage work efficiently. Above all, it helps create a better product or service that can attract customers, keep them happy and satisfied, and enhance their lifestyle.

Communication

Communication is an important aspect of life, and so it is for business. A business cannot thrive without proper communication. Without proper communication with the target audience or customers about the offered product or service, there can be no sales and profits. In addition, if customers already know about the products but are not properly informed, it can lead to conflicts or misunderstandings. There are different channels that enable a business to communicate with its customers, such as Skype, email, Facebook, Messenger, etc. In addition, other important information such as files and documents can also be transferred.

How Digitization is Helpful for SMEs in India

Digitization is certainly beneficial for big companies and businesses. However, it is also helpful for the SME sector in India. The following are the benefits of digitization for the SME and MSME sector in India:

Enlarged Scope of Operations

Digitization and technological advancement have enlarged the scope of SME and MSME operations. The SME sector in India is witnessing more expansion opportunities. In addition, digitization offers cost-effective solutions for SMEs in India to reach a wider customer base, so there is no need to be limited to a local audience.

Online Presence

According to a report, start-ups and businesses with an online presence and engagement witness growth of 20%, while businesses without online engagement witness only 11%. An online presence lets an SME in India break the boundaries of location and time, and provides significant growth and expansion opportunities through the company website and mobile app. This brings in more business in the form of new customers from local, national, and international markets.

Digital Payments

With digital payments, SMEs and MSMEs in India have access to cashless transactions, which provide transparency as well as flexibility. This helps a business attain improved customer loyalty.

Prevention of Fraud

Increased digitization and digital payments in India have decreased fraud and duplication. This saves time, effort, and money. Thus, it can be said that digitalization has helped the SMEs and MSMEs in India in terms of improved efficiency, productivity, transparency, reach, and accountability.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9345689415931702, "language": "en", "url": "https://businessecon.org/liquidity-ratios/", "token_count": 2306, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.00909423828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:efdc2624-db70-4255-a1cb-511a7311dd68>" }
Liquidity ratios are a group of ratios created to measure the ability of a business operation to meet its current obligations. Liquidity ratios are similar to the initial medical tests a patient receives at a doctor's visit. Doctors take blood pressure, temperature, and pulse rate. The doctor wants assurance that the primary indicators of health are good. Liquidity ratios are exactly the same. The user wants to know that the basic measurements of a business indicate good health today.

Liquidity ratios identify various time periods of liquidity. From the longer operating cash ratio down to the immediate cash ratio, there are four ratios in this group. Many novice businessmen mistakenly place too much emphasis on this set of ratios in their decision models. The simple truth is that liquidity ratios are easily manipulated, and sophisticated business reviewers understand this and apply the ratios appropriately. This article will help the reader understand how to properly apply the ratios and interpret the information. The four liquidity ratios are:

- Operating cash ratio
- Current ratio
- Quick ratio
- Cash ratio

This article will explain the four liquidity ratios individually and finish by revealing the proper application of this group of ratios with various decision models.

Liquidity Ratios – Operating Cash Ratio

The operating cash ratio is the most complicated ratio of all the business ratios. This is because the user must understand how to derive cash earnings from normal operations. In effect, it is equal to the cash flow from operations section of the cash flows report. This cash earnings figure is the numerator in the formula used to determine how frequently, i.e. in how many turns, the cash can pay current liabilities. The greater the ratio, the more liquidity exists. The formula is:

Operating Cash Ratio = Cash Flow From Operations / Current Liabilities

The formula does have a built-in flaw related to using the change in current liabilities to calculate cash flow from operations. To adjust for this, sophisticated users of this formula use the current liabilities beginning balance, not the ending balance. A second drawback is a negative cash flow from operations: the equation is impossible to solve with negative cash flow.

It is important for the reader to understand that, although this formula is complex, it is the best overall indicator of liquidity. Why? The operating cash ratio reflects the ability to pay current liabilities from cash generated by operations. It is the complete picture. The other liquidity ratios are limited to the actual current assets on the books and not cash sourced from operations. Another interesting perspective is that the operating cash flow ratio is more stable because the business is utilizing an entire accounting cycle to determine liquidity. Simply stated, the operating cash ratio is the broadest of the liquidity ratios and should be given the greatest weight in a decision model about liquidity. If there is one ratio you truly want to understand and appreciate, this is the one. The other liquidity ratios are narrower in scope.

Liquidity Ratios – Current Ratio

The current ratio is the simplest of all the business ratios. It is inherently flawed and therefore unreliable. The current ratio formula relates to the respective accounting cycle:

Current Ratio = Current Assets / Current Liabilities

It is all-inclusive of current assets and current liabilities. Since both groups of balance sheet items include relatively short-term items such as cash and accounts payable (immediate and 30-day accounts respectively) along with other items that have a much longer time impact, such as prepaid expenses and the current portion of long-term debt, the ratio reflects a mix of fiscal year information. Many different cash changes can happen within one year; thus this ratio is merely a point in time along that one-year spectrum. A sophisticated user of business ratios will use a line graph of the ratio over the entire year to understand the ratio's true value and corresponding impact.

Examples of items that can easily affect the current ratio in the short term include borrowing money with a line of credit, cash infusion from non-operating activities (sale of a fixed asset, sale of stock, etc.) and cash disbursements for dividends.

Here is a simple example. XYZ Company has $250,000 of current assets and $120,000 of current liabilities. It is a seasonal company and is ramping up activities for the new season. XYZ Company exercises its line of credit for $200,000 to begin purchasing additional inventory. The before and after ratios are as follows:

Before: Current Ratio = $250,000 of Current Assets / $120,000 of Current Liabilities = 2.0833:1
After: Current Ratio = $450,000 of Current Assets / $320,000 of Current Liabilities = 1.4063:1

Note the significant decrease in the ratio, which would make the common entrepreneur think twice about this change. However, a line graph of the ratio over a period of one year (the minimum should be around 3 years) will provide a better understanding of XYZ's ability to pay its current obligations as a result of this ratio. The keys to this ratio are:

- It is simple and broad in scope,
- There are too many possibilities to influence the ratio, thus making it unreliable, AND
- The ratio should only be evaluated as a line graph over an extended period of time (3 years minimum).

It is better to use a more focused liquidity ratio.

Liquidity Ratios – Quick Ratio

The quick ratio is much narrower in scope and time as compared to the current ratio. This ratio is often referred to as the 'Acid Test' ratio. The acid test refers to the old system of testing gold by placing a drop of acid on the element: the color change would tell the buyer of gold its purity and thus its value. Here, this is somewhat similar for liquidity. To make current assets purer, the user drops out of the equation the one current asset that will take time to liquidate: inventory. Inventory turnover can be short-term, such as food, or long-term, such as construction in process. By dropping inventory from the formula, the user doesn't have to worry about the ending value related to the inventory sale. Thus, the quick ratio removes time as the primary detriment to a result. The formula is exactly like the current ratio except it removes inventory from the numerator:

Quick Ratio = (Current Assets less Inventory) / Current Liabilities

This ratio is flawed too. Notice how nothing changes with current liabilities even though some portion of those liabilities is tied to the inventory, e.g. supplier accounts payable. Worse, for those companies with a high reliance on financing inventory, such as contractors, seasonal sellers and big-ticket sellers (appliances, furniture, auto dealerships etc.), this ratio can get lopsided. As an example, look at this RV dealership's current ratio and quick ratio based on its balance sheet:

Sub-Total Current Assets: $5,300,000 (of which inventory is $4,000,000)
Accounts Payable: $300,000
Floor Plan: $3,600,000
Sub-Total Current Liabilities: $4,100,000

The current ratio is $5,300,000 / $4,100,000 = 1.29:1. The quick ratio is:

Quick Ratio = (Current Assets less Inventory) / Current Liabilities = $1,300,000 / $4,100,000 = .317:1

This ratio is valuable if used with the correct industry or industries. As with the current ratio, use a line graph over an extended period of time to evaluate improvement. As with all ratios, comparing several periods of time is best when evaluating the ability to pay current obligations. Some users of liquidity ratios want an immediate understanding of the ability to pay current liabilities. The user wants to know about today only. There is one liquidity ratio that is highly narrow in its focus: the cash ratio.

Liquidity Ratios – Cash Ratio

The cash ratio is very narrow in its time period of utility. It reflects the ability to pay current obligations today, right now, not tomorrow or the next day or even 30 days from now. Today. This ratio takes cash as the only asset with the ability to pay current obligations. It is extremely pure as it relates to liquidity. Its formula is really simple:

Cash Ratio = Cash / Current Liabilities

For most businesses, it is never greater than 1 to 1. The more cash-intensive the operation, such as restaurants, salons and banking, the higher the ratio's outcome. In the food service industry, it should always be greater than 1:1. As the entity's operations shift towards fixed-asset reliance, the less likely this ratio will exceed 1:1. As a liquidity ratio, it has a purpose. It is more useful to the owners than to an outside interpreter of information. For the owner, the ratio sets a minimum threshold to stay above in order to maintain solvency. It can also identify trigger points for distributions and dividends. If you are using this ratio as a potential investor in this business, take into consideration several factors before relying on it. Consider:

- The sector and particular industry,
- Industry standards,
- Sources of cash and the ability to obtain cash at a moment's notice,
- The other liquidity ratios.

Proper Application of Liquidity Ratios

Proper application of liquidity ratios helps the user to understand the operation's current solvency status and the ability to maintain solvency in the near future. Liquidity ratios should never be used to predict the ability to pay obligations beyond a 30-day window. All the ratios have inherent flaws; the most obvious is the ability to add to or subtract from current assets without affecting current liabilities. Long-term borrowings, infusions from the owners or the sale of fixed/other assets can cause spikes (upward or downward) in all four liquidity ratios. Proper application of the liquidity ratios includes:

- Utilizing line graphs over an extended period of time for all four of the liquidity ratios. Three years of information is the minimum time period a user needs to evaluate the respective ratio.
- The order of credibility of the ratios is as follows:
  - Operating cash ratio adjusted to beginning current liabilities
  - Current ratio
  - Quick ratio
  - Cash ratio
- Give the operating cash ratio the greatest weight factor when using liquidity ratios.
- Identify the ability of the company to infuse cash from non-operating sources such as loans, owner contributions, sales of fixed assets, etc.
- Never make a decision solely based on liquidity ratios.
- Liquidity ratios are designed to measure solvency, not bankruptcy; liquidity ratios can only measure the ability to meet obligations in the near future, i.e. 30 days or less.
- Economic sectors and their corresponding industries have their own respective standards to gauge liquidity ratios against; use the standards from a comparable industry when evaluating results. If there are no standards available, use a long timeline of the ratio's change. There should be continuous improvement.

The key for the user is to look at the ratios as an immediate indicator of financial health. Just as a doctor performs an initial batch of tests on a patient to gauge any immediate concern, users of ratios turn to liquidity ratios to gauge the solvency of the business. Use other ratios to determine the long-term success of a business. Liquidity ratios should not carry a lot of weight in a business decision model. Use liquidity ratios with at least a dozen other ratios to evaluate a business operation. A short code sketch pulling the four formulas together follows below.

ACT ON KNOWLEDGE.
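As referenced above, here is a minimal Python sketch tying the four formulas together (an added illustration: the balance-sheet numbers echo the RV dealership example, while the cash and operating cash flow figures are invented, since the article does not give them):

```python
def liquidity_ratios(cash, current_assets, inventory,
                     current_liabilities, cash_flow_from_operations):
    """Return the four liquidity ratios described above as a dict.

    Pass beginning current liabilities for a cleaner operating cash ratio,
    as the article recommends.
    """
    return {
        "operating_cash": cash_flow_from_operations / current_liabilities,
        "current": current_assets / current_liabilities,
        "quick": (current_assets - inventory) / current_liabilities,
        "cash": cash / current_liabilities,
    }

# RV dealership figures from the example above; cash and operating cash flow
# are hypothetical placeholders for illustration only.
ratios = liquidity_ratios(
    cash=250_000,
    current_assets=5_300_000,
    inventory=4_000_000,
    current_liabilities=4_100_000,
    cash_flow_from_operations=900_000,
)
for name, value in ratios.items():
    print(f"{name}: {value:.3f}")  # current -> 1.293, quick -> 0.317
```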
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9538052678108215, "language": "en", "url": "https://oecdedutoday.com/spread-the-wealth-reap-the-benefits/", "token_count": 640, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0712890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:cfadb9e7-0dbf-47f2-9681-2277880107d9>" }
by Marilyn Achiron, Editor, Directorate for Education and Skills

Quick: Who has more up-to-date textbooks: students in wealthier schools or students in poorer schools? Actually, it depends where you live. As this month's PISA in Focus explains, not only are some countries better than others in allocating their educational resources more equitably across schools, but students in these countries generally perform better in mathematics.

PISA 2012 asked school principals to report whether teacher shortages, or shortages or inadequacy of physical infrastructure or instructional materials, like textbooks, hindered their school's ability to provide instruction. PISA found that while disadvantaged schools benefit from investments in smaller classes, they are also more likely to suffer from teacher shortages and inadequate instructional materials than advantaged schools. In general, schools with more socio-economically disadvantaged students tend to have less adequate resources than schools with more advantaged students.

It may come as a surprise, but according to PISA data, the United States is the second least-equitable OECD country, after Mexico, in the allocation of educational resources. One in four disadvantaged students in the United States attends a school whose principal reported that a shortage or inadequacy of science laboratory equipment hindered, to some extent or a lot, the school's capacity to provide instruction. Meanwhile, only around one in seven advantaged students in the United States attends such a school. The differences between advantaged and disadvantaged schools are even starker among Latin American countries, including the OECD countries Chile and Mexico. For example, fewer than one in two disadvantaged students, but more than three in four advantaged students, in Mexico attend schools that have adequate instructional materials.

Apart from making a huge difference to individual students, inequity in resource allocation has an impact on a country's overall performance in PISA. After taking into account countries' relative wealth, 19% of the variation in mathematics performance across all the countries and economies that participated in PISA 2012 can be explained by differences in principals' responses to questions about the adequacy of science laboratory equipment, instructional materials, computers for instruction, Internet connectivity, computer software for instruction, and library materials. At least 30% of the variation in mathematics performance across OECD countries can be explained by how equitably resources are allocated across all schools.

PISA has consistently found that, when it comes to education, money isn't everything, and that beyond a certain minimum level of expenditure per student, how the money is spent is more important than how much money is spent. When money is translated into such tangibles as up-to-date textbooks, reliable Internet access, and a school library full of books, spreading the wealth evenly across all schools, regardless of their socio-economic profile, gives all students, not just those in the wealthiest schools, the nourishment they need to succeed.

PISA 2012 Findings
PISA in Focus No. 44: How is equity in resource allocation related to student performance?
PISA in Focus No. 44 (French version)
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9276434183120728, "language": "en", "url": "https://www.economicshelp.org/europe/disadvantages-cap/", "token_count": 655, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.373046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:dadeaa3f-6ec7-4eb5-9feb-bb18d74d3bad>" }
The Common Agricultural Policy (CAP) is a European policy which involved:

- Setting minimum prices for many agricultural products
- Setting import tariffs to protect from cheap imports
- EU purchases of surplus food to maintain minimum prices

Since 2005, farmers have been subsidised through Single Farm Payments (SFP) and rural development funds.

The impact of minimum prices in agriculture was to encourage significant over-supply.

The main problems of the CAP are:

1. Cost

Higher prices encouraged extra supply, which resulted in a surplus of food. The EU had to buy this surplus. This is very inefficient and expensive. Although minimum prices have been mostly removed, the EU still gives subsidies to farmers.

- In 1970, the CAP accounted for 87% of the EU budget.
- In 1995, agriculture cost 40 billion euros, or 58% of the budget.
- In 2013, the budget for direct farm payments (subsidies) and rural development – the twin "pillars" of the CAP – is 57.5bn euros (£49bn), out of a total EU budget of 132.8bn euros (that is, 43% of the total). (BBC)
- The CAP budget for the period 2014-2020 will be €278bn (£200.2bn), with the UK receiving €27.7bn (£20bn) over the course of the seven-year period.

2. High Prices

To increase the incomes of farmers, consumers have to pay higher prices for food. This is allocatively inefficient and also increases inequality, because low-income groups pay a higher percentage of their income on food.

3. Farmers in other countries face lower incomes

- Firstly, the excess food supplies were dumped onto world markets. This caused prices to fall and lowered revenues. Farmers in developing economies cannot compete with the subsidised European farmer.
- Secondly, the EU bought fewer imports because of the variable import levies. Therefore demand from Europe fell.

The combined effect was to reduce farmers' welfare in both the US and the developing world.

4. Trade Negotiations

The CAP has been a major stumbling block during trade negotiations between the EU and the rest of the world. The US has retaliated against EU exports in response to the high degree of protection given to agriculture.

5. Environmental Problems

The incentives of the CAP encouraged farmers to increase output with the use of artificial fertilizers and pesticides, causing problems for the environment. To some extent, reforms of the CAP are trying to deal with this, giving subsidies for greener use of farmland. But big agri-business still gets large lump sums.

Minimum prices encourage extra supply

The guaranteed minimum prices change incentives in the long term. With a guaranteed high price, farmers are encouraged to expand production; this leads to bigger gluts of supply than originally intended. Subsidizing farmers through higher product prices is an inefficient method because it penalises the consumer with higher prices. Also, it means large farmers will benefit the most: they have received more than they need, while small farmers are still struggling, e.g. hill farmers with a low number of sheep. Minimum prices remove the disciplines of the market and encourage inefficiency.

Despite these problems, it has proved difficult to reform the CAP because of political pressures.
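To illustrate the over-supply mechanism described above, here is a small Python sketch (not from the original article; the linear demand and supply curves and all numbers are invented purely for illustration) showing how a minimum price set above equilibrium creates a surplus that the intervention agency must buy:

```python
def quantity_demanded(price):
    return 100 - 2 * price   # hypothetical linear demand curve

def quantity_supplied(price):
    return 10 + 4 * price    # hypothetical linear supply curve

# Market equilibrium: 100 - 2p = 10 + 4p  ->  p = 15, q = 70.
minimum_price = 20           # price floor set above the equilibrium price

surplus = quantity_supplied(minimum_price) - quantity_demanded(minimum_price)
cost_of_buying_surplus = surplus * minimum_price

print(surplus)                 # 30 units of excess supply
print(cost_of_buying_surplus)  # 600 spent buying (and then storing) the surplus
```

In this toy market, the floor raises output from 70 to 90 units while demand falls to 60, so taxpayers fund the purchase of the 30-unit glut, which is exactly the dynamic the CAP's minimum prices created at scale.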
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9217601418495178, "language": "en", "url": "https://www.imovo.com.mt/the-future-of-data-warehousing/", "token_count": 623, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.019287109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5677d772-6fcf-4ff4-a4ff-5d1f2c9ed470>" }
What is data warehousing?

Data warehousing is the collection of business data that organizations use to make decisions. Usually, the data collected within a data warehouse comes from various source departments such as marketing, sales, finance, customer care and others. Data is usually collected by the warehouse itself at predetermined intervals. Once pulled, the data is formatted to match the data already in place in the warehouse, after which the processed data is made available to decision-makers. The intervals for data pulling can be adjusted according to the needs of the organization.

Advantages of using warehouses include more reliable data, since data sources are constantly added and updated, and a faster decision-making process. The expedited process is made possible through consistency in data formats: the consistent format of the different data streams makes it possible for the data to be analyzed in bulk. This provides decision-makers with a more complete dataset to base decisions on.

Data warehouses often get compared to databases and data lakes. However, there are differences between the three media. Unlike data warehouses, data lakes store raw and unstructured data in its original format. The table below highlights the differences between databases and data warehouses:

|               | Database                                                | Data Warehouse                                                     |
|---------------|---------------------------------------------------------|--------------------------------------------------------------------|
| What is it    | Used for transactional purposes with read/write access  | Combined transactional data, formatted and stored for analytics    |
| Why it's used | Allows the quick recording and retrieval of data        | Stores data from many databases for easier analytics               |
| Types         | .csv, html, excel spreadsheets                          | Analytical database that layers on top of transactional databases  |

The Future of Warehousing – Snowflake & Moving to the Cloud

An increasing number of businesses are moving their operations to the cloud, and this includes their databases too. Some of the advantages that cloud computing offers include flexibility, collaboration, accessibility and real-time data. Snowflake is just one of many options that offer cloud warehousing. By using Snowflake, you will be eliminating the risks of using on-premises warehouse storage. This also eliminates the need to assign funds to acquire hardware and software to manage locally stored data: no maintenance is needed, and upgrades are done automatically by the cloud software itself.

The Snowflake software combines three factors into one cost-effective model: data warehousing, big data platforms and cloud elasticity. Using a pay-as-you-go model, Snowflake gives you access to millions of gigabytes of data simultaneously, which you can access 200 times faster than with local solutions, all for 10 times less than the cost of cloudless solutions. All upgrades, management and tuning are handled by the software itself, and there is no need for any type of hardware to be installed. Lastly, since Snowflake operates on SQL, if your company already makes use of that platform, your team would not need to be re-skilled to accommodate the new system.
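As a concrete illustration of the "pull, format, load" cycle described above, here is a minimal Python sketch (added for illustration; the source systems, field names and records are all invented) that normalizes records from two differently formatted sources into one consistent warehouse schema so they can be analyzed together:

```python
from datetime import datetime

# Hypothetical extracts from two source systems with different layouts.
sales_rows = [{"sold_on": "2021-03-01", "amount_eur": "120.50"}]
finance_rows = [{"date": "01/03/2021", "total": 99.0}]

def normalize_sales(row):
    # Sales uses ISO dates and string amounts.
    return {"date": datetime.strptime(row["sold_on"], "%Y-%m-%d").date(),
            "amount": float(row["amount_eur"]),
            "source": "sales"}

def normalize_finance(row):
    # Finance uses day/month/year dates and numeric totals.
    return {"date": datetime.strptime(row["date"], "%d/%m/%Y").date(),
            "amount": float(row["total"]),
            "source": "finance"}

# "Load": every record now shares one schema, so it can be analyzed in bulk.
warehouse = [normalize_sales(r) for r in sales_rows] + \
            [normalize_finance(r) for r in finance_rows]
print(warehouse)
```

A real warehouse (Snowflake or otherwise) performs this same normalization at scale on a schedule, which is what makes the bulk analysis described above possible.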
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9406832456588745, "language": "en", "url": "https://www.simplydigital.in/blogs/digital-marketing/digitization-breakthrough-moves-of-banking-finance-industry/", "token_count": 1206, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:1a06625f-d613-46bd-b041-0818ee0d2088>" }
What is Digitization and why is it so hyped up?

As we know, the term digitization has become a regular word in every business. Why is that so, and what is it? It is basically the automation of manual or paper-based processes into digital information. In other words, it is the transfer of an analogue process to a more efficient digital process. It doesn't mean discarding the original documents or images. In this fast-growing industry, everyone has understood the benefit of going paperless. Nowadays, the fundamental need of any digital improvement is to deliver data in an easy-to-use and engaging way, such that customers become accustomed to it. This helps every business improve in terms of time, efficiency, security and money.

History of digitization in the Banking and Finance Industry

Well, the story starts in India in 1988, when the Reserve Bank of India set up a committee under the supervision of Dr C. Rangarajan for computerization in banks. Banks first used individual PCs to enter data, which soon gave way to Local Area Network (LAN) connectivity. Because of this, banks took a step further towards core banking solutions, which allow customers to access their accounts from any part of the world. The major push came from the private banks and the finance industry. Adding more fuel to the fire, demonetization happened in 2016, which pushed India towards a cashless society, and banks were compelled to converge on electronic transactions. Hence, several banks and finance companies have joined hands in the race for digital services to stay competitive in the market.

Relief on the Pocket due to Digitization

As most banks and finance companies are burdened with legacy systems and a few undesirable processes, India took a step further by adopting newer technologies. These provide convenience to customers; alongside this, banks need to make every single transaction cost-friendly. According to the figures available, each branch banking transaction costs about Rs 70, which is reduced to Rs 16 at an ATM, further reduced to Rs 2 on internet banking and Rs 1 via mobile banking. Apart from this, digitization brings the majority of the population from every class on board, as it is easily accessible, and it reduces human error. Hence, it will reshape the whole banking system in the days to come.

Technologies adopted by Banks

Online banking has changed the whole face of the Indian banking system. The introduction of Automated Teller Machines (ATMs), National Electronic Fund Transfer (NEFT), Real Time Gross Settlement (RTGS), Immediate Payment Service (IMPS), Electronic Clearing Service (ECS), Prepaid Payment Instruments (PPIs), online wallets, debit cards, credit cards, mobile banking, net banking and a lot more are all significant milestones in the journey of the digital revolution.

The increased market of Prepaid Payment Instruments

PPIs are instruments that facilitate the purchase of goods and services and fund transfers against the value stored on them. The figures for PPI cards (gift cards, foreign travel cards, and corporate cards) and mobile wallets have risen drastically in just 3 to 4 years, from Rs 82 billion to Rs 532 billion between 2014 and 2017. To enhance the benefits for PPI users, the RBI released operational guidelines for interoperability in October 2018.

Interoperability is the technical compatibility that allows the bank payment process to work together with other PPIs. It allows PPI users and system providers to undertake and clear payment transactions without participating in multiple separate systems. Once interoperability is implemented, customers will be able to transfer funds from their mobile wallets to their bank accounts.

The collaboration of Fintech startups and Banks

India has just seen the entrance of digital banks in the form of 811 (from Kotak Mahindra) and Digibank (from DBS), which together hold the market extremely well. The era of digital banks predominating over conventional consumer banks is not far off. So the central issue that emerges in this dynamic scenario is: can banks and fintech startups collaborate instead of competing? One can't deny that banks have the data and the experience; nobody can question that. On the contrary, fintech startups have newer technologies and fresh brains. So it is good to leverage each other's strengths and collaborate in this revolution. Notably, over the last year, banks have rapidly promoted the unified payment interface alongside mobile wallets.

Challenges faced by Digitization

Security Risks: Banks and finance companies are constantly exposed to external threats such as hacking and sniffing, as well as to internal threats.

Financial Literacy: Due to the high illiteracy rate and low awareness in India, it is difficult to teach people about e-banking facilities.

Finding the Experts: It requires a tremendous effort to find the blend of talent and technology needed to lead digitization initiatives.

Limited Budget: Budget is always the constraint that clips the wings of any developing technology or business, so one needs to be mindful of it and plan for it.

Future of Digitization

Nowadays, Artificial Intelligence (AI) has arrived with a roar in technology: it is the simulation of human intelligence processes by machines. Several banks are already experimenting with the idea. For instance, at ICICI Bank, software robots have been deployed in over 200 business processes, which reduces customer response time by 60 per cent.

Banking on the cloud: Progressive banks are already on the way to adopting cloud systems, which help organizations achieve adaptability, flexibility and productivity. Significant facets of business, for instance big data, blockchain and AI, make use of cloud computing.

This article is written by Rachna Suneja, CEO of Afinoz Digitalizing Finance.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9470704793930054, "language": "en", "url": "http://insightwtv.com/are-bonds-more-secure-then-stocks/", "token_count": 430, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.03662109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:bea3fd54-67cd-45e3-b2df-a3e3a0ba3d88>" }
#financialeducation #moneymatters #finance #financialfreedom #stocks

How do bonds work? Are bonds better than stocks? Are you investing money? If so, do you know what bonds are? Learn about bonds in today's video.

In finance, a bond is an instrument of indebtedness of the bond issuer to the holders. The most common types of bonds include municipal bonds and corporate bonds. Bonds can be held in mutual funds or bought through private investing, where a person gives a loan to a company or the government. The bond is a debt security, under which the issuer owes the holders a debt and (depending on the terms of the bond) is obliged to pay them interest (the coupon) or to repay the principal at a later date, termed the maturity date. Interest is usually payable at fixed intervals (semiannual, annual, and sometimes monthly). Very often the bond is negotiable, that is, the ownership of the instrument can be transferred in the secondary market. This means that once the transfer agents at the bank medallion-stamp the bond, it is highly liquid on the secondary market.

Bonds and stocks are both securities, but the major difference between the two is that (capital) stockholders have an equity stake in a company (that is, they are owners), whereas bondholders have a creditor stake in the company (that is, they are lenders). Being creditors, bondholders have priority over stockholders. This means they will be repaid in advance of stockholders, but will rank behind secured creditors, in the event of bankruptcy. Another difference is that bonds usually have a defined term, or maturity, after which the bond is redeemed, whereas stocks typically remain outstanding indefinitely. An exception is an irredeemable bond, such as a consol, which is a perpetuity, that is, a bond with no maturity.

Learn about investing so that you make fewer mistakes and earn the money you deserve for a better life. Another great story from InSight WTV, a leader in online information, news, stories and articles for you to learn more online.
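To make the coupon-and-maturity mechanics concrete, here is a small Python sketch (an added illustration; the face value, coupon rate and term are made-up numbers, not from the video) listing the cash flows a bondholder would receive from a plain fixed-coupon bond:

```python
def bond_cash_flows(face_value, annual_coupon_rate, years, payments_per_year=2):
    """Cash flows of a plain fixed-coupon bond, from the holder's side."""
    coupon = face_value * annual_coupon_rate / payments_per_year
    flows = []
    for period in range(1, years * payments_per_year + 1):
        amount = coupon
        if period == years * payments_per_year:
            amount += face_value        # principal repaid at maturity
        flows.append((period, amount))
    return flows

# A 1,000 face-value bond paying a 5% annual coupon semiannually for 3 years:
for period, amount in bond_cash_flows(1_000, 0.05, 3):
    print(period, amount)   # 25.0 each half-year, 1025.0 in the final period
```

This also shows the contrast with stocks described above: the bondholder's payments and end date are fixed by contract, while a stockholder's returns are open-ended.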
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9336557388305664, "language": "en", "url": "http://www.sustainablesids.org/knowledgebase/undp-financing-the-sdgs-in-the-pacific-islands-opportunities-challenges-and-ways-forward-2017", "token_count": 144, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0263671875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:410e2f96-fae9-4642-bd9d-6ad8960b7504>" }
Mobilizing adequate financing for sustainable development will be a challenge for all countries, but will be particularly difficult for Pacific Small Island Developing States where financing needs for sustainable, climate-sensitive development are estimated to be among the highest in the world when measured as a proportion of national output. They are also set to rise with the predicted impacts of climate change. UNDP’s report explores what financing for development currently looks like in the Pacific and analyzes the steps countries have already taken to mobilize different sources of development finance and to strengthen the effectiveness of public expenditures. It asks whether there are opportunities to leverage innovative finance. Are there lessons learned from other countries, in particular other Small Island Developing States (SIDS)?
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9710817337036133, "language": "en", "url": "http://x.fybw.org/2020/12/20/what-does-executed-agreement-mean/", "token_count": 657, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:56bfb387-36dc-4ac5-80c0-e7e8b4b47144>" }
Understanding the contractual terms implies understanding the difference between the date of execution of the contract and the actual date of entry into force, if any, in order to avoid confusion in the future. Changes to a contract must be made in writing and signed by all parties prior to the amendment. Since an executed contract is a legal document, each party should keep a copy and, if necessary, refer to it in order to fully discharge its obligations. If one party has not fulfilled its obligations, the other party may eventually bring a civil action. For example, if John does not make the agreed lease payments on his car, the lessor could not only take the car back but could also sue John in civil court for the remaining amount owed under the lease. There are two forms of written agreement under English law: simple contracts (written "under hand") and deeds. In short, the safest way for simple contracts and deeds is for the parties to exchange by email PDF copies of the executed signature pages together with, in the same email, a Word or PDF version of the entire agreement that was executed. The origin of the term "executed agreement" dates back to late Middle English, in the period 1300-1400. There are different types of documents that can be executed to be effective. The most common documents include contracts between two or more parties, including leases, service agreements and sales agreements. Executed contracts are easy to identify in real life. A person who agrees to pay for or participate in a particular service, whether by signing a physical contract or an online contract, is in a situation in which an executed contract is established. By approving the terms of the document, whether implicitly or expressly agreed upon, the contract is executed accordingly. The term also applies to a contract that has been fully performed and concluded. A real estate sale agreement identifies the contracting parties and what each must do to conclude the sale by the date specified in the contract. Among the most important conditions are those indicating that the seller must provide clear title, with the type of deed specified in the contract, in return for the purchase price indicated. The contract must also contain a legal description of the property. Information on the type and amount of financing required by the buyer is included, as well as the time frames for inspection, repairs, mortgage commitment and presentation of any special documents for which the contract is used. Many types of documents and legal forms can be executed to ensure their effectiveness and bindingness. The most common documents to be executed include contracts between two or more parties, such as leases, service contracts and sales contracts. These documents require the parties to meet the terms of the agreement. The execution date is the date on which the contract was signed by all parties involved. This may be the effective date of the contract, which may be indicated in the contract itself. For example, Susan signs a lease on April 4, with a move-in date of May 1. The execution date is April 4 and the effective date is May 1. Executed: done; something that has been done or completed. The word is often used in conjunction with others to denote a quality of those other words, as in an executed contract, an executed estate or an executed trust, etc. It is opposed to executory.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9355000853538513, "language": "en", "url": "https://2016.ar-ebrd.com/2017/05/05/understanding-barriers-to-climate-investments/", "token_count": 279, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.08447265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2237c93f-1857-4a90-98f4-dba0a207978f>" }
Understanding barriers to climate investments Under the Paris Agreement on combating climate change, each signatory country is committed to developing and submitting a comprehensive national action plan – known as a Nationally Determined Contribution (NDC) – to support the fight against global warming. A key step in the successful implementation of these action plans is the incorporation of NDC commitments and measures into domestic legal and governance systems. In 2016 the EBRD launched a pilot assessment of the national policy, legal, financial and institutional barriers to the achievement of NDC objectives in Jordan, Morocco and Tunisia. The study found that a lack of regulatory transparency and deficient monitoring and enforcement mechanisms are holding back the attainment of mitigation objectives (such as the development of renewable energy or energy efficiency projects) and adaptation objectives. The report also highlights good practices that can be exported to neighbouring countries. The study made recommendations for each country on the legal and institutional steps that would support their NDC implementation. As well as guiding national authorities on the reforms they may wish to undertake, these recommendations help to inform investor decisions. The EBRD is considering replicating the pilot assessment in other countries of operations. This would pave the way for much higher levels of climate investment by the Bank and its finance partners. To learn more about legal reform in EBRD countries of operations, see the Bank’s publication Law in Transition.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9610064029693604, "language": "en", "url": "https://echeck.org/the-brief-history-of-checking-using-checks-as-for-money/", "token_count": 1314, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.08984375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:1d53ab8c-fc4c-4844-a6f0-f47bc13a0b9a>" }
The Brief History of Checking (Using Checks as For Money) Note, this was written back in the 90’s. It relates to the early history of the eCheck movement. Early Medium No one is sure when the first check was written or chiseled into a piece of wood or stone. Some experts think the Romans may have invented the check about 352 BC. But even if that were true, the idea apparently didn’t catch on. Banks or bank-like institutions existed in ancient Mesopotamia, Greece, and Rome, and probably transferred deposits from one account to another, but no documentary evidence of such transfers has survived. Arrival of Cheques The earliest evidence of deposits subject to “cheque” pertains to medieval Italy and Catalonia. In the primitive banks of deposits in those areas it was necessary for the depositor to appear in person before a banker either to withdraw funds or to transfer them to an account of another customer. The practice of using written instruments for those purposes gradually evolved. Role of Cashiers According to most history texts, it probably wasn’t until the early 1500s, in Holland, that the first check got widespread usage. Amsterdam in the sixteenth century was a major international shipping and trading center. People who had accumulated cash began depositing it with Dutch “cashiers”, for a fee, as a safer alternative to keeping the money at home. Eventually the cashiers agreed to pay their depositors’ debts out of the money in each account, based on the depositor’s written order or “note” to do so (the beginning of account-based bill payment). The concept of writing and depositing checks as a method of arranging payments soon spread to England around 1780 and elsewhere, but not without resistance. Many people in the sixteenth and seventeenth centuries still had doubts about trusting their hard-earned money to strangers and little pieces of paper that were easily forged or replicated. In the United States, checks are said to have first been used in 1681 when cash-strapped businessmen in Boston mortgaged their land to a “fund”, against which they could write checks. The first printed checks are traced to 1762 and British banker Lawrence Childs. The word “check” also may have originated in England in the 1700s when serial numbers were placed on these pieces of paper as a way to keep trace of, or “check” on, them. Problems Then and Today As checks became more widely accepted, bankers discovered they had a big problem, which still exists in today’s society: how to move these pieces of paper to collect the money due from so many other banks. At first, each bank sent messengers to the other banks to present checks for collection, but that meant a lot of travelling and a lot of cash being hauled around in less than secure conditions. The solution to this problem was found in the 1700s, according to banking lore, at a British pub. The story goes that a London bank messenger stopped for a pint (or two) and noticed another bank messenger. They got to talking, realized that they each had checks drawn on the other’s bank, and decided to exchange them and save each other the extra trip. The practice evolved into a system of check “clearinghouses” – paper networks of banks that exchange checks with each other – that still is in use. In addition to being able to exchange checks directly, today banks in the U.S. can present checks to the Federal Reserve System or private clearinghouses for regional and national check collection. 
During the check clearing process, checks pass through large sorting equipment that reads the magnetic ink characters (MICR) at the bottom of the check and places the check in sorting "pockets". The MICR standard, developed in the US by a consensus group of banks and technology in the 1950s, provided tremendous improvements to the check payment process by enabling the automation of many check handling procedures. The MICR contains information such as the routing number identifying the drawee bank, the payment amount, and the customer account number of the payor. The payee's bank is then credited for the payment amount, and it transfers these funds to the payee's account. The check is then physically transported to the drawee's bank by car, truck, or airplane, and presented to the drawee's bank by the clearing institution, where the payment amount is debited from the payor's account associated with the customer account number. The payor then receives the canceled physical check from the bank in the next statement. Costs of Continued Use There are approximately 70 billion checks written by consumers, businesses, and government entities today, at a cost of about 1% of the US Gross Domestic Product. Check fraud losses are estimated (see Footnote 1) to be over $53 billion annually, with banks writing off $1.34 billion and retailers and other payees absorbing $52 billion. Furthermore, it is predicted that check fraud will grow over the next 12 years by 12 to 15 percent annually. Check truncation is seen by many industry specialists as a way to cope with the increasing volume of checks and the rising cost of check collection. Most check truncation schemes alter the normal flow of the check payment process in that check writers do not receive back their cancelled checks. Truncation requires the bank to convert the check data to electronic form, safekeep the checks, return checks at the request of the payor bank, and provide information on checks when requested. Although this process has had only marginal success, early exchange of electronic data can reduce risk to participants of this service by permitting them to identify checks that cannot be paid and must be returned earlier than would otherwise be possible. Imaging systems also promise to improve the check collection process. With image processing, checks are optically scanned to produce digital images. These images are then processed electronically and stored for later use. By providing images of checks to payor institutions, banks and other financial institutions are able to reduce their risk of paying forged or altered checks and may provide statements to customers containing the images of the checks that have been paid. The cost of image processing technology has limited its acceptance among most institutions. Footnote 1: The Nilson Report, Number 600, July 1995
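As a toy illustration of the MICR data described above, the Python sketch below parses a simplified MICR-style string and validates the standard ABA routing-number checksum (digit weights 3-7-1 repeating, weighted sum divisible by 10). The colon-delimited format is an assumption made for readability; real MICR lines use special E-13B symbols rather than plain delimiters.

```python
def parse_micr(line):
    """Split a simplified MICR-style line of the form 'routing:account:check'."""
    routing, account, check_no = line.split(":")
    if len(routing) != 9 or not routing.isdigit():
        raise ValueError("routing numbers are nine digits")
    # ABA routing checksum: weights 3, 7, 1 repeating across the nine digits,
    # and the weighted sum must be divisible by 10.
    weights = [3, 7, 1, 3, 7, 1, 3, 7, 1]
    total = sum(int(d) * w for d, w in zip(routing, weights))
    if total % 10 != 0:
        raise ValueError("routing number fails checksum")
    return {"routing": routing, "account": account, "check": check_no}

# 011000015 passes the checksum; the account and check numbers are made up.
print(parse_micr("011000015:123456789:1001"))
```

The checksum is why a single mistyped digit in a routing number is almost always caught before a payment is routed to the wrong bank.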
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9465213418006897, "language": "en", "url": "https://www.infectioncontroltoday.com/view/economic-benefits-global-polio-eradication-estimated-40-50-billion", "token_count": 919, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2373046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:75b10407-7822-4fd9-8946-dd8c217e858f>" }
A new study released today estimates that the global initiative to eradicate polio could provide net benefits of at least US$40-50 billion if transmission of wild polioviruses is interrupted within the next five years. The study provides the first rigorous evaluation of the benefits and costs of the Global Polio Eradication Initiative (GPEI), the single largest project ever undertaken by the global health community. The study comes at a crucial time, following an outbreak in the Republic of the Congo and one in Tajikistan earlier this year, which highlight the risk of delays in finishing the job on polio. Published in the journal Vaccine, the study, "Economic Analysis of the Global Polio Eradication Initiative," considers investments made since the GPEI was formed in 1988 and those anticipated through 2035. Over this time period, the GPEI's efforts will prevent more than 8 million cases of paralytic polio in children. This translates into billions of dollars saved from reduced treatment costs and gains in productivity. The study also reported that "add-on" GPEI efforts improve health benefits and lead to even greater economic gains during the same time period. Notably, it estimates an additional $17 billion to $90 billion in benefits from the life-saving effects of delivering vitamin A supplements, which the GPEI has supplied alongside polio vaccines. "Polio eradication is a good deal, from both a humanitarian and an economic perspective," said Dr. Radboud Duintjer Tebbens of Kid Risk, Inc., the lead author of the study. "The GPEI prevents devastating paralysis and death in children and also allows developing countries and the world to realize meaningful financial benefits." According to the study, although delays in achieving eradication are costly, even with delays the GPEI still generates positive net economic benefit estimates. "Investing now to eradicate polio is an economic imperative, as well as a moral one," said Dr. Tachi Yamada, president of the Bill & Melinda Gates Foundation's Global Health Program. "This study presents a clear case for fully and immediately funding global polio eradication, and ensuring that children everywhere, rich and poor, are protected from this devastating disease." The GPEI has successfully reduced the global incidence of polio by 99 percent since 1988 and eradicated type 2 wild polioviruses in 1999. Intense efforts are underway to stop transmission of types 1 and 3 completely within the next several years, with indigenous transmission remaining only in relatively small areas in Afghanistan, India, Nigeria, and Pakistan and re-established transmission in a few countries, including Angola and the DRC. Until eradication occurs, all countries remain at risk of importation of the virus, as demonstrated by the 2010 polio outbreaks in Tajikistan and the Republic of the Congo. Congo's recent outbreak has resulted in more than 200 cases of acute flaccid paralysis (AFP) since October, mostly affecting people older than 15. "Studies like this help people put numbers on the value of prevention," said Dr. Kimberly Thompson of Kid Risk, senior author of the study. Nobody questions the value of eradication in developed countries where polio is fortunately just a fading memory, but according to Thompson, "prevention activities like vaccination often go unappreciated, because it is difficult to count cases of a disease that do not occur."
The study provides an example of the real value that comes from international cooperation and investment in the health and development of children. The study examined the 104 countries that directly benefit from the GPEI, which include predominantly lower-income countries. Many higher-income countries eliminated wild polioviruses before the GPEI began. Thus, the estimated net benefits in the study do not include the substantial benefits already accruing in developed countries. The study was led by Kid Risk, Inc., an independent non-profit organization started in 2009 as the successor to the Kids Risk Project at the Harvard School of Public Health. Other research partners included the U.S. Centers for Disease Control and Prevention (CDC), Delft University of Technology, and the Global Polio Eradication Initiative. The CDC provided support for the study under a contract to the Harvard School of Public Health. GPEI is a public-private partnership led by national governments, spearheaded by the World Health Organization, Rotary International, the CDC, and the United Nations Children's Fund (UNICEF) and supported by organizations including the Bill & Melinda Gates Foundation.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9665014743804932, "language": "en", "url": "https://www.newsbtc.com/news/bitcoin/citi-report-blockchain-bigger-bitcoin/", "token_count": 697, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.07958984375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b3d3ff0f-76a9-4593-a580-22a76fa7196c>" }
A lot of people have been saying how the blockchain is much bigger than Bitcoin itself, and they are right for the most part. Even though the Bitcoin protocol is powered by blockchain technology, the capabilities of distributed ledgers are not just linked to the digital currency ecosystem. A recent report by Citi seems to be thinking along those same lines, although most of their use cases are still focused on the financial aspect. This disruptive technology can change a lot of things as we know them today, that much is certain. Some Institutions Gain from Blockchain Technology Whenever a disruptive concept comes along, there are winners and losers. But things aren’t so black-and-white with blockchain technology in the picture, although there will be some fundamental changes along the way. Distributed ledgers offer far more advantages than downsides, but that does not mean there will be no casualties along the way. Assuming this technology would be fully embraced by banks and financial institutions – through consortiums such as R3 CEV, most likely – the infrastructure being used today will undergo some necessary changes. With fewer – or no – intermediaries required, and real-time transaction processing capabilities, the entire infrastructure cost will be reduced, as there is far less overhead. The Citi report mentions: “Blockchain technology could be applied more broadly than crypto-currencies. In the currency space, the Bitcoin rail could be used to facilitate cross-border payments or supply chain and trade finance. Because virtually any type of information can be digitized and placed onto Blockchain, theoretically any information of value could be transferred in the Blockchain world. The programmability of Blockchain makes it suitable for smart contracts: a contract that executes once pre-agreed conditions are met.” But at the same time, a fair few people in the financial industry will be out of a job. Since the middlemen are no longer needed, institutions such as clearing houses will become less popular, and eventually obsolete. That is unless they can diversify their business model, and become an oracle for blockchain transactions, of sorts. Moreover, a blockchain in the banking sector will still require counterparties at the beginning, which is a role suited for clearing houses. The same cannot be said for custodian banks, however, as their primary role is to handle receipt and delivery of cash and securities under the current infrastructure. Once transactions are settled in real-time, however, there is very little point in paying a fee for a service that is not a necessity anymore. But there is an opportunity for custodian banks too, as they are the ones driving blockchain adoption in the traditional financial industry. Perhaps the biggest “winners” in the world of blockchain technology are investment banks. Not only can they massively reduce operational costs, but they would also be able to free up capital as the balance sheets are reduced in size. Neither of these factors is an immediate game-changer, though, but they are two prospects to take into account. It has to be said, however, that none of the established financial players are talking about Bitcoin itself. While it is certainly true blockchain technology extends far beyond the Bitcoin ecosystem, the main reason this concept has become so appealing to banks is due to Bitcoin itself. Consumers have a growing demand for new financial solutions, and banks are scrambling to cater to their needs. 
But they will never be able to recreate the total financial freedom Bitcoin is offering.
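For readers who want to see the core mechanism the report alludes to, here is a toy Python sketch of the hash-chaining that makes a shared ledger tamper-evident. It illustrates only the linking idea; it is not how Bitcoin or any bank consortium's ledger is actually implemented.

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Create a block that commits to its transactions and its predecessor."""
    body = {"transactions": transactions, "prev_hash": prev_hash}
    # Hash a canonical serialization so identical content always hashes equally.
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

genesis = make_block(["alice pays bob 10"], prev_hash="0" * 64)
block2 = make_block(["bob pays carol 4"], prev_hash=genesis["hash"])

# Tampering with the first block changes its hash, so block2's stored
# prev_hash no longer matches and the chain is visibly broken.
genesis["transactions"][0] = "alice pays bob 1000"
recomputed = hashlib.sha256(json.dumps(
    {"transactions": genesis["transactions"], "prev_hash": genesis["prev_hash"]},
    sort_keys=True).encode()).hexdigest()
print(recomputed == block2["prev_hash"])  # False: the tampering is detected
```

Each block committing to the hash of its predecessor is what lets participants verify a shared history without trusting an intermediary, which is the property the banks quoted above are interested in.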
{ "dump": "CC-MAIN-2021-17", "language_score": 0.96975177526474, "language": "en", "url": "https://www.teachers.ab.ca/News%20Room/ata%20news/Vol53/Number-7/Pages/Viewpoints.aspx", "token_count": 1053, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1591796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7e41e481-362c-46b5-8954-d46b3cea4433>" }
Associate Co-ordinator — Communication, Alberta Teachers’ Association Taxes are one helpful way to pay for education We want to tell you a story about a faraway land, a land where the sun almost always shines and the people are prosperous. It’s a land where the only downside of having fruit trees in your backyard is that they block the view of the mountains and the oceans. This is not a mythical land. It’s California, the world’s fifth-largest economy and bigger than all of the United Kingdom. California now has a $9-billion surplus in its 2018-19 state budget after years of running massive deficits. And it’s a state where, a few years ago, they made a deliberate decision to raise taxes to improve public education. Former governor Jerry Brown, who retired this month, decided he was tired of hearing there was no money for education. In a speech on Nov. 6, 2012, Brown said that without more money for schools, the California dream was over. “This is about people choosing on or off ... money into our schools or money out of our schools. It’s really stark. The California dream is built on great public schools and colleges and universities.” The mechanism Brown endorsed to fix this problem was California Proposition 30. It called for personal income tax increases over seven years on people earning more than $250,000 per year. There was also a sales tax increase of 0.25 per cent. Proposition 30 passed in the fall of 2012 by more than 55 per cent. The sales tax increase was allowed to expire in 2016. The higher income tax portions of the plan were extended for another 12 years when another vote was held in 2016. Since January of 2013, Proposition 30 has generated more than $31 billion for California schools. The impact has been significant. According to the independent California Budget and Policy Center, per-student spending in K-12 classrooms has increased more than $1,300 from 2012-13 to 2016-17 (adjusted for inflation). The center also says the number of students per teacher has dropped since Proposition 30 was adopted. According to the California Federation of Teachers, the thousands of per-year layoff notices in education have slowed to a trickle. In the Los Angeles Unified School District, Proposition 30 provides 12 per cent of annual funding. A deliberate decision was made to restore funding to arts programs and programs to support the most needy students. Statewide, community colleges have been able to restore hundreds of class sections after the cuts were reversed. Here in Alberta, funding for education has not kept up with rising inflation and rapid student population growth. From 2009-10 to 2017-18, the student population increased by 19 per cent and costs rose by 15 per cent. Funding has not kept pace and, as a result, class sizes have risen and students are not getting the supports they need. In the last school year, 81 per cent of K-3 classes were larger than guidelines established in 2003 by Alberta’s Commission on Learning, and all but five school jurisdictions exceeded the targets. These averages also don’t fairly represent the considerable number of classes that are significantly larger than the average. Since 2002, the proportion of core classes with 40 or more students has grown by 600 per cent. The Alberta government has been running deficits for many years, and calls for more classroom support are too often met with a political response that there is no money. Well, there’s no money because of the deliberate tax decisions that have been made over the years. 
Alberta has the lowest taxation rates in the country, and we are the only province without a sales tax. If Alberta used the tax rates being used by the conservative government in Saskatchewan, we would raise $11.3 billion more. With these reasonable taxation rates, Alberta would still be tied with B.C. and Saskatchewan for the lowest taxes in Canada, but it would have a $2.5 billion surplus instead of a $9-billion deficit. No one wants an education system for their children that is starved for operational funds. No one wants education funding cuts that stretch out over several years, particularly when there are more students. So, if you meet a politician who says there’s no extra money for education, tell them to look in the mirror to find out why this has happened. Tell them about this place where the most well-off were asked to pay a bit more to make schools better for everyone. Tell them about a place where taxes were raised, the popularity of political leaders rose and the budget is now in surplus. Tell them about a place where class sizes are smaller and arts and music programs flourish. It’s not a myth. It’s California. We’ll get there too if we all can agree that Alberta’s dream is also built on great public education. ❚ This column was adapted from one that first appeared in the Dec. 12, 2018, issue of the Saskatchewan Bulletin, published by the Saskatchewan Teachers’ Federation.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9563863277435303, "language": "en", "url": "http://www.cumbrianenergyrevolution.org.uk/energy-efficiency/efficiency-should-be-infrastructure-priority-foe/", "token_count": 278, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0654296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e1b7d77b-10a0-42f2-9832-b41680e8b980>" }
Every year thousands of people die from living in cold homes. Millions more can’t afford to keep their homes warm, and suffer not only from the cold, but from the myriad physical and mental health problems that fuel poverty brings. There is a solution, and it isn’t rocket science. A major, publicly-funded energy efficiency programme to insulate every home in the country would save the average household £300 on their energy bill and bring millions out of fuel poverty. It would also significantly increase energy security, and help us to hit the carbon emissions reduction targets set in the Climate Change Act, Labour’s most important environmental achievement in its last term of office. So, why aren’t we getting on with it? The usual answer is “it’s expensive!”. On the one hand, that’s true – a really effective scheme would need a secure funding stream of about £4bn per year until 2025. But on the other hand, on closer examination, it isn’t. A long-term, large-scale insulation programme would bring major, long-term, financial benefits which would pay back the initial investment. It would do, in spades, everything Infrastructure UK says major investment is meant to, and which the country needs so badly: strengthen the economy, create jobs, and increase living standards.
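As a rough, back-of-the-envelope check on the payback claim, here is a small Python sketch using the article's own figures (£4bn per year of funding and £300 of annual savings per insulated household). The number of UK households (about 27 million) and the ten-year programme length are illustrative assumptions, not figures from the article.

```python
# Rough payback sketch for a national home-insulation programme.
annual_investment = 4e9        # £4bn per year (figure from the article)
programme_years = 10           # assumed programme length
households = 27e6              # assumed number of UK households
saving_per_household = 300     # £300 per household per year (from the article)

total_cost = annual_investment * programme_years
annual_savings_when_done = households * saving_per_household
payback_years = total_cost / annual_savings_when_done

print(f"total cost: £{total_cost / 1e9:.0f}bn")                      # £40bn
print(f"annual savings: £{annual_savings_when_done / 1e9:.1f}bn")    # £8.1bn
print(f"payback after completion: {payback_years:.1f} years")        # ~4.9
```

Under these assumptions the energy-bill savings alone repay the outlay within about five years of completion, before counting the health and employment benefits the article describes.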
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9804404377937317, "language": "en", "url": "https://linkxoxo.co/loan-amortization/", "token_count": 375, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.019775390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:64969719-1bc3-48ba-8a4c-34caf7cf2fc5>" }
First I'd like to talk about how loan amortization works. Everybody knows that when you borrow money, your loan balance (the principal, as it's called) is going to decline over time as you continue to make your payments. But how exactly does that work? Well, let's say that you borrow $10,000 for five years at a 5 percent interest rate; your monthly payments are going to calculate to $188.71. But does every penny of that $188.71 payment get applied against the $10,000 loan? No, because each payment has a different amount of interest and principal comprising it. For example, at the start of that loan, all $10,000 were outstanding and unpaid. Right? Well, since the loan costs 5 percent interest, which is an annual number, if $10,000 were to be outstanding for the entire year, you would owe $500 for the privilege of using that money for that time: $10,000 times 5 percent interest is $500. But you've only really had that $10,000 for a month, because your first payment is due at the end of it. Therefore you'd only owe one twelfth of that $500 of annual interest, or $41.67. So if your monthly loan payment is $188.71 and the interest for that month is $41.67, then $188.71 minus $41.67 equals $147.04 of principal that will actually get deducted from the $10,000 loan you took out. That's how loan amortization works: each month your balance is going to decline by the difference between the monthly payment you make and the amount of interest that you owe on the loan balance at that time. Consequently, over the course of time, more and more of your monthly payment will go towards principal paydown.
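Here is a minimal Python sketch of the schedule the transcript walks through; it reproduces the $188.71 payment and the $41.67/$147.04 interest/principal split for the first month using the standard annuity payment formula.

```python
def amortization_schedule(principal, annual_rate, years):
    """Monthly amortization schedule for a fixed-rate loan.

    Returns a list of (payment, interest, principal_paid, balance) tuples.
    """
    r = annual_rate / 12                           # monthly interest rate
    n = years * 12                                 # number of monthly payments
    payment = principal * r / (1 - (1 + r) ** -n)  # standard annuity formula
    balance = principal
    rows = []
    for _ in range(n):
        interest = balance * r                 # interest accrued this month
        principal_paid = payment - interest    # the remainder reduces balance
        balance -= principal_paid
        rows.append((payment, interest, principal_paid, balance))
    return rows

# The example from the transcript: $10,000 for five years at 5 percent.
payment, interest, principal_paid, balance = amortization_schedule(10_000, 0.05, 5)[0]
print(f"payment={payment:.2f} interest={interest:.2f} "
      f"principal={principal_paid:.2f} balance={balance:.2f}")
# Approximately: payment=188.71 interest=41.67 principal=147.04 balance=9852.96
```

Running the loop to the final month shows the split reversing over time: early payments are interest-heavy, while the last payments are almost entirely principal.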
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9451281428337097, "language": "en", "url": "https://lozewekivi.douglasishere.com/assess-the-use-of-accounting-information-32410de.html", "token_count": 1102, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.00457763671875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:878ec531-fc88-4f21-b668-629550d172de>" }
The focus of these statements has been sharpened, however, by requiring governments to report information about their most important, or "major," funds, including a government's general fund. QuickBooks is an accounting software program which is utilized in small and large scale companies. Governments will be required to continue to provide budgetary comparison information in their annual reports. The inability to provide outside lenders or investors with accounting information can severely limit financing opportunities for a small business. A Data is the output of an AIS. Electronic data interchange EDI 4. Managerial accounting does not follow national accounting standards and companies may develop their own methods for tracking financial information. Fund financial statements consist of a series of statements that focus on information about the government's major governmental and enterprise funds, including its blended component units. Analytical procedure could also use at the conclusion stag of an audit. Using of analytical procedures: Each statement should distinguish between the governmental and business-type activities of the primary government and between the total primary government and its discretely presented component units by reporting each in separate columns. They also use this information to assess future job prospects and bargain for higher wages and better benefits. Managerial cost allocation methods such as job costing, process costing, activity-based costing or other methods may be used to allocate business costs to produced goods. Accounting information usually provides business owners information about the cost of various resources or business operations. Hilton; Photo Credits. Program expenses should include all direct expenses. Required governmental fund statements are a balance sheet and a statement of revenues, expenditures, and changes in fund balances. Shareholders or Investors Shareholders and other investors are usually the first group of external users that comes to mind. General strategies range from profit maximization to forgoing a part of the profit in order to increase a market share. It offers many accounting packages, cloud primarily based variations, etc. For example, monthly trend analysis of revenue records for whole year compare to the monthly trend of visitor. In addition to information that you may find during your research, please use the following IBM article to complete the assignment: Financial managers also will be in a better position to provide this analysis because for the first time the annual report will also include new government-wide financial statements, prepared using accrual accounting for all of the government's activities. Fiduciary funds should be used to report assets that are held in a trustee or agency capacity for others and that cannot be used to support the government's own programs. Opportunities with low income potential and high costs are often rejected by business owners. Production costs usually include direct materials, direct labor and manufacturing overhead. B the benefits produced by possessing and using the information minus the cost of producing it. For this reason and others, this Statement requires governments to continue to present financial statements that provide information about funds. QuickBooks pre employment test is designed and developed by global subject matter experts SMEs to assess QuickBooks skills of the candidates as per industry standards. 
Fund statements also will continue to measure and report the "operating results" of many funds by measuring cash on hand and other assets that can easily be converted to cash. This procedure also use by auditor to gain the better understanding about client business and environment. Retroactive reporting of all major general governmental infrastructure assets is encouraged at that date. The actual sales will depend to a large degree on the dynamics of the environment. For example, if a government issues fifteen-year debt to build a school, it does not collect taxes in the first year sufficient to repay the entire debt; it levies and collects what is needed to make that year's required payments. That in addition accepts payroll functions, pay invoice functions and commercial enterprise payments. Effective Date and Transition The requirements of this Statement are effective in three phases based on a government's total annual revenues in the first fiscal year ending after June 15. ii The Benefits of Improved Environmental Accounting: An Economic Framework to Identify Priorities James Boyd Abstract Improv ed environmental accounting is. Computers have become the primary means used to process financial accounting information and have resulted in a situation in which auditors must be able to use and understand current information technology (IT) to audit a client’s financial statements. Financial Accounts: geared toward external users of accounting information Management Accounts: aimed more at internal users of accounting information Although there is a difference in the type of information presented in financial and management accounts, the underlying objective is the same - to satisfy the information needs of the user. A computerized accounting system is a delivery system of accounting information for purposes such as providing reliable accounting information to users, protecting the organization from possible risks arising as a result of abuse of accounting data and system among others. Accounting information is presented to internal users usually in the form of management accounts, budgets, forecasts and financial statements. External users and summarizing of transactions and events in a manner that helps its users to assess the financial performance and position of the entity. The process starts by first identifying. Today, if you do not have the employee already in the agency vendor, you can use the exception code 02 (TM screen in AFRS) and you do not have to set them up with a vendor number. In the future, every employee will have a number in the new employee file that can be used to pay the employee for lost warrants and underpayments.Assess the use of accounting information
{ "dump": "CC-MAIN-2021-17", "language_score": 0.8923391699790955, "language": "en", "url": "https://mat.gsia.cmu.edu/classes/QUANT/NOTES/chap4/node5.html", "token_count": 304, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0986328125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:bd897316-9152-4e38-aba2-bb2d0fd20926>" }
The values have an important economic interpretation: If the right hand side of Constraint i is increased by , then the optimum objective value increases by approximately . In particular, consider the problem where p(x) is a profit to maximize and b is a limited amount of resource. Then, the optimum Lagrange multiplier is the marginal value of the resource. Equivalently, if b were increased by , profit would increase by . This is an important result to remember. It will be used repeatedly in your Managerial Economics course. represents the minimum cost c(x) of meeting some demand b, the optimum Lagrange multiplier is the marginal cost of meeting the demand. In Example 1.1.2 if we change the right hand side from 1 to 1.05 (i.e. ), then the optimum objective function value goes from to roughly If instead the right hand side became 0.98, our estimate of the optimum objective function value would be The first two constraints give , which leads to and cost of . The Hessian matrix is positive definite since a;SPMgt;0 and b;SPMgt;0. So this solution minimizes cost, given a,b,Q. If Q increases by r%, then the RHS of the constraint increases by and the minimum cost increases by . That is, the minimum cost increases by 2r%. Since , the variance would increase by So the answer is 390+90=480.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9232167601585388, "language": "en", "url": "https://www.bot.or.th/English/AboutBOT/RolesAndHistory/Pages/ASEAN.aspx", "token_count": 888, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:fbb8d8c5-4136-430c-8e2b-946f3acdfc11>" }
Association of Southeast Asian Nations (ASEAN) The Association of Southeast Asian Nations or ASEAN was established on 8 August 1967 in Bangkok as a regional forum with an aim to promote political cooperation and stability, trade and economic expansion, as well as social development of member countries. At the outset, ASEAN comprised five founding member countries, namely Indonesia, Malaysia, the Philippines, Singapore and Thailand. Subsequently, after the end of the cold war, ASEAN started to enlarge its membership which stands at 10 member countries at present. This began with Brunei Darussalam joining in 1984, Vietnam in 1995, Lao and Myanmar in 1997, and Cambodia in 1999. The ASEAN region has a population of about 550 million, with total area of 4.5 million square kilometers. The ASEAN Secretariat is located in Jakarta, Indonesia. The 4th ASEAN Summit in 1992 marked the first major step of regional economic cooperation as member countries reached an agreement on establishing the ASEAN Free Trade Area (AFTA). This agreement aims to increase the competitiveness of ASEAN as an important production base for the global market through the opening of trade and reduction of both tariff and non-tariff barriers, as well as the adjustment of tariff structure to facilitate the free trade environment. In 1997, ASEAN further enhanced its economic and financial cooperation by announcing the "ASEAN Vision 2020" which targets to establish the ASEAN Economic Community (AEC) by 2020. In addition, the ASEAN Finance Ministers' Meeting (AFMM)1/ process and cooperation with China, Japan, and Korea were initiated in that same year. With the advancement of regional economic cooperation, ASEAN Leaders agreed at the 13th ASEAN Summit in 2007 to accelerate the establishment of the AEC from 2020 to 2015. In this regard, ASEAN Leaders endorsed the ASEAN Economic Community Blueprint (AEC Blueprint) (http://www.aseansec.org/21083.pdf) which is a detailed action plan on how to work towards the ASEAN Economic Community. In addition, during the 13th ASEAN Summit, ASEAN Leaders also endorsed the "ASEAN Charter" which provides the legal and institutional framework for ASEAN to be a leading international organization within the region. The Charter will come into effect after being ratified by all member countries. 2. Relationship with the Bank of Thailand The Ministry of Finance and the Bank of Thailand are the main responsible agencies in the area of regional financial cooperation and surveillance process. Since 1997, ASEAN Finance Ministers have been meeting annually, and the technical work on regional financial and economic cooperation has been undertaken by the 3 Working Committees, namely (1) Capital Market Development (WC-CMD), (2) Financial Services Liberalization under the ASEAN Framework Agreement on Services (WC-FSL/AFAS), and (3) Capital Account Liberalization (WC-CAL). On top of this, ASEAN Finance Ministers are also responsible for the implementation of the financial sector component of the AEC Blueprint, covering areas such as financial services liberalization and capital account liberalization. Apart from the cooperation under the AFMM, the Bank of Thailand also actively takes part in the ASEAN Central Bank Forum (ACBF) which was established on 5 November 1997. The meetings under the ACBF consist of 2 high-level meetings: the ASEAN Central Bank Governors' Meeting (ACGM) and the ASEAN Central Bank Deputies' Meeting (ACDM). 
Like the AFMM, the issues for discussion amongst ASEAN central bankers mainly focus on the regional financial cooperation. In addition, the ASEAN central banks established the ASEAN Swap Arrangement (ASA) on 5 August 1977 with an aim to provide short-term liquidity assistance to member countries facing temporary liquidity problems. The agreement had an original term of 2 years, and it has been renewed up until now. The current total amount of the ASA is USD 2.0 billion. Further information on the ASEAN Finance Ministers' Meeting can be found at In addition, latest information on the BOT's participation in the ASEAN can be found in BOT's Annual Economic Report 1/ Including the ASEAN Surveillance Process (ASP)
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9482385516166687, "language": "en", "url": "https://www.efeedlink.com/contents/12-15-2008/ae2e4907-aeb4-4da0-8f38-c86232236773-a001.html", "token_count": 369, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.02587890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:de9e94af-3d9e-492b-a4e3-db11c40d5e44>" }
December 15, 2008 Australia's 2008-09 winter grain harvest is expected to exceed 30 million tonnes, despite heavy rainfalls that disrupted harvest and resulting in large volumes of downgraded grain. Current winter grain harvest is seen to increase 36 percent on-year to 30.6 million tonnes, with wheat production estimated to reach just below 20 million tonnes, according to the Australian Crop Report by the Australian Bureau of Agricultural and Resource Economics (ABARE). While the forecast reflects good growing seasons across Queensland, northern New South Wales and Western Australia, it is unable to show the impact of the rains on grain quality. Feed grain prices are seen to fall further, as large volumes of downgraded grain are expected to enter the market along with the overall decline in Australian and global grain prices. Indicative prices for feed wheat and feed barley declined 46 percent and 32 percent, respectively, to their lowest levels since August 2006, between September 1 and December 7. The ABARE forecast report revised down 18 percent or 6.5 million tonnes of grain harvest from the initial estimation of 37.1 million tonnes in June. This was due to poor spring and widespread crop failures across southern New South Wales, Victoria and South Australia. Victoria is expected to harvest 2.8 million tonnes of grain, down 29 percent on-year, while South Australia's production increased 13 percent to 4.3 million tonnes swt. West Australia's harvest is projected to grow 34 percent on-year to 11.9 million tonnes. For New South Wales, good harvests across northern and central regions of the state offset a failed season in the south, with production expected to skyrocket 189 percent on-year to 9.1 million tonnes. Queensland is also expected to reach its highest level of production in nine years at 2 million tonnes.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9416196346282959, "language": "en", "url": "https://www.globalassignmenthelp.com/free-samples/business-environment-factors", "token_count": 3153, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.076171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:498a0122-61e8-4006-baf2-dd4d9251c358>" }
Introduction to Business Environment
The business environment is the combination of internal and external factors that influence a company's operations. It includes clients, suppliers, competitors and owners. The business environment requires improvements in technology, government activity, and social and economic trends (Bukhari, 2013). Thomson Airways provides services to customers by offering satisfactory tariffs and a broad worldwide network flying to centrally-located airports. This report will explain the purpose of the business and the impact of fiscal and monetary policy on the organization. Moreover, the report will examine market forces and the pricing and output decisions of the business. In the end, it will discuss the significance of international trends and global factors (Cento, 2008).
LO1 BUSINESS PRINCIPLES OF THE ORGANIZATION
1.1 Purpose of various kinds of business companies
Business companies arrive in markets to achieve their goals, which are their mission to accomplish in a competitive market. Public sector firms develop their business to provide employment and help the country in respect of defence. In contrast, limited (Ltd) companies come into the market to make money by implementing business plans for increasing their market (Cento, 2008). The fulfilment of their aims depends on the nature of the business, whether it is in the public or private sector. The principle of the organization is to earn profit by offering services to customers so as to keep hold of them for the future (Daniela and Blettner, 2011). Thomson Airways is one of the most renowned and successful airline companies in the UK. The organization started in 1962 as Britannia Airways. The Thomson Airways brand was launched on 1 November 2008. After that they renewed their strategy and set a mission, known as the "TUI Spirit", that inspires the organization to deliver its goals (Diane, 2004). Mostly the supervision of Thomson Airways lies within its own management, and being a private organization it proves to be profitable. Thomson Airways is targeting the international market, because many other airline companies have joined its domestic market. To turn out to be a famous world-class airline, Thomson Airways is trying to provide better services to please customers across the country. The future planning of the organization is a reflection of its past success and the activities it is doing at present. That is the passion of Thomson Airways, and the company keeps working on it (Gaskell, 2001).
1.2 Types of Stakeholders
The people who influence the ongoing programs of Thomson Airways are called stakeholders. They play a significant role in the strategic planning of the company. They are classified in two major categories (Diane, 2004).
External stakeholders: The policies of government have a major impact on Thomson Airways' profit. The company has to pay taxes on business revenue and charges for use of airport facilities. On the other hand, creditors provide money to the organization to fund its daily operating expenses; their ultimate aim is to gain profit (Goel, 2009). Shareholders of Thomson Airways always look for an assured return on the capital they invested in the airline's establishment. These are all external stakeholders of the organization and they influence the decisions of the company (Huettinger, 2014).
Customers: They pay money to acquire the services of Thomson Airways. Customers' main objective is to get satisfactory services at minimum prices.
Employees: The objective of the employees of the organization is to provide the desired services to customers and increase their own knowledge base in respect of the firm.
The company pays them to identify problems and plan solutions (Kwong and Lee, 2009).
Owner: Their aim is to approve plans for providing better services, mark improvements in quality, and keep control over management at Thomson Airways. They only want to see the growth of the organization (Loukia, 2012).
Manager: The purpose of a manager is to implement in the organization the strategy proposed by top management in order to generate profit. Their main work is to communicate messages within the company.
1.3 Responsibility of the organization towards stakeholders
Manager: They create a cooperative environment at Thomson Airways. Managers take feedback from the employees and make decisions accordingly (Namukasa, 2013). The organization is required to pay attention to their activity and provide them good support to control all the work.
Supplier: These people provide fuel, food and other necessary things to Thomson Airways. The company fulfils their requirements through timely payments.
Customers: Consumers' main objective is to get the best services from Thomson Airways, as they are paying for them (Neal-Smith and Cockburn, 2009). The organization can please them by providing satisfactory services.
Employees: The objective of the members of staff is to get a good environment, good pay and support from management. The company can meet their objective with a good incentive plan and by providing them better facilities. It has to make policies that can be accepted by all of them (Reynoso, 2010).
Owner: The objective of the owner is to earn more and more profit by providing the best services to employees and customers. The organization can fulfil this objective by implementing strong strategies and policies that attract consumers to buy its services. That will generate more revenue for the firm.
Competitors: All the competitors try to expand their market relative to the others. Thomson Airways is required to make a strong strategy to beat them and maintain its business profit (Shaw, 2007).
Shareholders: The main objective of shareholders in Thomson is to get a timely and better return on investment. The airline satisfies them by obtaining higher profit from sales.
Government: The aims of government for an airline firm are the well-timed submission of taxes and the security of the people who travel on Thomson Airways (Srinidhi and Manrai, 2013). The company fulfils them by setting standards for services and submitting taxes on time for airport facilities.
LO 2 ENVIRONMENT OF BUSINESS OPERATIONS
2.1 Economic systems to allocate resources
All over the world, many types of economic system are followed for expanding business and development. In business terms, four types of economic system are generally followed. The traditional system is the oldest and is widely accepted by organizations (Stephen and Treanor, 2012); it is particularly based on customers' activity. In that case the buyers influence the organization to set prices according to them. That system is usually followed by private business companies, and the degree of government interference is very small. On the other hand, the system in which government has all the rights in its hands to make and apply changes in business is called the command system. Private organizations never agree to adopt it, because it is not flexible and sometimes fails to recognize changes in the buying behaviour of consumers (Turban, 2006). Thomson Airways follows the mixed economic system, in which government also takes part but command over the management is in the hands of the organization.
This system helps the company to achieve its objectives and expand its business in the aviation industry. The company does not have to follow every policy of the government; it can make changes according to its own needs. This helps Thomson Airways implement strategies and develop (Zagelmeyer, 2009). The company is thus comparatively free from government limitations on management control, which helps Thomson Airways make changes according to the needs and requirements of its clients.
2.2 Monetary policy
The policy which is planned by the central bank of the country is called monetary policy. The central bank analyses the supply of money in the market, and to control or influence it the central bank moves the interest rate up and down (Company information, 2012). An increase in rates affects the cost of fuel, due to which Thomson Airways is required to make changes in ticket prices. It also affects investors' willingness to invest their capital in the business. Through that, the demand for airline services decreases. For that issue, the government has to plan a strategy to bring stability to the market. The planned and unplanned expenses which shape the financial state of a country come under fiscal policy. The government invests money to perform its daily planned and unplanned activities. For that it needs to manage its income sources, such as tax. Tax rates fluctuate with the money supply: in times of recession they will be low, and in times of inflation they will be high. This also affects the profit of Thomson Airways, as it has to pay more tax for additional services (Fact sheets, 2013).
2.3 Impact of competition policy
To avoid a monopoly by some organizations in the aviation industry, competition policies are developed and implemented. This policy protects the interest of consumers in acquiring the best product at the minimum cost. Competition policies are developed to generate competition in the airline industry (Special assistance, 2013). As per the policy, Thomson Airways is required to maintain reasonable and transparent ticket charges. This policy is formulated to guard the interests of the customers. This will increase the brand's image all over the world. Thomson Airways operates its business under the Secretary of State for Transport and the Civil Aviation Authority. The airline industry carries high risk in its business. The government of the UK has planned regulations for commercial movement, and all companies have to follow the rules made by the government. The CAA monitors and controls the economics, management and procedures of Thomson Airways. All airline companies have to follow the rules of ICAO, which are formulated for airlines' rights to fly over or stop at other nations (Zagelmeyer, 2009).
LO 3 SIGNIFICANCE OF GLOBAL FACTORS
3.1 Implications of worldwide trade for Thomson Airways
Thomson Airways can expand its business profit through the transfer of cargo goods. This can include the exchange of money from one country to another. It helps the company target new areas for business, because there are fewer trade barriers in the export and import of goods by air. It will help other UK businesses to increase their trade in the international market. Other companies find more opportunities and new business markets (Shaw, 2007). That ultimately improves productivity and provides stability for their economy through the outsourcing of goods. It will also help the UK's tourism and hotel industries to grow. The biggest threat to worldwide trade is the oil price. The price of fuel affects not only Thomson Airways but also countries trying to keep their prices low.
International trade develops a strong economy, and a strong economy supports Thomson Airways and other airline companies in setting low prices for the services they offer to customers (Loukia, 2012).

3.2 Analysis of global factors affecting Thomson Airways

The major global factors that have an impact on Thomson Airways are as follows:

Trade associations: Associated countries trade with each other regularly and with minimal regulation. Thomson Airways can expand its revenue by participating in this trade, which will also increase brand loyalty across the world (Lawton, Rajwani and O'Kane, 2011).

Competitive advantage overseas: It is not easy for Thomson Airways to stay in the market without offering extra services, as other companies are also trying to maintain their advantage. To gain the trust of consumers, Thomson Airways needs to put in extra effort (Cento, 2008).

Economic stability: The economic condition of a country has a direct impact on the airline industry. The ticket prices of Thomson Airways depend on the financial situation of the UK (Daniela and Blettner, 2011).

Technology: The aviation industry depends heavily on technology, which keeps changing over time. Thomson Airways needs to adopt the best technology to stay in the hunt with competitor airlines (Diane, 2004).

Exports: This is a very fruitful area for Thomson Airways when targeting new countries for business. The UK government is trying to open trade with other countries so that its industries can gain more profit, and Thomson Airways can offer those industries its help in exporting goods (Kwong and Lee, 2009).

Political factors: The tax policies and rules of the UK government affect the operations of Thomson Airways. The government plays a significant role in strategic development and implementation.

Strategic alliances with foreign firms: Alliances with airline companies of other countries help Thomson Airways to start business abroad. They help it to expand the area it can fly to and to develop new marketplaces, reducing the cost and time of setting up business in a new environment (Loukia, 2012).

3.3 Impact of EU policies on Thomson Airways

European transport policy has a significant influence on Thomson Airways. The provisions included in European transport policy are set out below.

SINGLE EUROPEAN SKY (SES): This was initiated by the European Commission (EC) to design and inspect airspace, and it applies across the European Union (Namukasa, 2013).

Consumer protection legislation: This provision came into the spotlight recently because of the increase in issues related to safeguarding customers travelling by aircraft after the volcanic ash eruption in Europe. It was established to assure customers and their families that no individual would suffer from any negligence by the airlines (Cento, 2008). The government passed a special consumer act to protect consumers from unexpected events.

SLOTS: This concerns the allocation of slots across the world's airports and is designed to give operators business certainty. This provision will assist Thomson Airways in increasing its fleet size, acquiring better equipment and opening new routes (Bukhari et al., 2013).

EU emissions trading scheme: This controls the gas emissions of aircraft, and it is compulsory for Thomson Airways to follow the rules imposed by the EU ETS.
From this report it can be concluded that Thomson Airways is providing better services in the UK and has good market value. The report has set out the vision, the mission and the key stakeholders involved in achieving the objectives of the organization (Daniela and Blettner, 2011). The company faces problems related to the economic conditions of other nations, which affect its business too. The report shows the impact of global factors on the organization and details the benefits of international trade, which can help the company to increase its market. Thomson Airways is implementing new strategies to gain competitive advantage, stay in the market and create new markets for the future.

- Bukhari, S. et al., 2013. The antecedents of travellers' e-satisfaction and intention to buy airline tickets online: A conceptual model. Journal of Enterprise Information Management.
- Cento, A., 2008. The Airline Industry: Challenges in the 21st Century. Springer.
- Daniela and Blettner, 2011. Adaptation of allocation of resources and attention in response to external shocks: The case of Southwest Airlines. Management Research Review.
- Diane, 2004. International Aviation: Airline Alliances Produce Benefits, But Effect on Competition Is Uncertain. Diane Publishing.
- Gaskell, K., 2001. Thomson Airways: Its History, Aircraft and Liveries. Airlife.
- Goel, S., 2009. Airline Service Marketing. Pentagon Press.
- Huettinger, M., 2014. What determines the business activities in the airline industry? A theoretical framework. Baltic Journal of Management.
- Kwong, E. and Lee, W., 2009. Knowledge elicitation in reliability management in the airline industry. Journal of Knowledge Management.
- Lawton, T., Rajwani, T. and O'Kane, C., 2011. Strategic reorientation and business turnaround: the case of global legacy airlines. Journal of Strategy and Management.
- Loukia, E., 2012. M&As in the airline industry: motives and systematic risk. International Journal of Organizational Analysis.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9226850867271423, "language": "en", "url": "https://www.wipro.com/oil-and-gas/accelerate-decarbonisation-of-your-enterprise-with-digital-technologies/", "token_count": 2663, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0498046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:bb73d73f-bd29-4560-9492-7a9fac569801>" }
To manage the greenhouse gas footprint, the organisation first needs to be able to measure and report current emissions. Once the baseline is established, targets are set and this becomes an ongoing set of management processes (called Sustain here). There are then many avenues to meet the targets, discussed via four pillars:

- Switch: An enterprise can switch to lower carbon energy sources, either directly (for example, installing solar arrays on their facilities) or indirectly from energy suppliers.
- Sweat: To reduce greenhouse gas emissions, the company can sweat existing operations and supply chains to reduce the carbon intensity associated with existing energy sources, in much the same way as sweating cost from operations and supply chains.
- Store: The organisation can capture and permanently store the greenhouse gases produced.
- Swap: There is hard-to-mitigate carbon in any supply chain, so there is a range of actions that can be taken to offset, or swap, carbon produced, for example via trading, buying carbon offsets, and through other paper-based instruments to get to net zero emissions.

It's important to recognise there are overlaps among these four approaches; moreover, they do not have to happen in any particular order. Each of these areas will now be explored in more detail, with a focus on the benefits that can be derived by using digital technologies.

Sustain: Footprint Management; Record & Report

As noted above, step one must be to have a strong understanding of the current footprint of emissions, and then use that baseline to decide on targets and strategies. There is a large range of techniques and solutions for understanding the carbon footprint. Increasingly, environmentally conscious consumers and shareholders are demanding to know the carbon content of a product or service, and governments are placing more regulatory fences around this. We believe that footprint management will be analogous to financial management. Companies will create statements for greenhouse gases, audited and reported in standard ways (similar to profit and loss statements and balance sheets). For this reason, some traditional accounting firms and software vendors like SAP have ventured into these processes, and bodies such as the Sustainability Accounting Standards Board (SASB) have emerged. However, knowing the carbon footprint for external reporting serves a different need from knowing the footprint for operational optimisation (just as there is a difference between financial reporting and cost accounting). Once the footprint is known and targets are set, operational footprint reporting can start to be used to drive down emissions. For example, a company may decide to delegate management to different parts of the business, hold staff to account, review supply chains and optimise emissions, even in real time.

To meet these objectives, many digital technologies can improve business outcomes when it comes to decarbonisation. Numerous software applications are available to analyse the existing footprint, and open architecture standards, such as 'Open Footprint', are being developed in this area. Big data, analytics and artificial intelligence can all help interrogate an organisation's data in order to optimise the carbon footprint. Within the wiring of an organisation, fully connected people and devices via the Internet of Things, fast networks (5G and Wi-Fi), and data lakes can improve situational awareness, ensure rapid feedback loops, and lead to tighter optimisation of carbon emissions.
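To make the measurement step concrete, here is a minimal, illustrative sketch of how a baseline inventory is typically assembled: activity data multiplied by emission factors, summed into a total. All the quantities and factor values below are hypothetical placeholders; a real inventory would take emission factors from published conversion tables and follow the scope definitions of a recognised standard such as the GHG Protocol.

```python
# Hypothetical annual activity data for one site; units noted per key.
ACTIVITY = {
    "natural_gas_kwh": 1_200_000,  # scope 1: fuel burned on site
    "diesel_litres":      45_000,  # scope 1: company vehicles
    "grid_power_kwh":  3_500_000,  # scope 2: purchased electricity
}

# Illustrative emission factors in kg CO2e per unit of activity.
# Real factors come from published datasets and vary by year and country.
FACTORS = {
    "natural_gas_kwh": 0.18,
    "diesel_litres":   2.70,
    "grid_power_kwh":  0.23,
}

baseline_kg = sum(qty * FACTORS[source] for source, qty in ACTIVITY.items())
print(f"Baseline: {baseline_kg / 1000:.1f} tonnes CO2e per year")
```

Once a figure like this exists per site and per period, the Sustain activities described above (targets, delegation, real-time optimisation) have something to be measured against.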
Switch: To Lower Carbon Energy Sources

Switching to low carbon energy sources can reduce an organisation's carbon footprint directly (Scope 1 emissions) or indirectly through its energy suppliers (Scope 2 emissions). There are many lower carbon energy sources in use. Wind, solar, dam hydropower and nuclear fission all have significant market share today. Tidal hydropower, ground and air source heat pumps (as long as they are powered by low carbon electricity), biofuels, and geothermal also play a role in the lower carbon energy mix. Hydrogen, batteries, gravity (water storage), compressed air, and other emerging technologies are not primary energy sources, but they are carriers to store and move energy in a more sustainable way. Nevertheless, they can play an important part in decarbonising a business. Looking to the more distant future, nuclear fusion (replicating processes in the sun) may solve all of the world's energy needs in a zero carbon way by the next century. This will remain a hugely exciting field of experimental physics and engineering, and require huge investment over decades to commercialise.

Regardless of which energy sources are chosen, digital technologies can help the producers of these energies with carbon tracking (understanding the precise carbon provenance of the energy delivered to the customer, as low carbon does not mean no carbon), remote monitoring of facilities, predictive maintenance, downtime optimisation, power output optimisation, and modelling using digital twins. Technologies around drilling can add value for producers of geothermal. Production, pipeline, storage, and end-user dispenser digitalisation can assist the hydrogen business. Other technologies can be applied to electricity grids to help balance the different types of power generated and transmitted, given the intermittency of wind and solar power, with more carbon-intensive baseload power. For organisations trying to lower their Scope 2 emissions (indirect emissions from electricity, heat or steam purchased and consumed), artificial intelligence, machine learning, and real-time data management can be used to ensure the supply has the desired carbon intensity at the desired price, all the time.

Sweat: Reduce Carbon Intensity

Another weapon in reducing carbon intensity is incremental: sweating assets and supply chains to find small gains over time. This may be less eye-catching, but many small, incremental improvements can take a company a long way towards net zero. It starts, as all of these efforts do, with a rigorous baseline. To address direct emissions from a company's operations (scope 1 emissions), for example, the internet of things can help by monitoring energy patterns and intervening, enabling everything from automated, intelligent light switching at one end of the spectrum to much more sophisticated operations like real-time switching of energy sources. Mobility applications can be used to optimise the footprint of business travel by putting carbon-impact decisions into the hands of the traveller. Product lifecycle assessments applied through digital technologies can help lower carbon intensity. Smart city solutions are another set of tools that can be used to reduce the carbon footprint of municipalities and buildings.
Reducing indirect emissions from a company's supply chain (scope 3 emissions) can also have a huge impact on the carbon footprint of the organisation, in much the same way that supply chain optimisation has had a very positive effect on lowering cost, improving efficiency, reducing waste, and allocating capital. Adjusting the objectives of supply chain optimisation to include greenhouse gas reduction is an incremental change, but a powerful one. Material-specific carbon evaluation can be applied to supply chain management in ways analogous to conflict materials tracking, for example. There is a range of digital technologies that can be harnessed to manage emissions from supply chains.

Store: Capturing and Storing Carbon

Currently, there are around 20 operational facilities globally that capture carbon dioxide at scale, and the pipeline of new facilities grew 33% between 2019 and 2020 (GCCS Institute). Traditionally, Carbon Capture Utilisation and Sequestration (CCUS) has also been used for reservoir injection as part of enhanced oil and gas recovery. Financially, the global carbon capture and sequestration market was valued at $1.75 billion in 2019; it is projected to reach $6.13 billion by 2027, at a 19.2% CAGR over the forecast period (Fortune Business Insights).

Operational facilities currently use geological features such as depleted oil and gas reservoirs and saline aquifers to capture and store carbon dioxide created from industrial processes. Another technology showing promise is direct air capture, where ambient CO2 in the air is captured at facilities and locked away. Governments, for example, could invest in arrays of direct air capture machines to ensure carbon targets are met, as a public good to address market failure and mop up carbon from decades gone by. Airlines could use it to capture equivalent emissions from flights. Other companies are experimenting with ways to lock away extra carbon dioxide permanently in concrete instead of underground.

These approaches can all benefit from the application of digital technologies. Subsurface digital applications, technologies, and techniques used to extract hydrocarbons from the earth, such as reservoir modelling, real-time data management, and digital twins of the subsurface space, can be harnessed to optimise putting greenhouse gases back into the earth and sequestering them for generations. For direct air capture, technologies such as remote monitoring, uptime optimisation and predictive maintenance can be harnessed. Historian technology, when coupled with the internet of things (sensors), can improve operational outcomes.

Swap: Offsetting and Trading Carbon towards Net Zero

Some carbon emissions are much harder to abate than others. Rather than removing them, instruments can be used to offset emissions with carbon removed elsewhere from the atmosphere, arriving at net zero. Secondly, market-based mechanisms can help allocate resources more effectively than policies, processes and targets alone, in order to reduce carbon intensity.

Offsets work by buying suitably verified carbon credits, from a supplier who is running projects that remove carbon (or lower it compared to the levels which would otherwise have been emitted), to offset against your emissions. Offsetting is quite controversial, as there are some projects where the carbon offset can be challenged. For example, it is harder than it looks to calculate and verify how much carbon is removed. Some projects would have gone ahead anyway, so are not removing additional carbon.
Carbon needs to be locked away permanently (trees can be chopped down, etc.). And high-altitude emissions are more harmful than low-altitude ones. However, offsets will remain a part of the armoury in this endeavour and will mature to become more of a trusted option; the market will grow.

The largest market mechanism is the European Union Emissions Trading System. A cap is set on the amount of carbon dioxide equivalent emitted overall, companies are allocated a number of credits within this cap (via an auction), and those who then need to emit more than they were allocated buy emissions allowances from those who need less, giving a carbon price. An increasing number of companies are also using internal carbon markets to put a price on emissions that mirrors the external price, either to directly charge business units within the group as an incentive to reduce emissions, or for use in decision making, for example on capital project investment. Various digital technologies can bring significant advantages here, such as using blockchain to track carbon credits, applications to manage the trading of carbon more effectively, and big data, analytics and AI to optimise these processes.

Many existing and some emerging digital technologies help organisations decarbonise in a faster, less expensive and smarter way, and they form a core part of a decarbonisation strategy. Digital technologies, in our view, are not just part of the wiring within the process of decarbonising a business; they are tools that can contribute a significant leap forward towards net zero. Some people worry that the technologies highlighted here also generate a substantial carbon footprint, and accordingly rightly argue that digital technologies need to be applied in a way that keeps the digital carbon footprint sustainable, for example using a green cloud approach. But the decarbonisation roadmap is clear, and the challenge is now. Between now and 2050, increasing maturity and new approaches will gain momentum, and digital technologies will be in the thick of this endeavour.

About Wipro iDEAS

iDEAS helps Fortune 1000 clients reimagine and transform their business. By combining foundational capabilities of business value drivers, functional excellence, technology advisory, talent management and organizational change with trending topics in Digital, iDEAS brings new insights and cutting-edge approaches to help its clients achieve and sustain the competitive advantage and operating performance required to meet the demands of successfully competing in the Digital marketplace.

About Wipro Ltd.

Within the area of sustainability, Wipro has a 10-year record of managing our own global carbon footprint. Details can be found in our integrated annual reports (natural capital section). Wipro is a founding member of Transform to Net Zero (https://transformtonetzero.org) and a member of the Open Footprint Forum (https://www.opengroup.org/openfootprint-forum).

Wipro Ltd. (NYSE:WIT) is a leading information technology, consulting and business process services company that delivers solutions to enable its clients to do business better. Wipro delivers winning business outcomes through its deep industry experience and a 360-degree view of "Business through Technology." By combining digital strategy, customer-centric design, advanced analytics and a product engineering approach, Wipro helps its clients create successful and adaptive businesses.
A company recognized globally for its comprehensive portfolio of services, strong commitment to sustainability and good corporate citizenship, Wipro has a dedicated workforce of over 170,000, serving clients in 175+ cities across 6 continents. For more information, please visit www.wipro.com
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9488340616226196, "language": "en", "url": "http://www.re-update.com/2016/10/15/featured-distributed-grids-and-the-virtual-future-of-utilities/", "token_count": 957, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.15234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9fde012b-0c86-4050-b322-206253dfd08a>" }
Despite the rapid adoption of renewable energy around the world, with two-thirds of all new energy investment going to technologies such as solar and wind, the process of decarbonising electricity grids is not as straightforward as some figures imply. In addition to replacing old power generation or adding to the existing supply, grid operators will have to take up the role of providing energy for both the heating and transport markets in the years to come.

In the EU, heating and cooling constitutes 50% of all energy demand, together with 32% for transport. The power sector in the EU has committed to a massive 80-95% reduction in emissions by 2050, and will also have to take up additional demand from the transport sector, which could increase the load demand by 50%. Given that decarbonising electricity generation is the most straightforward way of reducing overall emissions, it is not difficult to see why governments are focusing on reducing emissions here, but the difficulties involved in doing so may be harder to quantify.

The fundamental change occurring in utilities provision is the onset of distributed energy resources (DERs), which will profoundly affect the landscape of electricity generation and transmission using a wide range of technologies. These technologies don't end with familiar categories such as solar PV, wind and biomass, but now include medium-scale storage integration, smart systems design for industry applications, smart metering, net metering, home energy storage with two-way charging and grid-integrated EVs. This presents numerous challenges for traditional utility companies, as well as many benefits, and is expediting the widespread adoption of a variety of low-carbon energy sources by the end user.

The problem in effect comes down to implementation: depending on the business model employed by the utility, the industry can go in two different directions. In one, dips in supply are met with a range of technologies to balance demand and reduce pressure on operators, making the most efficient use of the grid and its inputs as a whole. In the other, the industry is divided by the need to subsidise an expensive infrastructure with a diminishing return on that outlay, as customers increasingly opt to source their supply from third-party operators and private companies, and skip the cost of legacy infrastructure altogether.

Many technologies currently exist or are in development to engage with these approaching difficulties. In Europe, initiatives to test and assess the integration of distributed grids have been ongoing since 2011, with a similar systems-level approach effecting change in states such as California, Texas and New York. Overall, it's understood that to fully utilise these technologies, a complete picture of the grid needs to be realised, and this information needs to be shared between utility companies and new classifications of actors in the market, such as distributed systems operators and third parties representing aggregated DER providers. There is much positive spin put on this transition period for utilities, as expressed by PA Consulting in its new DynamicEnergy white paper.
Here is some of the market-speak; we shall wait and see whether these aspirations materialise:

A Dynamic and Bidirectional Grid: The next-generation grid will be far more distributed and leverage smart sensors, switching systems, field analytic devices, and network adapters — all empowered by advanced software and communication networks to manage and enable two-way energy flow.

Real-Time Customer Engagement: Enable customers to interact with an online hub that provides easy access to new services, billing information, rate choice, supply choice from the utility or third parties, as well as other community energy programs.

Virtual Grid Architecture: The overall grid architecture will be increasingly digital as more applications and data moves to the cloud. This enables a virtual architecture where applications, data, real-time communications and infrastructure are acquired and configured in an "app store" model.

That this evolution is necessary is implicit in the debate regarding renewable energy uptake, with much of the necessary technology in the process of implementation in only some regions, and much progress still to be made. However, as opportunities are presented, so are the difficulties. Intrinsically, utilities are faced with increasing levels of complexity, but a few facts have been asserted throughout this process – such as solar with storage becoming cheaper than traditional forms of energy demand-response systems.

Ultimately, utilities risk being made completely redundant by these threats, and will have to embrace the imperative of using all available technological options – essentially being able to temper supply and demand to enable optimum efficiency at all times. This may seem like an impossible aspiration, but if (when) achieved, the positive financial and environmental benefits will be lasting and profound – a seismic shift in the way we view and use energy.

As an optimistic example of the future, here is an overview of the New York Power Authority's 'Digital Foundry'.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.93767249584198, "language": "en", "url": "http://www.ricemason.eu/2010/09/13/cancer-deaths-fall-but-prevention-still-lags-behind-short-term-economic-outlook-must-not-damage-long-term-health-researchers-say/", "token_count": 1928, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.10205078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:014c89e4-def2-4f8f-974d-1ecd6d7a1965>" }
Although overall mortality from cancer is decreasing in the European Union, its incidence increased by almost 20%, from 2.1 million new cases in 2002 to 2.5 million in 2008, says a special issue of the European Journal of Cancer (the official journal of ECCO – the European CanCer Organisation) on cancer prevention, published today (Monday 13 September). The current economic crisis threatens to affect cancer incidence in a number of areas, says a paper by Dr. José M. Martin-Moreno from the University of Valencia, Spain, and colleagues. Public donations to cancer research funded by charitable organisations will fall, and governments as well as the pharmaceutical industry are likely to cut research and development budgets, say the researchers. The prospects for disease caused by occupational exposure to carcinogens are also likely to worsen, they say. “Both private companies and governments tend to take shortcuts in occupational safety controls during periods of economic hardship,” said Dr. Martin-Moreno “and this is especially true for small companies and in developing countries.” For example, a Korean study carried out in the late 1990s linked the reduction of health and safety costs directly to the ability to avoid bankruptcy. “This exemplifies the terrible choice businesses have to make in times of economic downturn – reduced safety for workers or economic ruin,” said Dr. Martin-Moreno. For industries with potentially high levels of carcinogenic contamination such as mining this effect is compounded, he said. Cancer prevention, like cancer itself, encompasses a large number of diverse factors including lifestyle choices, genetics, environment, occupation, infections and access to preventive healthcare, the researchers say. Cancer control efforts, therefore, can overlap with everything from the control of hypertension to the reduction of greenhouse gases. Unless forceful action is undertaken now, the cancer burden will only continue to grow, leading to enormous human cost and placing an unsustainable burden on health systems. However, prevention efforts can also be more effective in times of crisis. As people give up or reduce unhealthy lifestyle habits in order to reduce costs, they may be particularly receptive to new and healthier choices, say the researchers. “Governments could also play their part by taking the opportunity to levy higher taxes on tobacco, alcohol and other unhealthy goods like trans fats or processed sugar and channelling the revenue thus derived towards job-creating disease prevention and social welfare programmes,” said Dr. Martin-Moreno. The issue places special emphasis on the need to address cancer prevention using a holistic and global approach, focusing on the ‘big four’ risk factors of smoking, obesity, alcohol and physical inactivity. This represents a fundamental shift away from the reductive approach of earlier research, which meant looking narrowly, often in isolation, at multiple micro-components of diet and lifestyle, say the editors. A paper by Dr. Esther de Vries, from the Department of Public Health, Erasmus Medical Centre, Rotterdam, The Netherlands and colleagues, looks at the impact of preventing weight gain and increasing physical activity on colon cancer incidence in seven European countries. The researchers used the PREVENT statistical modelling method to make projections of future colon cancer incidence, both with and without realistic intervention scenarios involving physical activity and BMI reductions. 
Data studied came from cancer registries in the Czech Republic, Denmark, France, Latvia, The Netherlands, Spain, and the United Kingdom. The incidence of colon cancer in Europe has increased since 1975, and comprised 13.6% of the estimated European cancer burden by 2008. It is the second most common cancer in Europe and also the second most common cause of cancer death.

“Yet we know that large numbers of colon cancer cases could be avoided by reducing exposure to risk factors, two of the most easily controllable of which are related to physical inactivity and excess weight,” said Dr. Andrew Renehan, from the University of Manchester, United Kingdom, one of the co-authors of the paper. While these risk factors are clearly intertwined – in general, physical inactivity increases with increasing body mass index (BMI) and increased physical activity contributes to avoidance of weight gain – increased physical activity does not necessarily result in weight reduction in overweight people. “The predictive modelling is beginning to tease out the independent relevance of each of these factors in the prevention of colon cancer,” said Dr. Renehan.

Despite the benefits of physical activity and avoiding overweight, an increasing proportion of the European population has a BMI higher than the recommended maximum of 25, and few Europeans engage in the amounts of physical activity recommended by the current guidelines – at least 30 minutes of moderate physical activity on five or more days per week.

In the hypothetical scenario where overweight and obesity levels in European countries increased during the period 2009–2019 at the same rate as has been observed in the US, the projected increase in rates of colon cancer ranged between 1.7 (UK) and 2.8 (Spain) more cases per 100,000 person-years for males. Increases for females ranged from 0.1 (Czech Republic) to 0.6 more cases (The Netherlands) per 100,000 person-years. These rates would translate to increases in the number of new colon cancer cases of between 0.7 and 3.8%, the researchers say.

If a whole population attained a mean BMI of 21, between 0.6 (Czech Republic, females) and 11 (Spain, males) new colon cancer cases per 100,000 would be avoided by 2040, translating into a population avoidable fraction (PAF) of overweight and high BMI for colon cancer of 2.3% to 18%. PAFs for excess weight were much higher for males (between 13.5% and 18.2%) than for females (2.3% to 4.6%), and highest for British males (18%), the researchers say. In the physical activity scenario, if all countries adopted the physical activity levels observed for The Netherlands, which had the highest levels overall, between 0.5 (Czech Republic, males) and 5.1 (Spain, females) colon cancer cases per 100,000 person-years, or up to 17.5% of new colon cancer cases, might be prevented in 2040. The highest PAF for physical activity was projected to be 21%, for Spanish females.
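For readers unfamiliar with the population avoidable fraction (PAF) figures quoted above, the sketch below applies Levin's classic formula, the standard epidemiological shorthand for the share of cases attributable to an exposure. The prevalence and relative-risk values are purely illustrative, and the PREVENT model itself is a more sophisticated dynamic simulation that accounts for lag times and changing exposure over the projection period.

```python
def levin_paf(prevalence, relative_risk):
    """Levin's formula: fraction of cases attributable to an exposure."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Illustrative only: if 40% of a population carries excess weight and that
# exposure implies a relative risk of 1.5 for colon cancer, then roughly
# 17% of colon cancer cases would be attributable to excess weight.
print(round(levin_paf(0.40, 1.5), 3))  # -> 0.167
```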
“We can safely say increasing physical activity across Europe to the level already achieved in The Netherlands, where everyone cycles, would be of substantial benefit,” said Professor Jan-Willem Coebergh, from Erasmus University, The Netherlands, and one of the co-editors. “But we will always need sound evidence before prevention strategies can be implemented,” he added. Professor Michael Baumann, from the University Hospital and Medical Faculty, Dresden, Germany, and ECCO President, said: “Cancer prevention may not be foremost in the policy-makers’ minds at present, but right now it is more relevant than it has ever been before. The recession confronts them with a clear choice – either to introduce short-term cost-containment strategies, which will simply increase long-term costs, or to use the financial crisis as an opportunity to strengthen evidence-based prevention policies. We hope that the evidence so amply provided in this special issue of the EJC will help make them decide to follow the right road and take a major step towards reducing the incidence of cancer in Europe over the years to come.” Co-editors of the special issue were Professor Jan-Willem Coebergh and Dr Isabelle Soerjomataram, from Erasmus University, The Netherlands; Dr José M. Martin-Moreno from the University of Valencia, Spain; and Dr Andrew Renehan, from the University of Manchester, United Kingdom. - European Journal of Cancer, volume 46, issue 14 (September 2010), “Implementing Cancer Prevention in Europe”. - Kim J, Paek D. Safety and health in small-scale enterprises and bankruptcy during economic depression in Korea. J Occupational Health 2000; 42(5): 270-5. - The PREVENT cancer statistical modelling software was designed at Erasmus University, The Netherlands, over a decade ago and is now used in many other European countries through the EU’s EUROCADET programme. - Many of the studies reported in the special issue were funded through the Eurocadet project, financed by the European Commission, www.eurocadet.org - The European Journal of Cancer is the official journal of ECCO – the European CanCer Organisation. - ECCO – the European CanCer Organisation – exists to uphold the right of all European cancer patients to the best possible treatment and care and to promote interaction between all organisations involved in cancer research, education, treatment and care at the European level. For more information, visit: www.ecco-eu.org
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9373159408569336, "language": "en", "url": "https://condorcet.ca/see-how-it-works/preference-cycles/", "token_count": 686, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.044677734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:cd88f14d-94ca-4d52-9607-2fb0dd8c4cd2>" }
When there is a Condorcet winner every Condorcet method is as good as every other for identifying this winner. It is in those cases where there is no Condorcet winner that the various Condorcet methods differ, which is mainly in how they break preference cycles (aka majority-rule cycles).

Let us imagine, for example, that we have three candidates: X, Y, Z. With three candidates we will get three distinct pairings: (X, Y), (X, Z), and (Y, Z). Let us assume we have an election in which we discover that:

- X is more preferred than Y: (X → Y), which is to say that X wins the X vs Y match-up;
- Y is more preferred than Z: (Y → Z), which is to say that Y wins the Y vs Z match-up; and
- Z is more preferred than X: (Z → X), which is to say that Z wins the Z vs X match-up.

Here, there is no candidate who wins every pairwise match in which he or she is involved, so there is no Condorcet winner; more particularly, we have a preference cycle. Different Condorcet methods do different things at this point. With Condorcet/Ranked-Pairs we look at the magnitude of the preferences:

- If, say, 60% prefer X, vs 40% who prefer Y, we have a strong preference of 60% vs 40% for X more-preferred-than Y;
- If, say, 90% prefer Y, vs 10% who prefer Z, we have a very strong preference of 90% vs 10% for Y more-preferred-than Z;
- If, say, 51% prefer Z, vs 49% who prefer X, we have a very weak preference of 51% vs 49% for Z more-preferred-than X.

We see that some preferences can be seen as comparatively strong, and others weak. Condorcet/Ranked-Pairs “ranks” the pairs according to their strengths of preference, and then considers these pairs, one by one, from strongest preference to weakest. If we get to a preference that conflicts with a previous (stronger) preference (creates a preference cycle) we omit it: the rationale being that a stronger preference should prevail over a weaker preference in any case where we can't keep them both. In our example:

- We sort our pairs by descending strength-of-preference as follows: Y → Z (strongest), X → Y, Z → X (weakest).
- As we then consider the pairs in this order, the first two pairs, Y → Z and X → Y, imply that X → Z.
- This implication that X → Z conflicts with the assertion of the third pair that Z → X, so when we get to the third pair we must omit it to avoid the conflict, so that
- X → Z, being encountered first, and being therefore the stronger preference, still stands.

This gives us a final ranking among the candidates themselves with no preference cycle remaining:

- X → Y → Z; and
- X is the Ranked-Pairs winner.
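The tie-breaking procedure described above is mechanical enough to express in a few lines of code. The following is a minimal sketch of Ranked Pairs applied to the example's three candidates; it assumes strict margins with no ties between pairs, and a production implementation would need explicit tie-breaking rules.

```python
def ranked_pairs(pairwise):
    """pairwise maps (winner, loser) -> strength of that preference."""
    ordered = sorted(pairwise.items(), key=lambda kv: kv[1], reverse=True)
    locked = set()  # directed edges (a, b): "a locked above b"

    def path_exists(start, goal):
        # Depth-first search over locked edges.
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == goal:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(b for (a, b) in locked if a == node)
        return False

    for (winner, loser), _strength in ordered:
        # Lock the pair unless it would complete a cycle with stronger pairs.
        if not path_exists(loser, winner):
            locked.add((winner, loser))

    candidates = {c for pair in pairwise for c in pair}
    # The winner is the "source" of the locked graph: no locked edge beats it.
    return next(c for c in candidates if all(b != c for (_, b) in locked))

# The example above: strengths taken as the winning side's vote share.
print(ranked_pairs({("X", "Y"): 60, ("Y", "Z"): 90, ("Z", "X"): 51}))  # X
```

Tracing it through: Y → Z (90) and X → Y (60) are locked first; Z → X (51) would close the cycle, so it is discarded, leaving X as the winner, exactly as in the walkthrough above.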
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9736193418502808, "language": "en", "url": "https://due.com/annuity/a-brief-history-of-annuities/", "token_count": 1593, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.04345703125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:33bfdf58-414c-41ab-aa15-6a12bf0558c3>" }
Today we're going to teach you a brief history of annuities. Because of the connection with retirement, one might assume that annuities are a more recent phenomenon. In reality, their origins can be traced all the way back to the Roman Empire.

In the Beginning

During the Roman Empire, buyers and sellers entered into contracts that were called “annua” in Latin, the predecessor of the English word annual. Much like annuities today, “annua involved a buyer paying a lump sum to a seller who would later make annual payments to the buyer each year until the buyer passed,” writes Steve Jurich, aka My Annuity Guy.

“As is often the case, necessity was the mother of this invention,” adds Jurich. “Sellers were unable to predict the lifespan of buyers, which is necessary knowledge to create the terms of the annua contracts.” As such, the endgame “was to make enough of a profit on the annua whose buyers passed before the full payout and therefore cover the losses experienced on those whose lives surpassed the original contract.”

But there was another beneficiary of annua: the “Roman soldier who received military service compensation in the form of annual stipends,” says Jurich. “Governments and militaries would repeat this practice many times throughout the coming centuries.”

The Middle Ages

Fast forward to the Middle Ages, when Europeans found a new use for annuities. Just like today, it was funding expensive war coffers. “Kings and feudal lords sought investors as financial backers for their conflicts,” explains Jurich. They placed “these contributions into a tontine,” which meant that “investors received payments from the pool.” When investors passed away, “the remaining investors split their shares, continuing until only one investor remained and received all remaining monies in the tontine.”

“Europeans continued using annuities in later years, expanding the concept” into something that more closely resembles annuities today. The reason? It served “as a kind of savings account offering a guaranteed income.”

Here's where things got interesting. “Those issuers offering annuities to royalty soon realized their titled patrons lived much longer lives than the public did,” clarifies Jurich. “Pricing adjustments quickly followed.”

Annuities in the New World

This concept was not confined to Europe, of course. In 1759, it finally reached the shores of the New World, specifically Pennsylvania. During the 18th century, pastors were the first known Americans to receive annuities, “funded by donations from their congregants and church leaders,” states Jurich. In fact, the funds that their widows and orphans received had their origins in these types of annuities.

“Benjamin Franklin provides an excellent example of the potential longevity of an annuity,” notes Jurich. “In his will, Franklin left annuities to” both Boston and Philadelphia. “For over 200 years, all the way through to the early 1990s, Franklin's annuity to Boston continued paying and only stopped when the city opted to receive the remaining balance in a lump-sum distribution.”

Despite this, “Americans in the late 18th century mostly rejected the idea of annuities, preferring to rely on the generosity of family in their golden years.” Who were the main proponents of annuities at this time? It was typically “attorneys and estate planners, who saw the value of an annuity in fulfilling the final wishes of clients,” Jurich explains.
1812 was a landmark year when it came to annuities, setting aside a little war known, appropriately, as the War of 1812. As before, it involved Pennsylvania: in this case, a life insurance company that offered annuity contracts. “Nearly half a century later, Union soldiers had the choice of receiving compensation in the form of an annuity,” says Jurich.

20th Century Annuity

It wasn't until the early 20th century that the American public could finally get a crack at annuities, “when the Pennsylvania Company for Insurance on Lives began offering them in 1912,” writes Jurich. “Growth remained slow but steady until the Great Depression.” Following the Great Depression, however, “investors placed more trust in insurance companies than in banks.” Thanks to FDR's New Deal there was a greater emphasis on savings, and the public overwhelmingly responded. Annuities were so popular that corporations threw their hats in the ring and developed group annuities for pension plans.

“These early public annuity offerings offered fixed rates, tax-deferred status, and a guaranteed return,” Jurich writes. “Clients had two options for payment: fixed income for life or payments throughout a given number of years.”

In 1952, variable annuities arrived, letting owners choose their account type. It wasn't until the 1980s that indexed annuities came into the picture, offering even more diversity. Congress got behind annuities too, with the passage of the 1982 Periodic Payment Settlement Act, which “exempted structured settlement payments from taxes,” explains Jurich. “Through the remaining years of the 20th century, annuities kept growing in complexity.”

Annuities in the 21st Century

After entering a new century, a wide variety of products and new features became available, most notably principal guarantees and long-term care benefits. Mainly this was to appease critics and keep pace with the popularity of annuities. The result? Well, in 2000, if you were 65 years old and had $100,000 in savings, you could purchase an annuity that guaranteed an income of $744 per month. Even sweeter? Annuity sales reached a record by 2019, because economic conditions were favorable following the Great Recession.

Unfortunately, rates plummeted in 2020 and 2021. To put that in perspective, the $744 you might have received in 2000 is now just $469, which is actually less than half what it was in 1990.

“They're a function of interest rates,” Rob DeHollander, a financial adviser with DeHollander & Janse Financial Group, told MarketWatch. “I remember when you could go to the bank and get a CD [certificate of deposit] that was paying 5% or 6%, but those rates are long gone.” If you buy a single premium annuity here, he says, “you're locking in rates that are as low as they've ever been.”

“For people who want steady income, and want safety, annuities have usually been pretty good,” added Chris Chen, a planner with Insight Financial Strategists in Newton, Mass. “But with interest rates pushing to zero, there isn't that much advantage anymore. You're locking away your money and you're not getting any return.”

In other words, collapsing inflation and low interest rates are to blame. But will they bounce back?
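The rate sensitivity DeHollander and Chen describe is easy to see with the simplest possible payout arithmetic: a fixed-term "annuity certain" that merely spreads a premium, plus interest, over 20 years. Real single-premium life annuities also pool mortality risk, so these numbers are only directional, and the interest rates below are made up for the example.

```python
def monthly_payment(premium, annual_rate, years):
    """Level monthly payment that exactly exhausts the premium over the term."""
    i, n = annual_rate / 12.0, years * 12
    if i == 0:
        return premium / n
    return premium * i / (1.0 - (1.0 + i) ** -n)

# A 65-year-old annuitising $100,000 over a 20-year term:
for rate in (0.06, 0.02):
    pay = monthly_payment(100_000, rate, 20)
    print(f"{rate:.0%} interest -> ${pay:,.0f} per month")

# Roughly $716/month at 6% versus $506/month at 2%: rate compression alone
# produces the kind of slide from $744 to $469 described above.
```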
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9446961283683777, "language": "en", "url": "https://romantonepali.com/is-nepal-developing-fast/", "token_count": 1276, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.380859375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:717d7789-8c25-4504-8173-ad9879f9d837>" }
Is Nepal developing fast? Many nations across the world have experienced fast growth but only a small reduction in poverty, as income has progressively concentrated in the hands of the rich. Nepal, however, has the opposite problem: modest growth but a rapid reduction in poverty. In only seven years, the nation halved its poverty rate and saw a similarly significant decrease in income inequality. Nepal nonetheless remains one of Asia's poorest and slowest-growing economies, with its per capita income quickly falling behind that of its regional peers and unable to achieve its long-standing goal of moving on from low-income status.

To begin with, historical and natural endowments make development difficult. The geography of the country, externally landlocked and internally marked by challenging topography, represents a natural barrier to its development. Its history of extractive political regimes left Nepal in 1951 with an exceptionally low level of physical and human capital and a 90% rate of illiteracy. Second, vulnerability to natural disasters, which most recently included two devastating earthquakes in 2015, has led to the destruction of physical assets and near-constant setbacks. Third, Nepal is uniquely exposed to India and the pace of its economic growth, for better and for worse. Fourth, the nation has been going through a protracted democratic transition over the past two decades, from monarchy to multiparty democracy, marked by armed conflict, ethnic protests, and frequent changes of government. Put plainly, on its way to development Nepal has faced enormous barriers.

Nonetheless, in two successive triennial reviews, in 2015 and 2018, Nepal met the thresholds for human assets and economic vulnerability (based on the Human Assets Index and the Economic Vulnerability Index) and was therefore entitled to be recommended for graduation. Yet Nepal did not meet the income criterion. Because of this, the government of Nepal formally asked the UN not to consider Nepal for graduation from the least developed country (LDC) group in the 2018 triennial review. Nepal argued that graduation would not be sustainable without meeting the per capita income requirement.

What are some of the things to consider? It is doubtful that marginal interventions can disrupt the self-reinforcing dynamics that have kept Nepal in a trap of low growth and high migration. Nepal needs a holistic strategy that would both increase investment and accelerate productivity by introducing the following measures.

1) Climate-friendly development

In seeking to escape its low-income status, Nepal should certainly set a high GDP growth target and foster industrial development. However, development needs to be environmentally sound and sustainable. This means that actions to promote growth and measures to mitigate their externalities need to go hand in hand. Nepal needs to put more emphasis on giving every citizen an equal opportunity to enjoy the benefits of growth. Rather than simply chasing growth targets, the nation should reduce the income disparity between individuals. Ultimately, it is better to have an equally distributed low per capita income than a high per capita income concentrated among a handful of individuals while the rest live in slums.
2) Breaking down policy barriers

Nepal needs to significantly restructure its public investment program to tackle the persistent challenges of low investment and weak productivity and efficiency; strengthen the degree of competition in the domestic market in sectors such as transport, logistics, and telecommunications; reduce the cost of doing business; and steadily integrate the economy with the rest of the world.

3) Renewing existing sources of growth

Reform of agriculture, which accounts for a third of GDP and two-thirds of the workforce, is crucial to further alleviating poverty, improving productivity and releasing new sources of growth for labor.

1) Agriculture in Nepal is characterized by relatively low yields compared with neighboring countries, especially for food grains such as rice and wheat, which occupy most of the cultivable land. Nepal's rice yields, for instance, are lower than in India and Bangladesh, while wheat yields have been consistently lower over the past decade than in India, Bangladesh, and Pakistan.

2) Farmers are diversifying away from grain staples into fruits and vegetables, yet the shift has not yet happened at a larger scale.

3) The main factor inhibiting the growth of agricultural productivity in Nepal is the low degree of technical change and technical efficiency in the Nepalese breadbasket, mainly in the Tarai region.

To address these issues:

1) First and foremost, an integrated national program is needed to tackle the productivity problem.

2) Agricultural interventions in Nepal must take account of variations in productivity factors across different regions.

3) Unlocking financial sector constraints, not only for conventional farmers but also for returning migrants, will be significant for the spread of innovation and the growth of private investment in agricultural enterprises.

4) Increasing the use of fertilizers, through reform of the government's fertilizer subsidy program, would improve the efficiency, productivity and environmental sustainability of Nepal's agricultural production.

4) Building new sources of growth

Unlocking large-scale hydropower investment would be a game changer for Nepal. Not only would it lead to major new investment and improved efficiency, it also has the potential to dramatically raise wages, help partially reverse out-migration, and boost competitiveness in downstream industries. The hydropower potential of Nepal is estimated at 84,000 megawatts (MW), 43,000 MW of which is considered economically viable. At present, under 2 percent of this economically viable potential is being exploited.

5) Investing in people

Nepal is in the midst of a demographic transition. Because of lower birth rates, the proportion of the working-age population is now greater than the proportion that is not of working age. This is the demographic dividend. Investing in the skills of Nepali youth is crucial in order to reap the benefits of the demographic dividend fully. Putting more of Nepal's human resources to effective use is important for a stronger and more sustainable growth path in the future. To improve health outcomes in early childhood, complementary investments are required, especially to address stunting.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9476096034049988, "language": "en", "url": "https://www.energia.org/project/research-area-4-energy-sector-reforms-and-regulation/", "token_count": 885, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.40234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:989fb0d3-8bc5-4706-8c35-bd55883c2bcc>" }
Gender and fossil fuel subsidy reform Attempts to alleviate poverty and achieve universal energy access have often taken the form of fuel subsidies, which are usually focused on specific types of fossil fuels rather than on outcomes. In general, fossil fuel subsidies are ineffective in combating climate change and reducing the harmful side effects of traditional energy. This research project looks at how existing kerosene and liquefied petroleum gas (LPG) subsidies affect the welfare, productivity and empowerment of women in low-income households in Bangladesh, India and Nigeria. The study found that LPG and kerosene subsidies are not working for the poorest households, particularly poor women. Better targeting of support for energy access is needed and possible. - Subsidies disproportionately benefit the wealthier sectors of society. Since the wealthier segments of the population have better access to energy and tend to consume more, they end up using a larger share of energy subsidies. - Subsidies do not guarantee lower fuel prices—and may even create price premiums. Even in systems with official registered prices, households were found to be paying significantly more than the regulated price. In Nigeria, low-income women reported paying between two to six times more than the official price for kerosene. This is likely due to many factors, including challenges in distribution systems, fuel scarcity, smuggling, diversion and governance issues. - Scarcity leads to long queuing for fuels. Fuel shortages lead to long lines, and the longer the fuel collection process becomes, the more likely that it will fall on women. Additionally, where home delivery is not available, fuel gathering often comes at the expense of daily earnings. Subsidies alone are not effective at promoting the transition to cleaner cooking or lighting fuels, especially where ‘freely’ collected biomass is available. That being said, any reforms made to the subsidies system need to be undertaken with care, as price increases to subsidised fuels without any support measures could hurt poor women, especially where they are using subsidised cooking fuels. With improved targeting and changes in focus, subsidies can still be an effective method of making more energy more accessible. - Effective subsidies have to be highly targeted. Many higher-income households report the ability to absorb price increases which implies there is still scope for better targeting. Targeting subsidies to those that need them most can counteract some of the problems outlined above. - A focus on connection over consumption subsidies can encourage gender empowerment. Connection subsidies could help enable women to make decisions like purchasing new cooking equipment by overcoming upfront connection costs. It is essential to target these campaigns at all genders, as men are often the people making decisions for families on energy use. - A gender focus can improve targeting and contribute to empowerment. For example, India’s PMUY scheme can only be used by female beneficiaries. This plays a positive role in encouraging women to pursue financial inclusion and in bolstering their voice and agency on household energy choices. Improved subsidies need to be introduced with a more holistic plan for clean energy adoption. 
Research found that switching fuels is influenced not only by fuel affordability and consumption subsidies but also by other factors, such as the level of education of women (Nigeria), a focus on upfront costs (India), and potentially who makes household energy decisions (mostly men in Bangladesh, mostly women for cooking in India and Nigeria). Subsidies for modern energy must be coupled with education and information so people understand how these fuels can benefit their health, income, and status, or the subsidies risk being unsuccessful.

Recommendations from the report

- Continue to phase out fossil fuel subsidies that do not support energy access for poor women or the target population. In particular, phase out subsidies for kerosene, which is prone to large-scale diversion, is more costly than other lighting alternatives, and is not clean-burning.
- Make subsidies more technology-neutral, to avoid technology lock-in by fostering solutions adapted to the context. This should include not only focusing access policies on transitional fossil fuels but also ensuring that the right market incentives and structures are in place to cultivate new and renewable lighting and cooking technology.
- Recognise that subsidy reform needs to be undertaken extremely carefully, alongside a robust package of measures to mitigate the negative impacts of price increases.
- Use comprehensive strategies for energy access that recognise the importance of gender and incorporate it into policy design.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9344712495803833, "language": "en", "url": "https://www.eos-oes.eu/en/news.php?id=1985", "token_count": 134, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.10791015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:bbf1ab97-4cc1-4075-968a-1dc68c6d51c0>" }
Indicator Assessment: Total greenhouse gas emission trends and projections in Europe
Greenhouse gas emissions in the EU-27 decreased by 24 % between 1990 and 2019, exceeding the target of a 20 % reduction from 1990 levels by 2020. By 2030, the projections based on current and planned measures of the EU-27 show an emission reduction of 36 %, which is a rather conservative outlook in the absence of new measures. Further effort will certainly be necessary with a view to achieving climate neutrality by 2050 and the proposed increased milestone target of a 55 % reduction by 2030 (compared with 1990 and including removals).
Source: European Environment Agency
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9675317406654358, "language": "en", "url": "https://www.fxcm.com/eu/insights/us-treasury-securities/", "token_count": 766, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:150bf3d2-136e-4f4d-ad0a-89a12e0cb163>" }
The U.S. Department of the Treasury is the largest issuer of bonds in the world. It issues debt securities in order to fund the activities of the U.S. government, which generally runs high budget deficits. As of 31 December 2018, the U.S. federal debt totaled slightly below US$22 trillion. Of that amount, $16.1 trillion was held by the public and $5.9 trillion was held by other entities of the federal government, mostly by the Social Security Administration. To fund government operations, the Treasury sells securities with a variety of maturities, from four weeks to 30 years, on a regularly scheduled basis. All of these securities are sold at auction, with the lowest rates winning the bid. Investors can buy new securities directly from the Treasury through its TreasuryDirect program or through a bank or brokerage firm. Outstanding securities can be purchased through a broker. Investors can also own Treasury securities through the many mutual funds and exchange-traded funds that invest in them. In 2018, the Treasury sold US$10.2 trillion of securities through 284 public auctions. Bills are Treasury securities that mature in one year or less. Each week the Treasury sells four-, eight-, 13- and 26-week bills. They're all sold in denominations of $1,000, and interest and principal are paid at maturity. T-bills, as they are commonly known, are sold at a discount and investors receive the full face value at maturity. The difference between the discounted issue price and the price at maturity represents the interest. Notes are securities that mature in one to 10 years. Interest is paid every six months. Each month the Treasury sells two-, three-, five- and seven-year notes. In addition, it sells 10-year notes every quarter. It also "reopens" the 10-year note the other eight months of the year. The 10-year note is considered to be the Treasury's long-term "benchmark," as many other types of loans, including residential mortgage rates, are pegged against it. Likewise, U.S. corporate bonds are usually priced against the rate on the 10-year note. The Treasury also holds quarterly auctions of 30-year bonds, its longest maturity security. Likewise, the Treasury reopens the security the other eight months of the year. In addition, the government holds auctions of two-year floating-rate notes every quarter. The interest rate on the notes changes quarterly based on the discount rate at the Treasury's 13-week bill auction. The Treasury also sells Treasury Inflation-Protected Securities (TIPS) that are designed to protect against inflation, as the name suggests. According to the Treasury, the principal value of a TIPS rises with inflation and falls with deflation, as measured by the Consumer Price Index. So, if inflation rises during the term, investors receive more than they paid for the TIPS at maturity. TIPS pay a fixed rate of interest twice a year. TIPS are sold in five-, 10-, and 30-year maturities. The five- and 30-year TIPS are sold once a year while the 10-year TIPS are auctioned every six months. All are subject to reopening at other times during the year. The U.S. Treasury Department funds government operations through the sale of debt. As of early 2019, about US$22 trillion of debt is outstanding, with about US$16.1 trillion held by the public and the remaining US$5.9 trillion held by other federal government agencies. The Treasury sells a variety of securities throughout the year, including bills, notes and bonds. 
The government also sells floating-rate notes and securities that are designed to protect investors from inflation.
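Because T-bills are quoted on a discount basis, the arithmetic is easier to see worked out. Below is a minimal Python sketch of the standard bank-discount pricing convention (a 360-day year for the price, a 365-day year for the investment yield); the 2.4% rate and 182-day term are assumed purely for illustration, not actual auction results.

def tbill_price(face_value, discount_rate, days_to_maturity):
    """Purchase price under the bank-discount convention (360-day year)."""
    return face_value * (1 - discount_rate * days_to_maturity / 360)

def bond_equivalent_yield(face_value, price, days_to_maturity):
    """Investment yield based on the price actually paid (365-day year)."""
    return (face_value - price) / price * 365 / days_to_maturity

face = 1_000.00   # T-bills are sold in $1,000 denominations
rate = 0.024      # assumed 2.4% discount rate -- hypothetical, not market data
days = 182        # roughly a 26-week bill

price = tbill_price(face, rate, days)
print(f"Purchase price: ${price:,.2f}")                # about $987.87
print(f"Interest at maturity: ${face - price:,.2f}")   # the 'discount'
print(f"Bond-equivalent yield: {bond_equivalent_yield(face, price, days):.4%}")

Run with these inputs, the sketch also shows why a bill's quoted discount rate understates the true yield: the interest is earned on the discounted price paid, not on the face value received at maturity.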
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9508044123649597, "language": "en", "url": "https://www.twai.it/journal/tnote-99/", "token_count": 1500, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2041015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:53732827-275e-4018-b4c0-078e4456fec3>" }
A Special Economic Zone (SEZ) is commonly defined as a specific geographical area within a country with an advantageous business and investment environment. SEZs may vary in type or scale, but they are often used as a policy tool for economic development. These economic zones have not only significantly increased in number over recent decades but have also evolved in different respects, such as objectives, zone configuration, ownership, incentives, and economic activities. The Philippines is one of the many countries in Southeast Asia that have extensively pursued economic zone development. In January 2020, there were 407 operating zones and an additional 144 zones were being developed. As in the experiences of many countries, economic zones in the Philippines have undergone significant transformation since their inception. The evolution of economic zones in the Philippines can be classified into three phases: government-led (1970–1994); private-sector-driven (1995–1999); and IT-industry-centred (2000–present). The first phase of economic zone development (1970–1994) was characterized by government-led Export Processing Zones (EPZs), which had begun in the late 1960s. Two key government decrees paved the way for the creation of EPZs: the Republic Act No. 5490 of 1969, which established the Bataan Export Processing Zone (BEPZ), the first EPZ in the Philippines; and Presidential Decree No. 66 of 1972, which created the Export Processing Zone Authority (EPZA), the government agency in charge of managing the zones. Additional public zones were created in the 1980s: the Baguio City Export Processing Zone (BCEPZ), the Cavite Export Processing Zone (CEPZ) and the Mactan Export Processing Zone (MEPZ). Production within these EPZs is mainly for export; however, given the special permissions from the EPZA, 30% of total production can be sold to the domestic market. Like many first-generation EPZs, the EPZs in the Philippines were heavily funded by the government and foreign loans. Unfortunately, in some cases these zones generated more costs than benefits. The BEPZ was widely considered a disappointment as massive investments failed to deliver the projected gains. Meanwhile, government efforts towards export-oriented industrialization intensified in the 1990s. Under the Medium-Term Regional Development Plan for Region IV (1989–1992), massive infrastructure projects such as superhighways, ports and industrial estates were constructed in the five Southern Luzon provinces of Cavite, Laguna, Batangas, Rizal and Quezon (CALABARZON). This project enabled these provinces to attract the bulk of the economic zone investment and eventually to become one of the biggest industrial clusters in the country. In the same period, US military bases were closed, and those in Subic, Clark and Fort Bonifacio were converted into SEZs through the Republic Act No. 7227 or the Bases Conversion and Development Act (BCDA) of 1992. The second phase of economic zone development (1995–1999) focused on liberalization and the involvement of the private sector. The Republic Act No. 7916, also known as the Special Economic Zone Act of 1995, enabled the private sector to participate in the development and management of economic zones. The Act also expanded incentives to firms engaging in non-export activities such as commercial/trade services, utilities and facilities, and real estate. 
The Act also established the Philippine Economic Zone Authority (PEZA), a government agency tasked with promoting investment, providing assistance and facilitating incentives for investors or firms within the SEZs. The participation of the private sector and the provision of incentives to non-export activities marked the advent of massive economic zone development, which would further intensify in subsequent decades. In this period, the number of economic zones increased to thirty-two and, consequently, the number of firms rose from 38 to 382. Furthermore, this period also redefined the role of government in economic zone development from primary developer and operator to overseer and regulator. The third phase of economic zone development (2000–present) has been anchored in the development of the IT industry. Since the turn of the millennium, the potential of the IT industry, particularly the business process outsourcing (BPO) industry, has become noticeable. Private sector actors, mostly real estate developers, have actively ventured into the BPO industry and campaigned for government support. In 2000 the government promptly responded by amending the Special Economic Zone Act to provide incentives to IT parks and centres, firms, and facilities providers. Many of these IT parks and centres began to cluster in Metro Manila, which has well-developed business centres and an abundant supply of educated workers. Subsequent board resolutions were also approved to establish additional types of economic zones and activities: Tourism Economic Zones (2002), Medical Tourism Parks/Centers (2006), Retirement Ecozone Parks/Centers (2006) and Agro-Industrial Economic Zones (2007). The expansion of incentives to the IT services industry led to an unprecedented growth in economic zone development. The number of economic zones almost doubled from the 207 zones developed in the period 2000–2009 to 407 zones in 2020. The number of firms surged to more than 3,700 firms in 2020. The inclusion of the IT industry also triggered a remarkable shift in economic activity within economic zones. Up until the 1990s, firms were mostly engaged in manufacturing activities and the production of intermediate goods. From 2000 onwards, the majority of firms have been involved in service-based and knowledge-based activities such as BPO, software development, call centres, real estate, and warehousing. This change in the industrial composition of activities within economic zones reflects the overall structural transformation that has occurred in the Philippine economy. The service sector has consistently accounted for around 50% of total economic output since the 1990s. It is clear that economic zones in the Philippines have continuously evolved over time, and these changes are evident in specific areas such as configuration, objectives, management, and some types of economic activities. The first generation of economic zones are EPZs that are managed exclusively by the government. The new generation of economic zones have varying configurations, but the huge majority are involved with the IT industry and are operated by the private sector. The shift from government-controlled and export-oriented EPZs to private-led zones with a wide spectrum of activities has two significant consequences: a rapid increase in the number of economic zones and firms; and a change in industrial composition from manufacturing-based activities to service-based activities. 
Arianne Dumayas is an Assistant Professor at the Faculty of Global Management at Chuo University.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9518525004386902, "language": "en", "url": "https://asia.nikkei.com/Economy/Asia-leads-the-charge-in-growth-of-renewable-energy", "token_count": 1302, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0306396484375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:bbef8d4d-45f4-499e-9159-f50c1ae9cb80>" }
TOKYO -- In 2017, Asia accounted for nearly two-thirds of the worldwide increase in renewable energy generating capacity, according to a report published in April by the International Renewable Energy Agency. As the economies of Asia develop, demand for energy is rising. Governments have focused more attention recently on renewable energy -- wind, solar, bioenergy, geothermal and hydropower -- due to concerns over security of supply, price volatility and environmental issues. IRENA, an intergovernmental organization based in Abu Dhabi, reported that global renewable energy capacity in 2017 was 2,179 gigawatts -- greater than the capacity of the world's coal-powered plants, and approximately eight times Japan's entire energy generation capacity -- an increase of 8% compared with the previous year. For Asia as a whole, including Central Asia, renewable energy capacity has nearly doubled over the past five years, reaching 918GW in 2017. China and India were the biggest contributors to the increase. China alone accounted for nearly half the growth in worldwide renewable power generating capacity.
The country now has around 36 times more solar capacity than it did five years ago. China today has 130GW of solar capacity, exceeding the government's target for 2020. Its hydropower capacity has risen 36% since 2012. To battle air pollution, particularly from burning coal, China began offering higher prices for solar electricity in 2013. This has spurred investment in solar plants. In December last year, local media reported that the world's largest floating solar power station, with a capacity of 150 megawatts, went online in Anhui Province, west of Shanghai. China is the undisputed leader in solar cell manufacturing. According to the Paris-based International Energy Agency, Chinese companies make up around 60% of the world's annual solar cell manufacturing.
India is also diving into renewable power. Its generating capacity rose 18% last year, the largest increase since records began in 2001. Overall, India contributed 10% to the global growth in renewable energy capacity in 2017. The country's solar energy capacity has almost doubled since 2016, reaching 19GW. Japan's SoftBank Group aims to build renewable energy plants in India with a total capacity of 20GW. Its 350MW solar plant in the state of Andhra Pradesh, in southeastern India, began operating commercially last April. Indian solar power producer Azure Power announced the opening of a number of solar plants in 2017, including a 130MW project in the southwestern state of Karnataka. The country's wind energy capacity also rose 15% to nearly 33GW. The growth is largely credited to the leadership of Prime Minister Narendra Modi, who laid out a target to raise solar power capacity from just five gigawatts in 2015 to 100 gigawatts in 2022. "Nobody believed it at the time, but it now seems possible," said Yasushi Ninomiya, senior researcher at the Institute of Energy Economics, Japan.
Asia's third-largest producer of renewable energy is Japan, with a total capacity of 82GW, rising 7GW last year. Solar energy accounted for 96% of the increase. Hydropower is driving renewable energy growth in Vietnam, Asia's fourth-largest producer, with about 18GW of capacity. The country's abundant water resources, including the Mekong River, mean hydropower accounts for around a third of Vietnam's total electricity production.
Last year, the country's largest electricity company, Electricity of Vietnam Group, brought the 260MW Trung Son Hydropower Plant and the 75MW Thac Mo Hydropower Expansion Plant online. Hydropower has been controversial for its negative impact on surrounding ecosystems. It is also vulnerable to climate change, as power generation is affected by precipitation. According to Ninomiya: "Hydropower plants are increasing in Vietnam contrary to the global trend, because they are easy to install and their energy generation is stable."
South Korea's capacity is relatively small despite its economic size. According to Nobuhiko Ishii, consultant at Mizuho Information and Research Institute, this is due to a lack of policy to stimulate the market and the absence of strong domestic suppliers to make renewable power facilities. The outlook is positive, however, with new government direction released last year to reduce the country's reliance on coal and nuclear power plants. "There are more local suppliers than before, and the country is strengthening its effort to increase renewable capacity," Ishii said.
The growth rate for renewable energy was rapid in Mongolia and Cambodia, albeit from a low base. Mongolia's renewable capacity nearly doubled last year, reaching 155MW. The country installed its first large solar power plant, with a capacity of 10MW, in Darkhan, in the north of the country, in January 2017. Its second wind farm, the 50MW Tsetsii Wind Farm, also opened in October of that year. Mongolia struggles to provide sufficient heat and electricity to its population, especially in rural areas, and renewable energy projects are seen as a possible solution. According to Mongolia's Ministry of Energy, the country has a potential wind capacity of 1,100GW.
By category, 53% of global renewable generating capacity comes from hydropower. Wind power accounts for 24%, with solar power in third place at 18%. Solar energy was the fastest-growing type of renewable energy last year, with capacity rising 32%. Solar power plants are relatively easy to install and to operate compared with other types of renewable energy. Solar was followed by wind energy, which rose 10% in capacity terms. As the number and size of projects have risen, increases in solar cell production have driven costs down. According to IRENA, the levelized cost of electricity -- which compares the cost of producing electricity in different ways -- fell for photovoltaic solar cells by 73% between 2010 and 2017. Both solar and onshore wind power are now cost-competitive with fossil fuels. As renewable power generation is often unstable, there may be more demand for ways to manage produced energy. "Managing grid and various alternatives such as hydrogen and batteries, requires high level of technology," Ninomiya said.
Growing global environmental concerns and pressure from investors have prompted multinational banks such as ING, BNP Paribas and HSBC to scale back or halt funding for coal power projects. The desire for greater energy self-sufficiency in Asia, in addition to higher demand, is certain to light up the renewable energy industry for years to come.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9080737233161926, "language": "en", "url": "https://geneticliteracyproject.org/2018/10/31/biotech-firms-can-earn-consumer-trust-by-donating-gmo-seeds-to-hungry-countries-study-shows/", "token_count": 186, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2021484375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9a555da0-b528-4fe6-aaca-91fc8a2cfe31>" }
New research from the University of Nebraska–Lincoln shows that agricultural biotechnology companies can do well by doing good. Agricultural economists Konstantinos Giannakas and Amalia Yiannaka found that companies can profit by lowering the price of genetic-modification technology in hunger-stricken areas when consumers associate this technology with reducing malnutrition and hunger. “When a company develops a new innovation, such as a new seed trait, a common assumption is that the company should exercise market power in order to maximize profits,” said Giannakas, Harold W. Eberhard Distinguished Professor of Agricultural Economics. “However, our research shows that the company can actually profit by giving away its technology to hunger-stricken areas.” Read full, original article: RESEARCH SHOWS FIGHTING WORLD HUNGER CAN BE PROFITABLE FOR AG BIOTECH FIRMS
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9582297801971436, "language": "en", "url": "https://www.jeannineflynnlaw.com/trusts/", "token_count": 451, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.287109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b2ae90ce-0f72-448a-97bb-78c64e20e6e6>" }
What is a trust? A trust is simply a separation of the legal title of assets from the beneficial enjoyment of those assets. For example, the legal title of the asset is held in the name of a "trustee", who is the person that manages the property held in a trust. The "beneficiary" is the person who is entitled to the beneficial enjoyment of the assets held in the trust, such as distributions of money for a particular purpose, receiving funds to obtain an education, or just paying for routine expenses. This means that the trustee is managing the trust assets for the benefit of the beneficiary, which creates a fiduciary relationship for the trustee in favor of the beneficiary. The trustee has the highest duty under the law to act in the best interest of the beneficiary.
Trusts as a part of your estate plan
Many people think of trusts as an estate planning tool of the very wealthy. Trusts, however, are used for a number of purposes. They can be created during the lifetime of the person who creates the trust (an inter vivos trust, created by the settlor, grantor or trustor); or a trust can be created at death (a testamentary trust), meaning that it is created by a person's Will. Trusts are very useful estate planning tools because they can be used for many purposes, including:
- Protection from creditors, often protection of separate property assets whenever a dissolution of marriage occurs
- Avoiding the need to create a guardianship for a minor or incapacitated adult by using a testamentary trust, or when that is unavailable, by creating a Section 1301 management trust via the Texas Estates Code
- To qualify an otherwise ineligible person to receive public assistance by creating a Medicaid Income Trust (sometimes called a "Miller Trust"), or a special needs trust, sometimes called a Supplemental Needs Trust
- To provide for distributions to multiple generations of family members over a lengthy period of time
- To make charitable gifts using complex estate planning techniques
Jeannine C. Flynn has the knowledge and the experience to assist you in incorporating the right trust into your estate plan to achieve your overall goal.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9588642716407776, "language": "en", "url": "https://www.onupkeep.com/maintenance-glossary/theory-of-constraints/", "token_count": 3576, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0203857421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:40fdc25a-32c2-4e6e-b6e2-fb2bbd7f4b23>" }
Theory of Constraints
What Is the Theory of Constraints?
Answered March 05 2020
The theory of constraints looks at what's holding back an objective in manufacturing from achievement and supercharges a team to make the necessary changes to revitalize progress. For example, Mazda applied the theory of constraints to transform its business to meet the needs of consumers. Targeting those impacted by financial losses due to the recession, Mazda wanted to provide quality products that could be affordable. Mazda then developed advanced technology that provided low fuel consumption from an internal combustion engine that would rival a hybrid engine.
Some struggles stay the same for companies, no matter what industry they are in, where they're located, and who is working for them. These fundamental needs spawn most of the so-called "management fads" that populate many different websites, news feeds, and other sources of information. While management fad is a term that is generally used in a derogatory sense, everything that is labeled this way has a basis in reality and the potential to change a company in a real, fundamental way. One of the best recent ideas that has been gaining traction is the theory of constraints. In this article we will cover:
- The goal of the theory of constraints
- Application of the theory of constraints
- The theory of constraints five focusing steps
- Pros and cons of the theory of constraints
- Examples of the theory of constraints
- Implementing the theory of constraints in your business
What Is the Theory of Constraints?
The theory of constraints is a management strategy that focuses on identifying the most important limiting factor (the "constraint") and improving it until it no longer limits the system. This factor may also be referred to as a bottleneck, particularly in manufacturing. The process is based on the philosophy that all complex systems are made of multiple activities that are all linked together, and that this chain depends on each link, one of which is the bottleneck/constraint/weak link. When that constraint is resolved, the entire process, according to the theory of constraints, becomes streamlined, more efficient, and more reliable. In a company-wide strategy, the focus moves from link to link until all links are streamlined and efficient. Instead of eliminating waste, the theory of constraints works from an increasing-sales point of view. This viewpoint directly impacts the goal and the applications of the theory of constraints.
The goal of the theory of constraints
The simplest definition of the goal of the theory of constraints is profit, both long-term and short-term. In this process, multiple sub-processes such as the Five Focusing Steps and Throughput Accounting are used in order to discover, improve, and eliminate the different constraints that companies face. In order to achieve this goal, the theory of constraints rests on prioritizing improvement activities above everything else. The top constraint is always the problem that must be solved first.
Typical theory of constraints applications
The theory of constraints can be used in a lot of different applications across the board. Some common ones include:
- Project management
- Production management
- Sales and marketing
- Supply chain management
When many people first discover the theory of constraints, they think it can only be applied to plant management and other applications mentioned in the theory of constraints founding document, The Goal, by Dr. Eliyahu Goldratt.
While those are common examples given in the book, the process can be applied to many different things. Here's a look at that process, its core principles, and some of the pros and cons of this system.
The theory's processes
Unlike other processes, the theory of constraints rests primarily on its core principles. These principles, in turn, are a set of processes that focus on the constraints in question. The biggest pillars are the Five Focusing Steps and Throughput Accounting. Understanding these pillars enables companies to understand the processes behind implementing the theory of constraints.
The Five Focusing Steps
These five steps are the foundation for identifying and removing constraints in this methodology. It's a cyclical process that can start at one specific point: identifying the constraint. The Five Focusing Steps are:
- Identify the constraint,
- Exploit the constraint,
- Subordinate everything else to the constraint,
- Elevate the constraint,
- and Repeat.
This is a continuous cycle that is applied across all assets and constraints when companies commit to using the theory of constraints as a management strategy. In other cases, it may be a one-off done to a particular asset to test the viability of the strategy for a company's needs. In either case, the five focusing steps are a great beginning to a cohesive theory of constraints strategy.
The thought processes
The thought processes mentioned here are a fancy way of referring to three questions that must be answered when you're using the theory of constraints. These are:
- What needs to be changed?
- What should it be changed to?
- What actions will accomplish this?
In some cases, companies have used different, smaller strategies to answer these questions. More common strategies used include decision trees, diagrams, and other methods of sorting through data.
The final pillar of this system is throughput accounting. Briefly put, throughput accounting is an alternative accounting system that prioritizes eliminating traditional issues that plague accounting, such as accumulating too much inventory and so-called "paper profits." Three different metrics are used to determine how well companies are using throughput accounting. The first one is throughput, which is the rate at which customer sales generate money, minus the fully variable costs of those sales. These variable costs do not include labor, unless it can be fully tied to produced inventory. The second metric is investment, that is, money that is locked into physical things. And the last metric is operating expenses, which are fairly straightforward. In the theory of constraints schema, most management decisions are based on increasing throughput, reducing investment, and reducing operating expenses, in that order. The majority of the effort goes into increasing throughput.
Putting it all together
The end goal of all of these systems, rules, and cycles is to focus less on cutting expenses and more on creating sales. That's the essence of the theory of constraints. While it may seem complicated, especially when reading about this theory, the core idea is one that companies have always had in some shape or form. You can either cut expenses or increase sales. Ideally, you would do both. The theory of constraints puts increasing sales first and foremost. That being said, every company is wildly different and not all strategies work in all places. Let's look at the pros and cons of the theory of constraints and where it would work best.
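Before weighing those pros and cons, here is a minimal Python sketch of the throughput accounting arithmetic described above. Every figure is invented for illustration, and the net-profit and return-on-investment lines reflect the usual throughput-accounting relationships rather than any particular company's books.

# Toy throughput accounting figures -- every number here is hypothetical.
sales_revenue = 500_000            # money generated through sales
totally_variable_costs = 180_000   # e.g., raw materials tied directly to units sold
investment = 750_000               # money locked into inventory, equipment, buildings
operating_expenses = 220_000       # money spent turning investment into throughput

throughput = sales_revenue - totally_variable_costs   # 320,000
net_profit = throughput - operating_expenses          # 100,000
return_on_investment = net_profit / investment        # about 13.3%

print(f"Throughput: {throughput:,}")
print(f"Net profit: {net_profit:,}")
print(f"Return on investment: {return_on_investment:.1%}")

The ordering of priorities in the text maps directly onto these lines: raising throughput lifts every figure below it, which is why it gets the majority of the effort.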
Pros and cons of the theory of constraints
Before we get started, it's important to realize that these pros and cons may not be applicable in all situations. Also, the theory of constraints methodology is not always at fault and may not be the problem. With that being said, here are the typical pros and cons of the theory.
Pros
There's one thing that the theory of constraints is best at, and that is improving communication between departments. By its very nature, this strategy depends on different departments and sections of a company talking together to determine where the constraints are and working together on solving them. For companies that are struggling with siloing, disconnection, and other communication issues, the theory of constraints may be a great help.
This process also excels at improving short-run capacity decisions, understanding and editing older company processes, and avoiding localized department optimization. The focus is on the company as a whole, as opposed to improving a department at a time. It takes a very organic approach to the events that happen on a day-to-day basis on a production floor and in the office at the same time.
Finally, it's something new. Sometimes organizations need a new strategy to shake them from top to bottom and to get them back on track. The theory of constraints is one such strategy that is very good at revitalizing large organizations and getting them all to work together again.
Cons
For every pro, there's generally a con behind it. Improved communication between departments and a heavy focus on constrained areas can lead to neglect of areas that do not face constraints. This focused concentration can also center around problem areas and forget about areas that are in need of reward or recognition.
This methodology also promotes a desire to reduce capacity and production. Because there is so much focus on trimming down excess inventory, it's tempting to simply stop making things in order to reach the new, lower quota. While the theory of constraints focuses on selling all of your inventory as a method of reducing excess, there's always the reality that there's another way: cutting back. And that needs to be avoided.
The theory of constraints is also a short-term, rapid solution to constraints. If not carefully watched, the long term is easy to forget about when companies start using these strategies. Because the long term will become the short term quicker than anybody cares to admit, these tendencies do negatively impact the company. Examples include neglecting to release new products, paying less attention to research and development, and overlooking overall long-term improvements.
All of these aspects, both positive and negative, have a place and time. Some companies will find a lot of good comes from the theory of constraints. Others will have a harder time leveraging this method. What's great about this system, in particular, is that it doesn't have to be a company-wide change. Try it out in a small department or integral piece and see what it may offer you!
Theory of Constraints Examples
Now let's move on to some examples of the theory in practice. Here, it's split up by industry, but a few general things apply across the board. First of all, the processes outlined above are the same no matter what the industry. The application is what differs.
It's also important to note that the theory of constraints works at its best in different situations depending on the industry. For example, manufacturing plants may see the most improvement when they commit to the theory of constraints if they apply it first to their production lines, machinery, and other common sources of bottlenecks.
On the other hand, a maintenance company may see the most improvement when it applies these practices to paperwork, regulations, government guidelines, and other industry-specific constraints. Here are examples of the theory of constraints in the manufacturing, maintenance and healthcare industries to get you started.
Manufacturing
It's not much of a surprise that manufacturing as a whole has seen great results from the theory of constraints. It's particularly useful when applied to large assets, production lines, and other areas crucial to the company's success. However, these same areas are very prone to bottlenecks, such as equipment downtime.
One notable example is Dr. Reddy's Laboratories in 2014. Facing down a significant number of backorders and low supplier ratings, they needed a change. The theory of constraints enabled them to boost their ratings, fill their backorders, and eventually paved the way to them receiving a recent best supplier award. They continue to use it to this day.
Maintenance
Maintenance companies don't rely as much as manufacturers on heavy assets and production lines. However, they do have to balance the needs of many different facilities at the same time. They may also be a department within a larger company, with their own goals and methods that are not necessarily the same as other departments' and that don't have a highly visible, direct impact on the overall company.
The theory of constraints bridges the gap between a maintenance department and the rest of the company. For maintenance-oriented companies, it prevents siloing, improves information transfer, and cuts through the fluffy tasks that are done simply because someone said that they ought to be.
Healthcare
Healthcare industry companies of all sorts struggle with the mountains of paperwork that they must manage, sort, and utilize on a regular basis. That's a whole lot of constraints right there!
The theory of constraints is a great way to sort out what has to take time from what is taking time. This can be invaluable when it comes to paperwork and all of its attendant problems. Depending on what part of the healthcare industry companies are in, they may find additional benefits, such as the increased communication and connection the system offers. It's a flexible system that can be adjusted to a lot of different needs, just like healthcare.
Implementing the theory of constraints in your business
The best way to implement the theory of constraints in any given business is to start with the five focusing steps. It's a straightforward cycle that can be easily scaled up or down according to your needs. The cyclical nature of these steps makes it so that there's no need for huge, overwhelming company-wide change, yet if the system works, it's easy to scale it up. That being said, what are some simple steps for those who do not want to, or cannot, freestyle such a change on their own?
Walking through the different steps
The theory of constraints is one of the easier strategies to implement because it is so clearly outlined in the five focusing steps and in throughput accounting. Here are some questions to get you started on the five steps.
- Identify: What is the current bottleneck or constraint that we are struggling with the most?
- Exploit: What are some quick improvements you could make today, with the resources at hand? What are some improvements that will take a longer time?
- Subordinate: What can we stop doing in order to make those improvements?
- Elevate: Is the problem the first priority, or are other things getting those resources, time and effort? Are we elevating the issue in question above everything else, or is it still an afterthought or nuisance that we have to deal with?
- Repeat: What's the next constraint on the list?
That's all you need to get started.
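As a concrete illustration of the Identify step, here is a small Python sketch that finds the constraint in a simple production line by comparing each station's hourly capacity. The stations and their rates are entirely hypothetical.

# Hypothetical hourly capacities for a four-station production line.
# The station with the lowest capacity limits the whole line -- that is the constraint.
stations = {
    "cutting": 120,    # units per hour
    "welding": 45,
    "painting": 90,
    "packing": 110,
}

constraint = min(stations, key=stations.get)
line_capacity = stations[constraint]

print(f"Constraint: {constraint} at {line_capacity} units/hour")
# Anything the other stations produce beyond 45 units/hour just piles up as
# work-in-progress, which is why Exploit, Subordinate, and Elevate all focus
# on the welding station until it is no longer the slowest link.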
What else do you need for a smooth transition to using the theory of constraints on a department or company level?
Getting buy-in from other managers and departments
After the initial testing phase/tryout, you may want to scale the theory of constraints to fit your overall department or company needs. This is highly dependent on raising awareness and getting support from other managers and leadership throughout the company. Some things to keep in mind during this process include:
- Talk to the people who know your assets best. They'll know where the bottlenecks are, and they will tell you if you give them an opportunity!
- Make sure that people understand what the theory of constraints is. A lot of people look at something new and think that it's going to create more work down the line. This isn't always the case and should be made clear from the start.
- Understand that changing the way people think takes time. A lot of time is standard for most new management strategies.
- Finally, it may not be for you. The theory of constraints is a great strategy for many companies, but it may not work for your needs. One of the most common mistakes during this whole process is getting too invested in a theory or strategy that does not work for you. It's a good thing to keep in mind as you work to create success.
Pulling it all together
After the groundwork has been laid, it's time for everything to work out--at least in theory. What are some troubleshooting guides for the most common problems that arise during the implementation of the theory of constraints?
First of all, the entire theory rests on the five focusing steps and throughput accounting. They should be the first two areas that you check when things start going wrong. In general, if one of the steps is forgotten, it will affect all the rest of the steps.
Next, throughput accounting is not a complete substitution for regular accounting. Both need to be present and monitored in your company, even after you have committed to the theory of constraints.
Finally, if something's not working, investigate it and go back to the basics. Have each of the five steps been accomplished? Are you treating the steps as a cycle? Are you aware of all of the constraints inherent to this theory and acting accordingly? Many times, these small bumps seem much bigger than they actually are. As with most strategies, the important thing is to keep going.
Conclusion
The theory of constraints is a great method to revitalize companies and their processes. It focuses on moving quickly through the normal, everyday clutter that drags down employees, managers, and executives by offering solutions to the most immediate problems. This makes it ideal for companies that are older, struggling with tedious processes, or without a cohesive system in place today.
On the other hand, this theory can cause younger companies to sprint too quickly through problems without thinking about the longer term. The same speed that lets companies move quickly can also crowd out attention to the long run. While this may not be a problem in companies where there is a sound infrastructure that simply needs to be dusted off, the theory of constraints should not be relied on as a be-all, end-all solution.
Like most management theories, the theory of constraints has a lot to offer companies today. Unlike many other strategies, it's easy to implement on a small scale in order to see if it works for your needs. Try it out today in tandem with quality computerized maintenance management software and see where it will take you.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9663520455360413, "language": "en", "url": "https://www.thebalance.com/understanding-bid-and-ask-prices-3141317", "token_count": 838, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.057861328125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9e145473-cbdd-4b5d-b41a-8feb3cb36dba>" }
Understanding Bid and Ask Prices in Trading
The stock market functions like an auction where investors—whether individuals, corporations, or governments—buy and trade securities. It's important to know the different options you have for buying and selling, and a large portion of this is understanding bid and ask prices.
Unlike most things that consumers purchase, stock prices are set by both the buyer and the seller. The buyer states how much they're willing to pay for the stock, which represents the bid price, and the seller names their price, known as the ask price. It's the role of the stock exchanges and the whole broker-specialist system to facilitate the coordination of the bid and ask prices—a service that comes with its own expense, which affects the stock's price.
Once you place an order to buy or sell a stock, it gets processed based on a set of rules that determine which trades get executed first. If your main concern is buying or selling the stock as soon as possible, you can place a market order, which means you'll take whatever price the market hands you.
You can see the bid and ask prices for a stock if you have access to the proper online pricing systems, and you'll notice that they are never the same; the ask price is always a little higher than the bid price. You'll pay the ask price if you're buying the stock, and you'll receive the bid price if you are selling the stock. The difference between the bid and ask price is called the spread, and it's kept as a profit by the broker or specialist who is handling the transaction.
In actuality, the bid-ask spread amount goes to pay several fees in addition to the broker's commission. The broker's commission is not the same commission you'd pay to a retail broker.
Certain large firms, called market makers, can set a bid-ask spread by offering to both buy and sell a given stock. For example, a market maker might quote a stock at $20.40/$20.45, where $20.40 represents the price at which the market maker would buy the stock, and $20.45 is the price at which the market maker would sell the stock. The difference, or spread, benefits the market maker because it represents profit to the firm.
Because prices constantly move, especially for actively traded stocks, you can't know what price you'll get in a trade if you're a buyer or a seller unless you use an order type, such as a limit order, that locks in a certain price.
If you want your order placed almost instantly, you can choose to place a market order, which goes to the top of the list of pending trades. The downside is that you accept whatever the prevailing price is: if you submit a market sell order, you'll be filled at the bid (the lower of the two quoted prices), and if you submit a market buy order, you'll pay the ask (the higher quoted price).
Generally, market orders should be avoided when possible; they're best used in situations where you need to buy or sell an investment immediately, and your concern is timing and not price differences.
The Bottom Line
There are ways around the bid-ask spread, but most investors are better off sticking with this established system that works well, even if it does take a little ding out of your profit. If you consider branching out, experiment with a paper-trading account before using real money. Advanced strategies are for seasoned investors, and beginners may find themselves in a worse position than when they began.
This isn't to say that you won't ever get to the point of using them and maybe even excelling with them, but you're probably better off sticking to basic rules when you're starting out and just getting your feet wet.
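To make the spread arithmetic concrete, here is a small Python sketch using the $20.40/$20.45 market-maker quote from the example above; the 100-share order size is assumed for illustration.

# Bid/ask quote from the market-maker example above.
bid, ask = 20.40, 20.45

spread = ask - bid            # the amount kept by the market maker, per share
spread_pct = spread / ask     # spread as a share of the stock's price

shares = 100                  # hypothetical order size
round_trip_cost = spread * shares   # buy at the ask, then sell at the bid

print(f"Spread: ${spread:.2f} per share ({spread_pct:.3%})")
print(f"Round-trip cost on {shares} shares: ${round_trip_cost:.2f}")

Even a nickel-wide spread amounts to $5.00 on a 100-share round trip, which is the "little ding" to your profit mentioned above.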
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9439102411270142, "language": "en", "url": "https://ageconsearch.umn.edu/record/164805", "token_count": 331, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.20703125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c6f57144-ab99-4c95-aae8-7cad5184a84d>" }
Economic development is a continuous, stochastic process, considering that development depends on a multitude of historical, political, economic, cultural, ethnic and other factors. In the process of development, each country puts effort into strengthening its manufacturing potential, increasing the competitiveness of its economy by modernizing technology, and raising the level of education, culture etc. Owing to the accentuated actions of these factors, and different social, economic and other circumstances, polarizations have emerged in regional development, urbanization and so on. Proof of a country's level of economic development can be found in various indicators such as capital equipment; the share of manufacturing, agriculture, and foreign trade; the share of the private sector in total ownership; the development of financial institutions and capital markets; the development and stability of the legal system; the development of transport, telecommunication and other infrastructure; the realized standard of living; the development of democracy and human rights protection; a preserved environment etc. Economies of developing countries, including Montenegro, are usually characterized by low capital equipment and low labor productivity, expensive manufacturing and an insufficient share of world trade, high import dependence, uncompetitiveness, high unemployment, undeveloped entrepreneurship, and undeveloped financial institutions. Polarized countries, in an economic and development sense, are therefore those which are unevenly developed and are constantly faced with highly pronounced problems of disparity in regional development and demographic problems. Solving these problems is a long-term process and necessitates the design of a regional policy that is more efficient than the previous ones, as well as the building of a different procedure for fulfilling the adopted regional policies.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9350694417953491, "language": "en", "url": "https://annualreporting.info/what-are-ifrs-financial-statements/", "token_count": 950, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.025390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:afa3789a-2172-4287-a88c-78df8d04f143>" }
What are IFRS Financial Statements – The objective of financial statements is to provide financial information about the reporting entity’s assets, liabilities, equity, income and expenses that is useful to users of financial statements in assessing the prospects for future net cash inflows to the reporting entity and in assessing management’s stewardship of the entity’s economic resources. A content page of IFRS Financial Statements may look similar to the following content listing:
Statement of Financial Position: This is also known as the balance sheet. IFRS prescribes the ways in which the components of a balance sheet are reported. This statement recognises assets, liabilities and equity. This comprises information about a reporting entity’s economic resources, claims against the entity and changes in resources and claims that result from other events and transactions such as issuing debt and equity instruments.
Statement of Comprehensive Income: This can take the form of one statement, or it can be separated into a statement of income (recognising income and expense realised based on the accrual accounting concept) and a statement of other income (recognising revenues, expenses, gains and losses that are excluded from the statement of income, because they were not yet realised). This comprises information on the changes in economic resources and claims that result from the entity’s financial performance.
Statement of Changes in Equity: In the past also known as a statement of retained earnings, this documents the company’s change in earnings or profit and contributions from holders of equity claims and distributions to them for the given financial period.
Statement of Cash Flow: This report summarizes the company’s financial transactions in the given period as cash flows (changes in cash and cash equivalents) categorised into Operating activities, Investing activities, and Financing activities.
In addition to these main statements, a company must also give a summary of its accounting policies (representing the methods, assumptions and judgments used in estimating the amounts presented and disclosed, and changes in those methods, assumptions and judgments), movement schedules of certain reporting lines in the balance sheet, breakdowns of more details of other reporting lines in the income statement or statement of cash flows. The full report is seen side by side with the previous report, to provide a starting point for users to compare the different statements from year-to-year.
Accrual accounting
Accrual accounting depicts the effects of transactions and other events and circumstances on a reporting entity’s economic resources and claims in the periods in which those effects occur, even if the resulting cash receipts and payments occur in a different period. This is important because information about a reporting entity’s economic resources and claims and changes in its economic resources and claims during a period provides a better basis for assessing the entity’s past and future performance than information solely about cash receipts and payments during that period.
“Show me the money!”
We all remember Cuba Gooding Jr.’s immortal line from the movie Jerry Maguire, “Show me the money!” Well, that’s what financial statements do.
They show you where a company’s money came from, where it went, and where it is now. There are four main financial statements. They are: (1) balance sheets; (2) income statements; (3) cash flow statements; and (4) statements of shareholders’ equity. Balance sheets show what a company owns and what it owes at a fixed point in time. Income statements show how much money a company made and spent over a period of time. Cash flow statements show the exchange of money between a company and the outside world also over a period of time. The fourth financial statement, called a “statement of shareholders’ equity,” shows changes in the interests of the company’s shareholders over time.
See also: Presentation of Financial statements
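To make the accrual accounting passage above concrete, here is a small Python sketch contrasting accrual-basis and cash-basis revenue recognition for a single credit sale. The amounts and dates are invented for illustration.

from datetime import date

# A single credit sale: goods delivered in December 2020, cash collected in January 2021.
sale = {"delivered": date(2020, 12, 15), "paid": date(2021, 1, 20), "amount": 10_000}

def revenue_in_year(transaction, year, basis):
    """Recognise revenue in the year of delivery (accrual) or of payment (cash)."""
    key = "delivered" if basis == "accrual" else "paid"
    return transaction["amount"] if transaction[key].year == year else 0

for basis in ("accrual", "cash"):
    print(basis, "2020:", revenue_in_year(sale, 2020, basis),
          "| 2021:", revenue_in_year(sale, 2021, basis))
# accrual 2020: 10000 | 2021: 0      -> income reported when it is earned
# cash    2020: 0     | 2021: 10000  -> income reported when the cash arrives

Under IFRS, only the accrual row is acceptable for the statement of comprehensive income; the cash row is what the statement of cash flows captures instead.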
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9587241411209106, "language": "en", "url": "https://articles.lifequotes.com/this-march-for-womens-history-month-support-mom-with-life-insurance/", "token_count": 632, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2041015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:21beea0f-9d28-43cd-852c-d3c0051ae23a>" }
This March, for Women’s History Month, support mom and her financial contributions to your family by purchasing life insurance. This includes women working inside the home as well. Far too many individuals underestimate the economic value of stay-at-home moms. Yet these contributions are too often left unprotected by life insurance, according to the Insurance Information Institute (I.I.I.).
The month of March is dedicated to celebrating the contribution of women to events in history and contemporary society. Celebrate the past and future achievements of women by purchasing a life insurance policy to financially protect your family. Whether or not they work, many women contribute to the economic well-being of their family in important ways – from taking care of household tasks to acting as the primary caregiver for children and aging parents.
A national poll recently found that 43 percent of adult women have no life insurance. And among those who are insured, many are underinsured, carrying roughly a quarter of the coverage necessary for their needs. Women now comprise 57 percent of the U.S. labor force, according to a Bureau of Labor Statistics (BLS) survey, yet they carry 31 percent less life insurance than their male counterparts. Though 27 percent of wives are breadwinners, millions of families rely solely on the husband’s life insurance policy, failing to recognize that their finances would be devastated without her income.
A LIMRA survey found that while younger women are now as likely as their male counterparts to have coverage, women ages 55 and older are still considerably less likely than men the same age to own life insurance. And women of all ages have smaller average amounts of individual life insurance coverage than men in equivalent age brackets. On average, women have $129,800 of individual life insurance, compared with men’s $187,000.
“Women’s History Month is an important reminder of how far women have come,” said Loretta Worters, I.I.I. vice president. “One hundred years ago women weren’t even able to buy life insurance; today women hold leadership positions in corporate America, including the insurance industry. So it’s more important than ever that women place a value on their contributions and purchase the right type and amount of life insurance.” In 2014, there were 1.6 million women employed in the insurance sector, accounting for 59.5 percent of the 2.7 million workers in the insurance industry, according to the BLS.
Life insurance can be a good choice for single women with no dependents as well. Women are, as a group, living longer than ever before, and the need for sufficient retirement income is crucial. A cash value life insurance policy, for example, can help accumulate funds on a tax-advantaged basis to supplement other retirement income. Life insurance can also pay for outstanding debt, funeral, burial, probate and estate administration expenses, or be used to leave behind a legacy in the form of a charitable contribution.
Speak to a licensed insurance professional to better understand the life insurance options available.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9394760131835938, "language": "en", "url": "https://commercemates.com/limitations-of-accounting-standards/", "token_count": 975, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.158203125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5af4372f-3442-4dee-a3bc-edf3fcd474cf>" }
What are Accounting Standards?

Accounting standards are the guidelines to be followed in an accounting system: the rules and regulations for recording accounting and financial transactions. They govern the manner in which financial statements are prepared and presented. Their main aims are to bring uniformity and reliability to the whole accounting system. Accounting standards standardize accounting procedures across the economy: all companies that adopt them record transactions in the same manner, which makes the whole accounting system easier for everyone to understand. By establishing clear norms and principles, they also help prevent fraud. Accounting standards are issued by the accounting body of the respective country; in India, the Institute of Chartered Accountants of India formulates and issues them, and accountants and companies follow them when preparing and presenting financial statements.

Benefits of Accounting Standards

Bring uniformity in accounting. Accounting standards play an important role in bringing uniformity to the whole accounting system. They provide standardized rules for the treatment of financial transactions and events, and company financial statements are prepared and presented in the format these standards specify. The result is uniformity across accounting methods.

Avoid fraud and manipulation. Accounting standards focus on preventing fraud and error within an organization. They provide a complete framework that every entity must follow, and all accounting information is recorded and presented in accordance with the prescribed principles. This makes it quite difficult for managers to manipulate the facts or commit fraud.

Enhance reliability of financial statements. Accounting standards make an organization's financial statements more reliable. Following them ensures that the company's financial information is presented fairly and truthfully. Many stakeholders use financial statements as the basis for crucial decisions, and these standards help ensure that the information they rely on is trustworthy.

Help auditors. Accounting standards help auditors verify the correctness of company accounts. Because the rules and regulations are set out in writing, auditors can follow uniform practices and assure the fairness of the accounts by checking whether all prescribed policies have been followed.

Ensure comparability. One important benefit of accounting standards is that they make companies' financial statements comparable. When every entity must follow the same accounting policies, rules, and regulations, comparing performance becomes straightforward: users can evaluate financial statements and compare different companies before making decisions.

Limitations of Accounting Standards

Despite their importance in the accounting system, accounting standards also have certain limitations, discussed below.

Bring inflexibility and rigidity. This is one of the major disadvantages of accounting standards. They establish the principles and rules for every accounting treatment, and every company is required to follow the same principles consistently. All companies must therefore fit themselves into the standards' guidelines, even though companies face different situations and different financial transactions; sometimes it becomes difficult for them to follow the same rules.

Involve high costs. Another disadvantage of following accounting standards is cost. Implementing them can be expensive: a company may need to change its procedures, upgrade its systems, and train its employees accordingly, and it must then monitor whether employees are following the standards correctly. All of these activities require significant spending.

Difficult to choose among alternatives. Accounting standards sometimes provide several options for the treatment of the same item without clearly stating which is most appropriate, and it can be difficult for companies to decide which is best for them. For stock valuation, for example, three alternatives are available: the weighted average, FIFO, and LIFO methods. Choosing the best one is not a trivial task; a short worked comparison follows below.

Restricted scope. Accounting standards are applied in accordance with prevailing laws and statutes and cannot override them. Because they are created and framed within the existing legal framework, their scope is limited.

Time-consuming. Implementing accounting standards requires many steps to be followed in preparing financial reports, which makes the process complex and time-consuming. The standards define each step in the preparation of financial statements, including the trial balance, income statement, and balance sheet, and accountants must comply strictly with them, which makes the work rigid.
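The stock-valuation choice mentioned above is easy to see in code. The sketch below is a hypothetical Python illustration only: the purchase lots, prices, and quantity sold are invented, and no claim is made about which methods a given framework actually permits. It shows that the same sale produces three different cost-of-goods-sold figures under FIFO, LIFO, and weighted average, which is exactly why the choice matters.

```python
# Hypothetical inventory: (units, unit_cost) purchase lots, oldest first.
purchases = [(100, 10.0), (100, 12.0), (100, 15.0)]
units_sold = 150

def fifo_cogs(lots, qty):
    """Cost of goods sold, consuming the oldest lots first."""
    cost = 0.0
    for units, price in lots:
        take = min(units, qty)
        cost += take * price
        qty -= take
        if qty == 0:
            break
    return cost

def lifo_cogs(lots, qty):
    """Cost of goods sold, consuming the newest lots first."""
    return fifo_cogs(list(reversed(lots)), qty)

def weighted_average_cogs(lots, qty):
    """Cost of goods sold at the average cost of all units held."""
    total_units = sum(u for u, _ in lots)
    total_cost = sum(u * p for u, p in lots)
    return qty * (total_cost / total_units)

print(fifo_cogs(purchases, units_sold))              # 1600.0
print(lifo_cogs(purchases, units_sold))              # 2100.0
print(weighted_average_cogs(purchases, units_sold))  # 1850.0
```

With rising purchase prices, FIFO reports the lowest cost of goods sold and therefore the highest profit, LIFO the opposite, and weighted average sits in between, so the "same" transaction can look quite different on paper.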
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9411802887916565, "language": "en", "url": "https://hunterhastings.com/topics/value/", "token_count": 1376, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.01318359375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:96916464-aee3-432c-8982-a280b2b60d9a>" }
Peter Drucker is famous for, among many other pieces of business wisdom, his statement that “there is only one valid definition of business purpose: to create a customer”. That’s a statement with a lot of punch and a lot of clarity. It dismisses all the contemporary alternatives in the debate about the purpose of business firms, such as maximizing shareholder value or sustainability and environmental protection or stakeholder theory. How do firms create customers? Peter Drucker was equally clear on this question: “Because the purpose of business is to create a customer, the business enterprise has two–and only two–basic functions: marketing and innovation. Marketing and innovation produce results; all the rest are costs. Marketing is the distinguishing, unique function of the business.” It’s certainly sound advice to place marketing and innovation at the front and center of business operations. Since 1954, when Drucker’s book, The Practice Of Management, was published, there have been great advances in defining how marketing is conducted and how innovation can be successfully introduced to the market. The most recent advances have come from the field of economics, a discipline that is dissolving the walls that previously existed between it and psychology and cognitive science, and discovering a new understanding of how and why customers make their economic decisions to buy or abstain from buying, to increase or decrease their usage levels, and to maintain or abandon loyalty to a service provider or a brand. The new discoveries concentrate in the phenomenon of value. Business language has embraced value in the past, and shifted its focus from value creation (the idea that value is produced within the firm) to value co-creation (the idea that value is produced jointly in an act of exchange between a service provider and a customer). Now, economics – and specifically that brand of economics known as Austrian economics – has identified that all value is created by the customer. It is the customers’ investment of time and effort and emotional commitment and intent to better their circumstances that creates value. Value emerges in the customer domain. Behind this discovery is a new definitional understanding of value. It is a feeling in the customer’s mind, an experience that’s unique to each customer. Only the customer can have the experience. New research is revealing more about the experience – for example, that it is a learning experience. It takes place over time, beginning with an anticipation or estimate of future value (“what’s in it for me?”), an appraisal of relative value (“is it worth it?”), an exchange experience (the act of buying), a usage experience (the act of using the good or service) and finally an assessment of whether the experience met the expectations of the initial anticipation. The customer is busy and highly engaged in the physical, cognitive and emotional processes of value. Where does all this leave the firm, and their marketing and innovation activities? The new discovery is that the successful firm is a facilitator – rather than a deliverer or creator – of value. There are degrees of facilitation ranging from passive (e.g. making a purchase opportunity available on an e-commerce site) to active (e.g., providing help-desk or personal service in real time when the customer is experiencing product usage), and many in between. The pivot in the shift from value creation to value facilitation is the new role of the value proposition. 
Firms can create new information of which the customer is unaware, such as the development of a new service or the addition of new features to an existing service. Customers want to appraise the potential value represented by new information. They will make the decision, and they give some weight to information from the service provider. The first element of information in a sound value proposition is empathy. The value process begins with the customer’s pursuit of betterment. They give a signal to entrepreneurial innovators that betterment is possible: the signal is dissatisfaction. Customers can create value but they can’t design their own products and services. Their genius is to always want something better. The responsive entrepreneur diagnoses their inarticulate dissatisfaction using a highly tuned sense of empathy. The value proposition communicates to the customer that the entrepreneur expended significant effort at empathic diagnosis. The next element of the value proposition is a promise. While unable to create value, firms and brands can promise that they have worked hard to find a way for their customers to experience value. The value proposition must demonstrate to customers that - You recognize them as individuals. Show evidence. - You understand their current dissatisfaction – reveal your empathic diagnosis. - You offer a credible promise of relief. - You reinforce your offer with reasons-to-believe. Before the customer engages emotionally, they want to engage rationally. - You have a clear statement of benefits that you can demonstrate are greater than the customer’s cost. The customer’s cost includes not just willingness to pay, but also opportunity costs such as inertia, alternatives and value uncertainty. Help them with their economic calculation. The value proposition sets the customer’s value learning process in motion: anticipating, weighing, exchanging, experiencing, assessing. The value proposition is your commitment to the customer that the process will be worthwhile, satisfying, enjoyable, and, ideally, beyond their expectations. And this valuable exercise in making a promise does much more. Through its language, it becomes the culture of your company. Starting from Peter Drucker’s definition of business purpose, every employee, supplier, agent and partner should know their role in creating and retaining a customer. In the language you use to recognize your customer and their dreams and hopes, their individual context and their preferences and desires, you’ll communicate to your organization how to love the customer and develop relationships. In the language you use to describe the customer’s current dissatisfaction, you’ll nurture an empathic organization. In the language you use to make a promise, you will embed commitment to keep it. In the language of credible and rational support for the promise, you’ll cement internal belief in the promise-keeping mission. And in the language of benefits to the customer, you’ll set the standards of customer-facing behavior and customer relationship management for everyone in your firm. Yes, a value proposition is just language. In business strategy, language is all we have to tell each other how we will collaborate around a purpose, to share the tools and tactics we’ll all use, and to communicate the successes and learning opportunities that come from implementation and promise-keeping. And, most importantly, to invite the customer to allow us into their value learning process.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9536773562431335, "language": "en", "url": "https://kilthub.cmu.edu/articles/journal_contribution/The_Effects_of_Financial_Innovation_on_the_Instruments_of_Monetary_Policy/6708437/1", "token_count": 167, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.035400390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:fb5a8b82-78b7-437a-bc15-864ceb3b2fc9>" }
The Effects of Financial Innovation on the Instruments of Monetary Policy Proper understanding of the effects on the instruments of monetary policy of changes in the technology of making and receiving payments has been marred by failure to observe three distinctions. One is the distinction between changes in technology that overcome regulatory and legal restrictions and changes that would occur in the absence of these restrictions. A second distinction is between money and credit, or more properly the distinction between technical changes or innovations that increase borrowing and lending and innovations that change the demand for and supply of money. A third distinction is between the immediate or impact effect on a particular type of institution and the full equilibrium effect on the economy. In this section, I discuss the first of these issues. The following sections distinguish between money and credit and analyze the impact and final effect of innovation on both money and credit.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9269893765449524, "language": "en", "url": "https://www.jdsupra.com/legalnews/the-future-of-carbon-capture-and-65756/", "token_count": 595, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2490234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0ae8f92b-33d1-4077-a818-e8afdbe024d3>" }
By ratifying the 2015 Paris Agreement, nations across the world committed to reducing greenhouse gas emissions by at least 40% by the year 2030. Carbon dioxide is one of the primary greenhouse gases in the Earth's atmosphere, accounting for 76% of global greenhouse gas emissions according to published reports. Any effort to reduce greenhouse gas emissions will therefore rely heavily on reducing the amount of carbon dioxide in the atmosphere.

There are two primary ways to achieve a reduction of CO2: (1) decrease the output of carbon dioxide emissions; or (2) increase the amount of carbon dioxide that is removed from the atmosphere. The latter option is known as carbon capture and sequestration ("CCS"). CCS is the process of capturing atmospheric carbon dioxide and storing (or "sequestering") it in geological formations underground. Carbon capture and sequestration techniques have existed for decades, but the development of specific technologies for CCS has been largely cost prohibitive due to a lack of governmental support in the legislative, regulatory, and financial arenas. However, majors such as Exxon, Chevron, and Shell are joining a broader push to make the requisite technology cheaper and more efficient.

Producers and governments have shown interest in CCS because it allows for the continued use of fossil fuels while reducing net carbon dioxide emissions. "The demand for energy is growing, and the expectations to lower the carbon footprint are increasing," says Barbara Burger, president of Chevron's venture-capital arm. Another reason for producers' interest in CCS is that injecting carbon dioxide underground can release trapped oil. This process, known as enhanced oil recovery, is currently the top use for captured carbon dioxide globally.

A necessary step for wide-scale CCS development and deployment is the creation of a clear legal framework to regulate this new technology. In Texas, for example, case law has not yet settled critical questions regarding real property rights for capture, injection, and storage, such as who owns the right to lease subsurface pore space for carbon storage when the mineral and surface estates have been severed. Additional legal and other issues surrounding CCS include transportation, long-term storage monitoring, migration and leakage liability, monitoring and enforcement of agreements, risk management, competition, taxation, and incentives such as carbon tax credits. Creating effective CCS regulatory systems may be difficult because of the technology's unique features, such as the inherent complexity of long-term storage, but if this framework becomes well established, CCS would likely prove a vital process for nations around the world to meet global environmental goals over the next twenty years.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9458556175231934, "language": "en", "url": "https://blogs.lse.ac.uk/businessreview/2020/06/29/twenty-six-per-cent-of-euuk-workers-risk-losing-their-jobs-in-the-covid-19-crisis/", "token_count": 1177, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1455078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:a89df962-c34c-4275-9132-be7fa0e20955>" }
Many businesses across the world have already laid off their workers, furloughed them, or shortened their work hours. Governments have used a variety of job retention programs, most of them fashioned after the German Kurzarbeit program of the previous Eurozone crisis. These require massive budget allocations and are hardly sustainable over long periods. Differentiated policies are needed, both to protect existing jobs and to create new ones. Recent research shows that only a handful of countries are considering the second path: Portugal, for example, has established a program that supports startups with fewer than five years of business activity through the contracting of incubation services, with an incentive of EUR 1,500. The United Kingdom has created the Coronavirus Future Fund for startups, which is available to recently established companies that have raised at least GBP 250,000 in equity investment in the last five years.

One simple way to estimate the size of job losses in the near term is to combine official data on employment structures with surveys of businesses. For instance, a survey of small businesses in the United States by the Small Business Investor Alliance found that around 20 per cent of the workforce in the wholesale and retail sector had lost jobs by mid-March, and 2 in 3 firms anticipated further layoffs. We combine Eurostat end-2019 sector-level employment figures from 28 countries (the 27 European Union countries plus the United Kingdom) with occupation-level data on industries at risk as a result of physical-distancing policies to calculate the number of jobs at risk as a percentage of total employment. This analysis suggests that the share of workers losing their jobs while social distancing is in place is approximately 26%, with little variation across Europe (Figure 1).

[Figure 1. Jobs at risk as a percentage of total employment. Source: Eurostat 2020. Note: total employment refers to all paid employees and excludes self-employed people.]

This analysis shows a troubling trend that mirrors recent findings on firms' survival times. Using data from 34 middle-income and low-income countries, we find that, under the assumption that firms have no incoming revenues, the median survival time across industries ranges from 6 to 28 weeks. Once collapsed export demand is taken into account, the median survival time falls to between 6 and 18 weeks. Retail is consistently the most cash-constrained sector, while manufacturing is consistently the longest lasting.
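Both estimates above boil down to simple arithmetic on aggregate data. The Python sketch below is a hypothetical reconstruction for illustration only: the sector names, employment figures, at-risk shares, firm-level cash figures, and the cash-runway definition of survival time are all invented assumptions, not the authors' actual data or method.

```python
import statistics

# Hypothetical sector-level inputs (NOT the actual Eurostat/survey data).
# employment: paid employees at end-2019; at_risk_share: assumed share of
# jobs that cannot be done under physical-distancing policies.
sectors = {
    "accommodation_food": {"employment": 9.8e6,  "at_risk_share": 0.70},
    "wholesale_retail":   {"employment": 24.0e6, "at_risk_share": 0.30},
    "manufacturing":      {"employment": 29.5e6, "at_risk_share": 0.10},
}

total_employment = sum(s["employment"] for s in sectors.values())
jobs_at_risk = sum(s["employment"] * s["at_risk_share"] for s in sectors.values())
print(f"Jobs at risk: {jobs_at_risk / total_employment:.1%} of total employment")

# Survival time under zero revenue, defined here as the number of weeks
# a firm's cash buffer covers its weekly costs; median taken across firms.
def median_survival_weeks(cash_buffers, weekly_costs):
    return statistics.median([c / w for c, w in zip(cash_buffers, weekly_costs)])

retail = median_survival_weeks(
    cash_buffers=[30_000, 12_000, 80_000],  # invented firm-level figures
    weekly_costs=[4_000, 2_500, 5_000],
)
print(f"Median retail survival time: {retail:.1f} weeks")
```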
To mitigate the devastating consequences of the current crisis for employment, countries are expanding existing job-protection schemes. The United Kingdom has extended its furlough scheme by four months, until October 2020, after businesses voiced concerns that they would not otherwise recover in time to re-employ furloughed staff. As of June 7, 8.9 million workers, over a quarter of the United Kingdom's entire workforce, had been furloughed. Similarly, France's temporary unemployment scheme to avert mass bankruptcies and lay-offs as a result of the COVID crisis will be extended and is now expected to last up to two years. By the end of April, 8.6 million employees were benefitting from the scheme.

While these schemes protect jobs, their cost has been staggering. To date, the United Kingdom's program has already cost approximately £42 billion, and its total gross cost is estimated at between £60 billion and £84 billion. Estimates for Germany's Kurzarbeit scheme now surpass €40 billion, and France had already spent more than €26 billion by mid-May. Few countries, even within Europe, can afford to sustain these programs for long, so governments will soon need to start targeting particular sectors or regions. For some countries, tourism will be the obvious sector to target, while others will prioritise manufacturing and construction, given that industrial production in the euro area and the EU has fallen by more than 17% during the COVID crisis, a level last seen in the mid-1990s. With limited resources, targeting aid to specific sectors or regions is indispensable. The new challenge is choosing which ones.

Erica Bosio is a researcher at the World Bank Group, where her work focuses on public procurement. Previously, she worked in the arbitration and litigation department of Cleary Gottlieb Steen & Hamilton in Milan. She holds a Master of Laws from Georgetown University and a degree in law from the University of Turin (Italy).

Maksym Iavorskyi is an operations analyst in the growth analytics unit in the development economics vice presidency of the World Bank. Between 2015 and 2019, he was a member of the Doing Business team and covered the Enforcing Contracts and Resolving Insolvency indicators. Prior to joining the World Bank Group, Maksym worked as a lawyer in leading law firms in Ukraine and the United States. He holds a Master of Laws (LL.M.) from the Geneva Law School and Graduate Institute of International and Development Studies. Maksym speaks Russian and Ukrainian.

Nathalie Reyes is an analyst in the growth analytics unit at the World Bank. Her work focuses on public procurement, business regulations, and private sector development. Prior to joining the World Bank, Nathalie worked at the Inter-American Development Bank and Universidad Javeriana in Bogotá, Colombia. She holds a Master's degree in development economics and public policy from Université Paris 1 Panthéon-Sorbonne and a degree in economics from the Universidad Militar Nueva Granada.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.899379312992096, "language": "en", "url": "https://ceopedia.org/index.php/Purchases_journal", "token_count": 780, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06884765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:db7936d1-153c-4232-b0b1-87fab7054518>" }
Purchases journal – a creditors' subsidiary journal used as the book of prime entry for credit purchases of merchandise by trading companies. For example, if a clothes seller buys furniture, the purchase will not be included in the purchases journal; if a clothes dealer buys clothes, it will be. When a company purchases products on account there is no cash transaction; instead, the company is obliged to pay the invoice at a later date. Purchases made for cash are therefore not recorded in the purchases journal but in the cash book.

Recording credit purchases

To record a purchase made on account, credit the Accounts Payable account for the amount to be settled in future, rather than crediting the Cash at Bank account directly. At the same time, debit the Purchases account (in the cost of goods sold category) rather than an inventory account. The Accounts Payable account, also known as the Creditors Control account, is simply a control account that holds the totals.

Structure of purchases journal

The purchases journal has a columnar form, with columns for:
- Date – the date of the transaction with the vendor.
- Description – the supplier's name and the main details of the purchased products.
- Invoice No. (optional) – the journal need not be ordered by invoice number, as invoices received from vendors will not arrive in numerical order.
- Purchases Debit – the Purchases account increases as spending on goods increases.
- Accounts Payable Credit – the Accounts Payable account increases as liabilities towards creditors grow.

Posting the purchases journal

Posting is the process of transferring the debit and credit entries from the purchases journal to the proper accounts in the ledger. It can be done daily, weekly, or monthly, depending on company policy. Every entry in the purchases journal must be posted to the respective ledger accounts; this lets the company see the net effect of a period's transactions on each account.

Journal vs. Ledger

Main differences between the journal and the ledger:
- The journal is a special book of first entry in which every purchase is recorded before being posted to the corresponding ledger accounts; the ledger is a book of final entry.
- In the journal, entries are recorded daily in chronological order; in the ledger, they form an analytical record by account.
- Recording transactions in the journal is called journalizing; transferring them to the ledger is called posting.

- Bienias Gilbertson C., Lehman M.W., Gentene D. (2013) Century 21 Accounting: Multicolumn Journal, Introductory Course, Chapters 1-17, Cengage Learning, 10
- Caldwell R. (2010) Learn Bookkeeping in 7 days: Don't Fear the Tax Man, John Wiley & Sons
- Epstein L. (2014) Bookkeeping For Dummies, John Wiley & Sons, 2
- Fundamentals of Accounting and Auditing (2014), The Institute of Company Secretaries of India, paper 4
- Kasi Reddy M., Saraswathi S. (2007) Managerial Economics and Financial Accounting, PHI Learning Pvt. Ltd., New Delhi
- Tulsian P.C. (2009) CBSE Accountancy 11, Ratna Sagar, Delhi
- M. Kasi Reddy, S. Saraswathi 2007, p. 327
- R. Caldwell 2010, p. 106
- L. Epstein 2014, p. 62-63
- P. C. Tulsian 2009, p. 5.46
- Fundamentals of Accounting... 2014, p. 30

Author: Anna Woroń
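To make the double-entry mechanics of this entry concrete, here is a small executable model. It is a hypothetical Python sketch, not drawn from any of the references above: the vendor names, invoice numbers, and amounts are invented, and the ledger is reduced to running balances. It records two credit purchases in a columnar purchases journal and posts them by debiting Purchases and crediting Accounts Payable.

```python
from datetime import date

# Columnar purchases journal: one row per credit purchase of merchandise.
purchases_journal = [
    # (date, description/vendor, invoice_no, amount)
    (date(2021, 3, 1), "Fabric Ltd - 200 shirts", "INV-118", 2_400.00),
    (date(2021, 3, 9), "Denim Co - 80 jeans", "INV-042", 1_760.00),
]

# Ledger accounts, kept as running balances for simplicity.
ledger = {"Purchases": 0.0, "Accounts Payable": 0.0}

def post_purchases_journal(journal, ledger):
    """Post each journal row: debit Purchases, credit Accounts Payable."""
    for _, _, _, amount in journal:
        ledger["Purchases"] += amount          # debit side
        ledger["Accounts Payable"] += amount   # credit side

post_purchases_journal(purchases_journal, ledger)
print(ledger)  # {'Purchases': 4160.0, 'Accounts Payable': 4160.0}
```

Note that the two balances always move together, which is the point of the control account: the Accounts Payable total in the ledger can be reconciled against the journal at any time.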
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9209559559822083, "language": "en", "url": "https://commercialsociety.net/resource-areas/economics", "token_count": 114, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1630859375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c624e19c-6cc9-4774-9fba-a30133d489f7>" }
Jump to navigation The study or systematic investigation of the principles of human action. Let’s take a look backward to where the original idea of economics came from and why that’s important today. Macroeconomics represents half of all introductory economics courses, but the word itself wasn't in the english language until 1945. So what is macroeconomics? This video explores macroeconomics. Economics is much more than just numbers and graphs. In fact, we can use economics to explain much of what we encounter in our daily lives.