| meta (dict) | text (string, 224 to 571k characters) |
|---|---|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.941351056098938,
"language": "en",
"url": "https://www.greencarreports.com/news/1109133_overall-u-s-vehicle-fleet-gas-mileage-rose-only-slightly-over-last-25-years",
"token_count": 647,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.1630859375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:4fe4be73-84af-4ddb-b838-31fbd7a03d25>"
}
|
The average fuel economy of the new cars and light trucks sold in the U.S. is appreciably higher than it was 10 years ago.
The University of Michigan Transportation Research Institute has demonstrated that in its regular reports calculating new-car average fuel economy since October 2007.
But researchers Michael Sivak and Brandon Schoettle found progress to be less clear-cut when they looked at the aggregate fuel economy of all cars on U.S. roads—a fleet that now numbers roughly 250 million vehicles.
While the fuel economy of new vehicles sold has improved for the most part, the gas mileage of the overall U.S. vehicle fleet has only increased slightly over the last 25 years, researchers found.
A report on the fuel economy of vehicles on U.S. roads between 1923 and 2015 found that fuel economy fluctuated for most of the 20th century, with smaller improvements over the last 25 years.
Overall fleet fuel economy in 2015 averaged 17.9 mpg, compared to 16.9 mpg in 1991.
1974 Ford Mustang II
And the year 1991, in fact, marked the end of a period of major fuel-economy improvements.
Overall fleet fuel economy stood at 11.9 mpg in 1973, which was actually a decrease from the 14.0 mpg recorded half a century earlier in 1923, according to the report.
The steep increase in fuel economy between 1974 and 1991 resulted from the 1973 oil crisis and ensuing legislation in 1975 to raise fuel efficiency for both national security and environmental reasons.
ALSO SEE: Overall U.S. Fuel Economy: Higher Now Than In 1923, But Only A Little (Aug 2015)
Researchers calculated the fleet fuel-economy averages by analyzing distances driven and fuel consumed, with averages calculated for different classes of vehicle spanning most light-duty categories: cars, pickup trucks, SUVs, and vans.
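To make the method concrete, here is a minimal sketch of that distance-weighted calculation. The vehicle classes follow the article, but the mileage and fuel figures are purely illustrative, not the study's data.

```python
# Fleet fuel economy is total miles driven divided by total gallons
# consumed -- a distance-weighted harmonic-style mean, not a simple
# average of each class's mpg. All figures below are made up.

# (miles driven, gallons consumed), in billions, per light-duty class
fleet = {
    "cars":    (1500.0, 65.0),
    "pickups": (600.0, 35.0),
    "suvs":    (700.0, 38.0),
    "vans":    (200.0, 11.0),
}

total_miles = sum(miles for miles, _ in fleet.values())
total_gallons = sum(gallons for _, gallons in fleet.values())

fleet_mpg = total_miles / total_gallons
print(f"Overall fleet fuel economy: {fleet_mpg:.1f} mpg")  # ~20.1 mpg here
```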
Even large increases in the fuel efficiency of new vehicles take time to make a substantial impact on the overall vehicle fleet.
That's because new cars must displace older, less efficient models to register a true reduction in fuel consumption.
Automobile accident, Washington, D.C., 1923
The average age of a car on U.S. roads is close to 12 years old—the highest since World War II.
Both the increasing reliability of modern cars and the lingering effects of the Great Recession have influenced Americans to keep their cars longer.
At the same time, the rate of fuel-efficiency improvement for new cars appears to be slowing down.
MORE: Modern electric cars at 20: from EV1 to Bolt EV, where are we now? (Dec 2016)
New-car average fuel economy stayed at the same level—about 25 mpg—between 2014 and 2016, according to UMTRI.
This stagnation roughly corresponds with a period of low gas prices, which are widely blamed for driving consumers toward less-efficient vehicles.
Sedans and hatchbacks made up just 37 percent of new-vehicle sales last month, a record low. The balance, 63 percent, was composed of light trucks: SUVs, crossover utility vehicles, minivans, and pickup trucks.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9863526225090027,
"language": "en",
"url": "https://www.mentalfloss.com/article/56218/what-first-1040-tax-forms-looked",
"token_count": 309,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.447265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:fccdd721-bd48-4988-b0de-46f5ca6a0131>"
}
|
This Is What Tax Forms Looked Like in 1864
Nothing in life is certain except for death and taxes, but that wasn't always the case. (Well, the death part was, obviously.) The United States didn't impose a personal income tax until 1861 in an effort to help fund the Civil War. It was a flat tax of 3% on anyone making over $800 annually. This was repealed and replaced a year later with a scaled tax of 3% on incomes between $600 and $10,000, and 5% on all incomes higher than $10,000. It also had a built-in termination date: 1866.
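For readers who want to check the arithmetic, here is a small calculator for the two schedules described above. It assumes the rates applied marginally, to the portion of income falling in each bracket, and reads the 1861 flat tax as applying to income above the $800 threshold; both are simplifying assumptions rather than the statutes' exact wording.

```python
# Illustrative Civil War-era income tax calculators; the bracket
# treatment is an assumption, not a reading of the statute text.

def tax_1861(income: float) -> float:
    # "A flat tax of 3% on anyone making over $800 annually,"
    # interpreted here as 3% of the amount above an $800 exemption.
    return 0.03 * max(income - 800, 0)

def tax_1862(income: float) -> float:
    # 3% on income between $600 and $10,000, 5% on the excess above
    # $10,000, applied marginally.
    if income <= 600:
        return 0.0
    return 0.03 * (min(income, 10_000) - 600) + 0.05 * max(income - 10_000, 0)

print(tax_1861(1_000))   # 6.0
print(tax_1862(5_000))   # 132.0
print(tax_1862(20_000))  # 782.0
```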
In 1864, you would fill out a 1040 form just like today, but it looked a little different:
While this wartime effort eventually went away, taxes temporarily came back in the form of the Wilson–Gorman Tariff Act in 1894. However, an enforced, annual income tax wasn't instituted until 1913 with the ratification of the 16th Amendment. Americans were subject to a normal tax of 1%, and the 1040 forms they filed looked like this:
Good to know: Items lost in shipwrecks were deductible, but slaughtered animals were not.
If you're trying to waste some time before filing your actual, current-day taxes, see if you can calculate what you would've owed in 1864 or 1913. Just don't send those in today—we refuse to be responsible for your audit.
All forms are from the IRS archive.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9635022282600403,
"language": "en",
"url": "https://www.ns-healthcare.com/analysis/coronavirus-government-funding/",
"token_count": 1600,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.294921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:97978529-d1be-4f1f-b3f7-63944e09d79c>"
}
|
After the head of the UK treasury pledged £12bn in funding to tackle the coronavirus outbreak, we take a look at how other governments have responded
Increased strain on public health services and the urgent need to develop a vaccine mean the importance of government funding cannot be overstated in the fight against the coronavirus (Covid-19) outbreak.
The US and China have already announced significant amounts of money will be put towards dealing with a global pandemic that has already killed thousands and spread to more than 100 countries.
On 11 March, Chancellor of the Exchequer Rishi Sunak unveiled the UK Budget, which included plans to invest £12bn ($15.1bn) into tackling the spread of the virus.
In light of this announcement, we take a closer look at how the UK and several other national governments have responded to the outbreak.
Coronavirus funding across the world
On 11 March, Rishi Sunak revealed plans to inject an extra £30bn ($37.8bn) into the UK economy – with £12bn of this sum being specifically targeted at measures to cope with the coronavirus outbreak.
This announcement came on the same day the World Health Organisation (WHO) declared the virus a global pandemic – the first such declaration since the 2009 H1N1 influenza outbreak – and the UK confirmed 74 new cases, bringing its total number of infections to 456.
Sunak’s budget included statutory sick pay for anyone forced to self-isolate after displaying symptoms, a £500m ($630m) “hardship fund” for local authorities to help vulnerable people, and plans to give employees working zero-hours contracts easier access to financial benefits.
Some £5bn ($6.3bn) of the budget was also pledged specifically to the NHS as a coronavirus emergency response fund.
As well as aiding the country’s health service, Sunak promised to support businesses that are struggling as a result of the outbreak.
He announced a temporary coronavirus business interruption scheme, meaning banks will offer loans of up to £1.2bn ($1.5bn) to small and medium-sized firms.
Companies with fewer than 250 employees providing statutory sick pay to staff who are off work because of the coronavirus will also be fully subsidised by the government.
While the UK Budget was revealed in an effort to cope with the spread of the infection – and with confirmed domestic cases still only in the hundreds – China’s largest coronavirus-related funding announcement came on 5 March at a time when it was thought to have already seen the worst of the epidemic.
The Chinese government said it had allocated close to $16bn – although the country’s vice finance minister Xu Hongcai said more than $10bn of this has already been spent.
Much of this money has gone towards building hospitals specifically to treat coronavirus patients.
This includes the construction of a 1,000-bed makeshift hospital in Wuhan, which was completed in just 10 days and opened on 3 February.
Several other venues in Wuhan – where the virus first broke out in December 2019 – have also been converted into makeshift care facilities, while an additional 20 mobile hospitals and 1,400 nurses from across the country are said to have been deployed in the city.
These measures have left China with about $5.63bn to spend on dealing with the ongoing effects of the coronavirus outbreak.
Xu Hongcai said the workings of local governments and continued support for people and industries in Hubei province would now be prioritised.
Outside of China, Italy is currently the country worst affected by the coronavirus – with more than 15,000 cases and 1,016 deaths as of 12 March.
The government initially announced it would invest €7.5bn ($8.3bn), before increasing this figure to €25bn ($28.3bn) on 10 March.
This sum is being put towards suspending all debt payments – including mortgages – across the country during the outbreak.
It will also be used to mitigate the economic effects of the nationwide lockdown in Italy, which the government enforced on 9 March.
On 12 March, the Iranian government asked the IMF (International Monetary Fund) for $5bn in emergency funding to tackle the coronavirus outbreak.
Abdolnaser Hemmati, the governor of Iran’s central bank, said in a statement that it was needed to fund preventative measures like travel restrictions and reduced working hours for its citizens, as well as medical treatments.
On 4 March, the IMF had made a $50bn aid package available to low- and middle-income countries affected by the coronavirus.
Along with China, South Korea and Italy, Iran has been one of the worst affected countries with more than 9,000 cases and 350 deaths as of 12 March.
On 4 March, South Korea’s finance minister Hong Nam-ki announced plans to invest $9.8bn into mitigating the impact of the coronavirus domestically.
Around $2.7bn of this total was put aside to make up for the revenue deficit caused by the outbreak, while the remaining $7.1bn will act as an additional financial boost to the country’s economy.
Hong said as well as providing investment for the healthcare system, the supplementary budget will also help to minimise the economic fallout in South Korea, with emphasis on small and medium-sized businesses, and self-employed people.
The previous day (3 March), South Korean president Moon Jae-in declared that the whole country had entered a “war” with the coronavirus, and said the government planned to inject a total of $25bn into measures to contain the outbreak.
In January, the EU pledged €10 million ($11.1m) towards tackling the spread of the coronavirus.
On 9 March, the European Commission secured an extra €37.5m ($41.7m) – which it said would be put towards developing a vaccine for the disease, as well as new treatments, diagnostic tests and other medical systems to mitigate its spread.
A day later, this number increased further with European Commission president Ursula von der Leyen announcing the EU had mobilised a total of €140m ($155.5m).
In addition to this, on 11 March, the EU announced plans to raise €25bn ($27.8bn) – which will be used separately to cope with the economic fallout caused throughout its member states.
This “corona response investment initiative” will specifically focus on national healthcare systems, small and medium-sized businesses, workers, and other vulnerable parts of the economy.
Through January and February, the majority of government action in the US involved imposing travel restrictions on those leaving and entering the country – and evacuating American citizens from the Diamond Princess cruise ship quarantined in Japan.
But, on 4 March, the country announced its first significant piece of funding to combat the spread of the infection, as President Donald Trump signed an $8.3bn emergency coronavirus response bill into law.
This included pledging more than $3bn into the research and development of coronavirus vaccines, and about $800m into researching other treatments.
More than $2bn was promised to the US Centers for Disease Control and Prevention (CDC), and $61m to the US Food and Drug Administration (FDA).
The government said more than $1bn will also be invested into state and local public health efforts – including community health centres, and state and local governments.
On 14 March, Trump declared the coronavirus outbreak a national emergency, giving the country access to $50bn in federal aid.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9535371661186218,
"language": "en",
"url": "https://www.oklahomaminerals.com/an-elephant-in-the-desert",
"token_count": 1593,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1298828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:fd429f96-84da-42ed-9983-05a2eb14aed6>"
}
|
Only time will tell whether OPEC will effectively implement its recent decision to curb oil supplies and reverse a price slump that has persisted for almost three years. But amid the many predictions of where the price of oil is going, something else has become clear. When it comes to oil production, the U.S. is in a much better position today than it has been in years. The big unconventional plays here have changed the economics of world oil: plays like the Bakken, the Eagle Ford, and the Permian.
Since the dawn of the petroleum industry, thousands of oil fields have been discovered, and over 70,000 oil fields are still in use. However, not all fields are created equal. Even counting the biggest fields in the U.S., Saudi Arabia still has bragging rights to the biggest oil field in the world: the Ghawar Oil Field.
GIANT FIELDS DEFINED
Publications by the American Association of Petroleum Geologists over the past three decades have identified 500 to over 900 oil fields as world-class “giants”: those with proven reserves of at least 500 million barrels of oil or 3 trillion cubic feet of natural gas. A “supergiant” field is one with at least 5 Bbo (billion barrels of oil, or oil equivalent) in reserves. Despite their relatively small numbers, the giant fields account for over half of the world’s oil resources. Their distribution, however, is uneven; over 200 of them are concentrated in the Persian Gulf region. Giant fields are hard to come by, and the life span and reservoir management of these fields will have a drastic impact on the global oil industry and market in the coming years. (1)
GHAWAR OIL FIELD
Although the Ghawar Field is a single field, it is divided into six areas. From north to south, they are Fazran, Ain Dar, Shedgum, Uthmaniyah, Haradh and Hawiyah. Although Arab-C, Hanifa and Fadhili reservoirs are also present in parts of the field, the Arab-D reservoir accounts for nearly all of the reserves and production.
Ghawar is entirely owned and operated by Saudi Aramco.
This massive structure is so productive that it typically gets compared to other countries, not other fields. In fact, according to the Energy Information Administration (EIA), the field has more oil reserves than all but seven countries.
Discovered in 1948 and located some 200 km east of Riyadh, Ghawar has produced on average about five million barrels of oil per day over the past three decades. The field takes its name from the term the Bedouin tribes used for the region. Production began in 1951 and reached a peak of 5.7 million barrels per day in 1981, the highest sustained oil production rate achieved by any single oil field in world history. Oil from Ghawar has a density of 30-34° API, and the field's oil column has been reported at 396 m.
Approximately 60–65% of all Saudi oil produced between 1948 and 2000 came from Ghawar. Cumulative production has exceeded 65 billion barrels. Ghawar also produces approximately 2 billion cubic feet of natural gas per day.
At the current rate of production, Ghawar is estimated to keep producing for 40 more years, supporting Saudi Arabia’s hold on the oil market. It is impossible to anticipate whether production will increase, or at what rate. Without Ghawar’s oil reserves, Saudi Arabia’s influence in the world would be greatly diminished. As of 2008, Ghawar’s reserves were about 48% depleted. The growth of U.S. production will continue to diminish OPEC’s power and reach in the world.
The field has long undergone secondary recovery methods including gas and water injection. In 1995 Aramco conducted a 3-D seismic survey to examine the reservoir structure and fracture distribution to guide future development of the field.
Ghawar had more than 3,000 injector and oil producer wells by the end of 2012. Halliburton is the prime contractor and was awarded a five-year contract in 2009 to develop as many as 185 oil production, water injection, and evaluation wells.
No one knows exactly how much oil lies beneath Ghawar. Some estimates previously put the oil in place as high as 250-300 billion barrels. However, more recent and technically updated estimates put recoverable oil reserves in the 75-100 Bbo range, with natural gas reserves of around 186 Tcf.
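As a rough sanity check on the "40 more years" figure quoted earlier, the reserve estimates above can simply be divided by the field's output. The sketch assumes production holds near the historical average of about five million barrels per day, which is an illustrative simplification, not a depletion model.

```python
# Back-of-the-envelope reserve-life arithmetic; assumes constant output.

def reserve_life_years(recoverable_barrels: float, barrels_per_day: float) -> float:
    """Years of production remaining at a constant daily rate."""
    return recoverable_barrels / (barrels_per_day * 365)

for reserves in (75e9, 100e9):  # the 75-100 Bbo range cited above
    years = reserve_life_years(reserves, 5e6)  # assumed ~5 million bpd
    print(f"{reserves / 1e9:.0f} Bbo -> about {years:.0f} years")
# 75 Bbo -> about 41 years; 100 Bbo -> about 55 years
```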
Will Another Ghawar Be Discovered?
The oil we pump up today was formed between 10 and 600 million years ago, from dead plants and animals falling to the bottom of the sea. Turning these organisms to oil is a precise recipe, involving burial and slow maturation at temperatures of between 70 and 160°C. Every year a few million barrels worth of new oil matures underground somewhere, but this is a mere droplet compared to our global consumption of around 30 billion barrels of oil per year. (2).
Since the geology of virtually all of the world’s sedimentary basins is at least partially known, geological inference indicates that it is unlikely that any undeveloped region will be found to contain such another giant field. Drilling and exploration have taken place in some of the most remote places on earth and no field the size of Ghawar has been found. If such a field does still exist, it is most likely offshore and finding it will prove difficult and costly to produce.
Saudi Aramco – Saudi Arabia’s National Oil Conglomerate
Saudi Aramco, most popularly known simply as Aramco (formerly the Arabian-American Oil Company), is a Saudi Arabian national petroleum and natural gas company based in Dhahran. Its value has been estimated at over US$1.25 trillion.
In 1973, following US support for Israel during the Yom Kippur War, the Saudi Arabian government acquired a 25% stake in Aramco. It increased its shareholding to 60% by 1974, and finally took full control of Aramco by 1980, by acquiring a 100% stake in the company.
In early 2016 the Deputy Crown Prince of Saudi Arabia, Mohammad bin Salman Al Saud, announced he was considering listing shares of the state-owned company, and to sell around 5% of them in order to build a large sovereign fund.
Back in 2008, Abdallah Jum’ah, Saudi Aramco’s president and CEO gave CBS’s 60 Minutes a tour of the company’s command center, where engineers scrutinize and analyze every aspect of the company’s operations on a 220-foot digital screen.
- Sorkhabi, Rasoul. “The King of Giant Fields.” GeoExPro, Vol. 7, No. 4 (2010).
- Ravilious, Kate. The Guardian, March 1, 2015.
- Abdallah Jum’ah. Interview by Lesley Stahl. 60 Minutes. CBS. WCBS, New York: 7 December 2008. Television.
Compiled and Published by GIB KNIGHT
Gib Knight is a private oil and gas investor and consultant, providing clients advanced analytics and building innovative visual business intelligence solutions to visualize the results, across a broad spectrum of regulatory filings and production data in Oklahoma and Texas. He is the founder of OklahomaMinerals.com, an online resource designed for mineral owners in Oklahoma. ☞Email: [email protected]
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9460858702659607,
"language": "en",
"url": "https://yourbusiness.azcentral.com/read-ratios-nonprofits-financial-statement-11550.html",
"token_count": 575,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0167236328125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1003e3b9-785d-435d-9919-be11fb0917a1>"
}
|
Just like any large business, a small nonprofit needs to understand its financial health. This analysis can only come from good accounting and the ability to comprehend financial ratios. Accounting should be completed using Generally Accepted Accounting Principles, or GAAP, to create financial statements, from which ratios can be generated. Financial ratios are used to ascertain the financial strength of a business and offer evidence of the company’s risks and management's strengths. They can be broken down into four general categories.
Income ratios look at the revenue stream of your nonprofit company. The most important income ratios for a nonprofit are the reliance ratios, each expressed as the percentage of revenue coming from a specific funding source. They show your reliance on each funding stream and allow you to assess risk based on that dependence. Due to the nature of grant funding, a nonprofit can end up with a disproportionate amount of its funding coming from one source. This would produce a high reliance ratio and indicate a significant risk to the company if that grant source were to stop its funding.
The management ratios give executives an idea of the profitability strength of the company. Of particular interest for a nonprofit is the change in unrestricted net assets, or CUNA, ratio: the change in unrestricted net assets divided by total unrestricted income. This ratio is important for a nonprofit because grant funding and some donation funds are generally restricted funds that must be used for a specific program or purpose. Restricted funds do not help establish a business's overall strength. Unrestricted funds can be used for whatever overall expense is necessary, including administrative expenses and savings used for cash flow.
One of the most important cash ratios is the current ratio. This analytic tool is the proportion of current assets to current liabilities. In essence, this ratio indicates whether a company can pay bills on time. Because nonprofits often have irregular funding schedules, having enough cash on hand to pay bills is important. This ratio and an analysis of funding patterns will give management an idea of how much cash is needed in reserve to maintain healthy payment capacity.
Debt ratios give you a solid idea of how much a company owes. Of importance to a small nonprofit are the aged accounts payable ratio and the aged accounts receivable ratio. These are ratios of accounts aged over 90 days, and they give the nonprofit a good idea of how much is owed to and by the agency. If receivables remain outstanding too long, the company may need to maintain extra funds in reserve or find other funding streams. If aged payables are too high, the company may have debt payment issues.
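A minimal sketch of the four ratio families discussed above, assuming the statement totals are already available from GAAP-based accounting; every account name and figure below is invented for illustration.

```python
# Illustrative nonprofit ratio calculations; all numbers are made up.

revenue_by_source = {"state_grant": 400_000, "donations": 150_000, "fees": 50_000}
total_revenue = sum(revenue_by_source.values())

# Reliance ratios: share of total revenue from each funding source.
reliance = {src: amt / total_revenue for src, amt in revenue_by_source.items()}

# CUNA ratio: change in unrestricted net assets / total unrestricted income.
cuna = 30_000 / 250_000

# Current ratio: current assets / current liabilities.
current_ratio = 120_000 / 80_000

print({src: round(r, 2) for src, r in reliance.items()})  # 0.67 grant reliance flags risk
print(f"CUNA: {cuna:.2f}, current ratio: {current_ratio:.2f}")
```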
Paul Reyes-Fournier has served as the chief financial officer for social service organizations, churches and schools. In 2009, he created his own marketing firm, RF Media. Reyes-Fournier holds a B.S. in physics and an M.B.A.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8486891388893127,
"language": "en",
"url": "http://search.ndltd.org/search.php?q=subject%3A%22does.%22",
"token_count": 4774,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.13671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:65b9e1fb-347f-44e3-ae09-22b36bccbb62>"
}
|
01 May 1985
The purpose of this study was threefold: (1) to test the existing theory which explains inflation as a result of its self-generating nature; (2) to investigate the contribution of foreign trade to inflation; and (3) to test the causal relationship between the rate of inflation and the deficit. A system of four equations has been used to explain the relationships between the price level and monetary expansion, between the rate of growth of the monetary base and the rate of monetary expansion, between the deficit and the monetary base, and, finally, between the deficit and the price level. When the existing model was exposed to open-economy assumptions by introducing foreign reserves as another source of variation in the monetary base, the explanatory power of the model increased. That is, as the results suggest, explaining the inflation/deficit chain under a closed-economy assumption leaves much of the process unexplained. Even though part of the increase in the monetary base is caused by foreign trade, a major portion of the expansion in the monetary base is caused by the deficit. That is, a government's expenditure exceeds its revenue in a given year, and the deficit is financed through borrowing from the central bank--that is, by monetizing the deficit. This study suggests that no generality can be made regarding the source of inflation in Latin America. In some countries, the source of inflation is only the deficit, while in others it is only foreign reserves; where foreign reserves and the deficit contribute to the rate of inflation simultaneously, the effect of foreign reserves is less expansionary. This can be seen from the magnitude of the respective parameter estimates. In the last part of the study, the Granger test of causality has been used to test the causal relationship between the price level and the deficit. Again, countries exhibit heterogeneous results. In some, inflation apparently causes the deficit, while in others, the deficit is the cause of inflation. In several countries, strong feedback exists between these two variables. As a result, it can be concluded that the extent and sources of inflation differ across the countries under study. In conclusion, a few policies are suggested which could be used to bring both deficits and inflation at least to some acceptable level.
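For readers unfamiliar with the Granger test the abstract invokes, the sketch below runs it on simulated series with statsmodels. The data are synthetic stand-ins for the study's inflation and deficit series, and the lag order is arbitrary.

```python
# Granger-causality sketch on synthetic (not the study's) data.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200
deficit = rng.normal(size=n)

# Simulate inflation that partly follows the previous period's deficit.
inflation = np.empty(n)
inflation[0] = rng.normal()
inflation[1:] = 0.5 * deficit[:-1] + rng.normal(size=n - 1)

# Column order matters: the test asks whether lags of the second
# column (deficit) help predict the first (inflation).
data = np.column_stack([inflation, deficit])
grangercausalitytests(data, maxlag=2)
```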
The aim of this essay is to learn how I as a teacher can work with reading in different ways to promote learning for students in upper secondary school. This is discussed with examples from Mark Haddon's The Curious Incident of the Dog in the Night-time and Randa Abdel-Fattah's Does My Head Look Big In This?. In this study, I found out that there are many factors that contribute to students' attitude towards reading and that affect their experience of a text. These factors consist of five emotions that affect reader response: assimilation, accommodation, sympathy, memories and identification, as well as four categorizing factors: age, gender, ethnicity and class. Knowing these factors, we teachers have the tools to turn students' resistance to reading into something positive, and by doing this, we open up a myriad of learning opportunities through reading.
Ryel, Ronald J.
01 May 1980
The purpose of this study was to investigate the relationship between the fall proportion of fawns among fawns and does in a mule deer population and two measures of productivity, the spring recruitment rate and the reproductive performance as measured in the fall. The spring recruitment rate was defined to be the number of fawns per doe which were recruited into the population at 1 year of age. The reproductive performance was defined to be the number of fawns produced per doe 2 years or older which survive to a specified time. The relationships between these quantities were measured by calculating linear coefficients of correlation from data generated by a projection matrix model of a mule deer population. A coefficient of correlation of 0.86 was found between the fall proportion of fawns and the rate at which fawns are recruited into the spring population. A coefficient of correlation of 0.89 was found between the fall proportion of fawns and the reproductive performance as measured in the fall. The effect of misclassifying fawns as does and does as fawns on estimates of the proportion of fawns among fawns and does was also investigated. A comparison was made between the expected values of two estimates of the fall proportion, one with misclassification and one without misclassification. The misclassification of fawns and does was found to bias estimates of the proportion of fawns. The bias was found to be a function of the amount of misclassification and the actual proportion of fawns.
The history of the flute in jazz, basic techniques, and how jazz and improvisation can inform a classical performance / Rodriguez, Florida. January 1900 (has links)
Master of Music / Department of Music, Theatre, and Dance / Karen M. Large / This report covers a history of the flute in jazz music as well as the advancement of the flute in jazz, starting from the late 1920s. The lives of jazz flute pioneers Alberto Socarrás, Wayman Carver, Herbie Mann, Hubert Laws, and Ali Ryerson are discussed, as well as their contributions to the history of jazz flute. Basic jazz techniques such as improvisation are broken down and explained for classically trained flutists and others who have an interest in playing jazz music but do not know where to begin. This report also discusses how practicing these techniques can further aid in preparing a classical performance. Examples included in this report are excerpts from Mozart’s Concerto in D Major for flute and Mike Mower’s Sonata Latino.
This diploma thesis examines the impact of the amendment to the Act on Budgeting of Taxes, effective from 1 January 2013, on the financing of municipalities that establish a school in their territory. The main aim is to determine how the amendments to the School Act and the Act on Budgeting of Taxes affect the financing of municipalities that establish a school and of those that do not. It further asks whether a municipality that establishes a school can cover the costs of education once fees are abolished, even when pupils attend from municipalities without a school of their own. The thesis first treats the financing of school-establishing and non-school-establishing municipalities in general, then applies the question to the whole Pardubice region, determining whether municipalities that operate a school there will be able to cover the costs of running it after the amendment of the Act on Budgeting of Taxes.
Why does corruption have different effects on economic growth? A case study of Sub-Saharan Africa and Southeast Asia / Brandt Hjertstedt, Amalia; Cetina, Hana. January 2016 (has links)
The purpose of this study is to examine and analyse how corruption can have different outcomes for economic growth. A clear division can be seen between Sub-Saharan Africa and Southeast Asia, where corruption has had different economic outcomes. The countries in this study are the following: Botswana, Nigeria, Kenya, South Africa, South Korea, Thailand, Vietnam and Indonesia. The thesis draws on data covering corruption indexes, annual growth in GDP, and socio-economic indicators such as political stability and rule of law. The results from the assembled statistics are analysed through the principal-agent theory as well as previous research, which includes both positive and negative findings on corruption. The conclusion is that corruption does not have a direct effect on economic growth, but that socio-economic indicators play an important role in explaining corruption's different outcomes. The principal-agent theory helps us to understand the structure of the governmental body and the outcome of corruption.
Moquist, Tod Nolan
Permission from the author to digitize this work is pending. Please contact the ICS library if you would like to view this work. / There are many excellent studies of the life and thought of Reinhold Niebuhr (1892-1971), prominent Christian ethicist, social philosopher, and political activist of the American Century. Most studies focus on his mature works of mid-century, particularly his theological ethics. The following study treats his emergent theory of history between 1927-1934, especially the idea of progress and the narrative of modern capitalist society. During this formative period Niebuhr wrote three major books (Does Civilization Need Religion?, Moral Man and Immoral Society, and Reflections on the End of an Era) which reflect his intellectual passage from religious liberalism and the politics of persuasion to "Christian-Marxism" and the politics of power. The following thesis will trace the diverse historiographical influences found in these works, from the church-historical perspective of Ernst Troeltsch to the dialectical materialism of Karl Marx. It is common to say that Niebuhr was purely a theologian of history. But following Ricoeur and White, I describe the main ingredients of a philosophy of history that are present in these writings: myth, plot, social processes, patterns of progress and cycle. Moreover, he was a "thinker in time"--these philosophical elements combined to render a plausible and meaningful narrative context for social action. In the early period Niebuhr began his lifelong critique of Enlightenment, capitalism, and the idea of progress. Following Robert Nisbet's analysis of the concept of progress in Western cultural history, I will argue that Niebuhr traverses his own peculiar dialectics of history, moving from the idea of progress-as-freedom (in the twenties) to the idea of progress-as-power (in the thirties); from the form of irony to the form of tragedy; from the concept of the voluntary reform of the excesses of capitalism to the concept of the frank use of coercion to implement a socialist alternative to capitalism. His philosophy of history in this period thus reflects in Christian idiom aspects of the very antinomies of the Enlightenment regarding personality and power, freedom and fate, which he desires to overcome.
Correia, Fátima Daltro de Castro
11 September 2007
The growing media exposure of images of wheelchair users’ bodies gives the impression of promoting the social inclusion of the body we label disabled. However, the labels consecrated by the press, such as pwd (person with a disability), operate in exactly the opposite direction, mediatising the disability and turning it into a stigma. The images of wheelchair users that circulate in the media are always tied to the question of overcoming limits, linking those bodies only to the values of the sporting world. The prominence the Paralympic Games occupy in the media fits into this pattern, consolidating an approach that freezes these bodies into a uniform treatment of disability as a single niche, delimited by the adopted concept of efficiency/productivity. Critical reflection on this situation is needed in order to address the dance this body practises outside the narrow limits imposed by this stigmatising mechanism. To that end, this study adopts Corpomedia Theory (Katz & Greiner), with which it advances the hypothesis that the wheelchair dancer’s body is a complex system capable of breaking with the perverse discourse that frozen images produce. The concept of corpomedia, formulated from the study of communication between the body and its environments, helps explain the role media exploitation plays when it freezes the images it produces around the disability rather than around the disabled person and his or her singularities. The methodology comprises a case study of the show Judite quer chorar, mas não consegue! (Judite wants to cry, but she can’t!), created by the wheelchair dancer Edu Oliveira; interviews conducted for qualitative research; a critical analysis of those interviews, carried out after the show was presented in two different cities (Salvador and Votorantim); a bibliographical review on the theme of the disabled body; and video records. The bibliographical research allowed a brief historical overview of wheelchair users’ access to the world of dance and its social implications. The conclusion is that the wheelchair dancer is culturally and biologically implicated in a system of image construction that associates him with a pitiable body, images that feed him cognitively, with dire consequences for the process of his social inclusion. Hence the urgency of promoting actions that can break with the media practices under way. That is the role wheelchair dance has to play and, to fulfil it, it cannot remain governed by the criteria of sport. Wheelchair dance needs to discover its own poetics, for these are what enable effective social insertion.
Le dualisme juridictionnel français à l'épreuve de l'Europe / French jurisdictional dualism put to the test of Europe / Di Filippo, Alessandra. 13 December 2014 (has links)
The European perspective has shed new light on the question of whether to maintain or abolish jurisdictional dualism in France, through two main approaches: the resilience of competing models of court organisation on the one hand, and the scrutiny of European standards on the other. Considered as a model of court organisation, the French system inspired the majority of European states. That influence was nevertheless temporary: most European states now have judicial systems built on a different model. Steering the French system in that direction would be legally feasible but impractical, so the French system, albeit now a minority model, is likely to endure. On another front, aligning the French system with European standards also raised the prospect of its abolition. Criticism, actual and potential, of the administrative courts and their proceedings, and of the Tribunal des conflits and its procedure, showed that reform was inevitable and forced the system to revisit long-established, centuries-old practices. The reforms also brought the administrative courts closer to the ordinary courts, and administrative trials closer to civil trials. "Saved" at the cost of many transformations, the system nonetheless managed to preserve its essential characteristics, proving its capacity to adapt. Weakened for a time, jurisdictional dualism has ultimately not been altered; better still, its technical foundation, the contemporary justification for jurisdictional dualism, has emerged reinforced.
Propojení tepelného manekýna s termofyziologickým modelem člověka / Coupling of Thermal Manikin with Human Thermophysiological Model / Doležalová, Veronika. January 2019 (has links)
thermal manikin, thermophysiological model, thermal comfort, climatic chamber, clothing thermal resistance
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9492373466491699,
"language": "en",
"url": "https://african.business/2020/11/energy-resources/mozambique-looks-to-road-project-to-boost-growth/",
"token_count": 1191,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.03662109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:45e565c9-9ca7-4b5c-a201-c57052c0c522>"
}
|
Despite sustained economic growth since 2005, rural poverty in Mozambique persists. Low agricultural productivity, particularly in the northern and central provinces, is exacerbated by poor physical connectivity, including limited access to agricultural extension services, credit markets, and market information.
Limited transport infrastructure means that economic activity is effectively segmented into three geographical regions – north, south, and central – creating conditions for regional price swings that are not smoothed by integrated trade.
A new initiative, the Integrated Feeder Roads Project (IFRDP), is being financed by the World Bank. It focuses on the rehabilitation and maintenance of tertiary roads, with a large percentage of the investments targeting the construction and repair of bridges and culverts to improve accessibility, particularly during periods of heavy rain or flooding. The IFRDP will utilise $185m to rehabilitate and upgrade existing roads in four key provinces: Sofala, Manica, Tete, and Zambezia.
Damage from cyclones and flooding
Mozambique suffers from an exposure to extreme rainfall and flooding that may become even more frequent due to global climate change. Its geography and long coastline, coupled with changing land use patterns and the impact of climate change, mean that it is regularly affected by extreme weather events.
Catastrophic flooding occurs almost annually during the rainy season and is largely influenced by the La Niña weather system and the Intertropical Convergence Zone. Climate change projections indicate that rainfall patterns may become less certain for the country as a whole and vary by region. Since 1960, the proportion of days with heavy rainfall events has increased by 2.6% per decade, an estimated 25 additional days per year in total.
In 2019, Cyclone Idai damaged or destroyed an estimated 240,000 houses, while Cyclone Kenneth damaged or destroyed an additional 50,000. Cyclone Idai alone caused an estimated $115m in damage to the private sector. Prior to those devastating events, floods in 2015 affected 326,000 people, killed 140, and caused an estimated $371m in damage in parts of Zambezia, Nampula, and Niassa. The road and rail networks have suffered extensive damage over the last 20 years, with substantial sums diverted from network improvement to the repair of flood-related damage.
These disruptions isolate communities for extended periods of time. Following Idai and Kenneth, the United Nations, World Bank and European Union, in partnership with the government, conducted a Post-Disaster Needs Assessment (PDNA).
The PDNA process documented the severe damage and losses from the recent events. It identified over $3bn worth of damages, with road sector needs estimated at nearly half a billion dollars. In the central region, about 1,962 km of roads, 90 culverts, 15 bridges and 24 drifts were damaged, resulting in widespread impassability. This damage reduced transit across the national network by about 7%.
Enhancing road access
This new project is focused, in part, on resolving the economic losses which result from such disruption. The analysis assessed flood risks based on two levels: flood likelihood under various climate change scenarios; and the vulnerability of bridges, culverts, and road surfaces. The project’s core objective is to enhance road access in rural areas in support of the livelihoods of local communities and to enable immediate responses by road in crises and emergencies.
It is hoped that the project will give a boost to Mozambique at a time when it is also being buffeted by the effects of the Covid-19 pandemic, which has further damaged the country’s economic prospects. The pandemic is heavily impacting economic activity as social distancing and travel restrictions affect demand for goods and services. At the same time, low prices for commodities are slowing the pace of investment in gas and coal.
Growth is expected to decline to 1.3% in 2020, down from a pre-Covid forecast of 4.3%. Mozambique is also expected to experience large external and fiscal financing gaps in 2020 and 2021.
The country’s twin challenges are daunting: maintaining macroeconomic stability despite exposure to commodity price fluctuations, and re-establishing confidence through improved economic governance and increased transparency, including navigating the aftermath of a hidden debt scandal. Structural reforms are needed in support of the struggling private sector.
The political reality on the ground is not helping matters: The Front for the Liberation of Mozambique (Frelimo) and the Mozambican National Resistance (Renamo) remain the country’s main political forces. Renamo maintained a considerable arsenal and military bases after the peace accord of 1992 that ended the civil war, and the country has registered flare-ups of armed confrontations and violence ever since.
A new peace accord was reached in August 2019, but it has been violated several times by a Renamo breakaway military faction. The new deal is aimed at integrating Renamo fighters into the national army, and dismantling Renamo military bases.
Meanwhile, the government is grappling with an Islamist insurgency in parts of the gas-rich province of Cabo Delgado. Initially confined to one locality, the killing of civilians by the insurgents has now spread to other districts and towns inside the province. Recent estimates show the conflict has killed more than 1,000 people and forced 100,000 from their homes.
An additional major challenge is diversifying the economy. The strategy is to move away from the current focus on capital-intensive projects and low-productivity subsistence agriculture, but that will require huge infrastructure spending.
Meanwhile, the work is focused on strengthening the key drivers of inclusion, such as improved quality education and health service delivery, which could in turn improve social indicators. Improving connectivity and boosting the decrepit transport network will be just one step on an even longer road to regaining national prosperity.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.930255115032196,
"language": "en",
"url": "https://articles.bplans.com/product-and-brand-failures-a-marketing-perspective/",
"token_count": 2017,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.07373046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b8719330-f50f-4165-9108-680193e86a8b>"
}
|
Product and brand failures occur on an ongoing basis, to varying degrees, within most product-based organizations. This is the negative side of the development and marketing process. In most cases, this “failure rate” syndrome ends up being a numbers game: an organization must maintain a sufficient ratio of successful products to failed ones. When it does not, the organization is likely to fail, or at least experience financial difficulties that prevent it from meeting profitability objectives. The primary goal is to learn from product and brand failures so that future product development, design, strategy and implementation will be more successful.
Studying product failures allows those in the planning and implementation process to learn from the mistakes of others. Each product failure can be investigated from the perspective of what, if anything, might have been done differently to produce and market a successful product rather than one that failed. The ability to identify key warning signs in the product development process can be critical. If a flawed product makes it that far, assessing risk before it is marketed can save an organization’s budget and avoid the intangible costs of exposing the failure to the market.
Defining product and brand failures
A product is a failure when its presence in the market leads to:
- The withdrawal of the product from the market for any reason;
- The inability of a product to realize the required market share to sustain its presence in the market;
- The inability of a product to achieve the anticipated life cycle as defined by the organization due to any reason; or,
- The ultimate failure of a product to achieve profitability.
Failures are not necessarily the result of substandard engineering, design or marketing. By critics’ definitions, there are hundreds of “bad” movies that have reached “cult status” and financial success, while many “good” movies have been box office bombs. Other premier products fail because of competitive actions. Sony’s Beta format was clearly a superior product to VHS, but Sony’s decision not to allow the format to be standardized negatively impacted distribution and availability, which resulted in a product failure. The Tucker was a superior vehicle compared to what was on the market at the time. Its failure was due to General Motors burying the fledgling organization in the courts to eliminate a future competitor whose well-designed product posed a potential threat to its market share. Apple has experienced a series of product failures, with consistent repetition as it continues to fight for market share.
Product failures are not necessarily financial failures, although bankruptcy may be the final result. Many financially successful products were later found to pose health and safety risks. These products were financial and market share successes:
- Asbestos-based building materials now recognized as a carcinogenic—Insulation, floor tile and “popcorn” ceiling materials produced by a number of manufacturers.
- Baby formula that provided insufficient nutrients for infants resulting in retardation—Nestle’s.
- The diet medication cocktail of Pondimin and Redux called “Fen Phen” that resulted in heart valve complications—American Home Products (http://www.settlementdietdrugs.com/).
What successful products may be next? Frequent and high dosages of Advil are suspected to correlate with liver damage. Extended use of electric blankets is suspected by some to increase the chance of cancer. The over-the-counter availability and heavy use of Sudafed is feared by some physicians and is currently under review by the U.S. Food and Drug Administration.
Product failures and the product life cycle
Most products experience some form of the product life cycle where they create that familiar—or a variant—form of the product life cycle based on time and sales volume or revenue. Most products experience the recognized life cycle stages including:
- Introduction
- Growth
- Maturity (or saturation)
- Decline
In some cases, product categories seem to be continuously in demand, while other products never find their niche. These products lack the recognized product life cycle curve.
Failure, fad, fashion or style?
It is important to distinguish a product failure from a product fad, style or fashion cycle. The most radical product life cycle is that of a fad. Fads have a naturally short life cycle and, in fact, are often predicted to experience rapid gain and rapid loss over a short period of time—a few years, months, or even weeks with online fads. One music critic expected “The Bay City Rollers” to rival the Beatles. Do you know who they are? And the pet rock lasted longer than it should have, making millions for its founders.
A “fashion” describes the accepted emulation of trends in several areas, such as clothing and home furnishings. The product life cycle of a “style” likewise appears in clothing as well as art, architecture, cars and other esthetic-based products. The “end” of these product life cycles does not denote failure, but marks the conclusion of an expected cycle that will be replaced and repeated by variations of other products that meet the same needs and perform the same functions.
The benefits of studying failures
Gaining a better understanding of product failures is important to help prevent future failures. Studying the history of product failures may generate some insight into the reason for those failures and create a list of factors that may increase the opportunity for success, but there are no guarantees.
Examples of product failures
The following is an abbreviated list of product failures that may provide insight that will help to identify product and brand success factors:
Automotive and transportation
- Cadillac Cimarron
- Pontiac Fiero
- Chevrolet Corvair
- Ford Edsel
- The DeLorean
- The Tucker
- The Gremlin, the Javelin and a complete line of other models by American Motors
- GM’s passenger diesel engine
- Mazda’s Wankel rotary engine
- Firestone 500 tire
- Goodyear tires used on the Ford Explorer
- Concorde—supersonic airliner
- IBM’s PCjr—introduced in March 1984
- Apple’s Newton
- Apple’s Lisa
- Coleco’s Adam
- Percon’s Pocketreader—handheld scanner (Percon now operates under the company name PSC)
- Bumble Bee’s software version of the book “What Color is Your Parachute”
- Quadraphonic audio equipment
- World Football League
- Women’s National Basketball Association
- World League of American Football
- United States Football League
- “He and She,” “Berrengers,” every spinoff done by the former cast of “Seinfeld,” and dozens of other television shows each year.
- “Gods and Generals,” “Heaven’s Gate,” “Waterworld,” “The Postman” and other movies—with a disproportionately high number produced by Kevin Costner.
Food and beverage
- Burger King’s veal parmesan
- Burger King’s pita salad
- McRib—and still being tested and tried
- Nestle’s New Cookery—but a successor, Lean Cuisine, is a big hit
- Gerber’s Singles—dinners in jars, for adults—early ’70s
- Chelsea—“baby beer”
Photographic and video
- Polaroid instant home movies
- SX-70 (Polaroid instant camera)
- RCA Computers (Spectra-70)
- Video-disc players
- DIVX variant on DVD
- Susan B. Anthony Dollar coin—niche in San Francisco, Las Vegas
- Two-dollar bill
- Twenty-cent piece
- DuPont’s CORFAM—synthetic leather
- Mattel’s Aquarius
- Timex’s Sinclair
- Clairol’s Touch of Yogurt Shampoo (1979)
- Sparq portable mass storage
- Rely tampons
- Relax-a-cizor—vibrating chair
- Louisiana World Exposition—and its gondola
Common reasons for product failures
In addition to a faulty concept or product design, some of the most common reasons for product failures typically fall into one or more of these categories:
- High level executive push of an idea that does not fit the targeted market.
- Overestimated market size.
- Incorrectly positioned product.
- Ineffective promotion, including a packaging message that was misleading or confusing about the product, its features, or its use.
- Not understanding the target market segment and the branding process that would provide the most value for that segment.
- Incorrectly priced—either too high or too low.
- Excessive research and/or product development costs.
- Underestimating or not correctly understanding competitive activity or retaliatory response.
- Poor timing of distribution.
- Misleading market research that did not accurately reflect actual consumer behavior in the targeted segment.
- Conducted marketing research and ignored those findings.
- Key channel partners were not involved, informed, or both.
- Lower than anticipated margins.
Using these potential causes of product or brand failure as a guide while you write your marketing plan can help you avoid committing the same errors. Learning from these lessons can help you sidestep pitfalls and increase the chance of success when you launch your next product or brand.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9658403992652893,
"language": "en",
"url": "https://business.time.com/2012/04/25/6-common-misconceptions-about-financial-aid/",
"token_count": 860,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.039306640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:77efbcfd-42e6-432a-adaf-0258592effbd>"
}
|
So, you’ve gotten into college (congrats!), filled out your FAFSA (great!) and are now staring at a financial-aid award letter, wondering what to do next (uh-oh). As the May 1 deadline to accept financial-aid offers approaches, TIME Moneyland spoke to Mark Kantrowitz, publisher of FinAid.org and Fastweb.com, to dispel many of the common misconceptions families have when it comes to financing a college education.
1. Loans reduce the cost of college
Student loans help families manage their cash flow by spreading out the cost of college over many years, but they do not lower the expense. “A loan is a loan is a loan,” says Kantrowitz. “But a lot of financial-aid award letters treat loans as if they reduce the cost.” They don’t. Instead, loans simply reduce the amount of money families have to write a check for up front. Additionally, because of interest, they actually increase the final cost.
2. Net price is the same as net cost
Many financial-aid award letters confuse students and parents by listing both the “net price” (the total cost of college, minus grants) and also the “net cost” (the total cost of college, minus grants and loans) without explaining the difference between the two. As a result it looks like the (lower) net cost is what a family will be responsible for, when it is actually the (higher) net price (since unlike grants, loans are not free money). “Too often families think they are getting a free ride when the award letter includes $10,000 or $20,000 or more in student-and-parent loans,” Kantrowitz said.
3. A lot of students get a free ride
In reality, Kantrowitz says, fewer than 0.3% of students receive enough scholarships and grants to cover the full cost of attendance. Even those students who come close to having all their costs covered are rare. Only 1% of students have 90% of their costs covered, while 3.4% have 75% of their costs covered. A slightly more significant 14.3% have half of their costs covered. But because many parents go into the financial-aid process thinking their child will get a free ride, they often overestimate their eligibility for merit-based aid and underestimate their eligibility for need-based aid.
4. College costs about the same amount each year
Wrong. The cost of college goes up every year. Currently, tuition is increasing at twice the rate of inflation, so according to Kantrowitz, a student can expect to pay 20–25% more in their senior year than in their first year.
(MORE: The Jobless Generation)
5. The grants awarded will be the same each year
It’s important to keep in mind the award letter you receive only applies to one year of college. Not only does the cost of college increase each year, many colleges practice “front-loading” — meaning they award more grants in freshman year and fewer in later years, forcing many students to take out more loans in upper-class years.
6. Student loans are good debt
The wisdom behind this misconception is that education loans are good debt because they are an investment in the future, as opposed to credit-card debt, which is most often for consumable goods. But, of course, it’s still debt, and as Kantrowitz notes, “too much of a good thing can hurt you.” Student loans are only a good thing to the extent that you don’t overborrow, he says. At graduation, your total debt should be less than your predicted annual starting salary, in order to ensure you can afford the standard 10-year repayment plan.
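To make that rule of thumb concrete, here is a minimal sketch of the standard amortized-payment formula applied to the 10-year plan. The 6.8% rate and the $30,000 balance are illustrative assumptions, not figures from the article:

```python
def monthly_payment(principal: float, annual_rate: float, years: int = 10) -> float:
    """Standard amortized monthly payment: P*r / (1 - (1+r)**-n)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# A hypothetical $30,000 balance at an assumed 6.8% rate over the standard 10-year term:
print(round(monthly_payment(30_000, 0.068), 2))  # ~345.24 per month
```

If that graduate's starting salary is at least $30,000, the payment stays a manageable share of monthly income, which is the intuition behind Kantrowitz's guideline.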
Webley is a staff writer at TIME. Find her on Twitter at @kaylawebley, on Facebook or on Google+. You can also continue the discussion on TIME’s Facebook page and on Twitter at @TIME.
MORE: Here We Go Again: Is College Worth It?
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9454528093338013,
"language": "en",
"url": "https://testbook.com/blog/ibps-clerk-quant-ratio-and-proportion-6/",
"token_count": 589,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0244140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a36a96a2-8d85-4811-aca8-5ebf0da5555d>"
}
|
Solve IBPS Clerk Quant Ratio and Proportion Quiz 6
Here is a quiz for upcoming banking exams like IBPS Clerk V and other banking exams. This quiz contains important questions that match the pattern of banking exams, so make sure you attempt today’s Quantitative Aptitude IBPS Clerk Ratio and Proportion Quiz to check your preparation level.
Amit, Sonu and Monu played cricket. The runs scored by them are in the ratio 2:3:4 respectively. If the sum of the runs of Amit and Sonu was 180, find the runs scored by Monu.
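As a worked illustration of the method (a sketch, not an official answer key): write the runs as multiples of a common factor $x$.

$$2x + 3x = 180 \;\Rightarrow\; x = 36, \qquad \text{Monu's runs} = 4x = 144.$$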
If x:y = 2:3 and y:z = 2:5, then find the value of x:y:z.
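A sketch of the standard approach: rescale both ratios so the shared term $y$ matches.

$$x:y = 2:3 = 4:6, \qquad y:z = 2:5 = 6:15 \;\Rightarrow\; x:y:z = 4:6:15.$$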
The ratio of income of A, B and C is 3 : 7 : 4 and the ratio of their expenditure is 4 : 3 : 5 respectively, if A saves Rs. 300 out of Rs. 2400, find the savings of C.
10 years ago, the ages of A and B were in the ratio of 13:17. 17 years from now the ratio of their ages will be 10:11. What is the age of B at present?
38% of the first number is 52% of the second number. What is the respective ratio of the first number to the second number?
A sum of money is divided among A, B, C and D in the ratio of 4 : 6 : 11 : 13 respectively. If the share of C is Rs.7,854, then what is the total amount of money of B & D together?
Pinku, Rinku and Tinku divide an amount of Rs 4,200/- amongst themselves in the ratio of 7 : 8 : 6 respectively. If an amount of Rs 200/- is added to each of their shares, what will be the new respective ratio of their shares of amount?
35% of a number is equal to 3/4 of another number added to itself. The ratio of first number to second number is :
The electricity bill of a certain company is partly fixed and partly varies with the number of units of electricity consumed. In a certain month, 540 units are consumed and the bill is Rs. 1,800. In another month, 620 units are consumed and the bill is Rs. 2,040. If in a month the bill is Rs. 1,260, how many units must have been used?
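A worked sketch, assuming the usual fixed-plus-variable billing model: let $f$ be the fixed charge and $v$ the per-unit rate.

$$f + 540v = 1800, \qquad f + 620v = 2040 \;\Rightarrow\; 80v = 240,\; v = 3,\; f = 180.$$

A bill of Rs. 1,260 then corresponds to $(1260 - 180)/3 = 360$ units.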
Two numbers are in the ratio 5 : 7. On diminishing each of them by 40, they become in the ratio 17 : 27. The difference of the numbers is
As we all know, practice is the key to success. Therefore, boost your preparation by starting your practice now.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9513852000236511,
"language": "en",
"url": "https://www.genpaysdebitche.net/when-will-libra-crypto-be-available/",
"token_count": 889,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.030029296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d0d435a4-38c8-4b91-9976-0b57bfe42ce4>"
}
|
When Will Libra Crypto Be Available – What is Cryptocurrency? Basically, cryptocurrency is digital cash that can be used in place of standard currency. The word cryptocurrency combines crypto (from the Greek for “hidden”) with currency. In essence, cryptocurrency is just as old as blockchains. However, the difference between cryptocurrency and blockchains is that there is no centralization or single ledger authority in place. In essence, cryptocurrency is an open-source protocol based on peer-to-peer transaction technologies that can be carried out on a distributed computer network.
As an open-source protocol, it is highly flexible. This means that, unlike fixed blockchains, the community at large has an opportunity to modify the core of the protocol to fit its needs. As such, a lot of development has occurred all over the world with the intent of supplying tools and techniques that facilitate smart contracts. One particular way in which the Ethereum Project is attempting to address smart contracts is through its Foundation. The Ethereum Foundation was established with the objective of developing software solutions around smart contract functionality. The Foundation has released its open-source libraries under an open license.
To begin with, the significant distinction between the Bitcoin Project and the Ethereum Project is that the former does not have a governing board and is therefore open to contributors from all walks of life, while the Ethereum Project enjoys a much more regulated environment.
As for the projects underlying the Ethereum Platform, both aim to provide users with a new way to take part in decentralized exchange. The major differences are that the Bitcoin protocol does not use the Proof of Consensus (POC) process that the Ethereum Project uses. In addition, there will be a hard fork to integrate the Byzantium upgrade, which will increase the scalability of the network. These two differences may prove to be barriers to entry for prospective entrepreneurs, but they do represent crucial distinctions.
On the other hand, the Ethereum Project has taken an aggressive approach to scaling the network while also dealing with scalability issues. In contrast to the Satoshi Roundtable, which focused on increasing the block size, the Ethereum Project will be able to carry out improvements to the UTXO protocol that increase transaction speed and reduce fees.
The decentralized aspect of the Linux Foundation and the Bitcoin Unlimited Association represents a conventional model of governance that places an emphasis on strong community participation and the promotion of consensus. This model of governance has been adopted by a number of distributed application groups as a means of managing their projects.
The major difference between the two platforms comes from the fact that the Bitcoin community is mostly self-sufficient, while the Ethereum Project expects the participation of miners to fund its development. By contrast, the Ethereum network is open to contributors who will contribute code to the Ethereum software stack, forming what are called “code forks”. This feature increases the level of involvement desired by the community. This model also differs from the Byzantine fault model that was adopted by the Byzantine algorithm when it was used in forex trading.
As with any other open-source technology, much controversy surrounds the relationship between the Linux Foundation and the Ethereum Project. Although the two have adopted different perspectives on how best to use the decentralized aspect of the technology, they have nonetheless worked hard to develop a positive working relationship. The developers of the Linux and Android mobile platforms have openly supported the work of the Ethereum Foundation, contributing code to protect the functionality of its users. Likewise, the Facebook team is supporting the work of the Ethereum Project by offering its own framework and creating applications that integrate with it. Both the Linux Foundation and Facebook see the Ethereum project as a way to advance their own interests by offering a cost-effective and scalable platform for users and developers alike.
Put simply, cryptocurrency is digital money that can be used in place of conventional currency: an open-source protocol based on peer-to-peer transaction technologies, carried out on a distributed computer network with no centralized ledger authority in place. When Will Libra Crypto Be Available
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9687430262565613,
"language": "en",
"url": "https://www.investopedia.com/terms/f/foreign-tax-credit.asp",
"token_count": 848,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.016845703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9a12bbfc-e2d1-47e6-a401-440a50bec8b3>"
}
|
What Is Foreign Tax Credit?
The foreign tax credit is a non-refundable tax credit for income taxes paid to a foreign government as a result of foreign income tax withholdings. The foreign tax credit is available to anyone who either works in a foreign country or has investment income from a foreign source.
- The foreign tax credit is a tax break provided by the government to reduce the tax liability of certain taxpayers.
- The foreign tax credit applies to taxpayers who pay tax on their foreign investment income to a foreign government.
- While some or all foreign earned income can be excluded from federal income tax, a taxpayer cannot claim both the foreign earned income exclusion and the foreign tax credit on the same income.
Understanding the Foreign Tax Credit
The foreign tax credit is a tax break provided by the government to reduce the tax liability of certain taxpayers. A tax credit is applied to the amount of tax owed by the taxpayer after all deductions are made from their taxable income, and it reduces an individual's total tax bill dollar for dollar. If an individual owes $3,000 to the government and is eligible for a $1,100 tax credit, they will only have to pay $1,900 after the credit is applied. A tax credit can be either refundable or non-refundable. A refundable tax credit usually results in a refund check if the tax credit is more than the individual's tax bill. A taxpayer who applies a $3,400 tax credit to their $3,000 tax bill will have their bill reduced to zero, and the remaining portion of the credit, that is $400, refunded to them.
On the other hand, a non-refundable tax credit does not result in a refund to the taxpayer as it will only reduce the tax owed to zero. Following the example above, if the $3,400 tax credit was non-refundable, the individual will owe nothing to the government, but will also forfeit the amount of $400 that remains after the credit is applied. The most commonly claimed tax credits are non-refundable, one of which is the foreign tax credit.
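The arithmetic in the two examples above can be captured in a few lines. This is only an illustrative sketch; the function and figures are made up for this article, not drawn from IRS forms:

```python
def apply_credit(tax_owed: float, credit: float, refundable: bool) -> tuple[float, float]:
    """Return (tax_due, refund) after applying a tax credit.

    A refundable credit can push the bill below zero and trigger a refund;
    a non-refundable credit only reduces the bill to zero.
    """
    remaining = tax_owed - credit
    if remaining >= 0:
        return remaining, 0.0   # credit fully absorbed by the bill
    if refundable:
        return 0.0, -remaining  # excess credit comes back as a refund
    return 0.0, 0.0             # excess of a non-refundable credit is forfeited

print(apply_credit(3000, 3400, refundable=True))   # (0.0, 400.0)
print(apply_credit(3000, 3400, refundable=False))  # (0.0, 0.0)
```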
The foreign tax credit applies to taxpayers who pay tax on their foreign investment income to a foreign government. Generally, only income, war profits, and excess profits taxes qualify for the credit. The credit can be used by individuals, estates, or trusts to reduce their income tax liability. In addition, taxpayers can carry unused amounts forward to future tax years, up to ten years.
Not all taxes paid to a foreign government can be claimed as a credit against the U.S. federal income tax. A taxpayer is not eligible for a foreign tax credit if they did not pay or accrue the tax, the tax was not imposed on the taxpayer, the tax is not a legal and actual foreign tax liability, or the tax is not based on income. So, an American taxpayer that has the U.K. government impose a legal and actual property tax on them will not be able to claim this tax as a foreign tax credit because it is not an income tax.
The foreign tax credit is claimed on Form 1116, unless the taxpayer qualifies for the de minimis exception, in which case, they can claim the tax credit for the full amount of foreign taxes paid directly on Form 1040. The credit can only be claimed on income that is also subject to domestic taxation. For example, if some of the taxpayer's foreign income is taxable and some of the income is exempt, then the taxpayer must be able to break down the taxes paid on the foreign income only, and only claim the credit for taxes paid on that foreign income.
While some or all foreign earned income can be excluded from federal income tax, a taxpayer cannot claim both the foreign earned income exclusion and the foreign tax credit on the same income. If the taxpayer chooses to exclude either foreign earned income or foreign housing costs, they cannot take a foreign tax credit for taxes on the income they exclude. If they take the credit anyway, one or both of the exclusion choices may be considered revoked by the Internal Revenue Service (IRS).
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.7902877926826477,
"language": "en",
"url": "http://africajournal.ru/en/2019/01/22/renewable-energy-in-the-east-african-communitys-and-its-role-in-sustainable-development/",
"token_count": 1426,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.11865234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c599d72c-3097-4b7e-b4d4-48a4ff2acf8b>"
}
|
The article analyses the role of renewable energy in the development of the energy market of the East African Community (EAC). The author underlines the necessity of finding solutions for such challenges as rising wood and charcoal prices, deforestation, and the lack of affordable and reliable electricity for a large number of consumers. The study reveals that the percentage of people with access to modern sources of energy is currently very low, varying from 7% in Burundi to 36% in Kenya, although the EAC countries made significant progress in the 2000s. Most people in rural areas rely on traditional biomass for cooking and heating, which leads to ecological and health problems. The author concludes that renewable energy development is considered by the Community as one of the prospective ways of providing energy to remote regions, in view of abundant solar, wind and geothermal resources. The strategy aims at the construction of micro and mini hydro stations, stand-alone solar PV systems and off-grids for use by the rural population. The study shows that investment in off-grid renewables has been steadily rising in recent times. Analysing grid-connected power generation, the author finds that it is also based largely on renewable electricity, which accounts for 65% of the total. Kenya, with the highest installed capacity in this sector, is investing mainly in geothermal, solar and wind sources of energy, while the others are focusing on hydropower and solar. For the purpose of attracting private investment, the EAC partner states adopted different regulations, including the Feed-in Tariff, zero VAT and the GET FIT Programme. The author argues that renewable energy financing remains one of the main challenges despite the support of international financial institutions such as the World Bank, UNIDO, AfDB and others. Energy efficiency measures are becoming important instruments for the EAC countries, resulting in power savings. The other important trend is increasing cooperation among them through their grid-connected power systems in the East African Power Pool. In this context, in November 2017, the EAC Partner States adopted the Energy Security Policy Framework in order to ensure the sustainable development of their energy sector.
Keywords: the EAC partner states, renewable sources of energy, solar energy, wind energy, geothermal energy, Feed-in Tariff, mini hydro station, off-grids, traditional biomass, power pool
Abramova I.O. Fituni L.L. Perspektivy razvitiya TEK Afriki i Interesy Rossii (The Prospects for Africa’s FEC Development and the Interests of Russia). Aziya i Afrika segodnya. 2014. № 11 (688). pp. 3–12.
Abramova I.O. Novaya rol’ Afriki v mirovoy ekonomike XXI veka (New Role of Africa in the World Economy of the XXI Century) Moscow. Institut Afriki RAN. 2014. 17 p. ISBN 978-5-91298-141-8.
EAC Energy Security Policy Framework. July 2017. UN ECA. https://www.uneca.org/ sites/default/files/images/SROs/EA/executive_summary_revised.pdf (accessed 30.01.2018)
EAC Renewable Energy and Energy Efficiency. Regional Status Report. 2016. 79 p. UNIDO. www.ren21.net/wp-content/uploads/2016/10/REN21-EAC-web-EN.pdf.pdf (accessed 05.02.2018)
East African Community Adopts its Energy Security Agenda. https://www.uneca.org/stories/ (accessed 10.12.2016)
Energy Access Outlook 2017. From Poverty to Prosperity. WEO. www.iea.org/ publications/freepublications/publication/WEO2017SpecialReport_EnergyAccessOutlook.pdf (accessed 02.12.2017)
Energy in the East African Community: the Pole of the Energy Charter Treaty. www.energycharter.org/fileadmin/DocumentsMedia/Occasional/Energy_in_the_East_African_Community.pdf (accessed 18.01.2018)
Four Priorities for Sustainable and Inclusive Energy Security in Eastern Africa. 18 October 2016. http://www.ictsd.org/bridges-news/bridges-africa/ (accessed 11.11.2017)
Kalinichenko L.N. Nastoyashcheye i budushcheye afrikanskoy energetiki (The Present and the Future of African Energy). Aziya i Afrika segodnya. 2012. № 12 (665). pp. 6–12.
Kalinichenko L.N. Novikova Z.S. Afrika na puti innovatsionnogo razvitiya (Africa on the Way to Innovation Development) Aziya i Afrika segodnya. 2017. № 9 (722). pp. 48–55.
Power up Delivering Renewable Energy in Africa. A Report by Economist Intelligence Unit. 2016. 34 p. https://www.eiuperspectives.economist.com/sites/default/files/powerup.pdf (accessed 15.10.2017)
Sharova A.Yu. Razvitie vozobnovlyaemoyo energetiki v arabskikh stranakh (Development of Renewable Energy in Arab Countries). Aziya i Afrika segodnya. 2017. № 5 (718). pp. 56–64.
Sharova A.Yu. Energoeffectivnost’ v arabskikh stranakh: problemy i perspektivy (Energy Efficiency in Arab Countries: Challenges and Prospects). Aziya i Afrika segodnya. 2017. № 12 (725). pp. 61–69.
Sub-Saharan Africa Power Outlook 2016. KPMG. https://assets.kpmg.com/content/dam/kpmg/ pdf/2016/05/kpmg-sub-saharan-africa-power-outlook.pdf (accessed 12.09.2017)
The Eastern Africa Power Pool. Addis Ababa. IRENA. www.irena.org/DocumentsDownloads/ events/2013/July/AfricaCECsession2_EAPP_Gebrehiwot_220613.pdf (accessed 23.09.2017)
World Development Indicators 2017: Sustainable Energy for all. The World Bank, 2017. wdi.worldbank.org/table/3.13?tableNo=3.13 (accessed 15.02.2018)
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9744174480438232,
"language": "en",
"url": "https://graduateguide.com/are-specialty-degrees-the-new-mba/graduate-schools/",
"token_count": 741,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.07275390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2c4fe1bc-2de4-4f27-adae-3369c63c4374>"
}
|
Certain degrees may matter for specific career tracks.
In past years, students often looked to a master's degree in business administration as the next step if they wanted a successful career in any business role. However, that may no longer be the case.
Admissions to several mid-level master's programs have declined in the past few years, discouraging students interested in pursuing higher education. As a result, some schools have cut their tuition costs, hoping to encourage students to apply. Other schools are trying a new trick: they are offering specialized business master's degrees — and it's working.
"50% of the top 25 schools in the country have developed a specialized master's program."
Getting a chance
Like students in any master's program, business students want to get a job as soon as possible after graduation. These specialized degrees give students that chance. As students are learning about a specific skill or trade for a few years, they become very attractive to employers upon graduation. For example, if people want to enter a specific field, such as marketing, accounting or finance, a specialized master's degree can help them learn more about that field. Often times, employment requires a detailed set of knowledge that can only come from years of experience. With specialized education, a person becomes more desirable in the workforce.
These programs have become more attractive to students in part because, unlike an MBA, they do not require a year of professional experience. They also help students transition easily from the subject matter they were learning in their undergraduate program.
Schools have caught on to the popularity of these programs. Approximately 50 percent of the top 25 schools in the country have developed a specialized master's program. Many programs are developed around industry's demand. So if jobs in accounting are booming, schools will develop programs that are related to accounting in one way or another. For instance, recent demand has surrounded jobs involving big data. As a result, several schools began to develop master's of science in data analytics programs simply to fill the void.
Specialized programs may also offer students a higher salary than those with an undergraduate degree who pursue a business career. For example, people who pursue a master's of science degree in finance may end up making the same amount of money as a student who earned an MBA and chose to enter the finance field.
The relevance of MBAs
So when does an MBA degree matter? Has this master's degree totally been phased out?
No, it hasn't. MBAs are still very relevant, especially if a person has plans to go into management. While a master's of science offers that specific skill set that some employers might be looking for, it doesn't offer overall general skills and a solid understanding of the business world as a whole. If people are interested in climbing the corporate ladder and becoming a manager or entering a position that oversees others, an MBA might still be a necessity. Some students are choosing to pursue a master's of science and follow it up with an MBA later. Companies are still seeking students who have an MBA.
Meanwhile, specialized degrees give people options which they didn't have before. People might be able to graduate from an undergraduate program, enter a master's of science program and immediately enter their career field with a job waiting. In a few years, if they decide they want to pursue a management job or try out a different field in business, they can return to school to get an MBA. At this point, they will have the knowledge and specific skill set for a certain field, as well as the overarching business knowledge they need to manage others and potentially run a company.
By Monique Smith
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9607526063919067,
"language": "en",
"url": "https://instantcryptocurrencyexchange.com/exchange/BTC/to/REP",
"token_count": 706,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.23046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d1ccc15a-e2a2-4dd1-957e-2f3fcd927232>"
}
|
Bitcoin is the first successful internet money based on peer-to-peer technology; whereby no central bank or authority is involved in the transaction and production of the Bitcoin currency. It was created by an anonymous individual/group under the name, Satoshi Nakamoto. The source code is available publicly as an open source project, anybody can look at it and be part of the developmental process. Bitcoin is changing the way we see money as we speak. The idea was to produce a means of exchange, independent of any central authority, that could be transferred electronically in a secure, verifiable and immutable way. It is a decentralized peer-to-peer internet currency making mobile payment easy, very low transaction fees, protects your identity, and it works anywhere all the time with no central authority and banks. Bitcoin is design to have only 21 million BTC ever created, thus making it a deflationary currency. Bitcoin uses the SHA-256 hashing algorithm with an average transaction confirmation time of 10 minutes. Miners today are mining Bitcoin using ASIC chip dedicated to only mining Bitcoin, and the hash rate has shot up to peta hashes. Being the first successful online cryptography currency, Bitcoin has inspired other alternative currencies such as Litecoin, Peercoin, Primecoin, and so on. The cryptocurrency then took off with the innovation of the turing-complete smart contract by Ethereum which led to the development of other amazing projects such as EOS, Tron, and even crypto-collectibles such as CryptoKitties.
Augur is a trustless, decentralized platform for prediction markets. Augur is an Ethereum-based decentralized prediction market that leverages the wisdom of the crowds to create a search engine for the future that runs on its own token, REP. Augur allows users to create their markets for specific questions they may have and to profit from the trading buys while allowing users to buy positive or negative shares regarding the outcome of a future event. Prediction markets are markets created to trade the probability of an event happening. The market prices indicate what the crowd thinks the probability of an event happening. Predictive markets have shown to have been effective in accurately forecasting many results however it is still not widely used due to the many regulatory hurdles involved in setting up such a market. Augur aims to set up such a market in a decentralized manner. Augur is an Ethereum-based decentralized prediction market that leverages the wisdom of the crowds to create a search engine for the future that runs on its own token, REP. Augur allows users to create their markets for specific questions they may have and to profit from the trading buys while allowing users to buy positive or negative shares regarding the outcome of a future event. Augur REP is the gambling cryptocurrency. It’s the crypto token you can use to bet on sporting events, political outcomes, economies and just about everything else in the prediction markets. Online gambling is a $52 billion a year industry. At its founding the project included Intrade founder Ron Bernstein, Robin Hanson, and Ethereum founder Vitalik Buterin among its advisers. In April 2015, Augur's first contract was uploaded to the Ethereum network.The first beta version was released in March 2016. In October 2016, all the reputation tokens that were for sale during the 2015 crowdfunding campaign were distributed to their owners on the live Ethereum network and the two largest cryptocurrency exchanges, Poloniex and Kraken, added support for these tokens on their trading platforms. The project was delayed until it was launched in July 2018.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9794303178787231,
"language": "en",
"url": "https://paigirl.com/4-important-things-small-businesses-need-to-know-about-sales-tax-filing?replytocom=2821",
"token_count": 742,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0240478515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6f376ca7-6d67-40e6-b3ff-20c88659c057>"
}
|
Small businesses have complained that the last few years have seen a lot of changes to sales tax rules. Ask any business owner about his area of concern, and he is likely to raise the entire process of sales tax filings.
This is because the process is not only complex and confusing, it also keeps changing repeatedly. In other words, the laws and rules governing sales tax issues do not seem to be fixed. They are rather quite fluid and subject to the whims and fancies of governments.
In this article, we are going to look at four top things, businesses need to know about sales tax. However, before we get to the list, let us first briefly look at Sales Tax and what it really means.
Sales Tax: Meaning and Definition
In very simple terms, a sales tax is a percentage paid by the buyer on the purchase of a tangible product or service within a geographically designated taxation jurisdiction. In other words, even though businesses do not pay the sales tax themselves (it is paid by the consumer), the responsibility of collecting it and depositing it with the authorities rests on them.
Customers who buy goods and services, which are taxable pay the sales tax in reality. The tax is collected by the business and deposited with the taxation authority. In this case, it can be a state, county, or district, which is the taxation authority.
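A minimal sketch of the collect-and-remit flow just described. The 7% rate and the receipt amounts are hypothetical, since actual rates vary by jurisdiction:

```python
def sales_tax_collected(receipts: list[float], rate: float) -> float:
    """Tax a business collects from buyers and must remit to the taxing authority."""
    return round(sum(receipts) * rate, 2)

# Three taxable sales in a jurisdiction with an assumed 7% rate:
print(sales_tax_collected([19.99, 45.00, 120.50], 0.07))  # 12.98 owed to the authority
```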
In the United States, there are no Federal Sales Tax Laws, which means that all the states are free to make and enforce their own laws. Sales taxes is one of the major sources of revenue for the states in the United States.
List of 4 Things Small Businesses need to know about Sales Tax-
1.Where do Sales Taxes Apply-
According to the law, sales taxes apply to all cash purchases made by the buyer. They also apply to purchases made with credit cards. Even if you are exchanging property, you will have to pay sales tax. The same goes for installments and EMIs. Some states also tax layaway sales and levy nexus-based taxes in their territory.
2.What are some Taxable Goods under Sales Tax-
It is common knowledge that the list of goods and products subject to sales tax is endless. These products and goods also differ from state to state within the federation. We have created a list of goods that are common to all states: automobiles, furniture, toys, electronics, gadgets, plants, books, home equipment/appliances, computer and IT parts, etc.
3.Sales Tax Laws vary from State to State-
Small businesses that are looking to expand their operations into different states need to take this factor into consideration. Many e-commerce businesses fail to take note of these differences and end up paying hefty fines in different states. Nexus taxes, too, vary from state to state, and businesses need to be aware of them.
4.Sales Tax Filing is Complicated-
As a small business owner, you might think that filing sales tax is easy. The reality is much harder. It is always a good idea to employ the services of an expert for sales tax filing. This will ensure that you never miss a deadline, have proper paperwork at hand and maintain complete compliance for your small business at every level.
The Final Word
If you are a small business looking to grow into new states and territories, you need to understand how state taxes work. As states are very particular about this revenue-generating tax, they come down harshly on businesses looking to evade or delay it.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9450544714927673,
"language": "en",
"url": "https://spice-spotlight.scot/2019/05/13/how-to-count-to-net-zero/",
"token_count": 1207,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.2138671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3e2e1e06-f782-4524-abde-e74d8466d18f>"
}
|
Counting to zero is easy. But budgeting for zero carbon, that’s more difficult.
Taking on the challenge, the Environment, Climate Change and Land Reform Committee are the first committee in the Scottish Parliament to kick-start their evidence-gathering on the 2020/21 budget. Last week they held a session on environmental tax and low-carbon infrastructure.
This week, climate deliberations continue with the Committee hearing from the UK Committee on Climate Change, whose advice has led to the Scottish Government adopting a “net-zero” greenhouse gas emissions target for 2045.
The annual cost of meeting this target is estimated to be around 1-2% of GDP (with the costs of not acting significantly higher). Clearly, if the net-zero target is to be met, climate issues will need to be a priority for Scottish budgets for years to come.
How can the Scottish Parliament learn from others?
In 2017, French President Emmanuel Macron started an initiative called the Paris Collaborative on Green Budgeting.
The Collaborative is run by the Organisation for Economic Co-operation and Development (OECD) and it aims to help countries “embed” climate and other environmental goals within national and subnational budgets. Here is the OECD’s Secretary General explaining the Collaborative in 55 seconds:
SPICe has been building links with the OECD over recent years, as we do with other international organisations – we need access to the best research and contacts in Scotland and beyond to do our job effectively. To that end, earlier this month I took the train to Paris to find out more, and to share what work is going on in the Scottish Parliament in this area.
We heard examples from France, Switzerland, Netherlands, Ireland and the European Commission of different ways to build environmental issues into financial decision-making. Across the day, one phrase stuck out in the discussions: “gilet jaunes”. This movement of protests in France – sparked by the rising cost of fuel and fuel taxes – was used by multiple people to demonstrate that, to be successful, the climate policies that governments and parliaments set though their budgets need to be coherent with a whole range of other priorities, such as poverty and inequality. The UN’s Sustainable Development Goals (SDGs), which are mapped onto the Scottish Government’s National Performance Framework, were given as a good example of priorities that embed climate, poverty, health, decent work and more.
The OECD hopes to build on countries’ existing practice and develop a green budgeting roadmap for others. This roadmap is still under development, but the OECD has outlined examples of possible green budgeting tools at different points in the budget cycle.
At the event, Ireland and the European Commission gave examples of expenditure tagging. This is an approach that identifies all spending in a budget deemed to be climate-related.
- Ireland identified over €1.6 billion of government funds in 2019 allocated to programmes which can help Ireland to achieve its climate goals, describing this approach as a “necessary first step” to green budgeting.
- The European Commission propose that 25% of EU expenditure, across the next seven-year budget framework, contributes to climate objectives. To track this, the EC are refining the EU climate marker approach which assign a weighting to budget lines based on their contribution towards climate objectives.
France and the Netherlands gave examples of environmental budget statements and regular evaluations:
- France has designed a new so-called “yellow book” published alongside its draft budget which tags environmental expenditures and describes the effect of environmental taxes on households and firms.
- The Netherlands has a general rule to evaluate all financial policy measures every seven years. For example, its recent evaluation of a 2012 tax liability reduction for sustainable energy investment found it was cost effective and had high customer satisfaction.
On fiscal sustainability, representatives from the Bank of England spoke about the macroeconomic risks of climate change and Switzerland spoke about how to deal with declining revenues from energy taxes.
What is happening in Scotland?
For ten years, the Scottish Government have published a carbon assessment of spending proposals in the Scottish Budget. This document gives an estimate of the greenhouse gases associated with the goods and services bought by the budget. What it doesn’t tell us is the outcome of this spending. The Scottish Government give the following illustration:
“For example, while the emissions associated with manufacturing and installing insulation are included, we do not count the carbon that may be saved in future as a result of making that improvement to the housing stock.”
This effect is likely to be significant for infrastructure projects that last a long time and can therefore “lock in” a pattern of emissions for many years. Our understanding of the impact of current infrastructure plans on future greenhouse gas emissions is poor; but the Cabinet Secretary for Finance, Economy and Fair Work, Derek Mackay has indicated he is open to improvements on the current high-level categorisation method.
A strong budget framework and multi-year budgeting were highlighted by the Collaborative as important foundations. Scotland’s new budget process is only one year old, but if it works as intended it should provide a strong foundation for better outcomes.
Last year, the Scottish Government started to publish annual reports tracking the implementation of the Climate Change Plan – this is the plan to deliver on Scotland’s existing climate targets. If designed and timed well, these reports have the potential to help inform the Scottish Budget process.
With the “gilet jaunes” example in mind, the Scottish Government’s Just Transition Commission is tasked with advising on “a carbon-neutral economy that is fair for all”. The Commission is expected to report near the beginning of 2021.
Counting to zero might seem easy, but that depends on where you start.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9326108694076538,
"language": "en",
"url": "https://ucnedu.org/things-to-know-before-choosing-between-mpa-and-mba/",
"token_count": 1896,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0267333984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c9cd8708-2505-477b-88d1-64b5059bbad2>"
}
|
An MBA degree typically emphasizes developing business administration and management skills. Generally, this degree proves beneficial for building a career in modern business and the private sector. MBA programs normally cover marketing, finance, management, and other associated areas.
Usual MBA programs include the following coursework:
- Role of a manager in a corporation
- Fundamentals of business
- Identify and implement business prospects
- Handling overall operations of the firm
- Implementing leadership skills in business environments
- Fundamentals of accounting and finance
A master of public administration degree imparts the knowledge and skills needed to transform institutions and address persistent social issues. An MPA course instills a profound knowledge of the public sector, non-profit management, and administration.
Usual MPA programs highlight the following:
- Way of public interactions with government bodies
- Policy evaluation
- Fiscal management
- Organizational behavior
- Administrative research and analysis
Importance of MPA for a Career in Public Administration
An MPA degree may be the right choice for a dedicated career in public or community service. The curriculum of a master's in public administration is explicitly designed to maximize knowledge of the political system. It also inculcates public service values and develops rigorous decision-making abilities.
Three common types of master’s in public administration courses are available that is designed for different careers:
- Political Science: This course emphasizes the political system, government organizations, national and international affairs. This program is best suited for graduates wishing to land a career in politics or international relations.
- Policy Analysis: This program leans more towards the academic and research areas. This degree teaches students cost-benefit analysis to assess various public policies. It is best for graduates who wish to be policy analysts or academics.
- Management and Leadership: This program helps prepare students to lead and handle non-profit organizations and public institutions at all stages. This degree is apt for professionals seeking to progress their profession to a management level inside the public or non-profit making sector.
Difference Between MPA and MBA
The significant difference between the two degrees is that a master's in public administration is usually used to pursue government jobs, whereas the MBA is commonly applied to corporate jobs. An MPA program sheds more light on governmental guiding principles, ethics, and rules, while an MBA program concentrates more on finance, marketing, human resources, and organizational leadership.
The main distinguishing features of both the master’s programs are tabulated below:
| MBA | MPA |
| --- | --- |
| MBA graduates gain managerial and analytical skills. | MPA programs focus on serving government agencies and government-run programs. |
| Most MBA programs provide optional specializations. | MPA programs also offer specializations, enabling students to customize their education according to their interests and goals. |
| The MBA program requires the completion of an undergraduate degree. | Many MPA programs either call for one or more years of work experience or require completing a semester-long internship during the program. |
| Students analyze how business managers develop and implement strategic decisions. | Students learn how the government and associated bodies function in real time and explore ways to improve them. |
| Internships offered during the final year can kick-start a career. | Internships are also offered and are an excellent way to network with potential employers and develop a resume. |
| MBA programs are available online and let students specialize in a business subfield, such as Technology Management, Strategic Management, or General Business. | Like MBA programs, many online master of public administration programs offer specializations, such as Managing Local Governments, Managing Non-Profits, Public Finance, or Leadership in the Arts. |
Reasons to Select an MPA over MBA
- The master of public administration matters more for a career in the non-profit or public sector than in the private sector.
- An MPA degree is best suited for dealing with public affairs and selecting from specializations in public management, governance, and non-profit management
- Candidates with a sharp focus on public administration are sought-after profiles, as they can develop a business while building stronger relationships in the communities.
- Working in the non-profit or government sector tends to produce results that are hard to measure. An MPA degree benefits those who are motivated by such non-measurable outcomes.
- An MBA degree is all about studying the market, whereas an MPA is about learning market failures.
- MBAs are recruited to increase profit, whereas an MPA is recruited to help lead the organization and to create a better world.
Career Prospects with a Master’s Degree in Public Administration
Career opportunities are abundant for professionals who have graduated from universities that offer public administration programs, including graduates of online master of public administration programs. They can find careers as:
- County or City Managers
- Legislators or Legislative Staff Members
- Non-Governmental Organization Directors
- Public Analysts
- Tax Examiner
- Budget Analyst
- Public Administration Consultant
- International Aid
- Fund Raising Manager
Moreover, MPA graduates often function as managers in non-profit organizations, including activist groups and charitable organizations. Furthermore, those in public safety services (police, military, and fire) pursue a master of public administration for career progress.
Ellen Johnson Sirleaf is a notable person who holds a master of public administration degree. Sirleaf, the President of Liberia, studied Economics and Public Policy at the John F. Kennedy School of Government from 1969 and graduated with an MPA degree. In 2006, she was elected as the first female head of state in Africa. In 2011, she received the Nobel Peace Prize for her efforts for women's rights.
Few of the other famous graduates of MPA programs include:
- David Petraeus, Director of the CIA
- Lee Hsien Loong, Prime Minister of Singapore
- Felipe Calderon, President of Mexico
- Ban Ki-Moon, Secretary-General of the United Nations
Transform the Public Sector with an MPA Degree
Many MPA programs are designed to transform the public sector. The following are a few options to avail:
- Handling Local Government: professionally experienced graduates can work as city managers or high-level officials inside city limits and regional governments.
- Healthcare: Similar to non-profits, healthcare services deliver care and support to people who lack health insurance. Coursework comprises methods to fundraise and to cooperate with medical professionals.
- Managing Non-profits: Non-profits help the public by rendering services to people and societies in need. Students taking up this specialization study in what way they can coordinate their efforts with local government bodies
- Leadership in the Arts: Students can take up a career in handling public art programs like a city museum.
- Public Finance: these students learn about macroeconomics and ways to examine the influence of government activities on the economy.
Students can choose from a sequence of courses or select 2–3 from the electives list, which aids in personalizing their education.
Texila American University’s Online MPA Program
The online master of public administration program is a two-year course. Candidates possessing a bachelor’s degree from a renowned university with at least three years of public sector work experience are eligible.
The program focuses on public administration principles, policy-making, management, and implementation. Candidates gain the knowledge to deal with the specific challenges that occur in the public administration domain and learn ways to develop workable solutions. The scope of the online master of public administration course is to help address new complications in public institutions.
Why Texila American University?
Texila American University, in academic partnership with the University of Central Nicaragua (UCN), offers an internationally recognized MPA program. The program is designed for the convenience of students; it primarily benefits working professionals who cannot spend much time attending full-time programs on campus. The other advantages are:
- Flexible online programs
- Interactive curriculum and module-based program
- A diverse network of students for broad exposure
- Internationally accredited degree
- Opportunity to discover the real potential as a qualified public administrator
- Guidance from faculty, student members, and academic advisors
- Up-to-date online resources
Nowadays, society's main concerns appear to be shifting towards social well-being rather than profits alone. Corporations today undertake more efforts to host social missions and contribute to social responsibility. Since more people care about social well-being, an MPA degree is a better fit for the public sector.
Professionals who are keen on pursuing a career in public administration can choose an MPA degree than an MBA.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9440925121307373,
"language": "en",
"url": "https://www.financialfreedom.guru/financial-independence/budgeting/",
"token_count": 855,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0225830078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a9a78c6e-757e-4b1d-95c7-462fcc39cb8d>"
}
|
What Is Budgeting: 3 Reasons to Budget
Updated: January 8, 2021
Financial independence takes work. However, you’ll never maintain wealth if you cannot manage your finances by budgeting. You can use the three pillars of financial independence (build wealth, invest income, and generate passive income) to make money. Yet, if your spending is out of control, you will not stay wealthy. Countless millionaires who have filed for bankruptcy have learned this lesson the hard way. Knowing how to build and stick to a budget can help you retain wealth at any income level.
The value of this skill is even more critical with today’s turbulent economy. If you want to be financially independent, you have to know how to budget.
What Is Budgeting?
A budget is a way to plan how to use your money. When you follow your income and expenses over time, you’re taking the first steps to making a budget. What doesn’t help is just stashing your receipts in a pile and forgetting them. Paying your bills on time, without reviewing them first, doesn’t aid in budget creation either.
So what do you have to do to create a budget? A proper budget should be:
- Written down on paper or digitally. It can be as simple as a written document or a spreadsheet. If you don’t want to create one yourself, there are several budgeting tools available that can make the process easier.
- Completed before the month over which it should be applied. If your budget is for May, it should be completed before May begins.
- Documenting every dollar you bring in as income.
- Accounting for every dollar spent.
The Main Reasons We Make a Budget
People choose to budget for a number of reasons. Some may be chasing a financial goal and use budgeting as a way to achieve it. We create budgets to find money for savings and retirement accounts. Budgets are useful for managing monthly financial obligations while juggling credit cards and other loans. Perhaps you're tired of just barely having enough money to get to the end of the month. You may want to build your emergency fund to cover an unforeseen event (like car repairs or losing your job). In some cases, we may have extra income and want to know the best way to manage it.
While the reasons above are all good, the three most common ones are listed below:
- To be more mindful of your money. A budget shows how much money you earn, what you spend and where you spend over a period of time. Understanding how money enters and leaves your possession helps you create a baseline that can be used for future budgets.
- To manage your finances better. By tracking how and where your income is spent, you can identify discretionary income. Discretionary income is the money that remains after you subtract rent, utilities, and other financial obligations (like student loans); a short sketch of this calculation follows this list. You can redirect your discretionary income towards savings without significantly impacting your quality of life.
- To eliminate worry. Once you know where your money is going, you can eliminate the unknowns and minimize unexpected spending. You know how much discretionary income you have, so you also know what you can afford to purchase.
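To make the idea of discretionary income concrete, here is a minimal sketch in Python. All of the figures are hypothetical and stand in for whatever your own income and obligations look like.

```python
monthly_income = 4_000  # hypothetical take-home pay

fixed_obligations = {   # rent, utilities, loans and other commitments
    "rent": 1_400,
    "utilities": 180,
    "student_loan": 320,
    "groceries": 450,
}

discretionary = monthly_income - sum(fixed_obligations.values())
print(f"Discretionary income: ${discretionary}")  # Discretionary income: $1650
```

Whatever remains after the fixed obligations is what you can redirect towards savings or spend without breaking the budget.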
You should spend the time to develop your own budget. As you stick to it, you will see more benefits develop. When you practice budgeting, you’ll soon discover it is one of the strongest tools to take charge of your financial future. With it, you have the power to control your money and make better financial decisions.
Create a Working Budget and Stick to It
By now, you should understand why budgeting is an essential habit if you want to gain financial independence. Now, and not later, is the best time to create a budget. We know that it can be hard to motivate yourself to stick to a budget, especially if it is extremely strict. Your budget should be tailored to your needs, factoring in what is important to you. This way, it will be easier for you to follow it long-term. It takes discipline and consistency to stay with your budget long enough to get the outcome you desire.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9489079117774963,
"language": "en",
"url": "https://www.heraldtribune.com/news/20161003/dennis-zink-when-profit-margins-decay",
"token_count": 1175,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.00159454345703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:54afb3ac-9a26-494c-9071-78fbb2446a4c>"
}
|
According to Investopedia, a profit margin is part of a category of profitability ratios and is calculated as net income divided by revenue, or net profits divided by sales.
Net income or net profit is determined by subtracting all of a company’s expenses, including operating costs, material costs (including raw materials), labor and tax costs from its total revenue. Profit margins are expressed as a percentage and, in effect, measure how much out of every dollar of sales a company actually keeps in earnings. A 20 percent profit margin, then, means the company has a net income of 20 cents for each dollar of total revenue.
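As a quick illustration of that arithmetic (the company and figures below are hypothetical, not drawn from any firm mentioned in this article):

```python
def net_profit_margin(revenue, total_expenses):
    """Net profit margin = net income / revenue, expressed as a percentage."""
    net_income = revenue - total_expenses
    return net_income / revenue * 100

# Hypothetical company: $500,000 in sales and $400,000 in total costs
# (operating costs, materials, labor and taxes combined).
margin = net_profit_margin(500_000, 400_000)
print(f"Net profit margin: {margin:.1f}%")  # Net profit margin: 20.0%
```

A 20.0 percent result means the company keeps 20 cents of every sales dollar, matching the example above.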
Industries where the best margins are found
Accounting services top the list at 18.3 percent, according to Sageworks, a financial information company. Other top industries include legal services and real estate leasing companies, at 17.4 percent, outpatient care centers, at 15.9 percent, and dental offices, at 14.9 percent. Of the top fifteen industries with the best margins, five are in health care and three in real estate.
What determines average profit margins in a small business?
According to Grant Houston with studioD, there are many determining factors, including the type of business, location, capital costs, taxes, labor costs, inventory, systems used and technology deployed. Small Business Administration small business loan criteria consist of companies with fewer than 500 employees and less than $7 million in annual sales, although the average small business has fewer than 20 employees and less than $2 million in sales. For example, a $2 million medical equipment and supplies company might have a 27 percent net profit margin; a computer and electronics products company at the same level might return 54 percent; and a food processing sector business might only return 10 percent.
A good way of measuring productivity is to look at the average revenue per employee, which is found by dividing total revenue by total employees. Managing inventory and cutting costs provide additional ways to maximize profit margins.
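For example, with hypothetical figures:

```python
# Hypothetical small business: $2 million in annual revenue, 18 employees.
total_revenue = 2_000_000
total_employees = 18

revenue_per_employee = total_revenue / total_employees
print(f"Average revenue per employee: ${revenue_per_employee:,.0f}")
# Average revenue per employee: $111,111
```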
Why are margins decaying?
Warren Buffett, the Oracle of Omaha, once said, “In my opinion, you have to be wildly optimistic to believe that corporate profits as a percent of GDP can, for any sustained period, hold much above 6 percent. One thing keeping the percentage down will be competition."
The average net profit margin for private companies was 7.7 percent for the 12 months ended June 30, based on Sageworks data. Clayton Browne, of studioD, points out that although retail clothing gross profit margins might seem high at better than 48 percent, after deducting operating expenses that number plunges to under 8 percent. The telecommunications industry has better than an 86 percent gross profit margin yet nets only 11 percent profit after overhead.
Competition and closings
If you drive around your neighborhood, chances are good that many stores have ‘left the building.’ Many well-known retailers, such as Sports Authority, have recently filed for bankruptcy. In 2015, store closings by RadioShack, Barnes & Noble, Macy’s, Kmart, JCPenney, Sears and Walmart were rampant.
The internet’s impact on margins
The internet is a huge factor in the competitive marketplace and affects everyone. Buying cheap electronics online, like getting a great deal on a TV, might be nothing more than someone selling from their home. These bedroom entrepreneurs drop-ship merchandise to you while they sip coffee in their underwear and make a few extra bucks in the process.
In a commodity business, you can purchase the exact same product from anyone selling it. There is no loyalty, and the best price with a reasonable reputation usually wins. This drives down margins as competition becomes fierce in the race for a buck. Think about retailers that advertise sales all the time. Do you shop there if you have to pay full boat? Probably not. How often have you priced products at brick-and-mortar stores and then ordered via the internet?
A confluence of impending changes that will affect margins
■ U.S. Department of Labor overtime pay changes. Effective Dec. 1, the exemption threshold for overtime pay at time-and-a-half pay effectively doubles from $23,660 ($11.37 per hour) per year to $47,476 per year ($22.83 per hour). Employers must pay overtime to salaried employees making below this amount, which effectively makes these employees hourly. To adjust for the net effect of this rule, employers might choose to reduce employee base pay, reduce hours worked or hire additional workers. Tracking compliance and time clocked will add to costs, further eroding profit margins.
■ Rising healthcare costs. As more insurance companies flee the Affordable Care Act (Obamacare), healthcare costs will rise and put additional financial pressure on employers and employees.
■ Increasing the minimum wage. If wages escalate, it isn’t only the bottom rung that will receive higher wages. Virtually all hourly employees will demand that they receive increases. Without a gain in productivity, this is inflationary. Cost-push inflation tends to drive up prices and decrease profits.
Even with increased payrolls, adding staff is a more significant change in smaller companies than in larger ones, and margins may be squeezed further. It is indeed a challenging marketplace.
Dennis Zink is a volunteer, certified mentor and chapter chairman of Manasota SCORE and chairman of the Realize Bradenton board. He is the creator and host of Been There, Done That! with Dennis Zink, a nationally syndicated business podcast series. He facilitates a CEO roundtable for the Manatee Chamber of Commerce, created a MeetUp group, Success Strategies for Business Owners and is a business consultant. Email him at [email protected].
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9727873802185059,
"language": "en",
"url": "https://www.lynda.com/Business-tutorials/accounting-equation/2815106/2261708-4.html",
"token_count": 808,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.053955078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:25859cd5-d480-43a7-8c46-656ae18034df>"
}
|
In this video, learn how to list out the basic elements of the accounting equation. This equation shows how everything needs to balance in accounting.
- The balance sheet is built around one of the most awesome creations of the human mind, the accounting equation. There it is. Assets equal liabilities plus equity. Now, I can tell you're underwhelmed, you were expecting a little bit more, something like Einstein's famous equation E equals MC-squared, well, in its own way, the accounting equation is just as great as E equals MC-squared. Let me tell you where this accounting equation comes from. First, the asset side. People have been listing assets for thousands of years. There's primitive written evidence that farmers were keeping lists of assets 7000 years ago in Ancient Mesopotamia. The great insight behind the accounting equation was created a little bit over 500 years ago in Italy. The traders in Venice and other traders in Italy had this insight. Listen, let's keep a list of our assets like we've always been doing, but in addition, every time we get an asset, let's also write down where we got the money to buy that asset. Simply stated. We write down the asset and we also write down the source of the financing to buy that asset. Did I borrow the money to buy the asset? Was the money invested by the owners? If I borrow the money, then liability is the name I give to the source of the financing to buy that asset. If the money was invested by owners or shareholders, I say, equity was the source of the money to buy the asset. So we've got the two sides of the accounting equation. The first side, the asset side is the real world. You can go touch a company's assets. It's cash, it's buildings, it's land. That's the real part of a company. The other half of the accounting equation just tells you where you got the money to buy those assets. Let me give you a simple example. I have a teenage granddaughter. Her name is Koby. She's a very good business person and a very good saver. Let's say that I come home from work one day and she's standing there in the house with a $100 bill. If you were her grandfather, what would be your first question? Well, you'd exchange pleasantries and then you would say, I see you got $100 there, where'd you get the $100? If you see an asset, you also want to know the source of the asset. That's what the accounting equation tells us. For example, as of September 29th, 2018, Apple had total assets of $366 billion. Where did Apple get the 366 billion to buy these assets? Well, of this total, Apple got $94 billion from long-term loans, another $107 billion came from investments by shareholders, Apple's total sources of financing were $366 billion, enough to buy that $366 billion in assets. That is the accounting equation. It seems so simple. But this simple practice of writing down assets and where we got the money to buy the assets is the foundation of all the sophisticated financial reporting that we now have in the world, and we've been using this simple system for over 500 years. The accounting equation is an awesome invention. I tip my hat to those medieval accountants in Italy who invented it. Assets equal liabilities plus equities. Continue to keep track of the assets as we've been doing for 7000 years, and thanks to the Italians of 500 years ago, we also keep track of the sources of financing to buy those assets.
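To make the balance check concrete, here is a minimal sketch. The $366 billion total comes from the transcript, but the split between liabilities and equity below is an assumed illustration, since the video only itemises two of Apple's funding sources.

```python
def equation_balances(assets, liabilities, equity):
    """The accounting equation: assets must equal liabilities plus equity."""
    return assets == liabilities + equity

# Figures in billions of dollars; the liability/equity split is hypothetical.
assets = 366
liabilities = 200   # loans and other obligations (assumed split)
equity = 166        # shareholder investment and retained earnings (assumed)

print(equation_balances(assets, liabilities, equity))  # True: 366 == 200 + 166
```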
- Describe line items that appear on financial statements.
- Differentiate between the three types of financial statements.
- Interpret current accounting issues and trends.
- Calculate the market capitalization of a company.
- Identify the most important expense for a retail company.
- Explain the use of common size financial statements.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9237834811210632,
"language": "en",
"url": "https://www.mobilize.net/resources/guides/hipaa-risks-vb6",
"token_count": 4176,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.044921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:44c3089a-4095-4724-8935-f053d2ef57ee>"
}
|
The HIPAA and HITECH regulations
The Health Insurance Portability and Accountability Act (HIPAA) is a public law enacted by the US Congress in 1996.
It has four main objectives. These include the improvement of portability and continuity of health insurance coverage.
Additionally, it aims to avoid waste, fraud and abuse in health insurance and healthcare delivery. A third goal is the reduction of costs and administrative burdens of healthcare by improving the efficiency and effectiveness of the system through the standardization of the interchange of electronic data for specified administrative and financial transactions. And lastly, it aims to protect the privacy of records by ensuring the security and confidentiality of healthcare information. So, HIPAA basically aims to:
- Improve portability and continuity of coverage
- Avoid waste, fraud and abuse
- Reduce costs and administrative burdens
- Protect the privacy of records
The legislation carries grave civil and criminal penalties for failure to comply. Civil penalties include fines that range from $100 per violation to $250,000 per calendar year, and the US Department of Justice will enforce criminal penalties which may include up to 10 years imprisonment and a $250,000 fine. (See American Medical Association. HIPAA Violations and Enforcement)
The Health Information Technology for Economic and Clinical Health Act (HITECH), which is part of the American Recovery and Reinvestment Act of 2009 (ARRA), widens the scope of privacy and security protections available under HIPAA. In turn, it increases potential legal liability for non-compliance and provides more enforcement of HIPAA rules. ARRA contains incentives related to healthcare information technology, in general, - some specifically designed to accelerate the adoption of electronic health record (EHR) systems among providers. For example, civil penalties for willful neglect are increased under the HITECH Act, extending up to $1.5 million for repeat/uncorrected violations, plus certain HIPAA security provisions directly apply now to business associates, such as software vendors providing EHR systems. The HITECH Act has focused on the establishment of a national health infrastructure and on ensuring improved privacy protections, placing both HIPAA’s Privacy Rule and Security Rule as critical challenge for healthcare providers.
HIPAA’s provisions and Information Systems
HIPAA is comprised of five Titles. Title I guarantees access, renewal and portability of health insurance. Title II addresses cost reduction, administrative simplification, and fraud and abuse. Title III establishes medical savings accounts. Title IV sets group plan regulations; and Title V encompasses revenue offsets.
The provisions with the greatest impact to Healthcare Organizations are those contained in Title II, which call for the development of national standards to protect the privacy of Americans’ healthcare records. This title, known as the Administrative Simplification provision, requires the establishment of national standards for electronic healthcare transactions and national identifiers for providers, health insurance plans, and employers. It constitutes a method of making business practice uniform in the areas of billing, claims, computer systems and communication so that providers and payers do not have to change the way in which they interact with each other through each other's proprietary systems. That includes activities such as enrolling an individual in a health plan, paying insurance premiums, checking eligibility, obtaining authorization to refer a patient to a specialist, processing claims or notifying a provider about the payment of a claim. This will reduce costs and improve efficiency through the implementation of a standardized electronic data interchange. In turn, these savings would be available to toward improving of healthcare quality and availability. In fact, the net savings over 10 years were estimated at $12.3 billion. However, this title also includes a series of regulations oriented towards guaranteeing the privacy and security of the critically sensitive data involved in these systems.
In healthcare today, reliable information about individuals is critical to providing high quality coordinated care. Data corruption or inaccuracy can have life-threatening consequences. As well, numerous forces have been driving the healthcare industry towards advances in the use of health information technology, such as the potential for reducing medical errors and healthcare costs, and increasing the patients’ involvement in their own health care.
HIPAA created specific requirements for managing health information privacy and security, dramatically changing the legal and regulatory environment for managing patient medical data. One of these mandates is to protect health information by establishing transaction standards in security and privacy for the exchange of health information.
HIPAA: The Security Rule
There are a series of regulations, called the “Security Rule”, which specify administrative, physical, and technical safeguards for covered entities, establishing standards for all health plans, clearinghouses and storage of healthcare information to properly ensure the confidentiality, integrity and availability of electronic protected health information:
- Confidentiality assures that data is shared only among authorized persons or organizations.
- Integrity assures that data is accurate, authentic and complete, and that cannot be changed unless an alteration is known, required, documented, validated and authoritatively approved.
- Availability assures that systems responsible for delivering, storing and processing critical data are accessible when needed, by those who need them, under both routine and emergency circumstances.
These standards apply to healthcare providers, insurance plans, and data clearinghouses:
Who needs to comply with the Security Rule?
HEALTHCARE PROVIDERS
- General practitioners
- Hospitals and clinics
- Diagnostic, laboratories and imaging centers
- Nursing Homes
- Ambulance Services
- Dental Services
- Mental Health Services
- Physical therapy and other outpatient services
HEALTH INSURANCE PLANS
- Major medical / traditional feefor- service plans
- Managed care plans: HMO. PPO, POS, EPO
- Consumer or self-directed plans
- Self-funded corporate plans
- Government programs: Medicare, Medicaid, and the Veterans Health Administration
CLEARINGHOUSES
Entities that process standard and non-standard health information they receive from other entities into standard electronic formats for purposes of processing insurance claims, patient billing or the storage of patient data.
HIPAA: The Privacy Rule
HIPAA also includes a “Privacy Rule”, which establishes the national standards as to who may have access to Electronic Patient Health Information (ePHI). The rule requires appropriate safeguards to protect the privacy of personal health information, and sets limits and conditions on the uses and disclosures that may be made of such information without patient authorization. (See U.S. Department of Health and Human Services. Health Information Privacy: The Privacy Rule.)
While the Privacy Rule sets the standards for ensuring that only those who should have access to this data will actually have access, it is the requirements of the Security Rule which have the largest impact on healthcare organizations in terms of both technical and organizational compliance challenges. Basically, the HIPAA Security Rule makes sure the ePHI is not disclosed improperly, and that hackers can’t easily gain access to Electronic Medical Records (EMRs). Protection of data from unauthorized access, whether external or internal, stored or in transit, is all part of the Security Rule. (See U.S. Department of Health & Human Services. Health Information Privacy: The Security Rule)
To accomplish this, each covered entity is required to meet 3 basic conditions:
- Assess potential risks and vulnerabilities to the individual health data in its possession.
- Develop, implement, and maintain appropriate security measures, which must include, at a minimum, the following requirements and implementation features:
- Administrative Procedures
- Physical Safeguards
- Technical Security Services and Mechanisms
- Ensure these measures are documented and kept current
Data covered by this rule may reside, for example, in servers, workstations, networks, terminals, peripherals, web sites, application service providers and claims processing systems.
Implementing the necessary safeguards required by HIPAA implies the requirement for more sophisticated technologies than what has traditionally been available in the 1990s. The security standards do not dictate or stipulate the use of specific technologies, but legacy software carries an increased risk of systems compromises. With the prospect of severe civil and criminal penalties from the Department of Justice or a state’s Attorney General’s office even for minor infractions, it is critically important to take the necessary precautions and ensure full compliance.
Appropriate technical safeguards include:
- Ensuring that applications are built using the latest technologies which incorporate advances in security features and best-practices.
- Mitigating the risk of breaches that make critical data vulnerable.
- Making sure that all software running on the systems is currently supported by the vendor as this guarantees access to updates and patches providing an added layer of security.
Protected Health Information
Protected Health Information, also referred to as “PHI”, can be breached in any of the commonly recognized data states described below. Data is considered “in motion” while it is moving through networks, over wireless transmissions such as communications with a clearinghouse or an email or by means of fax. Data is considered “at rest” when it is residing in a file system, database or any other structured form of storage. It can all be “in use” when it is being updated, created or reviewed. Likewise, its “disposed” state, whether electronic or paper records, is also a state in which the PHI should be unusable, unreadable or indecipherable to unauthorized individuals, with the minor exception that PHI is no longer “protected” once it has been “de-identified”.
PHI is rendered unusable, unreadable or indecipherable to unauthorized individuals and thus in compliance if one or more of the following applies:
- Electronic PHI when in motion and at rest must be encrypted as specified in the HIPAA Security Rule, provided the encryption key has not been breached. While at rest, valid encryption processes include those consistent with the National Institute of Standards and Technology (NIST) Special Publication 800-111, Guide to Storage Encryption Technologies for End User Devices. While in motion, data must comply with a different set of standards, such as Transport Layer Security (TLS) and Virtual Private Networks (VPNs) such as Internet Protocol Security (IPsec) and Secure Socket Layer (SSL). A minimal code sketch of encryption at rest follows this list.
- For electronic media in the disposed state it must have been cleared, purged or destroyed consistent with NIST’s Guidelines for Media Sanitation such that the PHI cannot be retrieved. Media sanitation is further divided into four categories: disposal, clearing, purging and destroying.
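As a hedged illustration of what “encrypted at rest” can look like in practice, here is a minimal sketch using Python's cryptography package (Fernet, which is built on NIST-approved AES). This is not a statement of what any particular EHR vendor does, and real key management (a key-management service, access controls, key rotation) is deliberately out of scope here.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Illustration only: in a real system the key would live in a key-management
# service and never sit alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

phi_record = b"Patient: J. Doe | DOB: 1970-01-01 | Dx: ..."  # hypothetical PHI
ciphertext = cipher.encrypt(phi_record)  # what gets written to storage
plaintext = cipher.decrypt(ciphertext)   # authorised read path
assert plaintext == phi_record
```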
Technical safeguards include access, audit and integrity controls in addition to transmission security. Technical policies and procedures that allow only authorized personnel to access electronic PHI must be implemented.
Hardware, software and other mechanisms to record and monitor access and other activity in the systems that come in contact with electronic PHI need to be created. Electronic measures must be put into place to ensure that this information is not improperly altered or destroyed. And finally, there needs to be a means to guard against unauthorized access to electronic PHI while it is in transmission over an electronic network.
Microsoft’s Visual Basic 6 and the .NET platform
Over the past 10 years, each release of the .NET platform and its corresponding programming languages and development environments has had a particular theme that was marketed louder than others, e.g., managed code, generics, Language Integrated Query (LINQ) and the Dynamic Language Runtime (DLR). Beyond these, there have been countless other improvements in C# and VB.NET over the legacy platform, Visual Basic 6, each offering to remedy previous shortcomings and providing advances in security features and best-practices.
The new development environments offer better modeling of business objects with increased support for design patterns and efficient architectural options. With .NET, the introduction of the “try/catch” syntax in Visual Basic allows for improved error handling techniques over the previous “On Error” approach. Strict type checking and tighter control on variable scope and member permeability offer modern data typing disciplines and simpler data validation. Long gone are the days of the “variant” – often an entry point for hackers and a source of performance limitations with large memory overheads. New support for the Long datatype will reduce the messy hacks previously used to support 64-bit numerical operations. The elimination of versioning problems typically associated with “DLL hell”, the inability to create Windows services and dependencies on fragile COM Registry entries are no longer an issue with the .NET Framework. Functionally-equivalent code written in .NET simply requires less code. Less code generally translates to fewer bugs and entry points for potential hackers. Additionally, these .NET languages provide for true support for multithreading and 64-bit application development. More than just allowing for true object-oriented programming, .NET programming languages offer a wider degree of options for language paradigms. All of this amounts to serious improvements over Visual Basic 6.
The last release of Visual Basic 6 arrived in 1998 and its mainstream support ended in March of 2005. Microsoft even ended its extended support in March 2008. Visual Basic 6 is quickly approaching its 15th birthday and is clearly no longer the latest technology. Real-world advances in security and improvements in application development bestpractices are no longer available to Visual Basic 6 applications. Without access to updates and patches, it’s no longer feasible to mitigate the risk of vulnerability to security breaches of critical data.
While Visual Basic 6 will indeed hobble on in Windows 7 environments until about 2020 according to Microsoft (See Support Statement for Visual Basic 6.0 on Windows Vista, Windows Server 2008 and Windows 7), it will most likely go no further. The ability for software developers to respond to serious lingering issues and new threats on the unsupported platform continues to wane. Continuing down the road of VB6 obsolescence creates a real security risk in terms of human resources. In addition to all the benefits mentioned above, a modernized application creates an environment that helps to sustain high job satisfaction and ultimately greater retention, both ingredients vital to HIPAA compliance as they lower the risk of negligent data handling and theft. Making the jump to the .NET platform, whether VB.NET or C#, is a low risk strategy for healthcare providers, insurance plans and clearinghouses, despite what might seem like the daunting task of migration. Such a move, however, will establish a technology foundation capable of meeting current and future needs, especially in the face of rules that are expected to change often.
Preserving VB6 Applications
As desktops continue to be updated to the latest operating system versions or service packs, it is clear that eventually the older Visual Basic 6 applications will cease to function. But before the application quits working altogether, there will be noticeable consequences of preserving the legacy application.
Consequences of preserving your VB6 application on Windows 7 include:
- User Interface Issues
- Decreased Functionality
- Broad Security Risks
- Performance Latency
- Lack of Technical Support
- Forward Incompatibility
Many of the more commonplace consequences will include user-interface issues. Many user controls previously compatible with Windows XP and other versions will no longer be compatible with Windows 7. There are also Windows API calls that are no longer available. Similarly, the “SendKeys” functionality is no longer supported.
The lack of technical support is a big issue. The Visual Basic 6 development environment doesn’t run on Windows 7 without the use of a virtual machine. There has been no support available for the development environment since April 8, 2008. With lots of potential security risks, these applications often have to “run as administrator”. Business operations may require integration between this older application and new applications; such integrations may prove unfeasible or very costly.
Even Microsoft recognized the near-futility of running a VB6 application on Windows 7 by providing a free virtual PC for Enterprise and Ultimate editions of the operating system. While this may offer some relief, many applications experience performance latency and hanging.
Beyond VB6, managed code reduces vulnerabilities that are inherent to programmers, such as having to handle their own memory management. Managed code also reduces risks of unintentionally opening up security holes that are inherent in low-level system interactions. The .NET Framework offers better coding models to this end. For instance, the Common Language Runtime (CLR) provides file format and metadata validation. Microsoft Intermediate Language (MSIL) code verification ensures type safety, prevents bad pointer manipulations and virtually eliminates buffer overflow vulnerabilities. The integrity of “strong-named” assemblies, in lieu of traditional GUIDs, is verified using a digital signature that ensures that the assembly was not altered in any way since it was built and “signed”. This means that attackers cannot alter your code in any way by directly manipulating the MSIL instructions. From a security perspective, .NET managed code offers significant improvements over Visual Basic 6. (See https://msdn.microsoft.com/en-us/library/cwk974ks%28vs.71%29.aspx and https://msdn.microsoft.com/enus/library/ff648652.aspx)
User and Code Security
Both role-based and code-access security are layered on top of Windows security in the .NET Framework. While role-based security controls user access to application-managed resources, code-access security is concerned with which code can access which resources and perform which privileged operations. For Web applications, this is an enormously beneficial security feature because it restricts what an attacker is able to do if they manage to compromise the Web application process. This feature also provides application isolation – particularly important for hosting companies. The two security features offer a great advantage over traditional Visual Basic applications.
With HITECH widening the scope of compliance concerns for HIPAA’s Privacy and Security Rules, participants in the health care industry, including providers, insurance companies, clearinghouses and software vendors, not just the corporate entities but employees and business associates as well, should be aware of the possible enforcement measures that could be taken against them if standards aren’t met. Compliance issues can now result in felony prosecutions of up to 10 years in prison as well as millions of dollars in civil penalties. The security provisions of these laws apply directly to software vendors who provide the systems to the industry and those that administer the systems. They will now be held legally accountable for the confidentiality, integrity and availability of their patient data.
While HIPAA and HITECH do not call for the use of any specific software, they do suggest using software that has vendor support and access to updates and patches that will reduce the risk of non-compliance. So the VB6 IDE, which Microsoft stopped supporting years ago, could represent a violation, even though the runtime is still OK. Moreover, even if a legacy VB6 application is able to run on Windows 7, it is likely to encounter issues – many of which may compromise security, availability or functionality. Upgrading to .NET makes it easier to implement technology to stay in compliance with other areas of HIPAA/HITECH, like keeping the data secure in transmission and encryption. Regardless, not having a compliance plan in place will be considered “willful neglect” by enforcement authorities.
Software companies and their developers can no longer afford to treat security as an afterthought. They must ensure that applications being built and supported are using the latest technologies, incorporating recent advances in security features and best-practices. Penalties are now being imposed ( https://www.hhs.gov/ocr/privacy/hipaa/enforcement/examples/index.html ) as attorneys general offices nationwide seek out the ever-widening options and availability of HIPAA enforcement training. (https://www.workplaceprivacyreport.com/tags/hipaa-enforcement-training/) Fortunately, Microsoft has placed security-related features at the core of the .NET Framework and forces developers, regardless of carelessness or lack of experience, to address security through managed code, role-based and code-access security, and the rich libraries in the .NET security namespaces. The necessary next step for many organizations is to migrate their Visual Basic 6 legacy application to .NET to remove completely the risk of VB6 obsolescence.
Call us today at 1-425-609-8458 or email us at [email protected].
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9425876140594482,
"language": "en",
"url": "https://comprara.com.au/procurement-glossary/buyers-market/",
"token_count": 194,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.11572265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:26d32535-aa8c-43a1-9228-4d1bba91022d>"
}
|
A buyers market occurs when there is an excess of supply over demand and buyers have many alternative sources of supply for goods and/or services. Supply-demand imbalances occur for a variety of reasons, including economic downturn, technological development and the introduction of free trade, any or all of which can cause markets to become buyers’ markets, at least in the short term. In the longer term, suppliers faced with dwindling margins exit the market and a balance between supply and demand is restored.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9134281277656555,
"language": "en",
"url": "https://jargonism.com/words/678",
"token_count": 64,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.029052734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8127559e-2d75-4057-856e-76bc14e4c829>"
}
|
Definition: Acronym for quantitative easing, a monetary policy in which a central bank increases the money supply, typically by purchasing assets from financial institutions and thereby providing them with capital at a cheap rate.
Example: QE is driving up home prices.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9700831174850464,
"language": "en",
"url": "https://www.dlacalle.com/en/u-s-budget-spending-is-the-problem/",
"token_count": 1070,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.283203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:69e0df2e-38b0-46dd-aa30-a1702f2a892d>"
}
|
Every time there is a budget debate, politicians from both parties will discuss the deficit and spending as if the former did not matter and the latter could only increase. However, the main problem of the US budget in the past four decades is that total outlays rise significantly faster than receipts no matter what economic growth or the revenue stream does. For example, in the fiscal years 2018 and 2019 total outlays rose mostly due to mandatory expenses in Social Security, Medicare, and Medicaid. No tax revenue measure would have covered that amount.
Total outlays were $4,447 billion in 2019, $339 billion above those in FY 2018, an 8.2 percent increase. No serious economist can believe that any tax increase would have generated more than $300 billion of new and additional revenues every year.
The idea that eliminating the tax cuts would have solved the deficit is clearly debunked by history and mathematics. There is no way in which any form of revenue measure would have covered a $339 billion spending increase.
No serious economist can believe that keeping uncompetitive tax rates well above the average of the OECD would have generated more revenues in a global slowdown. If anything, a combination of higher taxes and weaker growth would have made the deficit even worse. Why do we know that? Because it is exactly what has happened in the Eurozone countries that decided to raise taxes in a slowdown and it is also what all of us witnessed in the United States when revenue measures were implemented.
The US was maintaining a completely uncompetitive and disproportionately high corporate income tax (one of the highest in the world) and all it did was to make it similar to other countries (the Nordic countries have corporate income tax rates of 21.4% Sweden and 22% Denmark, for example).
What happened to corporate tax receipts before the tax cut? They showed the evidence of a weakening operating profit environment: corporate tax receipts fell 1% in 2017 and 13% in 2016. The manufacturing and operating profit recessions were already evident before the tax cuts. If anything, reducing the corporate rate helped companies hire more and recover, which in turn made total fiscal revenues rise by $13 billion to $3,328 billion in the fiscal year 2018, and rise by $133 billion in 2019, to $3,462 billion, both above budget, according to the CBO. Remember also that critics of the tax cuts expected total receipts to fall, not increase.
Mandatory spending is now at $2 trillion of a total of $4.45 trillion outlays for the fiscal year 2019. This figure is projected to increase to $3.3 trillion by 2023. Even if discretionary spending stays flat, total outlays are estimated to increase by more than $1 trillion, significantly above any measure of tax revenues, and that is without considering a possible recession.
Any politician should understand that it is simply impossible to collect an additional $1 trillion per year over and above what are already record-high receipts.
For 2020, tax receipts are estimated at $3,472 billion compared to $4,473 billion in outlays, which means a $1,001 billion deficit. With outlays consistently above 20% of GDP and receipts at 16.5% average, anyone can understand that any recession will bring the gap wider and deficits even higher.
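The arithmetic of that gap, in a short sketch (the dollar figures come from the paragraph above; the GDP shares are the 20% and 16.5% averages it cites):

```python
receipts_2020 = 3_472  # estimated tax receipts, $ billions
outlays_2020 = 4_473   # estimated outlays, $ billions

deficit = outlays_2020 - receipts_2020
print(f"Deficit: ${deficit:,} billion")  # Deficit: $1,001 billion

# The structural gap as a share of GDP, using the cited averages.
outlays_share, receipts_share = 0.20, 0.165
print(f"Gap: {100 * (outlays_share - receipts_share):.1f}% of GDP")  # 3.5%
```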
Deficits mean more taxes or more inflation in the future. Both hurt the middle class the most. More government spending means more deficit, more debt, and less growth.
When candidates promise more “real money” for higher spending they are not talking of real money. They talk of real debt, which means less real money into future schools, future housing, and future healthcare at the expense of our grandchildren’s salaries and wealth. More government and more debt is less prosperity.
Anyone who thinks that this gap can be reduced by massively hiking taxes does not understand the US economy and the global situation. It would lead to job destruction, corporate relocation to other countries and lower investment. However, even in the most optimistic estimates of tax revenues coming from some politicians, the revenue-spending gap is not even closed, let alone a net reduction in debt achieved. The proof that the US problem is a spending issue is that even those who propose massive tax hikes are not expecting to eliminate the deficit, let alone reduce debt; that is why they add massive money printing to their magic solutions.
Now, let us ask ourselves one question: If the solution to the US debt and deficit is to print masses of money, why do they propose to increase taxes? If printing money was the solution, the Democrats should have massive tax cuts in their program. The reality is that neither tax hikes nor monetary insanity will curb the deficit trend.
No tax hike will solve the deficit problem. Even less when those tax hikes are supposed to finance even more expenses. No amount of money printing will solve the financial imbalances of the US, it only increases the problem. If money printing was the solution, Argentina would be the highest growing economy in the world.
If the US wants to curb its debt before it generates a Eurozone-type crisis that leads to stagnation and high unemployment, the government needs to really cut spending, because deficits are soaring due to ballooning mandatory outlays, not due to tax cuts.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9396010637283325,
"language": "en",
"url": "https://www.greennudge.sg/post/green-nudge-explains-the-resource-sustainability-act-and-what-it-is-all-about",
"token_count": 2853,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.07080078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:06496c1a-048f-4d78-8f57-ba8ee2bca292>"
}
|
Green Nudge Explains - The Resource Sustainability Act and What it is All About
Mar 19, 2021
By Audrey
Heard about the Resource Sustainability Act but not sure what it does and how it will enhance our nation’s sustainability efforts over the next few years? Our intern Audrey breaks it down and shares more about the Act.
The Resource Sustainability Act is a landmark legislation that was officially passed on September 4, 2019. It has been over a year now since the bill was passed by parliament, but what is it really about?
Getting the (Resource Sustainability) Act Together
In short, the Resource Sustainability Act is a law that will help Singapore to become more sustainable by introducing guidelines to reduce and recycle the waste we produce. Although there have been many previous attempts to go green that have focused on consumer behavior, like campaigns to get Singaporeans to raise recycling rates, the Act will shift some of that responsibility up to big businesses that actually profit from those products.
To effectively cut down on waste, the legislation will target three main waste streams that are likely to cause the most problems in the future: e-waste, packaging waste, and food waste.
1. Electronic Waste (E-waste)
E-waste, or unwanted electrical and electronic equipment (EEE), refers to electronic items that are no longer needed, ranging from small items like mice and thumb drives to large items such as washing machines, televisions or refrigerators.
“Why is this important?”
Because they contain numerous components including plastic, metals and potentially harmful liquids, these items are not easily disposed of and require additional treatment. Currently, only smaller items such as cables, cameras and mobile phones are being recycled, a process that extracts the valuable metals for reuse, and only if these are placed in recycling boxes. Larger items like refrigerators are not easily transported and thus are not yet commonly recycled. Most of this waste is still being thrown away as general waste. As a result, there is a lot more that we can do to turn this unwanted equipment into meaningful materials.
“How does the RSA help?”
The Resource Sustainability Act makes manufacturers and retailers of EEE responsible for the recycling of their products in three ways. First, all EEE producers have to register with the National Environment Agency to supply products in Singapore. Secondly, producers that supply more than a set amount of EEE have to be licensed under the Producer Responsibility Scheme (PRS). Finally, the PRS operator has to create a recycling system for consumer products while large retail stores provide on-site e-waste collection centres.
What this means is that instead of allowing any of these items to be randomly thrown away and incinerated, suppliers now must take responsibility to ensure that the items are properly managed, both before they are delivered to users AND after users have finished using their products.
This is a big deal because producers or suppliers who bring in these products have to decide on and create ways for consumers to dispose of their e-waste properly. And the onus isn’t so much on the consumers, i.e. us, to figure out what we can do with our waste. It doesn’t mean we can now shake off the task of recycling. Rather, it makes it so much easier for consumers to send these items to be properly disposed of and treated.
2. Packaging Waste
Enjoying online shopping and the joy of receiving items from your favourite store? In order for them to deliver items to you in good condition, many of them have to package them properly using materials like bubble wrap or cardboard. But these items are often thrown away the moment we receive them.
“Why is this important?”
Packaging material is a huge waste category in Singapore. HUGE. It made up about one-third of the total household waste produced in 2018. And with more people buying things online, or having them delivered, more packaging will be used. And that’s not all. That bag you took from the supermarket, that additional box you took from the stall. These all add up at the end of the day.
“How does the RSA help?”
The Mandatory Packaging Reporting (MPR) framework requires producers of packaging materials and packaged goods to report the amount and type of packaging they release in the market. On top of that, producers have to submit plans to reduce, reuse, and recycle the packaging waste they create.
While it’s not going to overhaul the demand for packaging (we reckon that consumers will continue to spend if we spy a bargain), it helps companies make sure that they cut down on packaging before it is even handed out. And if there are channels available for recycling, this helps make recycling more accessible. Hopefully we can even see recycling boxes in the shops that we buy the items from!
3. Food Waste
“Why is this important?”
Food waste is a big thing in Singapore because we eat a lot! Think of all the good food that we have in Singapore, from laksa to lontong, salads to steaks, seafood to desserts. We eat so much food that a good portion of it goes to waste. And it seems such a pity that we have to throw away good food to be incinerated. Especially when a lot of it is still good, edible food, it just does not seem right.
“How does the RSA help?”
To target food waste, the Act helps to classify two main types of food waste - avoidable food waste and unavoidable food waste. Avoidable food waste is waste that could have been prevented with better management, such as expired food or leftovers. Unavoidable food waste includes foods that aren't meant to be eaten, such as bones or eggshells.
The Resource Sustainability Act focuses on the latter waste stream by creating a framework for food separation. First, new building designs for large food waste generators like hotels and malls have to include an on-site food waste treatment centre. After all, if the waste will be generated, then it is best to tackle these items first. Then, in a few years, large food waste generators will have to segregate food waste for more effective treatment. The treated waste will eventually be made into things like animal feed, compost, and biogas.
This food waste doesn’t just disappear into thin air. By treating it, the food waste becomes useful fertiliser which can then be used for gardening, farming and community purposes, so a smaller amount of waste is sent to the incineration plant and landfill. Alternatively, it is turned into harmless water which is then introduced into our water stream. All is well!
Impact on Consumers
So the Resource Sustainability Act appears to be holding big businesses accountable for the waste they produce, but how will this affect you or me as an individual consumer?
Although most of the changes are happening behind the scenes at corporations, there are going to be a few differences that will trickle down to us consumers too. These aren’t necessarily going to be fixed responsibilities though, more like opportunities to reduce the waste you generate as big companies start becoming more sustainable.
For example, consumers can dispose of e-waste through large electronic retailers when collecting a new device, making things more convenient for you, and more beneficial for the environment.
Or, if supermarkets decide to reduce plastic packaging waste at stores by getting rid of plastic bags, you would need to adjust your shopping habits by taking actions such as bringing reusable tote bags. Little improvements like this mean that the way the public consumes will eventually change and affect how you shop as a customer if you aren't willing to adapt.
Our Take on the Act - The Yays
Singapore has a problem with waste, there’s no getting around it. It has been estimated that Pulau Semakau will be completely full by as early as 2035, a full decade earlier than it was supposed to last.
One of the reasons that the Resource Sustainability Act is so important is because it can extend the lifespan of the landfill by recycling materials instead of just dumping them. In other words, Singapore wants to create a circular economy where materials are reused for as long as possible to draw out the maximum value. For example, materials like gold in e-waste can be collected and put back into the economy instead of being thrown in a landfill.
Not only does the Resource Sustainability Act strengthen the economy by closing the supply chain and keeping resources within Singapore, but it also creates more job opportunities as new recycling facilities need to be designed and built.
With a strong focus on economic benefits, it is important to keep in mind that solving environmental issues should be the priority. You might not even notice the consequences of the waste you throw away, but environmental issues are affecting everybody. Even though e-waste makes up less than 1% of the total waste produced by Singapore, electronic equipment can release chemicals like refrigerants that harm our environment and health. When food waste rots, greenhouse gases like methane are produced. And when plastic packaging is incinerated, toxic fumes and carbon emissions are released. These are pressing concerns; however, these issues are only pieces of a larger environmental problem.
Our Take on the Act - More Yays Please
Just targeting three waste streams for economic benefit could potentially backfire if we lose sight of the original goal: making Singapore more sustainable. For example, by emphasizing recycling as an easy solution to reduce waste, we miss the fact that the recycling system doesn’t actually effectively treat our waste.
While it is important to also develop the economy, we have to be sure that our intentions to be more environmentally conscious remain strong so we won’t confine ourselves to focusing on parts of a big problem.
As environmental issues like global warming and plastic pollution get worse, we can’t forget that all our actions have an impact. If individuals, corporations, and the government work together, only then can we make significant changes that will benefit both the economy and the environment. This shift in consumer behavior is still a necessary step to make Singapore more sustainable, and with the support of the Resource Sustainability Act, you can be a part of the journey to go green.
This article was written by Audrey, one of our Green Nudge interns :)
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8753271102905273,
"language": "en",
"url": "https://www.ingwb.com/insights/circular-economy-event/from-a-linear-to-circular-value-chain-while-protecting-and-recovering-resources-value",
"token_count": 257,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.035400390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3ee03d51-2196-41e4-898a-10d360da0d86>"
}
|
From a linear to circular value chain while protecting and recovering resources value
Summary of the contribution of Mr. Justin Keeble, managing director, Accenture.
There are five key circular economy business models:
1. Circular suppliers, who use renewable, recycled or biodegradable inputs instead of non-renewable resources, such as bioplastics or Crailar, which makes a cotton-like fibre that uses 17 litres of water per kg rather than the 2,000-29,000 litres used for cotton.
2. Resource recovery – General Motors, for example, has a commitment to zero waste across all its manufacturing sites. Today it recycles 90% of its materials and generates $1bn a year from its own waste.
3. Product life extension – Caterpillar remakes about 6,000 different parts and in 2012 it remanufactured more than 73,000 tonnes of material after taking back 2.2 million end-of-life components.
4. Sharing platforms for selling, lending, bartering and gifting, such as AirBnB, BlaBlaCar or JustGiving.
5. Products as a service – from Philips selling light instead of lights to SolarCity selling solar power rather than solar panels.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9277117848396301,
"language": "en",
"url": "https://www.myassignmenthelp.net/project-management-assignment-help/manage-project-scope",
"token_count": 8675,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.04150390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:38703dc0-c110-478a-9c82-3b0818d2b2ca>"
}
|
Manage project scope
What is a project?
Before we delve into project time management, maybe we should take a moment to define what a project is.
- A project has a single objective that must be accomplished through the completion of tasks that are unique and interrelated
- Projects are completed through the deployment of resources
- Projects have scopes, schedules, and costs and are accomplished within specific deadlines, budgets, and according to specification
And to summarise, a project is a sequence of unique, complex, and connected activities having one goal or purpose and that must be completed by a specific time, within budget, and according to specifications.
Project Scope Management
According to the PMBOK, Project Scope Management includes the processes required to ensure that the project includes all the work required, and only the work required, to complete the project successfully. It is primarily concerned with defining and controlling what is and is not included in the project.
Project Scope Management includes 5 processes:
- Initiation – authorising the project or phase.
- Scope Planning – developing a written scope statement as the basis for future project decisions.
- Scope Definition – subdividing the major project deliverables into smaller, more manageable components.
- Scope Verification – formalising acceptance of the project scope.
- Scope Change Control – controlling changes to project scope.
A project generally results in a single product, but that product may include subsidiary components, each with its own separate but interdependent product scopes. For example, a new telephone system would generally include four subsidiary components – Hardware, Software, Training and implementation.
Completion of the project scope is measured against the project plan, but completion of the product scope is measured against the product requirements. Both types of scope management must be well integrated to ensure that the work of the project will result in delivery of the specified product.
Initiation - Conduct project authorisation activities
Before any project can advance, a formal authorisation of its scope must be achieved and documented. Such authorisation may be subject to needs and risk assessment, analysis of needs vs. conditions, feasibility studies and other equivalent efforts that are designed either to secure the formal approval of the scope or to determine that the project is high risk, not feasible or not viable.
In some cases, a portion of the work will be carried out prior to obtaining formal approval. This is done to support and secure the formal approval, when the work done serves as a proof of feasibility or greater readiness for the project.
Project scope isn’t developed and presented to the sponsor in one go; it’s a process.
Develop and confirm procedures for project authorisation with an appropriate authority
INPUTS - The first step in preparing to obtain project authorisation is collating the inputs to initiation, which include:
1. Product/Service description
Documents describing the product or service, including behaviour, characteristics and the business reason for its proposed creation.
2. Strategic plan
Strategic planning is how an organisation or an individual defines their objectives, values and missions and the way to achieve them. Project management is the discipline of planning, organizing, motivating, and controlling resources to achieve specific goals. A project is a temporary endeavour with a defined beginning and end, undertaken to meet unique goals and objectives.
A strategic plan by the organisation that undertakes the project should be considered and may be included in the inputs to initiation, as the project should coincide with the organisation’s goals, objectives and values. If it isn’t, issues may arise in a later phase of the project.
3. Selection criteria and business reason
Innovative ideas are important for organisations, but projects are typically associated with the risk of losing the investment, losing time, damaging processes or reputation, causing indirect damage to other products and services, etc. This is why selection criteria must be put in place to determine which ideas make the cut and are “worthy” of a project.
The business reason for the project and the details explaining why the idea is worthy of turning in to a full scale project are important input to obtaining project authorisation.
4. Historical information
Although not always available, historical information can help the authorising body or individual make a good decision. Similar previous projects may shed light on the process, cost, issues and results involved with the project you are about to commence.
Think of yourself as the financing body that provides authorisation to projects. You would probably want to see as much information and detail as possible about the reason for the project, why it will pay off and how it is going to be managed smoothly to deliver the expected outcome.
Other tools and techniques for initiation include modelling techniques, in which models are made to support the reason and logic for starting the project (decision models, diagrams, mathematical models, research, analytical hierarchy, etc.).
Another source of information is Expert Judgement. This may be achieved within the organisation or externally and will reinforce other documentation.
Now that you have collated all the relevant information you possibly could as inputs to initiation, it is time to create the documents that will formally authorise the project:
1. PRODUCT DESCRIPTION
Product/service description incorporates product/service requirements that reflect agreed-upon customer needs, together with the product design. It explains how the product/service operates, behaves and is used in order to address an agreed-upon customer need.
2. THE PROJECT CHARTER.
This document holds all the relevant information for a decision maker to know about the project. It includes the business reason (needs identified) and the product/service description.
It is better for project charters to be developed by a party external to the project as that will allow:
- An objective interpretation of data
- The project manager to allocate and apply resources as determined.
If the organisation is contracted to produce a product or a service to a client, the contract may serve as the project charter.
3. CONSTRAINTS – These are limitations on the project that are likely to affect your ability to manage the project towards successful delivery. These can be internal, like dire financial straits for the performing organisation, or external, like environmental constraints.
The more specific and detailed you are with the assessment and description of project constraints, the better you will manage the project around the constraints and find alternative ways of achieving your goals and objectives.
4. ASSUMPTIONS – Every project operates under a set of basic assumptions. These are factors that you consider to be certain and that will affect project planning. You must make assumptions, and that involves an element of risk because things may change. This is why assumptions must be validated throughout the project; if they are proven wrong, then project planning must change accordingly.
We discussed the documents and data included in the initial scope planning:
- Product description
- Project charter
- Constraints
- Assumptions
All of these documents are inputs for the next step in scope planning.
Scope Planning doesn’t stop with the creation of the four documents mentioned above. In fact, this is an ongoing process designed to achieve an accurate scope definition and to control the scope throughout the project to deliver the defined deliverables.
Scope planning uses tools to help you better understand, evaluate and document information about the project scope. These include:
Product analysis: a detailed analysis of the product/service in terms of engineering, value, functions and quality.
Benefit/cost analysis: evaluation and estimation of cost vs. returns for different scenarios (with regard to the project and the product/service produced) to assess the relative desirability of the alternatives.
Alternatives identification: finding alternative approaches to manage the project.
Expert judgement: the analysis, evaluation, assessment and advice of an expert about the processes involved with producing the product/service.
The outputs of this step in the process are two documents:
- Scope statement – describing project objectives and deliverables.
- Scope management plan – how we will manage the scope, identify scope creep, document changes and obtain approvals.
An additional document is “supporting detail”, which may accompany either of the two.
Scope Statement
The scope statement includes:
- The business reason (justification) for the project (i.e. customer need we set out to address)
- Description of the product/service including main characteristics and functions
- The deliverables required to complete the project
- Quality criteria that when met the project is deemed successful
This document may be modified throughout the project when changes are needed and approved. It will be the basis for future project decisions.
Supplementary documentation, calculations, analysis etc. should be documented to assist in:
- Supporting scope statement
- Other areas of managing the project
- Supporting Scope management Plan
Stakeholder’s approval of scope
A formal acceptance of the project scope by stakeholders is necessary in order to kick off the project. The stakeholder can be a sponsor, customer, employer or other. In order to approve, the relevant stakeholder will review the documentation mentioned above and, if all is well, approve the project to start. This could mean signing the project charter, a contract or various other acceptable ways of giving approval in the business environment.
A good presentation of information through the documents mentioned above is crucial to obtaining approval for project scope.
Obtain authorisation to expend or shrink resources
Change requests happen in almost any project. It’s enough to think of a young family building the home where they plan to raise children to imagine how many changes may occur before and during construction. Changes aren’t necessarily bad and shouldn’t automatically stress or hinder a project, but they must certainly be managed.
The actual request can be made orally or in writing and by more than one stakeholder, including internally and externally. The changes may attract an expansion or shrinkage of scope.
The following shows internal and external causes for scope change requests.
Internal causes:
- Errors and omissions in calculations, evaluations, estimations and assessments
- Change of management
- Respond to risk – implementation of Plan B
- Critical changes of labour
External causes:
- Errors and omissions in calculations, evaluations, estimations and assessments
- Change of Government
- Respond to risk – implementation of Plan B
- Change of legislation
Scope change control
Scope change control is the set of procedures to follow when a scope change is needed.
The procedures will relate to the way in which the change will be requested (i.e. paperwork or electronic system used), the integration of the change and the documentation of the entire process.
Think about your own workplace, if you wanted something changed, what is the procedure?
In a project, this process of changing the scope must be predetermined and clear, as changes to scope will directly or indirectly have an effect on:
- Time/schedule
- Cost
- Quality
- Risk
The output of this process is the documented scope change, which is any change to the approved Work Breakdown Structure (WBS).
Approved scope changes may attract modification to several other documents in the project such as technical and planning documents.
Another tool to manage scope is the corrective action. This includes any action taken subsequent to changes or fluctuations in order to align the project’s actual performance with the plan.
Lessons learned: here lies the historical data for future projects. It simply means that what you have learnt in the current project may be useful information for future projects, which is why you should identify and record the reasons for corrective actions, the corrective actions themselves and their results.
The project baseline is a document that shows the plan for the project over the period of time it is designed to occur. Milestones for payments, deliverables and significant events are marked on the timeline and helps the project manager monitor and compare actual results with planned results.
If the change affects the baseline then an updated/new baseline must be created.
Confirm project delegations and authorities in project governance arrangements
What is Project Governance?
Project governance is the management framework within which project decisions are made. Project governance is a critical element of any project since while the accountabilities and responsibilities associated with an organization’s business as usual activities are laid down in their organizational governance arrangements, seldom does an equivalent framework exist to govern the development of its capital investments (projects).
For instance, the organization chart provides a good indication of who in the organization is responsible for any particular operational activity the organization conducts. But unless an organization has specifically developed a project governance policy, no such chart is likely to exist for project development activity.
Therefore, the role of project governance is to provide a decision making framework that is logical, robust and repeatable to govern an organization’s capital investments. In this way, an organization will have a structured approach to conducting both its business as usual activities and its business change, or project, activities.
Put simply, the project governance will tell you what activities are under whose authority and which project delegations are related to the activity you need to carry out.
Imagine that you need more time, equipment or machinery to complete a task in the project. For example, during an excavation you realise that you’ve hit pipes and must stop the work.
Typically you would approach your supervisor or project manager.
But if you are the project manager you need to know who in the performing organisation to contact about this. Who is delegated and authorised to make a decision and approve your solution to the problem.
Not knowing who to talk to may cause delays. When the work is stopped and workers are just waiting for further instructions, money is wasted, and this may affect the overall project in terms of meeting deadlines, staying within budget, quality, etc.
When you work on a project as part of a team you could work under a project governance scheme or under the organisational umbrella (when no project governance framework has been developed). It is essential for project managers to clarify the delegations and authorities prior to work commencement.
Define project scope
Identify, negotiate and document project boundaries
As described earlier, the INITIATION phase of Project Scope Management includes the input of:
- Product/service description
- Strategic plan
- Project selection criteria
- Historical information
While the output is:
- Project charter
- Project manager (identified or assigned)
The project charter in itself may include the boundaries of the project, the constraints and the assumptions under which the project will run.
The project plan should include reference to OUT of SCOPE components. This may also be included in the project charter which is a supplementary document to the project plan.
Describing OUT of SCOPE components is about identifying the points of controversy and demystifying them as early as possible.
For example, a landscaping company is working on a project to build a Koi Fish Pond in a residential house backyard.
The only thing we know about the project at this point is that it includes a Koi Pond, although we don’t have specifications.
Naturally, the size, shape and orientation of the Koi Pond will be discussed and determined, but areas of controversy may be:
- Garden path
The boundaries should be defined and agreed upon at phase 1 of the Project Scope Management and be part of the inputs to initiation, thus included in the project Charter.
Continuing the example above, if the project scope was agreed to include:
- A pond of around 2.5 m x 2 m and 1.2 m deep containing approximately 5500 L.
- One gravel path – 8 metre long.
It may be useful for the project manager to also list the components that are out of scope.
This may seem unnecessary, but many project managers run into this trap later on in the project, when the client asks what happened to those components and argues that they should have been included.
The more detailed you are the easier it would be for you to utilise this information down the track.
Establish measurable project benefits, outcomes and outputs
To develop a scope statement you will need to have:
- Product/service description
- Project charter
When developing the Project Scope Statement you will determine measurable project benefits, outcomes and outputs.
The scope statement includes:
- Project justification – a problem or a need that the project was undertaken to address.
- Project’s product/service – a summary of the product/service description.
- Project deliverables – a list of deliverables whose full and satisfactory delivery marks completion of the project.
- Project objectives – at the very minimum; cost, time and quality criteria that must be met. They should have an absolute or relative value to measure the project outcome.
For example:
- Project must end by 30 June 2022
- Project must be completed within a $4m budget
- All structures, materials and facilities must meet Australian standards
If project objectives aren’t quantifiable, it’s very hard to assess the success of the project. For example, if your objective was “customer satisfaction”, it would be harder to know where you stand.
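To make this concrete, here is a minimal sketch of a scope statement for the Koi pond example above, expressed as a simple data structure. The pond and path figures come from the earlier example; the justification, budget and deadline are hypothetical placeholders added for illustration.

```python
# Illustrative scope statement; justification, budget and deadline are
# invented placeholders, not figures from the text.
scope_statement = {
    "justification": "Client wants a low-maintenance water feature",  # hypothetical
    "product": "Koi fish pond with one gravel access path",
    "deliverables": [
        "Pond approx. 2.5 m x 2 m, 1.2 m deep, holding approx. 5,500 L",
        "One 8 m gravel path",
    ],
    "objectives": {  # measurable cost, time and quality criteria
        "cost": "complete within a $15,000 budget",   # hypothetical
        "time": "complete by 30 November",            # hypothetical
        "quality": "materials and workmanship meet Australian standards",
    },
}

# Each objective has an absolute value, so success can be assessed objectively.
for criterion, target in scope_statement["objectives"].items():
    print(f"{criterion}: {target}")
```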
Establish a shared understanding of desired project outcomes with relevant stakeholders
Who are project stakeholders?
According to the Project Management Institute (PMI), the term project stakeholder refers to,
‘an individual, group, or organization, who may affect, be affected by, or perceive itself to be affected by a decision, activity, or outcome of a project’ (Project Management Institute, 2013).
Project stakeholders are entities that have an interest in a given project. These stakeholders may be inside or outside an organization and:
1. Sponsor a project; or
2. Have an interest or a gain upon successful completion of a project; or
3. May have a positive or negative influence on the project’s completion.
The following are examples of project stakeholders:
- Project leader
- Project team members
- Senior management
- Project customer
- Resource Managers
- Line Managers
- Product user group
- Project testers
- Any group impacted by the project as it progresses
- Any group impacted by the project when it is completed
- Subcontractors to the project
- Consultants to the project
- Individual contributors
When establishing a shared understanding of desired project outcomes with the relevant stakeholders it is important to be precise and clear and not to use general terms and unmeasurable objectives.
Scope Definition
This process of scope definition involves breaking down project deliverables from the scope statement into smaller, more manageable components.
By doing that you will achieve:
- Better estimation of cost, duration and required resources
- Establish a realistic baseline
- Assign responsibilities and resources more accurately.
Adequate scope definition helps you understand the project and manage it better to complete it on time and within budget, and to better handle the changes as they arise.
The inputs to scope definition are:
- Scope Statement
- Supplementary information
- Historical information
The outputs of the scope definition process are:
- Work Breakdown Structure (WBS)
- Updated Scope Statement
Work Breakdown Structure (WBS)
A WBS is a breakdown of the project activities, grouping them according to their contribution to specific deliverables. The WBS defines the total scope of the project; activities or deliverables that are not in the WBS are outside the scope of the project.
The second tool for scope definition is decomposition.
Decomposition is the breakdown of the deliverables (or major activities) into smaller, more manageable components until a point where the deliverables are defined in enough detail to support the development of project activities.
An example of a work breakdown structure follows. The project is the building of a new shed where an old one stood.
These are the main deliverables of the project:
- Design – an architectural design
- Demolition – a clear area to build a new shed on
- Levelling – a levelled ground because the new shed will be a lot bigger
- Construction – a ready new shed
- Landscaping – a garden with Koi Pond and trees.
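As a sketch of how the decomposed WBS might be represented, here is the shed project as a nested structure. The five top-level deliverables come from the list above; the sub-tasks under each are illustrative assumptions, not taken from the text.

```python
# Shed-project WBS as a nested dict: deliverables map to decomposed sub-tasks.
# The sub-tasks shown are illustrative assumptions added for the example.
wbs = {
    "Design": ["Architectural drawings", "Council approval"],
    "Demolition": ["Remove old shed", "Dispose of debris"],
    "Levelling": ["Excavate", "Compact and level the ground"],
    "Construction": ["Slab", "Frame", "Roof", "Fit-out"],
    "Landscaping": ["Koi pond", "Garden and trees"],
}

def print_wbs(structure):
    # Print an indented outline; anything not listed here is out of scope.
    for deliverable, tasks in structure.items():
        print(deliverable)
        for task in tasks:
            print("  - " + task)

print_wbs(wbs)
```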
You may use a template from a previous project to create your WBS, but remember that each project is unique, and you can’t just use a previous WBS as is, or sections from it, without verifying accuracy and relevancy.
Document scope management plan
Scope Management Plan
While the other documents we discussed dealt with the “what” of the project scope, this document deals with the “how”. The scope management plan depicts how the project scope will be managed and how changes will be integrated into the project.
This document usually includes a scope stability assessment, which is simply an assessment of how likely changes are to occur in this project.
Most large scale projects encounter changes. When this happens stakeholders must know what the process is, meaning:
- Who is asking for the changing?
- In what way is the request for change being made?
- Who is authorising the change?
- What is the process for authorising changes?
- How will changes to scope be documented?
The scope management plan is a supplementary component of the project plan and may vary in detail and content.
Manage project scope control process
The scope management plan describes how to manage changes to scope and we discussed how to identify project delegation and authorities. But how will you monitor and ensure that the project is advancing according to the agreed-upon scope?
The project baseline is the answer. The baseline is developed in the Scope Definition step for performance measurement and control.
Baseline is the value or condition against which all future measurements will be compared. The baseline is a point of reference. In project management there are three baselines – schedule baseline, cost baseline and scope baseline, and in some cases quality baseline.
Implement agreed scope management procedures and processes
We’ve discussed procedures to manage changes to scope and now we will take a look at some tools that will serve us in the process of monitoring our progress and comparing actual behaviour against planned.
The tools are project baselines; scope, cost and time.
Creating the Scope Baseline
The baseline is:
- Set at the end of the planning phase
- The original approved plan (and any approved scope changes)
- The basis against which all progress will be measured
The scope baseline includes all approved plan elements that define scope:
1. Scope statement
2. Work breakdown structure (WBS)
The scope baseline outlines the requirements for the scope of the project and how the work will be broken down.
Cost performance baseline
1. Resource estimates
2. Cost management plan
3. Budget development, including provisions for risk
This is a version of the budget, used to compare actual expenditures with planned expenditures, over time.
Schedule performance baseline
1. Project schedule
This is a specific version of the schedule, used to compare actual delivery to planned delivery.
The process of monitoring our progress involves monitoring the Performance Measurement Baseline.
What is the Performance Measurement Baseline (PMB)?
The Performance Measurement Baseline (PMB) is a time-based budget plan that outlines how the project will be completed and against which performance measures it will be evaluated. The PMB is a direct output of the project planning process; planning typically involves all known stakeholders that have an interest in a project’s outcome.
A PMB is not a single baseline schedule, but rather is made up of several baselines that describe the approved scope, cost and time.
These baselines are vital for evaluating performance during the project to judge whether the project is on track, as well as enable project teams to re-assess scheduling throughout a project development.
[Chart: Performance Measurement Baseline – actual cost over time plotted against the planned value (the planned scope to complete over time) and the earned value. In the example, the project has not achieved the planned scope on time and the earned value is lower than planned.]
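To make the comparison concrete, here is a minimal sketch of the standard earned-value checks against the baselines; the dollar figures are invented for illustration.

```python
# PV (planned value): budgeted cost of the work scheduled to date.
# EV (earned value):  budgeted cost of the work actually completed.
# AC (actual cost):   what has actually been spent to date.
pv, ev, ac = 100_000, 80_000, 110_000  # illustrative figures

schedule_variance = ev - pv  # negative => behind schedule
cost_variance = ev - ac      # negative => over budget

print(f"SV = {schedule_variance:+,}  CV = {cost_variance:+,}")
# SV = -20,000 and CV = -30,000: the planned scope has not been achieved on
# time, and more has been spent than the completed work is worth; the same
# situation the chart above describes.
```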
Now that you’ve had a look at the tools, we can talk about the process. The process of monitoring the baselines against actual performance is ongoing. As project manager you should determine intervals for performing that check. These can be periodic (daily, weekly, monthly, etc.) or at milestones. It is important to undertake this comparison of the planned baselines against actual performance at milestones even if you have periodic checks in place.
Milestones are another tool that will help you manage scope.
What are project milestones?
Milestones are tools used in project management to mark specific points along a project timeline. Completion of certain deliverables by a specific date may be requested by the project sponsor, the project customer, or other stakeholders. Once scheduled, these dates become expected and often may be moved only with great difficulty.
Milestone events need to be part of the activity sequencing to assure that the requirements for meeting the milestone(s) are met.
Modern project management software enables you to add diamond-shaped elements to your baselines to indicate milestones in your project.
The following is an example of using milestones in the software SMARTSHEET which we’ll be using in this course assessment.
From the Smartsheet blog:
Life is full of milestones – and so are projects. When planning a project, aside from laying out the tasks that take you from beginning to end, you’re inevitably going to want to mark key dates along the way. One easy way to do this is through the use of a diamond shaped symbol in your Gantt chart, the milestone. Milestones not only help your team stay on track, they are also useful to you as a project manager to more accurately determine whether or not your project is on schedule.
Incorporating milestones in your project planning helps you and your team keep sight of:
- Key Dates Launch parties, board meetings, product rollouts and other key dates mark significant pieces of your project. It’s also helpful to include other one-day events unrelated to your project specifically that are still important for your team to keep in mind – like a group offsite or team holiday.
- Key Deadlines Key deadlines are important to surface on large project plans so your team can easily see what’s coming up and plan accordingly. For example, the date that website development is completed or when customer conference registrations need to be returned to qualify for early bird pricing. Key deadlines are related directly to your project but they aren’t project tasks. Use a key deadline as a milestone to reflect when a section of tasks or key task is completed.
- External Dates and Deliveries For example, a due date for a deliverable you are expecting from an agency, the date when your hiring manager has received an offer letter, or the day that pipes are scheduled to be delivered. These key events can affect when other tasks in your project are allowed to start. They may also be used as predecessors in your plan.
Scope Verification
Scope verification is the process of obtaining formal acceptance of the project scope by the stakeholders (sponsors, client, customer, etc.). It requires reviewing deliverables and work results to ensure that all were completed correctly and satisfactorily.
For example, if the project was turning office space into a classroom, the deliverables to review could be:
- Painted walls
- New flooring
- Pictures on the walls
- Electricity: wiring and lighting
- New furniture set in place
If the project is terminated early, the scope verification process should establish and document the level and extent of completion. Scope verification deals with acceptance of the work results (and not the correctness that is done in quality control).
The scope verification is done against a number of items. The inputs to scope verification include the actual deliverables (representing the work result), product documentation (specifications), WBS, scope statement and project plan.
The process of scope verification can be easily described as inspection that may vary in nature depending on the item inspected. The object of this inspection is to determine whether the work results conform to requirements.
The output of this inspection process is called Formal Acceptance. This acceptance should preferably be signed off by the customer/sponsor and documented. Skipping this process leaves you exposed to complaints, demands and change of requirements throughout the project.
The formal acceptance applies to deliverables, phases or products; these will differ from project to project but must always exist, even in small projects.
For example, in a website project you could ask the client/sponsor to sign and approve the following:
- Homepage design
- Inner pages design
- Homepage HTML
- Inner pages HTML
- Content Management System (CMS)
- Content uploaded
- Functioning website ready for launch
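A minimal sketch of how formal acceptance could be tracked for the website example above; the sign-off structure and field names are assumptions added for illustration.

```python
# Record a dated, named sign-off for each deliverable so that formal
# acceptance is documented rather than assumed.
from datetime import date

deliverables = [
    "Homepage design", "Inner pages design", "Homepage HTML",
    "Inner pages HTML", "Content Management System (CMS)",
    "Content uploaded", "Functioning website ready for launch",
]
acceptance_log = {}

def sign_off(deliverable, approver):
    # Who accepted the deliverable, and when.
    acceptance_log[deliverable] = {"approved_by": approver, "on": date.today()}

sign_off("Homepage design", "Client sponsor")
pending = [d for d in deliverables if d not in acceptance_log]
print(f"{len(pending)} deliverables still awaiting formal acceptance")
```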
Manage impact of scope changes within established time, cost and quality constraints according to change control procedures
Project Management Procedures
Project Management Procedures describe how the project will be managed, and are an effective way to communicate the processes to the project team, customers, and stakeholders. They may already exist at the organisation level, or need to be created per project.
Although they take time to develop, the effort will pay off: you will have a framework in which the project can progress confidently, where workers, management and stakeholders know how to behave and what to expect. When you have a set of procedures that allow you to be successful, you can reuse them in future projects.
Scope management procedures (examples)
- Change request: a written change request form must be filled out for every requested change to scope that will attract one of the following:
- Delay in schedule
- Increase in cost
- Increase resources requirements
- Introduce a new risk factor
- Appropriate activities are added to the work plan to ensure the change is implemented
- The project budget should be updated (if relevant)
- If the approved scope change results in a major change to the project, the original Project Definition should be updated and communicated to relevant stakeholders.
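As a sketch of the procedure above, a change request could be captured as a simple record; the field names and workflow are assumptions to be adapted to whatever forms your organisation uses.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    requested_by: str
    description: str
    impacts: list = field(default_factory=list)  # schedule/cost/resources/risk
    approved: bool = False
    approved_by: str = ""

def approve(request: ChangeRequest, authority: str):
    # Only the delegated authority approves; the decision is documented.
    request.approved, request.approved_by = True, authority

cr = ChangeRequest(
    requested_by="Client",
    description="Add a sky window to the main bedroom",
    impacts=["Increase in cost", "Delay in schedule"],
)
approve(cr, "Project sponsor")
print(cr)
```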
Procedures and processes may vary between projects. If you are a project manager you need to contextualise these tools and frameworks to the project you are working on, considering:
- The nature of the work
- The environment
- The location
- The type of workers
- The stakeholders
- The conditions
- The worksite
- The performing organisation culture and attributes
- The resources you have available
- The project schedule and budget
- Constraints and assumptions
Identify and document scope management issues and recommend improvements for future projects
Project managers may encounter many diverse problems when managing project scope. However, three issues seem to be common to most projects, and good handling of them will help you keep the project on track.
1. Scope Creep
Scope creep occurs when new requirements are added on to scope. These newly introduced requirements were not part of the initial phase of a project, and were not considered during the planning phase. But nevertheless, these additions or extras creep through to the execution phase of the project, impacting schedule, cost, and resources and at times even quality and risk.
Scope creep does not relate to changes that derive from necessity, such as technology changes, new regulations and basic adjustments in user needs. Scope creep refers to additions that are not necessary and are driven (in many cases) by not being able to visualise the project at the planning phase.
For example, in house construction, if the customers decided during the execution phase that they must have a sky window in the bedroom, although not mentioned previously in the planning, this will affect:
- Cost – sky windows cost more.
- Resources – You may not be able to use the roof components you ordered.
- Time – This may add a few hours.
- HR – You will need a worker who knows how to handle sky windows as not all workers do.
This is just one example, but in the real construction of a house, people only realise during the build the real dimensions of the structure and how it feels to move through it. This alone may attract changes and scope creep. The problem lies in the lack of ability to visualise the product. This is why architects use 3D simulations.
In IT projects the scope creep problem is even bigger. This is due to two main reasons:
- It is easier to forget a technical detail or feature than a tangible one (you are more likely to remember to include a basin in your bathroom than a search feature in your website).
- Technology is evolving so fast that not implementing new technologies and features in the project could mean staying behind and defeating the purpose or some of the objectives of the project. For example, you are developing a mobile app for a pet accessories franchise when location-based technology emerges, allowing you to know where your potential clients are and which branch is closest to them. Not incorporating the new technology in your project means that your client will probably have to start another project if they wish to keep up with the competition.
2. Poorly Defined Project Scope
Let’s try and define project scope and its documentation:
Project scope is the part of project planning that involves determining and documenting a list of specific project goals, deliverables, tasks, costs and deadlines.
The documentation of a project’s scope explains the boundaries of the project, establishes responsibilities for each team member and sets up procedures for how completed work will be verified and approved. The documentation may be referred to as a scope statement, statement of work (SOW) or terms of reference. During the project, this documentation helps the project team remain focused and on task.
Now, imagine the consequences of poorly defined project scope. Immediate implications are:
- Workers aren’t sure who is responsible for which activity because no HR assignment have been made.
- Team isn’t sure when an activity has finished because the requirements aren’t well defined.
- Failure to meet deadlines because milestones haven’t been properly defined. Extra unnecessary work is being done ending up costing more than planned.
- Necessary work may be overlooked and when remembered it’s too late and rework cost more than planned.
And the list goes on and on.
In summary, the project manager must invest sufficient time to review, understand and, if necessary, modify the project charter, and to develop an adequate scope statement that is acceptable to stakeholders, realistic, and takes into consideration the unique nature of the project and the available conditions and resources.
3. Lack of Communication with Stakeholders
The success of project is not determined by the project manager alone. Each project has stakeholders and that includes sponsor/customer. You can’t be successful if your sponsor/customer is unhappy with the project and this often comes down to their experience rather than the quality of the end result.
Agile project management boasts effective stakeholder management processes, and one of the principles of that methodology is indeed working closely with customers. They are not wrong. Projects in which key stakeholders are not involved enough in the planning phase and aren’t shown scope definition documentation tend to run into the following problems:
- Misinterpretation of requirements – could have been avoided in a short meeting.
- Scope creep – The more you discuss things prior to execution the less surprises later.
- Satisfaction level – we all like to be included and involved, especially when it’s our money.
Imagine that you engaged a builder to build a house for your family. You invest a significant amount of money and possibly even borrowing some. Now consider the 2 following scenarios:
Good communication with the project manager
You meet the project manager/builder several times before the project starts and sit with them during the planning phase. You share your thoughts, ideas and concerns, and they advise you until you agree on the general layout, ballpark budget, materials and time frame. You approach an architect and share the progress and issues in the design process with your builder, so if he/she has any input it is brought up on time and you are not surprised later. During the execution you have few new ideas, because you have had a long time to think, consult, discuss and choose the right specifications. You maintain close communication with the project manager/builder throughout the project, and they feel comfortable approaching you with issues, ideas and thoughts. The project is completed and you have been a part of it.
Poor communication with the project manager
You hand the builder the architectural plan designed by the architect. You sign a contract for the building of the house. You provide an engineering plan and wait for the project manager to deliver.
Because you haven’t brainstormed your options with the project manager, you now have all these questions, ideas, thoughts and concerns, but the project is already in its execution, meaning changes will cost you more. You either keep them to yourself and become bitter about the construction, or you ask the builder to add the changes. They price it higher than they would in the planning phase, the schedule stretches, the budget increases and quality may be compromised.
All this could have been neutralised if you had been part of the planning.
The three commonly encountered problems are just a small representation of the issues and problems project managers have to deal with, but addressing them and considering them from the first day of the project will dramatically increase your chances of a successful delivery. Another thing worth mentioning about communication is that even “bad news” should be communicated to stakeholders. It is always in the project’s long-term interest that you maintain an open and consistent relationship with stakeholders so that problems can be dealt with while they are still manageable.
Reasons for Poor Scope Definition
There may be different reasons for poor scope definition. Here are a few of the leading ones;
- Pressure to get product to market faster. Not only will compressing project phases and skipping adequate planning fail to get you to market on time, they are likely to cause cost and schedule overruns.
- Lack of in-house design or planning capability. With some projects, full in-house planning is not possible. Using the construction of a residential house again, it is not likely that you will be able to do the architectural plan for the house better than an architect (unless you are one). But handing the requirements off and disengaging will result in the architect and engineer making decisions for you, and increases the chances of failing to meet customer expectations.
- Overly optimistic leadership. This is usually true when the project is similar to one that the organisation has done before. This could lead to complacency and slack planning as you rely on historical data.
- Financial pressure to minimize planning costs. It may be the case that one or more of the key stakeholders will want to cut the planning phase short and get down to business. There must always be ample time to properly plan and develop the project scope. As mentioned before, the time saved on planning will be lost twice over when the undefined activities have to be carried out in the execution phase.
- Using vague language and terms in the project scope statement. The scope statement must be concise and clear and has no room for abbreviations, acronyms, slang and other forms of language that may be misinterpreted. You should also avoid language that is too general or not measurable or clear.
- Not including the types of deliverables that are “in scope” and “out of scope”. Defining in and out of scope will draw a clear line for the team and help them understand where a task begins and end.
- Not including procedures for how completed work will be verified and approved. See the section below.
- Omitting guidelines for handling project change requests that alter the scope. See change request (above).
You have now learned about the full cycle of project Scope Management:
When considering this process of developing the Scope Baseline (Scope Statement + WBS), we need to remember that the information (data) used in each stage is carried on to the next. This is why we must exercise caution with our estimates and when using historical data (information from previous projects).
Once you have the Scope Statement and a good WBS that has been appropriately decomposed, with people assigned to tasks and a duration for each subtask, you have your baseline.
Combined with the Cost and Time baselines you have your project baselines ready. These baselines form the basis for project management, and performance and deviations are compared against them.
In the next pages we will demonstrate how project management software handles all the baselines mentioned above and serves as a project plan for project managers to plan, communicate information to the team, and record progress, completion, issues and changes.
The use of such software is not a condition for successful delivery, but the world is shifting towards it as it provides mobility, flexibility and accessibility of information across multiple devices.
Many project managers do not adopt these new technologies because they don’t feel comfortable around computers. It is advisable for any project manager to at least try such software and give it a fair chance.
Project management software
Modern project management software allows you to develop the WBS and decompose work activities on a single sheet.
It also allows you to assign people to tasks, associate costs, and define durations and dependencies.
Since many of these tools are cloud-based (not applications you need to download to your computer), the collaboration options they offer are really useful.
[Screenshot: WBS categories view]
The software also enables you to upload files and associate them with a task, a subtask or the entire project. This helps you facilitate access and share information with stakeholders.
Another prominent benefit of the cloud-based applications is their forum-like capability: the ability to start discussions and include any team member.
The program also has a calendar feature including reminders, which can be set to alert team members to take an action. They will be sent a notification email and will receive a message once they log back in.
This calendar can also be set as the main view format, or you may wish to view the Gantt chart for the project.
The entire project sheet (in this case the project plan) can be shared with people, and you can also see when others are viewing the sheet.
In this course we will be using Smartsheet to create projects, practice project management tools and for assessment.
If you don’t have an account with Smartsheet yet, or you don’t know how to log in please contact your trainer.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9647342562675476,
"language": "en",
"url": "http://econ488.com/2017/04/funding-the-commonwealth-through-social-wealth/",
"token_count": 1700,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.412109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3c58bfe7-528e-4d5f-8171-634bb87bae66>"
}
|
One of my previous posts detailed the advantages of the United States adopting an unconditional or universal basic income (UBI). Within that post I only briefly mentioned the cost of implementing a basic income, which I will address more in this post. According to the Department of Health and Human Services, poverty is defined federally for 2017 as:
Persons in household – annual poverty guideline (2017):
- 1 person: $12,060
- 2 persons: $16,240
- 3 persons: $20,420
- 4 persons: $24,600
- each additional person: add $4,180
Applying this data broadly, if a UBI was given at $1,005 a month, or $12,060 per person annually, poverty as defined federally would be ended. Assuming $12,060 is given to every American citizen (approximately 324 million people), the government would need to spend nearly $4 trillion to cover this expense. The federal government currently spends around $4 trillion annually, so yes, this would be a large (100 percent) increase in government spending.

This additional $4 trillion expenditure would actually end up being much lower, however. First, people under the age of 18 (children) would not be eligible for an income payment, but a payment should still be allocated to parents with children. Roughly 25 percent of the American population is under the age of 18, so providing only a $500 monthly benefit for each child brings the total cost down to about $3.5 trillion. A $500 child benefit will provide single parents with enough income to remain above the poverty line. For example, consider a single parent with two children and no income. This household would now receive a basic income plus two child benefits, bringing its income to $24,060, above the poverty line for a three-person household.

Despite lowering the benefit amount for children, this is still a substantial increase in government spending, but it can be brought down more. If the government raised taxes on people with incomes at or above the median income level so that their after-tax income would remain the same despite receiving the UBI, the cost would be halved. This would bring the cost to a manageable $1.75 trillion increase in government spending. It may also be possible to cut some current cash-based welfare spending that would become redundant once a basic income is instituted. (Important note: a lot of these calculations were done to show that it would be relatively easy to whittle the cost of a basic income down; these are not necessarily the best ways to lower the cost.)

We are left with a $1.75 trillion increase in spending in order to eliminate poverty as currently defined. The obvious question is how the government can sustain that large an increase in spending. Any economics student will tell you the government receives funds in three ways: taxes, borrowing, and money creation. Each of these sources has its associated problems. A new source of revenue for the government would be extremely beneficial. Creating a social wealth fund could be the way to increase government revenues while avoiding some of the issues associated with current sources.
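As a back-of-envelope check of the arithmetic above (these are the post's own assumptions, not official estimates):

```python
# UBI cost sketch using the figures above: 324M citizens, ~25% under 18,
# a $12,060/yr adult UBI, a $500/month child benefit, and a tax clawback
# that roughly halves the gross cost.
population = 324e6
adult_share = 0.75
adult_ubi = 12_060        # annual; the 2017 poverty guideline for one person
child_benefit = 500 * 12  # annual

gross_cost = (population * adult_share * adult_ubi
              + population * (1 - adult_share) * child_benefit)
net_cost = gross_cost / 2  # after taxing back benefits above the median income

print(f"gross: ${gross_cost/1e12:.2f}T, net: ${net_cost/1e12:.2f}T")
# gross ~ $3.42T, net ~ $1.71T; the post rounds these to $3.5T and $1.75T
```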
A social wealth fund is an actively managed fund run by the government. The interest, dividends, and other income the fund creates could be used to fund a basic income. Social wealth funds such as these are actually pretty common. Both Norway and Alaska are prime examples of successful social wealth funds. Norway’s fund has a market value of NOK 7.8 trillion (roughly $900 billion) and has averaged a 5.7% return. Alaska’s fund has a market value of $58 billion and had a 5.7% return in 2016. The Alaska fund, in particular, is notable because it pays out a dividend to all citizens of Alaska, unconditionally. The successes of both of these funds could be replicated in an American federal social wealth fund. Since the size of the fund must be expansive, it might be best to split the fund. Dividing the fund would also allow for competition between the people running the funds and provide an extra incentive to perform well. Different parts of the fund could also be used to diversify the holdings of the overall fund. A robust social wealth fund could give the government the ability to fund a basic income.
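A back-of-envelope implication, not stated in the post: at the roughly 5.7% return cited for the Norwegian and Alaskan funds, a fund large enough to pay the net UBI cost from investment income alone would need to be far larger than any existing fund.

```python
net_ubi_cost = 1.75e12   # from the calculation above
expected_return = 0.057  # the return cited for the Norway and Alaska funds

required_fund = net_ubi_cost / expected_return
print(f"required fund: ${required_fund/1e12:.1f} trillion")  # ~ $30.7 trillion
```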
Obviously, a federal social wealth fund would have to contain a massive amount of assets to fund a basic income. Both Alaska and Norway’s social wealth funds are funded through oil reserves, which a national fund would not be able to replicate. So, if not oil, where would those assets come from? The simplest way to fill a fund would be to set aside a certain amount of current government revenues to purchase assets. Additionally, the government could introduce new taxes such as a financial transaction tax (FTT). The Tax Policy Center has estimated that a well-designed FTT could bring in $50 billion in additional revenue with limited effects on economic activity.
In addition to instituting new types of taxes, the government can do something it has always done, just slightly differently. In 2008, through TARP and other programs, the Treasury and the Federal Reserve purchased billions of dollars of risky assets to stabilize the markets. Instead of those institutions purchasing such risky assets, a social wealth fund could purchase them, removing this burden from the Fed. The high amount of capital a social wealth fund would have would allow the fund to purchase large amounts of risky assets cheaply while being largely protected against swings in the markets. Roger Farmer and Miles Kimball have discussed the potential stabilization benefits of social wealth funds due to this ability. If a social wealth fund is used to purchase these assets, the fund has a good chance of earning a higher return, and the economy as a whole will benefit from the stabilization.
These proposals to fill the social wealth fund are all fine, but are unlikely to provide enough capital to create a truly robust fund. An important place to look to is corporate taxes. Nearly everyone is unenthusiastic about corporate income taxes; people either want to close loopholes or lower rates. News stories detailing corporate inversions or other loopholes are common. An entire industry exists to assist corporations in finding the lowest legal amount of taxes they can pay. This is all wildly inefficient. Dean Baker has proposed something that would solve these problems and provide social wealth funds with a large amount of capital. Baker proposes eliminating the corporate income tax in favor of a mandatory share issuance. This would mean that a “tax rate” is decided and all corporations make a one time transfer of shares to the government at that tax rate. So if 20% were decided as the desired rate, all corporations would issue shares equal to 20% of the corporation’s market capitalization. In effect, the government would become a 20% shareholder in all corporations. In order to remove potential problems associated with government control, the shares would be non-voting. This would entitle the government to 20% of the corporation’s income, while allowing corporations to maximize profits without worrying about the tax they must pay. Additionally, it removes the need for corporations to spend money on tax services or inversions, hopefully creating more economic growth.
You may be concerned about the effects of mandatory issuances on the current shareholders. Yes, share prices may decline by 20% (or whatever rate is chosen) due to dilution, but, long-term, this will benefit shareholders due to the absence of corporate income taxes. The lack of income taxes will result in higher profits paid to shareholders, and will eventually compensate them for the dilution. Also, this short-term decline in stock values would occur in conjunction with the institution of a basic income. Additional income received from a UBI would also help to offset the immediate cost to shareholders. The short-term effects may also be lessened if corporations are given a choice between income taxes and share issuances before issuances become necessary. If it were announced that by the year 2025 corporate income tax would be completely replaced by a one-time mandatory share issuance, then corporations and shareholders would have some time to prepare. Corporations may even elect to make the one-time share transfer before 2025 to replace their tax burden. If a national social wealth fund holds this much corporate stock, the fund will be well on its way to producing the income needed to fund a basic income.
The combined institution of a basic income, a national social wealth fund, and mandatory share issuances could very well result in the end of poverty as currently defined. While unlikely to happen in the current political climate, these three proposals each have benefits that would strongly enhance the United States’ economic environment, primarily through the end of poverty.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9509363770484924,
"language": "en",
"url": "https://lawaspect.com/eu-investigation-gehoneywell-merger/",
"token_count": 837,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1083984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9faba428-0f18-4329-95d2-75e3cdc26f5a>"
}
|
The European Competition Commission (ECC), through its Merger Task Force (MTF), is charged to review the merger of companies whose combined worldwide and European revenues exceed the established threshold revenues. This threshold is $4. 2 billion. The mandate for the commission is to protect free competition and to prevent massive concentration of market power on one company, which may in the long run affect consumer interest in the European Union. The MTF will assess the competitive effects of such mergers by: 1.
Determining whether such combination will create a company that will be so powerful as to impede or stifle effective competition. 2. Determine the combined market share of the merging companies in relation to the immediate competitors (horizontal effect). 3. Evaluate the combined effects of the merging companies on the consumer, especially if they have the same consumer base (conglomerate effect). 4. Analyze the merging companies’ position as the supplier and/or customer of their immediate competitors (vertical effect). Why the deal was rejected by ECC:
General Electric (GE) held a dominant position (about 60%) in the manufacture of large jet aircraft engines. Meanwhile Honeywell held a leading position in the manufacture of avionics and non-avionics aerospace components. According to European Union Merger Regulation, mergers or acquisitions, that creates or strengthens a dominant position as a result of which effective competition will be impeded, will be prohibited. Based on this regulation, the merger as it is, unless some concessions are made, will be rejected as it impedes effective competition.
These are the findings: 1. The combined position of the two companies will create exclusionary practices that will have the effect of shutting out single line competitors. Examples are Rolls Royce in aircraft engines and Rockwell Collins in aerospace components. This may eventually lead to the exit of these disadvantaged companies. It also means new entrants will be highly unlikely as the economies of scale will be so high. 2. GE will further increase their dominance with the use of GE Capital Aviation Services (GECAS).
Currently, GE buys back aircraft from manufacturers, which it leases out to airlines. This is great, as it reduces the financial burden on these aircraft manufacturers. The problem though is that GE has a “GE only engine” policy. There is fear that this policy will also extend to the aerospace components of Honeywell if this merger goes through. This occurring will further stifle the competition and technological innovation since the aircraft manufacturers will be forced to use GE engines and Honeywell aerospace components in other to sell their finished products to GECAS. 3.
Horizontal and vertical effects of the merger will further strengthen the dominance of the merged company at the detriment of the competitors. Honeywell sold large regional jet engines, in addition to GE’s dominance in other jet engines categories. The vertical issue observed is that Honeywell is the sole manufacturer and supplier of engine starters to Rolls Royce, a major competitor of GE. What GE and Honeywell have to do for ECC to approve the merger: The pressure to approve the merger is even greater since the Department of Justice of the USA has already given it her blessing.
The goal is to protect the EU consumer, by discouraging practices that impede competition, and at same time avoid views that are divergent from that of the US regulatory counterparts. As such, the merger will be approved if GE and Honeywell will give in to some concessions: 1. Eliminate the product overlap by having Honeywell sell off the regional jet engine division. 2. Honeywell should divest its Maintenance, Repair, and Overhaul (MRO) division. 3.
3. Have GE sell a portion of GECAS to one of its competitors, to reduce the overpowering effect of GE on aircraft manufacturers, which would become even greater by merging with Honeywell. Doing this will give the aircraft manufacturers the leeway to pick the right components for their aircraft without fearing that they will be unable to sell their finished product. It will also encourage innovation, as the potential for GECAS to discriminate in favor of GE- and Honeywell-based aircraft will be diminished.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9689082503318787,
"language": "en",
"url": "https://thecollegepeople.com/2017/10/the-future-of-online-education/",
"token_count": 616,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.14453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:621de61e-4cd9-432e-a30c-eb15b5213bfe>"
}
|
It is difficult to deny the fact that the current model for higher education in this country is broken. Although the value of a college degree has never been higher, it has become very difficult for most people to afford a diploma. College tuition costs have vastly outpaced inflation over the past several decades; according to a recent report from Sallie Mae, the average family pays more than $20,000 per year in college expenses.
With the government struggling to pay its own bills, it is becoming more difficult for many families to obtain financial aid. Therefore, many college students have resorted to debt in order to pay for college. Unfortunately, the interest payments on that debt can quickly spiral out of control. Total student loan debt already exceeds $1 trillion, and there is no indication that number will stabilize any time soon.
However, there is a groundbreaking innovation that has the potential to completely change the education landscape: online learning. The Internet has the ability to control education costs in a way that would be almost unimaginable with any government policy or college initiative. By avoiding most of the fixed expenses of traditional colleges, online institutions could pass on their considerable savings to debt-ridden students.
Not only could Internet-based education help to reduce costs, but it could also greatly expand the opportunities available to disadvantaged students who do not currently have access to high-quality schooling. Whereas most colleges must severely limit the number of students who are admitted, there is no need for such restrictions with online learning, which makes it a far more scalable solution to the problem of educational access.
Many elite universities are already appreciating the power of the Internet to revolutionize higher education, and they are slowly starting to experiment with the platform to test new strategies for educating students. For instance, Harvard and MIT have joined forces to create edX, which will offer free classes online to anyone with an Internet connection.
Of course, edX is not the only important initiative in online education. Coursera, a company that was founded just last year, has already recruited many prestigious universities – including Duke, Stanford and Princeton – to offer more than 100 classes this fall in subjects as varied as world music and quantum mechanics. In addition, there is Udacity, which teaches subjects like programming and computer science by helping students build interesting projects like a blog or a search engine.
Although online education is still in its infancy – you can’t get a degree from edX or Coursera – it gives students many advantages that augur well for its future growth. Students can watch classes from home, discuss lectures on message boards and solve problems at their own pace.
If done correctly, online learning has the potential to spread education to the masses while simultaneously customizing that education for each student. For students who are suffering from the heavy costs of the current system, the widespread adoption of online education would be a welcome change.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9632562398910522,
"language": "en",
"url": "https://university.pretrial.org/glossary/bail",
"token_count": 1492,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.150390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d9e84dc5-8751-4d57-9fec-c3c23ba8b584>"
}
|
In criminal law, bail is the process of releasing a defendant from jail or other governmental custody with conditions set to reasonably assure public safety and court appearance. “Bail” is perhaps one of the most misused terms in the field, primarily because bail has grown from the process of delivering the defendant to someone else, who would personally stand in for the accused if he or she did not appear for court, to presently being largely equated with sums of money. It is now clear that, whatever pure system of “standing in” for a particular defendant to face the consequences of non-appearance in court may have existed in the early Middle Ages, that system was quickly replaced with paying for that non-appearance first with goods (because standardized coin money remained relatively rare in Anglo Saxon Britain until the Eighth and Ninth Centuries) and later money. The encroachment of money into the process of bail has since been unrelenting. And, unfortunately to this day, the terms “money” and “bail” have also been joined in an unholy linguistic alliance.
This coupling of money and bail is troubling for several reasons. First, while money bail may have made sense in the Anglo Saxon criminal justice system – comprised of monetary penalties for nearly all bailable offenses – the logic eroded once those monetary penalties were largely replaced with corporal punishment and imprisonment. Second, while perhaps logically related to court appearance (many people believe that money motivates human action, and in most state statutes, money amounts are forfeited for failure to appear), to date money has never been empirically related to it – that is, no studies have shown that money works as an added incentive to appear for court. Third, the purpose of bail itself has changed over the past 100 years from reasonably assuring only court appearance to also reasonably assuring public safety, and research has demonstrated that money is in no way related to keeping people safe. Indeed, this notion is reflected in most state statutes, which routinely disallow the forfeiture of money for breaches in public safety. Fourth, money bail does not reflect the criminal justice trend, since the 1960s, to make use of own recognizance or personal recognizance bonds with no secured financial conditions. And finally, in most jurisdictions monetary conditions of release have been overshadowed by the numerous nonfinancial conditions designed to further bail’s overall purpose to provide a process for release while reasonably assuring court appearance and public safety.
Garner has correctly noted the multiple definitions of bail that have evolved over time, most of which presuppose some security in the form of money.1 For example, besides being defined as the security agreed upon, bail was also once defined as a person who acts as a surety for a debt, and was often used in sentences such as, “The bail is supposed to have custody of the defendant.”2 However, because much has been learned over the last century about money at bail (including its deleterious effect on the concept of pretrial justice), and because the very purpose of bail has also changed to include notions of public safety in addition to court appearance (preceding a new era of release on nonfinancial conditions), defining the term “bail” as an amount of money, as many state legislatures, criminal justice practitioners, newspapers, and members of the public do, is flawed. Thus, a new definition of the term is warranted.
Bail as a process of release is the only definition that: (1) effectuates American notions of liberty from even colonial times; (2) acknowledges the rationales for state deviations from more stringent English laws in crafting their constitutions (and the federal government in crafting the Northwest Territory Ordinance of 1787); and (3) naturally follows from various statements equating bail with release from the United States Supreme Court from the late 1800s to 1951 (in Stack v. Boyle, the Supreme Court wrote that, “federal law has unequivocally provided that a person arrested for a non-capital offense shall be admitted to bail. This traditional right to freedom before conviction permits the unhampered preparation of a defense, and serves to prevent the infliction of punishment prior to conviction”)3 and to 1987 (in United States v. Salerno, the Supreme Court wrote that, “In our society liberty is the norm, and detention prior to trial or without trial is the carefully limited exception.”).4
Bail as release accords not only with history and the law, but also with scholar’s definitions (in 1927, Beeley defined bail as the release of a person from custody), the federal government’s usage (calling bail a process in at least one document), and use by organizations such as the American Bar Association, which has quoted Black’s Law Dictionary definition of bail as a “process by which a person is released from custody.”5 States with older (and likely outdated) bail statutes often still equate bail with money, but many states with newer provisions, such as Virginia (which defines bail as “the pretrial release of a person from custody upon those terms and conditions specified by order of an appropriate judicial officer”),6 and Colorado (which defines bail as security like a pledge or a promise, which can include release without money),7 have enacted statutory definitions to recognize bail as something more than simply money. Moreover, some states, such as Alaska,8 Florida,9 Connecticut,10 and Wisconsin,11 have constitutions explicitly incorporating the word “release” into their right to bail provisions.
The phrase “or other governmental custody” is added in recognition of the fact that bail, as a process of releasing a defendant prior to trial, includes various mechanisms occurring at various times to effectuate that release, for example, through station house release from a local police department. The term “with conditions” is added with the understanding that by changing the status of an individual from citizen to defendant in a court proceeding, each release of any particular defendant contains at least one condition – attendance at trial – and typically more to reasonably assure court appearance as well as public safety.
1. Garner, supra note 1, at 96. According to Garner, as a noun, people use the term bail to mean (1) a person who acts as a surety for a debt, (2) the security or guarantee agreed upon, and (3) the release on surety of a person in custody.
2. Bouvier’s Law Dictionary, 8th ed., Vol. 1, at 153 (1858).
3. 342 U.S. at 4 (internal citation omitted) (emphasis added).
4. 481 U.S. 739, 755 (1987).
5. Frequently Asked Questions About Pretrial Release Decision Making (ABA 2012).
6. Va. Code. § 19.2-119 (2013).
7. Colo. Rev. Stat. § 16-1-104 (2013).
8. Alaska Const. art. I, § 11.
9. Florida Const. art. I, § 14.
10. Conn. Const. art. 1, § 8.
11. Wis. Const. art. 1, § 8.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9471513032913208,
"language": "en",
"url": "https://work.chron.com/alternatives-raising-minimum-wage-18704.html",
"token_count": 631,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.43359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9937a319-eca8-4ed1-9a78-47ef04892f55>"
}
|
Alternatives to Raising Minimum Wage
In his 2013 State of the Union address, President Barack Obama called for an increase of the minimum wage, from the current rate of $7.25 an hour to $9.00 an hour. While the initiative is widely supported -- a Gallup poll says 71 percent of Americans support the increase -- opponents say that increasing the minimum wage is bad for the economy because it creates a disincentive for hiring, among other reasons. They propose alternative strategies for helping lower-wage workers, including changes to the tax structure.
Increase The Earned Income Tax Credit
Increasing the Earned Income Tax Credit, or EITC, is a widely proposed alternative to raising the minimum wage. This federal program benefits middle- and low-income households. According to the IRS website, the maximum amount that a family could receive annually through this program is currently around $6,000. Proponents of raising the Earned Income Tax Credit say that doing so would more effectively aid low-income families than raising the minimum wage, since some minimum-wage earners are actually teens living in middle-class households.
Guaranteed Basic Income Program
A guaranteed basic income program, also known as a universal income program, means that the government provides its citizens with an income, regardless of whether or not they work. This type of income supplement is sometimes compared to the way social security currently works in America. Opponents of a guaranteed basic income program say that it potentially creates a disincentive to work. The Basic Income Earth Network, formerly the Basic Income European Network, is a collection of academics and economists who support this concept.
Increasing The Child Tax Credit
Households with children benefit from the Internal Revenue Service's child tax credit program. Those in favor of increasing the child tax credit say that unlike raising the minimum wage, this strategy would directly benefit low-income families with children. Also, like other tax-based solutions to helping low-income workers, the burden lies with the federal government, and not with businesses who they say would struggle if forced to pay higher minimum wages. The current child tax credit for low-income families who qualify is up to $1000 per qualifying child.
Job Training
One criticism of raising the minimum wage is that if the pay rate of low-skilled and entry-level jobs is too high, workers don't have an incentive to leave these jobs for better opportunities that pay higher wages. By offering minimum-wage workers training instead, employers increase the employees' skills, which makes them more valuable to their current employer, as well as to other employers in their market.
- Bloomberg: The Ticker: A Smarter Alternative To Raising The Minimum Wage
- Slate: Money Box: EITC Isn't The Alternative To A Minimum Wage, This Is
- Los Angeles Times Op Ed: Why We Shouldn't Raise The Minimum Wage
- Marketplace: Alternatives To Raising Minimum Wage
- Gallup: In U.S., 71% Support Raising Minimum Wage
- IRS: Preview of 2013 EITC Income Limits, Maximum Credit Amounts and Tax Law Updates
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9703727960586548,
"language": "en",
"url": "https://www.accountingcoach.com/blog/what-is-a-promissory-note",
"token_count": 280,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0888671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9b8c900c-83b5-4122-a32f-8ba4f6087f28>"
}
|
Definition of Promissory Note
A promissory note is a written promise to pay an amount of money by a specified date (or perhaps on demand). The maker of the promissory note agrees to pay the principal amount and interest.
The maker of the promissory note is known as the borrower or debtor and records the amount owed in a liability account such as Notes Payable. The person or organization that has the right to receive the money when the promissory note comes due is known as the lender or creditor and records that amount in an asset account such as Notes Receivable.
Under the accrual method of accounting, both the borrower and the lender must report any accrued interest as of each balance sheet date. The maker/borrower of the note will report interest expense and interest payable. The creditor/lender will report the accrued interest as interest income and interest receivable.
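As a rough worked example (all figures below are hypothetical), the accrued interest that both parties report at a balance sheet date can be computed as simple interest on the principal:

```python
# Hypothetical note: $10,000 principal at 6% annual simple interest,
# with 3 months elapsed as of the balance sheet date.
principal = 10_000
annual_rate = 0.06
months_elapsed = 3

accrued_interest = principal * annual_rate * (months_elapsed / 12)
print(f"Accrued interest: ${accrued_interest:.2f}")  # $150.00

# The maker/borrower reports $150 of interest expense and interest payable;
# the creditor/lender reports $150 of interest income and interest receivable.
```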
Example of a Promissory Note
A promissory note is created when a company borrows money from its bank. However, a promissory note could also be used when a company is unable to pay one of its suppliers as agreed. In that situation, the supplier may demand that the company issue a promissory note. This results in the company replacing its account payable with a note payable, and the supplier replacing its account receivable with a note receivable.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9049580097198486,
"language": "en",
"url": "https://www.aggregate.com/carbon-offsetting",
"token_count": 403,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0208740234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:753a22f1-128d-4deb-a4a0-dc6d17cd5d4d>"
}
|
In 2019, the UK agreed a ground-breaking target of net zero carbon emissions by 2050.
Putting clean growth at the heart of the country’s industrial strategy, this ambitious target could change how we live and work for generations. But to achieve this, businesses, local authorities and households will all need to make changes.
Aggregate Industries are working to support this target. One of the ways we are achieving this is through the development of low carbon products. We’ve already reduced the carbon in our production processes and products. But to help us further reduce our carbon footprint, we’re implementing an offsetting scheme to produce carbon neutral products where possible.
What is carbon offsetting?
Carbon offsetting means individuals and companies can reduce their net carbon emissions by buying credits in carbon-reduction projects, such as:
Clean water access
Clean cookstove projects
Renewables, such as solar PV and wind turbines
Each carbon credit is equivalent to a carbon reduction of one tonne of CO2 and also meets ten of the United Nations Sustainable Development Goals (UN SDGs).
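As a simple illustration (the credit price below is an assumption for the sketch, not an Aggregate Industries or Circular Ecology figure), offsetting a measured footprint reduces to buying one credit per tonne of CO2:

```python
import math

# Hypothetical example: a 1,250.4-tonne CO2 footprint at an assumed
# price of £8 per credit, where 1 credit = 1 tonne of CO2 reduced.
footprint_tonnes = 1250.4
price_per_credit = 8.00  # assumed; real prices vary by project and scheme

credits_needed = math.ceil(footprint_tonnes)  # round up to whole credits
total_cost = credits_needed * price_per_credit
print(f"{credits_needed} credits, costing £{total_cost:,.2f}")  # 1251 credits, £10,008.00
```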
Our offsetting partner
We’ve chosen to partner with the UK based environmental consultant, Circular Ecology. Founded in 2013, they specialise in resource efficiency services, including carbon footprinting, water footprinting, life cycle assessment (LCA), and circular economy.
Circular Ecology has a strong background in the construction industry, making them an ideal partner to support Aggregate Industries.
The offsetting process
The process of carbon offsetting your emissions involves procuring carbon credits and then retiring the credits on behalf of the organisation. In order for a carbon credit to have credibility, it must be:
Additional – ensuring that the carbon reduction is real and permanent
Verified – providing assurance on the quality and credibility of the credits
Traceable – transparent and proving proof of the offset
In order to meet these criteria, carbon offsets are available from various verification schemes.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9651052355766296,
"language": "en",
"url": "https://www.businessgrowthhub.com/green-technologies-and-services/green-intelligence/resource-library/energy-performance-stagnating-in-commercial-buildings",
"token_count": 418,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.091796875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:197e0bec-e1db-4ff7-bb89-c50ad0d7d94e>"
}
|
Government statistics show that little progress is being made on energy efficiency in non-domestic buildings despite the pressure of forthcoming legislation to make the worst performers unlettable.
The statistics include data from Energy Performance Certificates (EPCs) of buildings in England and Wales that have been constructed, sold or let since 2008, along with data on larger public buildings.
All non-domestic buildings require an EPC when being built, sold or rented out, as well as a recommendation report to help owners and occupiers make their building more energy efficient.
While there has been an increase in the number of properties in the top scoring A band in recent years, the number of properties in the B band has stagnated. At the lower end of the scale, the number of properties in the F and G bands has changed very little since 2010.
In the last 12 months to June 2016, 31 per cent of all non-domestic buildings built, sold or let were awarded an EPC rating of E, F or G.
The data matches the conclusions of a recent government-backed study into the real-world performance of buildings, which found that many non-domestic buildings were not meeting performance expectations.
With the majority of buildings today expected to still be in use in 30 years’ time, and the government targeting a 50 per cent cut in emissions from buildings by 2025 against 1990 levels, significant improvements will have to be made to both existing and new buildings.
The data is particularly concerning for the commercial property market. From 1 April 2018, it will become illegal to rent out a property with an EPC rating of F or G under the government’s Minimum Energy Efficiency Standards regulations.
The regulations could affect up to 80,000 commercial properties and force thousands of landlords to invest in energy efficiency measures.
A recent report from the British Chambers of Commerce and British Gas suggests that businesses that own their properties should consider getting advice to prioritise their own energy efficiency actions.
A useful guide to energy efficiency for SMEs is available here.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9436938166618347,
"language": "en",
"url": "https://www.techpally.com/blockchain-transaction-verification/",
"token_count": 744,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0634765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:18aca2c2-bb6c-492f-a106-633fcec77804>"
}
|
Blockchain transactions are a series of recorded transfers between bitcoin addresses.
These transaction records are updated by the Bitcoin network.
They are shared between the nodes whenever a balance increases or decreases.
A Simple Transaction
Let us consider that TechPally wants to send some bitcoins to HealthPally; in this case, there are three elements involved in the Bitcoin transaction.
First, there is the input: a record of the bitcoin address from which TechPally originally received the bitcoins that are now to be transferred to HealthPally.
Then comes the amount, which is the bitcoin value that TechPally has to send to HealthPally.
Finally, there is the output, which is the public key of HealthPally. This can be called the bitcoin address of HealthPally.
The Elements of the Transaction
In order to make a bitcoin transaction, one needs access to the public and private keys associated with the amount of bitcoin.
When a person claims to have bitcoins, it actually means that the person has access to a key.
This key is the combination of a public key and a corresponding private key.
The public key is where the bitcoin was previously sent.
There is a unique private key that corresponds to the mentioned public key.
The private key authorizes the bitcoin previously sent to the public key to be sent somewhere else.
The public keys are also called bitcoin addresses.
They are basically a random sequence of numbers and letters that functions much like an email address or a username on a social media site.
Since this is public, it is safe to share the public key with other users too.
The public key has to be sent to others when a bitcoin transaction is to be made.
The private key also consists of letters and numbers, but private keys, unlike public keys, must be kept secret.
The private keys should not be shared with anyone under any circumstance.
Private keys should also be backed up on pen and paper and kept in a safe place.
The bitcoin address is similar to a transparent box: anyone can see the contents, but only the holder of the private key can unlock it and procure the funds therein.
The Actual Transaction
In the given scenarios, TechPally will have to initiate a bitcoin transaction to HealthPally.
In order to do this, TechPally uses the private key to sign a message with the respective details.
The message which is sent will have the input, the output, and the amount.
The input will be the source of the coins that were initially sent to TechPally.
The output will be HealthPally’s public address.
The amount will be the transaction amount from TechPally to HealthPally.
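A minimal sketch of this message-signing step is shown below. It is an illustrative toy, not real Bitcoin serialization: the address strings and transaction id are hypothetical, and it assumes the third-party Python `ecdsa` package, which implements the secp256k1 curve that Bitcoin itself uses.

```python
import hashlib
from ecdsa import SigningKey, SECP256k1  # third-party: pip install ecdsa

# TechPally's key pair: the private key signs; the public key is shareable.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

# The three elements of the message: input, output and amount.
transaction = {
    "input": "id_of_previous_tx_to_techpally",  # hypothetical source reference
    "output": "healthpally_public_address",     # hypothetical recipient address
    "amount": 0.5,                              # bitcoins to transfer
}

# Hash the message and sign the digest with TechPally's private key.
digest = hashlib.sha256(repr(sorted(transaction.items())).encode()).digest()
signature = private_key.sign(digest)

# Any node can check the signature using only the public key.
assert public_key.verify(signature, digest)
print("Transaction message signed and verified")
```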
The transaction is then broadcast to the Bitcoin network.
Here, TechPally's keys are verified to confirm the right to spend the input.
This process of confirmation, which includes checking the address from which TechPally received the previous transaction, is called mining.
In this process, new bitcoins are also created. All transactions are verified by miners on the Bitcoin blockchain.
The miners mine blocks, which represent collections of transactions.
Some transactions may be left out of a given block and put on hold until the next one is arranged.
A block is generally processed every ten minutes; this can delay a bitcoin transaction, so the process might take a longer time.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9446572065353394,
"language": "en",
"url": "https://blog.morphisec.com/are-threat-actors-winning-the-cybersecurity-arms-race",
"token_count": 1242,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2060546875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:4c63a796-74ed-4cdb-8eb3-ed8c49826e2d>"
}
|
Ever since the Morris Worm infected 10 percent of internet-connected computers in 1988, endpoint security has become a critical asset for organizations and endpoints themselves a top target for threat actors. However, in recent years, the arms race between cybercriminals and businesses has reached a fever pitch. Even though spending on cybersecurity solutions has increased exponentially, the damage done by cybercrime has not diminished. Estimates now show that by 2025 global cybercrime will cost over $10 trillion per year — equivalent to half the United States' current GDP.
While an increasingly digitized world raises the stakes for cyberattacks, another key driver of this rising cost is the proliferation of devastating ransomware. The most frequently used tool for cybercriminals right now, ransomware attacks happen every 11 seconds. As of this year, the average ransom payment has also increased by 57 times since 2015. However, while the damage that cyberattacks pose to organizations is growing, enterprises also have access to a greater variety of "next-generation" solutions than ever before. With that in mind, it's worth asking why it looks like cybercriminals are gaining the upper hand.
The Problem with Modern Cybersecurity
The increasing danger posed by cyberattacks is partly due to the continued use of outdated cybersecurity approaches. Traditionally, cybersecurity has been built on a perimeter-based and fragmented approach. In the increasingly borderless digital environment where modern enterprises operate, this approach is ineffective.
Threat actors have learned to seek out and exploit both fragmented security stacks and misconfigured server environments to bypass enterprise firewalls and propagate malware and ransomware. Accordingly, cloud infrastructure misconfigurations alone have been responsible for more than 30 billion record leaks in the recent past. Most enterprises find out too late that many of their security solutions do not protect them against fileless attacks and in-memory exploits that propagate laterally across their networks.
On the other hand, relying on siloing important assets also doesn't work. As illustrated by the Stuxnet virus's dramatic success in 2010, no system, no matter how critical, can be hermetically sealed from threats. With everything from cars to medical devices now forming potential endpoints, IoT devices have also opened up new attack vectors for cybercriminals. While the pandemic has undoubtedly highlighted the importance of endpoint security for this growing target area, protection solutions are often too heavy or opaque for modern enterprise environments. Meanwhile, the human element of endpoint security remains underappreciated.
Cyberattack Innovation Continues Its Acceleration
While cracks in enterprise defenses continue to expand, cybercriminals are rapidly innovating. High-end ransomware, previously the preserve of criminal gangs with a high level of technical savvy, has been made easily accessible by ransomware-as-a-service. Ransomware strains such as DoppelPaymer can now be "licensed" by affiliates in return for a percentage payment of their ill-gotten gains.
Threat actors are also on track to leverage developments in machine learning technology. Far from a dystopian science fiction story, the prospect of self-learning phishing scams and AI-powered ransomware attacks is unnervingly close to becoming a reality. The combination of intelligent targeting, where machine learning algorithms change out words until they find the most effective combination for phishing emails, and intelligent evasion designed to make it easier to bypass detection-centric tools is a powerful risk that could easily overwhelm even the savviest defenders.
Further, the proliferation of state-sponsored cybercrime means that while many businesses face increasing danger from persistent threats, more virulent ransomware strains are filtering down into the hands of profit-driven cybercriminals. The privateering nature of highly capable gangs such as Evil Corp, who recently conducted a massive ransomware attack on Garmin, also shows how the line between state sponsorship and profit-driven opportunism is blurring.
Companies Need to Adopt a Proactive Approach to Reduce Their Cybersecurity Risks
To counter the growing capability of threat actors, the cybersecurity industry presents endless opportunities for organizations to deploy more complex collections of solutions. Regrettably, the cybersecurity industry is rife with marketing jargon where real security is needed. While solutions that use buzzwords such as "next generation" or "machine learning" sound innovative, they ultimately fail to protect against the growing number of unknown, evasive attacks.
The only reliable way for organizations to counter increasingly capable adversaries is to refocus on what works. A less diverse array of effective solutions is a far better bet when it comes to solution stacks. At the same time, more effort and investment need to be directed towards protecting the endpoints where ransomware can gain a beachhead in the first place. This means improving cyber hygiene through proactive measures such as device hardening and privilege restriction while working towards a zero-trust approach propagated throughout the enterprise.
Only 29 percent of information security professionals in a recent InfoSec survey reported training employees on safe remote working practices. Unfortunately, this finding highlights the inherent weakness in many enterprises — their people. It's vital to remember that employees are frequently the weakest link in any security posture. The key to shoring up this critical security weak point is up-to-date training and resources that help individuals spot and avoid the phishing attacks and social engineering scams that allow most attacks access to your network.
Instead of responding to malicious actors' growing capability by loading up on increasingly complex solution stacks, enterprises should leverage a proactive cyber defense strategy. While the key to this approach is making security an inherent operational asset, refining the enterprise security stack is also vital. Taking a proactive approach, rather than a reactive one of responding to threats once they've breached critical systems, provides even the most resource-constrained teams with a reduced attack surface and a lowered risk of attack.
To achieve a simple and effective security solution stack that offers protection from known and unknown threats, enterprises can leverage a deterministic solution like Morphisec Guard that enhances visibility into and control over the OS-native security tools inherent in Microsoft Windows 10. Bolstered by a more effective security posture, enterprises shouldn't lose hope. Even if cybercriminals win the occasional battle, there is no reason they should win the war.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9745858907699585,
"language": "en",
"url": "https://guernseydonkey.com/the-guernsey-double/",
"token_count": 740,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.251953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5a6eb832-31d3-4061-bd93-091dc5cc8f8c>"
}
|
The Guernsey Double
Today the pound is the currency of Guernsey, however it wasn’t always this way. It’s only since 1921 that Guernsey has been in currency union with the UK. This means that the Guernsey pound is not a separate currency but is a local issue of banknotes and coins denominated in pound sterling, in a similar way to the banknotes issued in Scotland and Northern Ireland. However even after 1921 there was a curious anomaly, the Guernsey Double, which carried on in circulation for another 45 years. We look here at these curious ‘little’ coins … The Guernsey Doubles.
Guernsey’s Currency – A Quick Backgrounder
Until the early 19th century, Guernsey used predominantly French currency. Coins of the French livre were legal tender until 1834, with French francs used until 1921. However in 1870, British coins were also made legal tender, with the British shilling circulating at 12½ Guernsey pence. Bank of England notes became legal tender in 1873. In 1914, new banknotes appeared, some of which carried denominations in Guernsey shillings and francs.
However after the First World War, the value of the franc began to fall relative to sterling. This was the trigger that caused Guernsey to adopt a pound equal to the pound sterling in 1921.
In 1971, along with the rest of the British Isles, Guernsey decimalized, with the pound subdivided into 100 pence, and began issuing a full range of coin denominations from ½p to 50p (£1 and £2 coins followed later).
The Guernsey Doubles (pronounced 'doobulls'!) were a coinage unique to Guernsey, although Jersey had similar small-value coins. They were first struck in 1830. Doubles were issued alongside coins equivalent to UK denominations; Eight Doubles were equivalent to one Penny in UK currency. Four denominations were issued: the Eight Doubles, the Four Doubles (equal to one Halfpenny), the Two Doubles (equal to one Farthing) and the One Double (one eighth of a Penny). All were initially copper coins.
The Four Doubles was the earliest copper coin minted for Guernsey in 1830, during the reign of William IV. The Eight and One Double coins appeared first in 1834. The Two Doubles coin was first struck in 1858.
For its first issues, the Eight Doubles coin was based in size and weight on the British Penny. When, after 1860, the Penny was reduced in size and struck in bronze, the Eight Doubles coin followed suit. The Four Doubles was correspondingly reduced as well, but the Two and One Doubles continued unchanged.
The Eight Doubles and Four Doubles coins were then issued virtually unchanged until 1949. The first Elizabeth II issues of 1956 were a redesign. This design continued until the coins were discontinued in 1966.
No Two Doubles coins were struck after 1929, but the tiny One Double continued to be issued until 1938, despite its very low denomination.
None of the coins ever portrayed the Monarch’s head. All denominations showed on their obverse the Arms of Guernsey (three lions) within a shield; and their value on the reverse, together with the date of issue. Some issues bore a Guernsey Lily on the reverse too.
If you want read more about how and when the Guernsey States issued their own currency then refer to our article entitled Beating the Bankers at their Own Game – the Guernsey Way.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9399640560150146,
"language": "en",
"url": "https://msdara.com/qa/question-is-accounts-payable-a-revenue-or-expense.html",
"token_count": 1279,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.056640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:45d1d9f1-4770-4661-b045-ba195670576d>"
}
|
- What are the 3 golden rules?
- Is revenue an asset?
- How do you record accounts payable entry?
- What is the journal entry for accruals?
- What is the 3 golden rules of accounts?
- What type of account is revenue?
- What is Account payable job?
- What is the difference between expenses and accounts payable?
- Is Accounts Payable a debit or credit?
- What are 3 types of accounts?
- Is equipment considered an expense?
- What are the steps for accounts payable?
- What account type is Accounts Payable?
- Is equipment a revenue or expense?
- What are the 5 types of accounts?
- What is Accounts Payable journal entry?
- Why is account payable not an expense?
- What is Accounts Payable in simple words?
- Is Accounts Payable a revenue?
What are the 3 golden rules?
To apply these rules one must first ascertain the type of account and then apply the relevant rule:
Debit what comes in, credit what goes out.
Debit the receiver, credit the giver.
Debit all expenses and losses, credit all income and gains.
Is revenue an asset?
What is revenue? Revenue is listed at the top of a company’s income statement. … A company making a $50 credit sale, for example, will report $50 in revenue and $50 as an asset (accounts receivable) on the balance sheet.
How do you record accounts payable entry?
Accounts payable entry: when recording an account payable, debit the asset or expense account to which the purchase relates and credit the accounts payable account. When an account payable is paid, debit accounts payable and credit cash.
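As a minimal sketch of that double entry (the account names and the $500 amount are illustrative, not tied to any particular accounting package):

```python
# Toy double-entry journal for the accounts payable cycle described above.
journal = []

def post(debit_account, credit_account, amount, memo):
    """Record one balanced entry: total debits always equal total credits."""
    journal.append((debit_account, credit_account, amount, memo))

# 1. A $500 supplies invoice arrives: debit the expense, credit the payable.
post("Supplies Expense", "Accounts Payable", 500, "invoice received")

# 2. The invoice is paid: debit the payable, credit cash.
post("Accounts Payable", "Cash", 500, "invoice paid")

for debit, credit, amount, memo in journal:
    print(f"Dr {debit:<18} Cr {credit:<18} ${amount} ({memo})")
```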
What is the journal entry for accruals?
Usually, an accrued expense journal entry is a debit to an Expense account. The debit entry increases your expenses. You also apply a credit to an Accrued Liabilities account. The credit increases your liabilities.
What is the 3 golden rules of accounts?
Take a look at the three main rules of accounting: Debit the receiver and credit the giver. Debit what comes in and credit what goes out. Debit expenses and losses, credit income and gains.
What type of account is revenue?
Revenue and Gains are subclassifications of Income. Expense accounts represent a company’s costs of doing business. Common examples include wages, salaries, materials, utilities, rent, depreciation, interest, insurance, etc. Contra-accounts are accounts with negative balances that offset other balance sheet accounts.
What is Account payable job?
Accounts Payable job description guide: the role of Accounts Payable involves providing financial, administrative and clerical support to the organisation. The job is to complete payments and control expenses by receiving, processing, verifying and reconciling invoices.
What is the difference between expenses and accounts payable?
Accounts payable refers to liabilities, which are obligations that have yet to be paid, and expenses are obligations that have already been paid in an effort to generate revenue.
Is Accounts Payable a debit or credit?
Since liabilities are increased by credits, you will credit the accounts payable. And, you need to offset the entry by debiting another account. When you pay off the invoice, the amount of money you owe decreases (accounts payable). Since liabilities are decreased by debits, you will debit the accounts payable.
What are 3 types of accounts?
A business must use three separate types of accounting to track its income and expenses most efficiently. These include cost, managerial, and financial accounting, each of which we explore below.
Is equipment considered an expense?
The purchase of equipment is not accounted for as an expense in one year; rather the expense is spread out over the life of the equipment. This is called depreciation. From an accounting standpoint, equipment is considered capital assets or fixed assets, which are used by the business to make a profit.
What are the steps for accounts payable?
The full cycle of accounts payable process includes invoice data capture, coding invoices with correct account and cost center, approving invoices, matching invoices to purchase orders, and posting for payments.
What account type is Accounts Payable?
A current liability account. The general ledger account Accounts Payable or Trade Payables is a current liability account, since the amounts owed are usually due in 10 days, 30 days, 60 days, etc. The balance in Accounts Payable is usually presented as the first or second item in the current liability section of the balance sheet.
Is equipment a revenue or expense?
For this reason, the Internal Revenue Service generally requires you to depreciate equipment purchases, recognizing part of the expense each month over a period of years. The cost of the equipment will eventually make its way onto the income statement, but it will do so gradually in the form of a depreciation expense.
What are the 5 types of accounts?
The 5 types of accounts are: Assets, Expenses, Liabilities, Equity and Revenue (or income).
What is Accounts Payable journal entry?
Accounts Payable journal entries record the amounts owed to the creditors of the company for the purchase of goods or services. They are reported under the head current liabilities on the balance sheet, and this account is debited whenever any payment is made.
Why is account payable not an expense?
Accrual accounting is a method of tracking those payments. Accounts payable refers to the liabilities that will be paid soon. Payables are those that still need to be paid while expenses are those that have already been paid.
What is Accounts Payable in simple words?
Accounts Payable is a short-term debt which needs to be paid to avoid default. … Description: Accounts Payable is a liability due to a particular creditor when a company orders goods or services without paying in cash up front, which means that the goods were bought on credit.
Is Accounts Payable a revenue?
Accounts payable is considered a current liability, not an asset, on the balance sheet. … Delayed accounts payable recording can under-represent the total liabilities. This has the effect of overstating net income in financial statements.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9503030776977539,
"language": "en",
"url": "https://pearsonblog.campaignserver.co.uk/category/essentials-of-economics-8e/essentials-of-economics-8e-ch11/",
"token_count": 7505,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.328125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f61ddaa1-e6b3-43af-afe6-3f974cbf8e55>"
}
|
Speculation in markets can lead to wild swings in prices as exuberance drives up prices and
pessimism leads to price crashes. When the rise in price exceeds underlying fundamentals, such as profit, the result is a bubble. And bubbles burst.
There have been many examples of bubbles throughout history. One of the most famous is that of tulips in the 17th century. As Box 2.4 in Essential Economics for Business (6th edition) explains:
Between November 1636 and February 1637, there was a 20-fold increase in the price of tulip bulbs, such that a skilled worker’s annual salary would not even cover the price of one bulb. Some were even worth more than a luxury home! But, only three months later, their price had fallen by 99 per cent. Some traders refused to pay the high price and others began to sell their tulips. Prices began falling. This dampened demand (as tulips were seen to be a poor investment) and encouraged more people to sell their tulips. Soon the price was in freefall, with everyone selling. The bubble had burst.
Another example was the South Sea Bubble of 1720. Here, shares in the South Sea Company, given a monopoly by the British government to trade with South America, increased by 900% before collapsing through a lack of trade.
Another, more recent, example is that of Poseidon. This was an Australian nickel mining company which announced in September 1969 that it had discovered a large seam of nickel at Mount Windarra, WA. What followed was a bubble. The share price rose from $0.80 in mid-1969 to a peak of $280 in February 1970 and then crashed to just a few dollars.
Other examples are the Dotcom bubble of the 1990s, the US housing bubble of the mid-2000s and BitCoin, which has seen more than one bubble.
Bubbles always burst eventually. If you buy at a low price and sell at the peak, you can make a lot of money. But many will get their fingers burnt. Those who come late into the market may pay a high price and, if they are slow to sell, can then make a large loss.
GameStop shares – an unlikely candidate for a bubble
The most recent example of a bubble is GameStop. This is a chain of shops in the USA selling games, consoles and other electronic items. During the pandemic it has struggled, as games consumers have turned to online sellers of consoles and online games. It has been forced to close a number of stores. In July 2020, its share price was around $4. With the general recovery in stock markets, this drifted upwards to just under $20 by 12 January 2021.
Then the bubble began.
Hedge fund shorting
Believing that the GameStop shares were now overvalued and likely to fall, many hedge funds started shorting the shares. Shorting (or ‘short selling’) is where investors borrow shares for a fee and immediately sell them on at the current price, agreeing to return them to the lender on a specified day in the near future (the ‘expiration date’). But as the investors have sold the shares they borrowed, they must buy them back at the prevailing price on or before the expiration date so that they can return them to the lenders. If the price falls between the two dates, the investors will gain. For example, if you borrow shares and immediately sell them at a current price of $5 and then by the expiration date the price has fallen to $2 and you buy them back at that price to return them to the lender, you make a $3 profit.
But this is a risky strategy. If the price rises between the two dates, investors will lose – as events were to prove.
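A minimal sketch of that arithmetic (the prices, share count and zero borrowing fee are illustrative):

```python
def short_sale_profit(sell_price, buyback_price, shares=1, borrow_fee=0.0):
    """Profit from selling borrowed shares now and repurchasing them later."""
    return (sell_price - buyback_price) * shares - borrow_fee

# The example above: sell the borrowed shares at $5, buy back at $2.
print(short_sale_profit(5, 2))     # 3.0 profit per share

# A GameStop-style squeeze: shorted near $20, forced to buy back near $350.
print(short_sale_profit(20, 350))  # -330.0 loss per share
```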
The swarm of small investors
Enter the ‘armchair investor’. During lockdown, small-scale amateur investing in shares has become a popular activity, with people seeking to make easy gains from the comfort of their own homes. This has been facilitated by online trading platforms such as Robinhood and Trading212. These are easy and cheap, or even free, to use.
What is more, many users of these sites were also collaborating on social media platforms, such as Reddit. They were encouraging each other to buy shares in GameStop and some other companies. In fact, many of these small investors were seeing it as a battle with large-scale institutional investors, such as hedge funds – a David vs. Goliath battle.
With swarms of small investors buying GameStop, its share price surged. From $20 on 12 January, it doubled in price within two days and had reached $77 by 25 January. The frenzy on Reddit then really gathered pace. The share price peaked at $468 early on 28 January. It then fell to $126 less than two hours later, only to rise again to $354 at the beginning of the next day.
Many large investors who had shorted GameStop shares made big losses. Analytics firm Ortex estimated that hedge funds lost a total of $12.5 billion in January. Many small investors, however, who bought early and sold at the peak made huge gains. Other small investors who got the timing wrong made large losses.
And it was not just GameStop. Social media were buzzing with suggestions about buying shares in other poorly performing companies that large-scale institutional investors were shorting. Another target was silver and silver mines. At one point, silver prices rose by more than 10% on 1 February. However, money invested in silver is huge relative to GameStop and hence small investors were unlikely to shift prices by anything like as much as GameStop shares.
Amidst this turmoil, the US Securities and Exchange Commission (SEC) issued a statement on 29 January. It warned that it was working closely with other regulators and the US stock exchange ‘to ensure that regulated entities uphold their obligations to protect investors and to identify and pursue potential wrongdoing’. It remains to be seen, however, what it can do to curb the concerted activities of small investors. Perhaps, only the experience of bubbles bursting and the severe losses that can result will make small investors think twice about backing failing companies. Some Davids may beat Goliath; others will be defeated.
- GameStop: The competing forces trading blows over lowly gaming retailer
Sky News (30/1/21)
- Tempted to join the GameStop ‘angry mob’? Lessons on bubbles, market abuse and stock picking from the investment experts… including perma-bear Albert Edwards
This is Money, Tanya Jefferies (29/1/21)
- A year ago on Reddit I suggested investing in GameStop. But I never expected this
The Guardian, Desmund Delaney (29/1/21)
- The real lesson of the GameStop story is the power of the swarm
The Guardian, Brett Scott (30/1/21)
- GameStop: What is it and why is it trending?
BBC News, Kirsty Grant (29/1/21)
- GameStop: Global watchdogs sound alarm as shares frenzy grows
BBC News (30/1/21)
- The GameStop affair is like tulip mania on steroids
The Guardian, Dan Davies (29/1/21)
- GameStop news: Short sellers lose $19bn as Omar says billionaires who pressured apps should go to jail
Independent, Andy Gregory, Graig Graziosi and Justin Vallejo (30/1/21)
- Robinhood tightens GameStop trading curbs again as SEC weighs in
Financial Times, Michael Mackenzie, Colby Smith, Kiran Stacey and Miles Kruppa (29/1/21)
- SEC Issues Vague Threats Against Everyone Involved in the GameStop Stock Saga
Gizmodo, Andrew Couts (29/1/21)
- SEC warns it is monitoring trade after GameStop surge
RTE News (29/1/21)
- GameStop short-squeeze losses at $12.5 billion YTD – Ortex data
- GameStop: I’m one of the WallStreetBets ‘degenerates’ – here’s why retail trading craze is just getting started
The Conversation, Mohammad Rajjaque (3/2/21)
- What the GameStop games really mean
Shares Magazine, Russ Mould (4/2/21)
- Distinguish between stabilising and destabilising speculation.
- Use a demand and supply diagram to illustrate destabilising speculation.
- Explain how short selling contributed to the financial crisis of 2007/8 (see Box 2.7 in Economics (10th edition) or Box 3.4 in Essentials of Economics (8th edition)).
- Why won’t shares such as GameStop go on rising rapidly in price for ever? What limits the rise?
- Find out some other shares that have been trending among small investors. Why were these specific shares targeted?
- How has quantitative easing impacted on stock markets? What might be the effect of a winding down of QE or even the use of quantitative tightening?
The BBC podcast linked below looks at the use of quantitative easing since 2009 and especially the most recent round since the onset of the pandemic.
Although QE was a major contributor to reducing the depth of the recession in 2009–10, it was barely used from 2013 to 2020 (except for a short period in late 2016/early 2017). The Coalition and Conservative governments were keen to get the deficit down. In justifying pay restraint and curbing government expenditure, Prime Ministers David Cameron and Theresa May both argued that there ‘was no magic money tree’.
But with the severely dampening effect of the lockdown measures from March 2020, the government embarked on a large round of expenditure, including the furlough scheme and support for businesses.
The resulting rise in the budget deficit was accompanied by a new round of QE from the beginning of April. The stock of assets purchased by the Bank of England rose from £445 billion (the approximate level it had been since March 2017) to £740 billion by December 2020 and is planned to reach £895 billion by the end of 2021.
So with the effective funding of the government’s deficits by the creation of new money, does this mean that there is indeed a ‘magic money tree’ or even a ‘magic money forest’? And if so, is it desirable? Is it simply stoking up problems for the future? Or will, as modern monetary theorists maintain, the extra money, if carefully spent, lead to faster growth and a reducing deficit, with low interest rates making it easy to service the debt?
The podcast explores these issues. There is then a longer list of questions than normal relating to the topics raised in the podcast.
- Which of the following are stocks and which are flows?
(c) The total amount people save each month
(d) The money held in savings accounts
(e) Public-sector net debt
(f) Public-sector net borrowing
(g) National income
(h) Injections into the circular flow of income
(i) Aggregate demand
- How do banks create money?
- What is the role of the Debt Management Office in the sale of gilts?
- Describe the birth of QE.
- Is raising asset prices the best means of stimulating the economy? What are the disadvantages of this form of monetary expansion?
- What are the possible exit routes from QE and what problems could occur from reducing the central bank’s stock of assets?
- Is the use of QE in the current Covid-19 crisis directly related to fiscal policy? Or is this use of monetary policy simply a means of hitting the inflation target?
- What are the disadvantages of having interest rates at ultra-low levels?
- Does it matter if the stock of government debt rises substantially if the gilts are at ultra-low fixed interest rates?
- What are the intergenerational effects of substantial QE? Does it depend on how debt is financed?
- How do the policy recommendations of modern monetary theorists differ from those of more conventional macroeconomists?
- In an era of ultra-low interest rates, does fiscal policy have a greater role to play than monetary policy?
On 25 November, the UK government published its Spending Review 2020. This gives details of estimated government expenditure for the current financial year, 2020/21, and plans for government expenditure and the likely totals for 2021/22.
The focus of the Review is specifically on the effects of and responses to the coronavirus pandemic. It does not consider the effects of Brexit, with or without a trade deal, or plans for taxation. The Review is based on forecasts by the Office for Budget Responsibility (OBR). Because of the high degree of uncertainty over the spread of the disease and the timing and efficacy of vaccines, the OBR gives three forecast values for most variables – pessimistic, central and optimistic.
According to the central forecast, real GDP is set to decline by 11.3% in 2020, the largest one-year fall since the Great Frost of 1709. The economy is then set to ‘bounce back’ (somewhat), with GDP rising by 5.2% in 2021.
Unemployment will rise from 3.9% in 2019 to a peak of 7.5% in mid-2021, after the furlough scheme and other support for employers are withdrawn.
This blog focuses on the impact on government borrowing and debt and the implications for the future – both the funding of the debt and ways of reducing it.
Soaring government deficits and debt
Government expenditure during the pandemic has risen sharply through measures such as the furlough scheme, the Self-Employment Income Support Scheme and various business loans. This, combined with falling tax revenue, as incomes and consumer expenditure have declined, has led to a rise in public-sector net borrowing (PSNB) from 2.5% of GDP in 2019/20 to a central forecast of 19% for 2020/21 – the largest since World War II. By 2025/26 it is still forecast to be 3.9% of GDP. The figure has also been pushed up by a fall in nominal GDP for 2020/21 (the denominator) by nearly 7%.
The high levels of PSNB are pushing up public-sector net debt (PSND). This is forecast to rise from 85.5% of GDP in 2019/20 to 105.2% in 2020/21, peaking at 109.4% in 2023/24.
The exceptionally high deficit and debt levels will mean that the government misses by a very large margin its three borrowing and debt targets set out in the latest (Autumn 2016) ‘Charter for Budget Responsibility‘. These are:
- to reduce cyclically-adjusted public-sector net borrowing to below 2% of GDP by 2020/21;
- for public-sector net debt as a percentage of GDP to be falling in 2020/21;
- for overall borrowing to be zero or in surplus by 2025/26.
But, as the Chancellor said in presenting the Review:
Our health emergency is not yet over. And our economic emergency has only just begun. So our immediate priority is to protect people’s lives and livelihoods.
Putting the public finances on a sustainable footing
Running a large budget deficit in an emergency is an essential policy for dealing with the massive decline in aggregate demand and for supporting those who have, or otherwise would have, lost their jobs. But what of the longer-term implications? What are the options for dealing with the high levels of debt?
1. Raising taxes. This tends to be the preferred approach of those on the left, who want to protect or improve public services. For them, the use of higher progressive taxes, such as income tax, or corporation tax or capital gains tax, are a means of funding such services and of providing support for those on lower incomes. There has been much discussion of the possibility of finding a way of taxing large tech companies, which are able to avoid taxes by declaring very low profits by diverting them to tax havens.
2. Cutting government expenditure. This is the traditional preference of those on the right, who prefer to cut the overall size of the state and thus allow for lower taxes. However, this is difficult to do without cutting vital services. Indeed, there is pressure to have higher government expenditure over the longer term to finance infrastructure investment – something supported by the Conservative government.
A downside of either of the above is that they squeeze aggregate demand and hence may slow the recovery. There was much discussion after the financial crisis over whether ‘austerity policies’ hindered the recovery and whether they created negative supply-side effects by dampening investment.
3. Accepting higher levels of debt into the longer term. This is a possible response as long as interest rates remain at record low levels. With depressed demand, loose monetary policy may be sustainable over a number of years. Quantitative easing depresses bond yields and makes it cheaper for governments to finance borrowing. Servicing high levels of debt may be quite affordable.
The problem is if inflation begins to rise. Even with lower aggregate demand, if aggregate supply has fallen faster because of bankruptcies and lack of investment, there may be upward pressure on prices. The Bank of England may have to raise interest rates, making it more expensive for the government to service its debts.
Another problem with not reducing the debt is that if another emergency occurs in the future, there will be less scope for further borrowing to support the economy.
4. Higher growth ‘deals’ with the deficit and reduces debt. In this scenario, austerity would be unnecessary. This is the ‘golden’ scenario – for the country to grow its way out of the problem. Higher output and incomes leads to higher tax revenues, and lower unemployment leads to lower expenditure on unemployment benefits. The crucial question is the relationship between aggregate demand and supply. For growth to be sustainable and shrink the debt/GDP ratio, aggregate demand must expand steadily in line with the growth in aggregate supply. The faster aggregate supply can grow, the faster can aggregate demand. In other words, the faster the growth in potential GDP, the faster can be the sustainable rate of growth of actual GDP and the faster can the debt/GDP ratio shrink.
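The arithmetic behind this scenario can be summarised with the standard textbook debt-dynamics identity (a general result, not a formula taken from the Review itself):

```latex
% b_t = debt-to-GDP ratio, d_t = primary deficit as a share of GDP,
% r = effective nominal interest rate on the debt, g = nominal GDP growth rate.
\Delta b_t \;=\; \frac{r - g}{1 + g}\, b_{t-1} \;+\; d_t
```

When nominal growth g exceeds the effective interest rate r, the first term is negative, so the debt/GDP ratio can fall even while the government continues to run a modest primary deficit – which is why ultra-low gilt yields matter so much for this option.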
One of the key issues is the degree of economic ‘scarring’ from the pandemic and the associated restrictions on economic activity. The bigger the decline in potential output from the closure of firms and the greater the deskilling of workers who have been laid off, the harder it will be for the economy to recover and the longer high deficits are likely to persist.
Another issue is the lack of labour productivity growth in the UK in recent years. If labour productivity does not increase, this will severely restrict the growth in potential output. Focusing on training and examining incentives, work practices and pay structures are necessary if productivity is to rise significantly. So too is finding ways to encourage firms to increase investment in new technologies.
Podcast and videos
- Initial reaction from IFS researchers on Spending Review 2020 and OBR forecasts
IFS Press Release, Paul Johnson, Carl Emmerson, Ben Zaranko, Tom Waters and Isabel Stockton (25/11/20)
- Rishi Sunak is likely to increase spending – which means tax rises will follow
IFS, Newspaper Article, Paul Johnson (23/11/20)
- Economic and Fiscal Outlook Executive Summary
- UK’s Sunak says public finances are on ‘unsustainable’ path
Reuters, David Milliken (26/11/20)
- Rishi Sunak warns ‘economic emergency has only just begun’
BBC News, Szu Ping Chan (25/11/20)
- UK will need £27bn of spending cuts or tax rises, watchdog warns
The Guardian, Phillip Inman (25/11/20)
- What is tomorrow’s Spending Review all about?
The Institute of Chartered Accountants in England and Wales (24/11/20)
- Spending Review 2020: the experts react
The Conversation, Drew Woodhouse, Ernestine Gheyoh Ndzi, Jonquil Lowe, Anupam Nanda, Alex de Ruyter and Simon J. Smith (25/11/20)
- What is the significance of the relationship between the rate of economic growth and the rate of interest for financing public-sector debt over the longer term?
- What can the government do to encourage investment in the economy?
- Using OBR data, find out what has happened to the output gap over the past few years and what is forecast to happen to it over the next five years. Explain the significance of the figures.
- Distinguish between demand-side and supply-side policies. How would you characterise the policies to tackle public-sector net debt in terms of this distinction? Do the policies have a mixture of demand- and supply-side effects?
- Choose two other developed countries. Examine how their public finances have been affected by the coronavirus pandemic and the policies they are adopting to tackle the economic effects of the pandemic.
With the imposition of a new lockdown in England from 5 November to 2 December and in Wales from 23 October to 9 November, and with strong restrictions in Scotland and Northern Ireland, the UK economy is set to return to negative growth – a W-shaped GDP growth curve.
With the closure of leisure facilities and non-essential shops in England and Wales, spending is likely to fall. Without support, many businesses would fail and potential output would fall. In terms of aggregate demand and supply, both would decline, as can be illustrated on an aggregate demand and supply diagram.
The aggregate demand curve shifts from AD1 to AD2 as consumption and investment fall. Exports also fall as demand is hit by the pandemic in other countries. The fall in aggregate supply is represented partly by a movement along the short-run aggregate supply curve (SRAS) as demand falls for businesses which remain open (such as transport services). Largely it is represented by a leftward shift in the curve from SRAS1 to SRAS2 as businesses such as non-essential shops and those in the hospitality and leisure sector are forced to close. What happens to the long-run supply curve depends on the extent to which businesses reopen when the lockdown and any other subsequent restrictions preventing their reopening are over. It also depends on the extent to which other firms spring up or existing firms grow to replace the business of those that have closed. The continuing rise in online retailing is an example.
With the prospect of falling GDP and rising unemployment, the UK government and the Bank of England have responded by giving a fiscal and monetary boost. We examine each in turn.
In March, the Chancellor introduced the furlough scheme, whereby employees temporarily laid off would receive 80% of their wages through a government grant to their employers. This scheme was due to end on 31 October, to be replaced by the less generous Job Support Scheme (see the blog, The new UK Job Support Scheme: how much will it slow the rise in unemployment?). However, the Chancellor first announced that the original furlough scheme would be extended until 2 December for England and then, on 5 November, to the end of March 2021 for the whole of the UK. He also announced that the self-employed income support grant would increase from 55% to 80% of average profits up to £7500.
In addition, the government announced cash grants of up to £3000 per month for businesses which are closed (worth more than £1 billion per month), extra money to local authorities to support businesses and an extension of existing loan schemes for business. Furthermore, the government is extending the scheme whereby people can claim a repayment ‘holiday’ for up to 6 months for mortgages, personal loans and car finance.
The government hopes that the boost to aggregate demand will help to slow, or even reverse, the predicted decline in GDP. What is more, by people being put on furlough rather than being laid off, it hopes to slow the rise in unemployment.
At the meeting of the Bank of England’s Monetary Policy Committee on 4 November, further expansionary monetary policy was announced. Rather than lowering Bank Rate from its current historically low rate of 0.1%, perhaps to a negative figure, it was decided to engage in further quantitative easing.
An additional £150 billion of government bonds will be purchased under the asset purchase facility (APF). This will bring the total value of bonds purchased since the start of the pandemic to £450 billion (including £20 billion of corporate bonds) and to £895 billion since 2009, when QE was first introduced in response to the recession following the financial crisis of 2007–8.
The existing programme of asset purchases should be complete by the end of December this year. The Bank of England expects the additional £150 billion of purchases to begin in January 2021 and be completed within a year.
UK quantitative easing since the first round in March 2009 is shown in the chart above. The reserve liabilities represent the newly created money for the purchase of assets under the APF programme. (There are approximately £30 billion of other reserve liabilities outside the APF programme.) The grey area shows projected reserve liabilities to the end of the newly announced programme of purchases, by which time, as stated above, the total will be £895 billion. This, of course, assumes that the Bank does not announce any further QE, which it could well do if the recovery falters.
Justifying the decision, the MPC meeting’s minutes state that:
There are signs that consumer spending has softened across a range of high-frequency indicators, while investment intentions have remained weak. …The fall in activity over 2020 has reflected a decline in both demand and supply. Overall, there is judged to be a material amount of spare capacity in the economy.
How effective these fiscal and monetary policy measures will be in mitigating the effects of the Covid restrictions remains to be seen. A lot will depend on how successful the lockdown and other restrictions are in slowing the virus, how quickly a vaccine is developed and deployed, whether a Brexit deal is secured, and the confidence of consumers, businesses and financial markets that the economy will bounce back in 2021. As the MPC's minutes state:
The outlook for the economy remains unusually uncertain. It depends on the evolution of the pandemic and measures taken to protect public health, as well as the nature of, and transition to, the new trading arrangements between the European Union and the United Kingdom. It also depends on the responses of households, businesses and financial markets to these developments.
- Covid: Rishi Sunak to extend furlough scheme to end of March
BBC News (6/11/20)
- Furlough extended until March and self-employed support boosted again
MSE News, Callum Mason (6/11/20)
- Number on furlough in UK may double during England lockdown
The Guardian, Richard Partington (3/11/20)
- ‘We wouldn’t manage without it’: business owners on the furlough extension
The Guardian, Molly Blackall and Mattha Busby (6/11/20)
- Sunak’s abrupt turn on UK furlough scheme draws criticism from sceptics
Financial Times, Delphine Strauss (6/11/20)
- Coronavirus: Bank of England unleashes further £150bn of support for economy
Sky News, James Sillars (5/11/20)
- Bank of England boss pledges to do ‘everything we can’
BBC News, Szu Ping Chan (6/11/20)
- Savers are spared negative rates but the magic money tree delivers £150bn more QE: What the Bank of England’s charts tell us about the economy
This is Money, Simon Lambert (5/11/20)
- Covid-19 and the victory of quantitative easing
The Spectator, Bruce Anderson (26/10/20)
- Will the Bank of England’s reliance on quantitative easing work for the UK economy?
The Conversation, Ghulam Sorwar (9/11/20)
- With a W-shaped recession looming and debt piling up, the government should start issuing GDP-linked bonds
LSE British Politics and Policy blogs, Costas Milas (6/11/20)
- Illustrate the effects of expansionary fiscal and monetary policy on (a) a short-run aggregate supply and demand diagram; (b) a long-run aggregate supply and demand diagram.
- In the context of the fiscal and monetary policy measures examined in this blog, what will determine the amount that the curves shift?
- Illustrate on a Keynesian 45° line diagram the effects of (a) the lockdown and (b) the fiscal and monetary policy measures adopted by the government and Bank of England.
- If people move from full-time to part-time working, how is this reflected in the unemployment statistics? What is this type of unemployment called?
- How does quantitative easing through asset purchases work through the economy to affect output and employment? In other words, what is the transmission mechanism of the policy?
- What determines the effectiveness of quantitative easing?
- Under what circumstances will increasing the money supply affect (a) real output and (b) prices alone?
- Why might quantitative easing benefit the rich more than the poor?
- How could the government use quantitative easing to finance its budget deficit?
In the current environment of low inflation and rising unemployment, the Federal Reserve Bank, the USA’s central bank, has amended its monetary targets. The new measures were announced by the Fed chair, Jay Powell, in a speech for the annual Jackson Hole central bankers’ symposium (this year conducted online on August 27 and 28). The symposium was an opportunity for central bankers to reflect on their responses to the coronavirus pandemic and to consider what changes might need to be made to their monetary policy targets and instruments.
The Fed’s previous targets
Previously, like most other central banks, the Fed had a long-run inflation target of 2%. It did, however, also seek to ‘maximise employment’. In practice, this meant seeking to achieve a ‘normal’ rate of unemployment, which the Fed regards as ranging from 3.5 to 4.7% with a median value of 4.1%. The description of its objectives stated that:
In setting monetary policy, the Committee seeks to mitigate deviations of inflation from its longer-run goal and deviations of employment from the Committee’s assessments of its maximum level. These objectives are generally complementary. However, under circumstances in which the Committee judges that the objectives are not complementary, it follows a balanced approach in promoting them, taking into account the magnitude of the deviations and the potentially different time horizons over which employment and inflation are projected to return to levels judged consistent with its mandate.
The new targets
Under the new system, the Fed has softened its inflation target. It will still be 2% over the longer term, but it will be regarded as an average, rather than a firm target. The Fed will be willing to see inflation above 2% for longer than previously before raising interest rates if this is felt necessary for the economy to recover and to achieve its long-run potential economic growth rate. Fed chair, Jay Powell, in a speech on 27 August said:
Following periods when inflation has been running below 2%, appropriate monetary policy will likely aim to achieve inflation moderately above 2 per cent for some time.
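A minimal numerical sketch of what ‘averaging’ could mean in practice is given below. Note that the Fed has not committed to a specific formula or averaging window, so both the window and the figures here are hypothetical.

```python
# Illustrative average-inflation targeting: after a period of undershooting,
# policy tolerates an overshoot so the *average* returns to target.

target = 2.0
past_inflation = [1.2, 1.5, 1.6]    # hypothetical undershoot years, %

window = len(past_inflation) + 1    # average over the past years plus one more
required_next = window * target - sum(past_inflation)

print(f"Inflation needed next year for a {target}% average: {required_next:.1f}%")
# With three years at 1.2%, 1.5% and 1.6%, hitting a 2% four-year average
# requires roughly 3.7% next year -- well above the old point target.
```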
Additionally, the Fed has increased its emphasis on employment. Instead of focusing on deviations from normal employment, the Fed will now focus on the shortfall of employment from its normal level and not be concerned if employment temporarily exceeds its normal level. As Powell said:
Going forward, employment can run at or above real-time estimates of its maximum level without causing concern, unless accompanied by signs of unwanted increases in inflation or the emergence of other risks that could impede the attainment of our goals
The Fed will also take account of the distribution of employment and pay more attention to achieving a strong labour market in low-income and disadvantaged communities. However, apart from the benefits to such communities from a generally strong labour market, it is not clear how the Fed could focus on disadvantaged communities through the instruments it has at its disposal – interest rate changes and quantitative easing.
Modern monetary theorists (see blog MMT – a Magic Money Tree or Modern Monetary Theory?) will welcome the changes, arguing that they will allow more aggressive expansion and higher government borrowing at ultra-low interest rates.
The problem for the Fed is that it is attempting to achieve more aggressive goals without having any more than the two monetary instruments it currently has – lowering interest rates and increasing money supply through asset purchases (quantitative easing). Interest rates are already near rock bottom and further quantitative easing may continue to inflate asset prices (such as share and property prices) without sufficiently stimulating aggregate demand. Changing targets without changing the means of achieving them is likely to be unsuccessful.
It remains to be seen whether the Fed will move to funding government borrowing directly, which could allow for a huge stimulus through infrastructure spending, or whether it will merely stick to using asset purchases as a way for introducing new money into the system.
- In landmark shift, Fed rewrites approach to inflation, labor market
Reuters, Jonnelle Marte, Ann Saphir and Howard Schneider (27/8/20)
- 5 Key Takeaways From Powell’s Jackson Hole Fed Speech
Bloomberg, Mohamed A. El-Erian (28/8/20)
- Fed adopts new strategy to allow higher inflation and welcome strong labor markets
Market Watch, Greg Robb (27/8/20)
- Fed to tolerate higher inflation in policy shift
Financial Times, James Politi and Colby Smith (27/8/20)
- Fed inflation shift raises questions about past rate rises
Financial Times, James Politi and Colby Smith (28/8/20)
- Dollar slides as bond market signals rising inflation angst
Financial Times, Adam Samson and Colby Smith (28/8/20)
- Wall Street shares rise after Fed announces soft approach to inflation
The Guardian, Larry Elliott (27/8/20)
- How the Fed Is Bringing an Inflation Debate to a Boil
Bloomberg, Ben Holland, Enda Curran, Vivien Lou Chen and Kyoungwha Kim (27/8/20)
- The live now, pay later economy comes at a heavy cost for us all
The Guardian, Phillip Inman (29/8/20)
- The world’s central banks are starting to experiment. But what comes next?
The Guardian, Adam Tooze (9/9/20)
- Find out how much asset purchases by the Fed, the Bank of England and the ECB have increased in the current rounds of quantitative easing.
- How do asset purchases affect narrow money, broad money and aggregate demand? Is there a fixed money multiplier effect between the narrow money increases and aggregate demand? Explain.
- Why did the dollar exchange rate fall following the announcement of the new measures by Jay Powell?
- The Governor of the Bank of England, Andrew Bailey, also gave a speech at the Jackson Hole symposium. How does the approach to monetary policy outlined by Bailey differ from that outlined by Jay Powell?
- What practical steps, if any, could a central bank take to improve the relative employment prospects of disadvantaged groups?
- Outline the arguments for and against central banks directly funding government expenditure through money creation.
- What longer-term problems are likely to arise from central banks pursuing ultra-low interest rates for an extended period of time?
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8707736730575562,
"language": "en",
"url": "https://tool.viridad.eu/economic_activities/63",
"token_count": 203,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.1630859375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:11ed106d-017b-4953-b5c4-a8d372bc6f1a>"
}
|
Passenger Rail Transport (Interurban)
Climate Change Mitigation
Metric and threshold:
Zero direct emissions trains are eligible. Other trains are eligible if direct emissions (TTW) are below 50g CO2e emissions per passenger kilometre (gCO2e/pkm) until 2025 (non-eligible thereafter).
Zero direct emissions rail (e.g. electric, hydrogen) is eligible because:
• With the present energy mix, the overall emissions associated with zero direct emissions rail transport (i.e. electric or hydrogen) are among the lowest compared with other transport modes.
• The generation of the energy carriers used by zero direct emissions transport is assumed to become low or zero carbon in the near future.
The threshold of 50 gCO2e/pkm until 2025 ensures that the carbon intensity remains similar to criteria for eligible road vehicles with low occupation factor (50 gCO2/vkm) and significantly lower than emissions for an average car in the current vehicle stock.
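A minimal sketch of how the eligibility test above might be applied is shown below; the service data are hypothetical and the logic is a simplified reading of the criteria, not an official screening tool.

```python
# Simplified eligibility check for interurban passenger rail (hypothetical data).
def eligible(direct_emissions_g: float, passengers: int, km: float, year: int) -> bool:
    """Tank-to-wheel (TTW) emissions per passenger-km against the 50 g threshold."""
    g_per_pkm = direct_emissions_g / (passengers * km)
    if g_per_pkm == 0:
        return True                         # zero direct emissions: always eligible
    return g_per_pkm < 50 and year <= 2025  # threshold only applies until 2025

# A hypothetical diesel service: 1.2 tonnes CO2e, 300 passengers, 100 km.
print(eligible(1_200_000, 300, 100, 2024))  # 40 g/pkm -> True
print(eligible(1_200_000, 300, 100, 2026))  # non-eligible after 2025 -> False
```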
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8949577808380127,
"language": "en",
"url": "https://www.indicative.com/data-defined/time-series-analysis/",
"token_count": 261,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.047119140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:55f66000-5190-40dc-84db-e22dbe4b72aa>"
}
|
Time Series Analysis Defined
Time series analysis is a statistical technique that deals with time series data, or trend analysis. Simply put, this type of analysis deals with a series of data points ordered in time, whether at particular points in time or over particular intervals.
In a time series, time is often the independent variable, with the goal to be to make a forecast for the future. There is no minimum or maximum amount of time that must be included in this analysis, allowing the data to be gathered in a way that provides the information needed by the users.
This statistical technique can be considered in three different ways:
Time series data: A set of observations on the values that a variable takes at different times.
Cross-sectional data: Data of one or more variables, collected at the same point in time.
Pooled data: A combination of time series data and cross-sectional data.
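As a toy illustration of the forecasting idea – time as the independent variable, with past observations used to project the next period – here is a short Python sketch using a simple moving average. The data are made up.

```python
# Naive trend forecast on a monthly time series (hypothetical sales figures).
sales = [112, 118, 121, 119, 127, 133, 138, 135, 142, 150]

def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    recent = series[-window:]
    return sum(recent) / len(recent)

print(f"Forecast for next period: {moving_average_forecast(sales):.1f}")  # 142.3
# Each position in the list is one time interval; the forecast simply
# extrapolates the most recent part of the trend.
```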
Time Series Analysis is used for many applications, including:
- Sales Forecasting
- Economic Forecasting
- Stock Market Analysis
- Budgetary Analysis
- Inventory Studies
- Workload Projections
In Data Defined, we help make the complex world of data more accessible by explaining some of the most complex aspects of the field.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9499351978302002,
"language": "en",
"url": "http://www.nepalenergyforum.com/hydropower-development-policy-of-nepal-an-overview-of-its-implementation/",
"token_count": 1559,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.369140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2b7a19e7-d9c0-47a1-9239-85790966af68>"
}
|
After the restoration of democracy in 1990, the state's efforts were focussed on participatory development with a liberal economic policy. Considering the potential of harnessing the country's vast water resources, the hydropower sector was given priority, and a Hydropower Development Policy, 1992 was announced by the government. After nine years of its implementation, the then government approved and implemented a new policy, the Hydropower Development Policy, 2001, which is still in practice. In this context, a review and update of the policy for the new, changing environment is already overdue.
The Hydropower Development Policy, 1992, which was formulated for the first time, was quite limited in scope. Yet it was able to involve the private sector in hydropower development in the country. With the lessons learn from this policy implementation and incorporation of the latest legal provisions like Environmental Protection Act and Rules (1997) and Local Self Governance Act (1999), the government formulated the Hydropower Policy in 2001 by incorporating all new criteria and private sector demands as well.
The provisions made in the policy emphasised generating electricity at low cost by utilising the water resources available in the country, extension of reliable and qualitative electricity service throughout the country, tie-up of electrification with economic activities, rendering support to the development of the rural economy by extending rural electrification and development of hydropower as an exportable commodity. However, research shows that the policy has been unable to achieve its objectives as targeted.
Nepal’s hydropower policy notes that generation and consumption of electrical energy in Nepal is minimal. The major sources of energy are still agriculture and forest-based resources. Despite the abundant possibility of hydropower generation as a renewable energy source, it has not been harnessed to the desired extent. Industrial enterprises have not developed at the desired pace due to the lack of electricity. An opportune hydropower policy is, thus, seen as a prerequisite for the supply of energy at a reasonable price, which has the pivotal role in the development of rural electrification, supply of domestic energy, creation of employment and in the development of industrial enterprises.
Based on the experiences gained in the course of implementing the principles followed by the Hydropower Development Policy, 1992, emerging new concepts in the international market and their impacts, technological development, possibility of exporting hydropower energy, possibility of foreign investment and commitment to environmental protection, the revision and improvement of the hydropower policy has become imperative with a view to making it clear, transparent, practical and investment-friendly.
The new hydropower policy should clearly reflect the direction on vital issues such as development of multipurpose plans for maximum utilisation of available water resources, appropriate sharing of benefits, role of public and private sector, utilisation of internal as well as external market, and clarity and transparency in the activities of government with the private sector.
Study shows that there have been a few and remarkable achievements from the implementation of the hydropower policy in the form of power generation, royalty collection, private sector encouragement in hydropower development and capacity building. This has ultimately contributed to social and economic transformation of the country.
However, on the other side, there are many gaps in the policy due to which the private sector and international investors are in a wait and see position. The policy is unable to harmonise with the strategies set by the Water Resources Strategy, 2002 and targets set by the National Water Plan, 2005. The main gap is found in policy and legal harmonisation and regular updating of the policy as per the requirement.
According to the study, the following scenario appears to be the impact of the Hydropower Development Policy:
Up to the year 2014/15, a total of 733.557 MW of hydropower had been developed, of which 255.647 MW was generated through private-sector investment. Some 83 projects with an installed capacity of 1,521.28 MW are in the construction phase. In addition to these, 33 hydropower projects of 532.542 MW installed capacity are in different stages of development. This has opened the door for national and international private-sector investment, but the government should do more to convince the private sector and to lure foreign investment.
Nepal’s hydropower policy has strongly recommended rural electrification, meeting domestic needs and exporting energy, but the country still faces acute power outages even in the summer season. The policy has made clear provisions on royalty collection, energy quality, energy inspectors and institutional arrangements, but no clear guidelines and implementation plans are in operation to realise them.
The policy has strongly recommended a regulatory body, the Nepal Electricity Regulatory Commission (NERC), for regulating electricity, but the commission is yet to be established. The bill is still pending in Parliament. For this reason, monitoring and regulation of the electricity sector is weak – like a ship without a captain.
The Department of Electricity Development (DoED) and the Water and Energy Commission (WEC) have been established as per the policy, which can be considered a good initiative, but neither organisation is functioning as per its mandate, so energy planning and private-sector promotion in hydropower development are not effective. Institutional strengthening and capacity building of these organisations are essential.
Conflicts, both violent and non-violent, social movements, financial structure, political instability and a multi-window process for approval of projects are other factors leading to the delay in identification, study/investigation and construction of projects.
To address these issues and challenges, Nepal’s hydropower policy should be updated and harmonised with the prevailing laws, plans and strategies, while the enactment of a new electricity act and a regulatory body act is essential. The private sector is not fully encouraged and convinced by the current policy. The policy lacks clear provisions and an operational mechanism for the projects which will be handed over by private-sector developers to the government after their licence periods expire.
As the private sector has already achieved significant progress in hydropower development, the Nepal Electricity Authority needs to be reformed. A master plan for hydropower development of the country has become most urgent. Although the Water Resources Strategy, 2002 and the National Water Plan, 2005 emphasise basin planning and the adoption of an Integrated Water Resources Management approach for the holistic development of water resources, the policy has not recognised these provisions.
As multipurpose and reservoir projects differ from conventional run-of-the-river and daily-peaking power plants in terms of construction technology, coverage and financial investment, the policy has no specific provision to attract private-sector investment in them. Given the load pattern and the current situation, the development of multipurpose and reservoir projects is essential in the long run.
Various government agencies are involved in the sector; however, the policy does not emphasise a collaboration and coordination mechanism among them. For the fast and sustainable development of hydropower, a single-window policy and effective coordination between all the agencies are necessary.
Social and political problems
The policy does not foresee social and political problems, which are major issues and concerns these days. The private sector is profit-driven and always seeks profit and assurance for its investment. Private hydropower developers are seeking clearer provisions and assurances for their investment in projects like hydropower. The policy, however, fails to give such assurances to international developers and multinational companies. If such assurance were given and ambiguous legal provisions removed, huge investment in mega hydropower projects in Nepal would be possible.
By: Hari Bahadur Thapa, Senior Divisional Engineer, National Vigilance Centre.
Source : The Rising Nepal
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.942276120185852,
"language": "en",
"url": "https://comparesoft.com/asset-management-software/asset-life-cycle/",
"token_count": 1167,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0732421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ff4197b3-37ec-43a0-8c20-7de22b6e0d24>"
}
|
The Four Key Stages of Asset Life Cycle Management
Improving the accessibility and management of assets is an important factor in a business’s goal to generate revenue, paving the way for digital tools to better enhance the asset life cycle management process.
Through solutions such as Asset Management Software, businesses are able to understand and analyse the life cycle of each asset, helping owners to make better procurement decisions, maximize the efficiency of equipment, and reduce needless spending and maintenance costs.
What Is the Asset Life Cycle?
An asset life cycle is a strategic and analytical approach to the management of a business’s assets. Most commonly performed with an accurate data collection system, such as Asset Management Software, an asset life cycle is broken down into multiple stages.
Although procurement of an asset is most commonly seen as the first stage of the asset life cycle, it actually begins with planning. From first identifying the need for an asset, the process then continues through the asset's useful life and on to disposal.
Each asset has a life cycle that can be digested into four key stages:
- Planning
- Acquisition
- Operation and Maintenance
- Disposal
Whether it's an espresso machine in a coffee shop or a CNC lathe in a manufacturing warehouse, it's important to understand the life cycle of your revenue-critical assets. By successfully managing this, businesses can then determine the importance of an asset by various factors such as cost, reliability, and efficiency.
Why Asset Life Cycle Management Is Important
No matter what the industry or size of operations, all businesses are reliant on their fixed assets. Each asset has its own life cycle, including a period of useful life where it runs at peak performance. But after inevitable wear and tear, an asset's optimal operating life decreases and it requires maintenance – until repair costs eventually outweigh the price of replacement.
The disposal of an asset can be for various reasons including the amount of usage by a production team, the way it had been used by operators, or even the effectiveness of a maintenance plan.
With the deployment of a successful asset life cycle management, or LCAM (Life Cycle Asset Management), strategy, businesses can gauge when an asset will reach its peak performance and analyse how long a useful life it has left, before eventually planning for maintenance work or its replacement.
This detailed data-driven approach to asset life cycle management also ensures businesses are keeping their assets operating for as long as possible. Among other capabilities such as:
- Calculating asset depreciation value
- Building preventive maintenance strategies
- Specifying asset roles in operations
- Ensuring compliance with regulatory standards
- Calculating the cost of procurement and replacement
- Integrating assets into asset tracking systems
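As a concrete illustration of the first capability above – calculating asset depreciation – here is a minimal Python sketch of straight-line depreciation, one common method among several that asset-management tools typically support. The figures are hypothetical.

```python
# Straight-line depreciation: book value falls by a fixed amount each year.
def straight_line_value(cost, salvage, useful_life_years, age_years):
    """Book value of an asset after `age_years` on a straight-line schedule."""
    annual_depreciation = (cost - salvage) / useful_life_years
    value = cost - annual_depreciation * min(age_years, useful_life_years)
    return max(value, salvage)

# Hypothetical CNC lathe: £40,000 cost, £4,000 salvage value, 10-year life.
print(straight_line_value(40_000, 4_000, 10, 6))  # 40000 - 3600*6 = 18400.0
```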
The Four Stages of an Asset Life Cycle
Although the organisation and structure of an asset life cycle may differ between businesses, some stages are more predominant than others. An asset life cycle can be broken down into four key stages:
Planning
Planning helps to establish the requirement of an asset, based on the evaluation of existing assets. This is done by introducing a management system that can analyse trends and data, allowing the decision-makers to identify the need for the asset and what value it can add to operations.
This first stage of an asset life cycle is crucial for all stakeholders, from financial teams to operators. The decision to purchase an asset relies on the asset fitting a business's needs, contributing to operations, and generating revenue.
Acquisition
Once an asset has been identified, the next stage is to purchase it. This means that an asset has been properly analysed and identified as a much-needed resource to improve business operations.
This stage will also focus on the financial side of acquiring an asset that is within a specific budget that has been set within the planning stage.
When the asset is eventually acquired and deployed, it can then be tracked throughout its entire life cycle by using an asset management system.
Operation and Maintenance
With the asset now installed, the next stage is operation and maintenance – the longest phase of an asset life cycle. This stage covers the application and management of the asset, including any maintenance and repair that may be needed.
As the asset is finally put to its intended use within the business, it is now improving operations and helping to generate revenue. As well as reacting to upgrades, patch fixes, licenses, and audits.
During operation, an asset will be regularly monitored and checked for any performance issues that could unexpectedly develop. This is when maintenance and repairs start to become a common occurrence.
As an asset ages and wear and tear increases, regular maintenance is needed to help prolong the life and value of the asset. Not only does this mean repairs, but modifications and upgrades are also required to keep assets in sync with an ever-changing workplace.
Maintenance strategies can differ between businesses. Some prefer a reactive approach, whereas others opt for a predictive or preventive maintenance strategy. But each strategy works towards the same goals, including:
- Reducing downtime
- Minimizing emergency repair costs
- Increasing equipment uptime
- Prolonging asset life expectancy
By targeting potential improvement areas, maintenance can even help an asset perform better than it originally did.
Disposal
Finally, at the end of an asset's useful life, it is removed from service and either sold, re-purposed, thrown away, or recycled.
Although at this stage an asset has no business value, it may still need to be disposed of efficiently to ensure it does not harm the environment. This process could even involve dismantling the asset piece by piece or wiping it clear of data.
However, if there is still an operational need for this type of asset, a replacement is planned for and the asset life cycle can begin again.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9429090023040771,
"language": "en",
"url": "https://eccovia.com/how-organizations-can-benefit-from-workforce-development/",
"token_count": 767,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0908203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:50a89d09-46b5-47a4-bb40-4370151a3153>"
}
|
While the new year is usually the best time to look for a new job, unprecedented spikes in unemployment due to the COVID-19 pandemic may prove otherwise. Notwithstanding, job placement continues to be one of the most effective ways to break the poverty cycle, and more organizations can benefit from it through data management.
THE STATE OF UNEMPLOYMENT
In the first half of 2020 alone, over 40 million Americans filed for unemployment [1]. The rates of unemployment have drastically fluctuated throughout the year, starting at around 3.6% in January and peaking in April at 14.7% [2], indicating a much more unstable employment rate. Now more than 10.7 million Americans are out of work, relying on social services and unemployment benefits to try and make ends meet.
The cause of this extreme rise in unemployment is directly tied to the COVID-19 pandemic—the coronavirus has forced thousands of businesses to cut back, reduce staff, and lower salaries. However, the burden of unemployment is not born equally between the different classes of people.
Pew Research found that unemployment rates this year were significantly higher among individuals with lower levels of education, correlating with other measures of socio-economic status such as income, areas of residency, and alternate forms of financial help [3]. Those who typically worked lower-paying jobs were, unfortunately, also the most likely to lose their job.
Disadvantaged peoples, such as low-income communities, commonly face additional hurdles when looking for work. Incarcerated people, those experiencing homelessness, survivors of domestic violence, those with limited education, and refugees are often viewed as risks, and hiring managers generally avoid such prospects [4]. Challenges then quickly become compounding—individuals cannot improve their situation without income, yet they cannot secure income without first improving their situation—and thus the cycle of poverty reinforces itself.
NON-PROFITS AND JOB PLACEMENT
The good news is that many non-profits dedicated to workforce development are seeing success in their job placement efforts [5]. When individuals are trained, given work opportunities, and regularly assessed, rates of job abandonment significantly decrease. Furthermore, these non-profits are showing that jobs are statistically one of the most effective ways to break the cycle of poverty [6].
However, workforce development should not be left to specialized non-profits alone. All organizations and programs benefit from job acquisition, as income can act as one of the most effective ways to improve living circumstances. Even more so, difficulty in finding work affects some areas of social services more acutely, such as prisoner re-entry, homelessness, refugees, those with disabilities, youth, and welfare recipients [7].
If your organization focuses on one of these topics (or one related to them), then incorporating workforce development into your program may be one of the most crucial improvements for success.
USING DATA FOR WORKFORCE DEVELOPMENT
Workforce development relies heavily on data to inform and improve services. For programs to help individuals find sustained work, it is important to gather data on skills, experience, and the current marketplace, as well as to track goals, efforts, and achievements. When such data is used properly, job placement and retention can improve dramatically.
Case management tools like ClientTrack™ make data use for workforce development clear and simple. Such software can generate employment reports, track employment history, and create and track employment goals. Combining employment tracking to other program initiatives (like HMIS care or nutrition services), organizations can better serve their communities.
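As a generic illustration of the kind of record such tools maintain – the field names below are hypothetical and not ClientTrack's actual schema – consider this minimal Python sketch:

```python
# Generic employment-tracking record for a case-management workflow.
from dataclasses import dataclass, field

@dataclass
class EmploymentRecord:
    client_id: str
    skills: list[str] = field(default_factory=list)
    goals: list[str] = field(default_factory=list)
    placements: list[str] = field(default_factory=list)

    def goal_completion_rate(self) -> float:
        """Share of recorded goals that led to a job placement."""
        return len(self.placements) / len(self.goals) if self.goals else 0.0

record = EmploymentRecord("c-001", ["forklift licence"],
                          ["gain certification", "apply to 5 jobs"],
                          ["warehouse operative"])
print(record.goal_completion_rate())  # 0.5
```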
Although barriers to employment will continue to face disadvantaged communities, learning to talk about and gather data on them can act as an important step to combating such hurdles. Once more individuals find sustainable work, we can continue to work on ending the poverty cycle.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9463680386543274,
"language": "en",
"url": "https://fredblog.stlouisfed.org/2018/08/has-wage-growth-been-slower-than-normal-in-the-current-business-cycle/",
"token_count": 746,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.003662109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:808aacaf-6f41-45eb-87b5-b65b1839036c>"
}
|
You may have read in the popular press that wage growth seems much slower since the Great Recession compared with previous business cycles. Let's see what FRED data can tell us. The graph above shows wage growth, defined as the annualized percentage change in the average hourly earnings of private production and nonsupervisory employees. To interpret the graph, note the gray bars, which indicate recessions since 1976, and the green vertical lines, which indicate the peaks of each business cycle. A generally U-shaped pattern occurs between the starts of consecutive recessions. At the start of a recession, the rate of wage growth falls for a number of months; the trend then reverses, with wage growth rising until the next recession, and the cycle repeats.
To better compare how wages behave across business cycles, we graph the wage behavior observed for each of the prior three business cycles and the current business cycle together. Each cycle is centered at zero, which denotes the month with the lowest wage growth for each business cycle. The current business cycle is identified by the purple line. This cycle started at a lower level of wage increases than the prior three cycles. More importantly, the wage increase from the low point has been following a lower trend: In prior cycles, wage increases exceeded 4%; the current cycle’s wage increases still have yet to reach 3%.
In a future blog post, we’ll look into possible reasons why the current business cycle’s wages have been increasing much more slowly.
How these graphs were created (plus some background): For the first graph, search for wages and select “Average Hourly Earnings of Production and Nonsupervisory Employees: Total Private.” From the “Edit Graph” panel, change the units to “Percent Change from a Year Ago.” The business cycles can be accented by adding green lines to the graph corresponding to each peak using the “Create user-defined line” option under the “Add Line” tab. For the second graph, change the units to “Index” and enter the date “1986-12-01.” This was the lowest point in wage growth for the associated business cycle, which had begun 65 months earlier and would last 43 months longer. To capture the entire business cycle with monthly data, check the “Display integer periods…” box and set the range from -65 to 43. If the units under the “Customize data” tab are changed to “Percent Change from a Year Ago,” the resulting graph shows the section of the first graph from July 1981 to July 1990. While this same result could have been achieved more easily by changing the date range of the original graph, an advantage of this approach is that it allows the same series to be plotted from multiple separate date ranges. Use the “Add Line” tab to add this same series to the graph four times. The options for each line will be the same as those for the first line, except that the custom index date and length of the date range will be different: A second low point occurred in September of 1992, 26 months into a cycle that would last 102 months longer, and the next in January 2004, 35 months into a cycle that would last 46 months longer. The present cycle had its low point 58 months in, during October 2012, and the end date of the cycle has yet to be determined. One way to resolve this problem is to set an unnecessarily high integer end date, like 200. FRED will then automatically fill in the latest available data.
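For readers who prefer to reproduce the key transformation outside FRED, the “Percent Change from a Year Ago” unit applied above is straightforward to compute directly. A minimal Python sketch with made-up earnings data:

```python
# Year-on-year growth: percent change of each month over the same month
# a year earlier (the transformation FRED applies to hourly earnings).
def yoy_percent_change(monthly_series):
    return [
        100 * (monthly_series[i] / monthly_series[i - 12] - 1)
        for i in range(12, len(monthly_series))
    ]

earnings = [20.0 + 0.05 * m for m in range(24)]  # hypothetical hourly earnings
print([round(g, 2) for g in yoy_percent_change(earnings)][:3])  # [3.0, 2.99, 2.99]
```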
Suggested by Ryan Mather and Don Schlagenhauf.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9393444061279297,
"language": "en",
"url": "https://mnmblog.org/agricultural-fumigants-market.html",
"token_count": 731,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2490234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:37e34e13-bf60-4425-a644-d56f572ac653>"
}
|
The agricultural fumigants market is estimated to be valued at USD 1.59 Billion and is projected to grow at a CAGR of 4.10% from 2017, to reach USD 1.94 Billion by 2022. The growth of this market can be attributed to the growing focus on increasing agricultural production, increase in focus on the reduction of post-harvest losses, and growing usage of agricultural fumigants for the production and storage of cereals.
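The headline figures are internally consistent: compounding the 2017 value forward at the stated CAGR for five years reproduces the 2022 projection, to rounding. A quick Python check:

```python
# Sanity check of the market projection (values from the paragraph above).
start_value = 1.59   # USD billion, 2017
cagr = 0.041         # 4.10% compound annual growth rate
years = 5            # 2017 -> 2022

end_value = start_value * (1 + cagr) ** years
print(f"Projected 2022 market size: USD {end_value:.2f} Billion")  # ~1.94
```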
How will investments in the production of new products create profitable opportunities for manufacturers in the agricultural fumigants market?
Increasing tolerance of pests towards methyl bromide fumigation, followed by its phasing out, has resulted in the adoption of suitable alternatives for methyl bromide for the management of stored products and to quarantine pests. The alternatives for methyl bromide fumigant include phosphine, sulfuryl fluoride, carbonyl sulfide, ethyl formate, hydrogen cyanide, carbon disulfide, methyl iodide, and methyl isothiocyanate. Hence, manufacturers are focusing on new product developments by investing in R&D activities for active ingredients that can inhibit the resistant insects by using these alternative fumigants.
To know about the assumptions considered for the study, download the PDF brochure
Increase in focus on the reduction of post-harvest losses
Reduction of post-harvest food losses is a critical component of ensuring food security. Post-harvest losses arise from freshly harvested agricultural produce undergoing changes during handling. Post-harvest losses are a measurable reduction in foodstuffs and affect both quantity and quality. According to the UN DESA (United Nations Department of Economic and Social Affairs) report, the global population is expected to reach 9.7 billion by 2050, further adding to global food security concerns. Thus, food availability and accessibility can be raised by increasing production and reducing losses.
Post-harvest losses can be avoided by undertaking fumigation for pest prevention. For example, the decay of citrus post-harvest is controlled by ammonia gas fumigation. Post-harvest green mold and blue mold, caused by Penicillium digitatum and Penicillium italicum respectively, are effectively controlled by ammonia gas fumigation of lemons and oranges. This treatment does not harm oranges; however, it causes the tissue within previously injured areas on the rind of lemons to become darker in color. Fumigation of lemons with ammonia slightly accelerates the natural transition of the color of the rind from green to yellow. Thus, fumigation technology helps in preventing post-harvest losses and maintaining the quality of agricultural commodities. In addition, fumigation also helps in the thorough cleaning of storage areas, silos, or warehouses. This is employed as a further preventative method in pre-harvest cleaning for the storage of grains.
To speak to our analyst for a discussion on the above findings, click Speak to Analyst
The market is dominated by key players such as BASF (Germany), Syngenta (Switzerland), ADAMA (Israel), Dow Chemicals (US), and FMC (US). Other players include UPL (India), Degesch (US), Nufarm (Australia), American Vanguard (US), Nippon (Japan), Arkema (US), and Rentokil (UK). The key players have adopted strategic developments such as new product launches, expansions & investments, mergers & acquisitions, agreements, collaborations, joint ventures, and partnerships to explore the market in new geographies.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9486590623855591,
"language": "en",
"url": "https://newszou.com/the-development-of-african-countries-through/",
"token_count": 1416,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.345703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3ce15524-ac47-4b0e-afbc-2e97ba54ea2e>"
}
|
Export-led growth is economic growth based on raising exports and export revenue, a key component of aggregate demand.
This would mean GDP increases, resulting in larger incomes and growth in the domestic economy. It can be achieved by exploiting a country's comparative advantage. To do this, several conditions must be fulfilled, such as liberal trade and minimal government intervention.
Yet there must also be a strong supply of infrastructure. This is highlighted in the “weak supply capacity – limited ability… in Africa”. Indeed, the UNCTAD senior economist Samuel Gayi raised the issues of a “shortage of reliable electricity supply … banking services and efficient transportation”.
Additionally, Africa's primary exports are agricultural products. As a result of technological advancements such as improved fertilisers and increased mechanisation in developed countries, the supply of agricultural products has dramatically increased. Protectionist policies such as subsidies have lowered prices further. Because demand for agricultural products is highly income inelastic, demand has barely changed. This means that prices of agricultural products have fallen considerably.
At the same time, people are now consuming more manufactured goods, which are income elastic. This raises the price of manufactured products, which are Africa's principal imports. The falling global price of agricultural commodities and the rising cost of manufactured goods mean that Africa faces deteriorating terms of trade. Deteriorating terms of trade mean that Africa's exports command a lower price while imports become more expensive, so countries have to sell more exports in order to buy the same quantity of imports.
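The terms-of-trade deterioration described here can be expressed as a simple index – the ratio of an export price index to an import price index. A minimal Python sketch with hypothetical price indices:

```python
# Terms of trade = (export price index / import price index) * 100.
export_prices = {"year 1": 100, "year 2": 85}    # agricultural export prices fall
import_prices = {"year 1": 100, "year 2": 115}   # manufactured import prices rise

for year in ("year 1", "year 2"):
    tot = 100 * export_prices[year] / import_prices[year]
    print(f"{year}: terms of trade = {tot:.1f}")
# A fall from 100.0 to about 73.9 means more exports must be sold to buy
# the same volume of imports.
```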
By further increasing supply, prices fall even lower. This creates a vicious cycle. To earn larger revenues, several countries have overused their resources.
For instance, in Ethiopia, observers have recently commented that in some areas, soil is irreversibly damaged. This has long-term implications. It may have contributed to Africa losing world export share, despite “two decades of… trade liberalisation”. Unsustainable practices may have lowered productivity. Indeed, the article notes how Africa was once a net food exporter and has now “become the region most dependent on external food aid”.
The article notes the “strategic role of agriculture in Africa”, implying that more focus and funding should be put into farming. This would allow Africa to feed itself. While I agree with this, I do not consider that this alone would allow Africa to pursue export-led growth.
The most frequently cited examples of export-led growth are the “Asian Tigers”. These countries focused on exporting manufactured goods at a low price as a result of their comparative advantage of cheap, low-skill labour. The increased income from exports was then used to improve education, so that future exports were sophisticated products, such as South Korean electronics. The problem with this process is that it may not be possible for Africa. Export-led growth through manufactured products would require greater levels of technology and infrastructure, both of which Africa lacks.
This would require extensive government spending. However, many African countries are heavily in debt and would probably not be able to fund this, or may be unwilling to take out further loans. For example, Nigeria has a debt of $34 billion, despite having paid off $18 billion on a $17 billion debt.[1] This is where “developed countries can play a critical role”.
What could be done is debt relief, or waiving debt altogether. This would allow governments to reallocate expenditure. Using a debt-service diagram, one can see how lowering debt repayments can mean increased funding for other purposes, such as infrastructure development. This would make export-led growth more feasible, improving the quality of life for millions.
For instance, people as well as industry would have access to clean water. Yet this strategy is not without its problems. It may not solve food problems. What needs to be done instead is development of both the primary and secondary sectors.
This would allow African countries to supply their own agricultural requirements and not rely on food imports and aid. Money spent on buying food products could instead be used to further improve infrastructure. To summarise, to pursue export-led growth, Africa needs substantial investment in infrastructure, as well as renewed efforts to improve agricultural performance.
This can be achieved by relieving debt, or waiving it altogether. This would enable governments to invest the money, for example in infrastructure. These tactics would reverse “worrisome trends” and help Africa develop.
[1] Source: Make Poverty History, Geraldine Bedell, Penguin Books, 2005
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9685048460960388,
"language": "en",
"url": "https://pocketsense.com/three-four-month-trade-schools-7994900.html",
"token_count": 803,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.044921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:58b64836-b9a9-4a85-a475-90af5985edb1>"
}
|
Trade schools are ideal for adult learners or people who want to change careers. Displaced workers who need to reenter the workforce quickly may also benefit from a trade school educational program. Trade schools offer programs that focus on specific skills rather than on courses that are not necessarily beneficial in the workplace. These skills have the potential to yield high-paying positions, and students can complete the programs in as little as three to four months. Students who graduate from a three- or four-month trade school program will earn a certificate or diploma rather than a degree.
Requirements for admission to a three or four month trade school vary from school to school. However, the common requirement is that candidates must have a high school diploma or a High School Equivalency Diploma (GED). Students who have not earned a high school diploma or GED may also be candidates for admission once they pass a state-approved ability-to-benefit skills test. Schools may also impose an age limit as a condition for admission. Most trade schools require an application fee.
Potential applicants for a trade school's three to four month program may choose between enrollment in a traditional educational setting or an online learning environment. Online education offers flexibility to students. It is also an option for those who want to attend an out-of-town trade school that offers this learning environment. However, applicants should be aware that online education may not be a suitable option for all. It is important to assess personal learning styles since some students may require face-to-face contact with professors rather than learning through computer interactions.
The U.S. Department of Education's Federal Student Aid Program will finance a trade school education provided that the trade school meets the department's standards. Students who enroll in a trade school that does not meet the department's standards are required to use personal resources to fund their education. However, students who enroll in an eligible trade school program can receive financial aid provided that they meet the eligibility guidelines.
Students who are eligible for financial aid must demonstrate a financial need and must have a high school diploma or GED. Students who passed a Department of Education-approved ability-to-benefit test are also eligible for financial aid. Financial aid guidelines also mandate that students have a Social Security card and are citizens of the United States or eligible non-citizens. Registration with Selective Service is also a requirement for students who are required to register. Eligible students must also certify that they are not in default on a financial aid loan and that they will restrict the use of financial aid funds to education-related expenses.
Trade schools offer several programs that students can complete in four months or less. For example, an appliance repair program teaches students to repair electrical and small appliances. Graduates of this program have the option to work independently or to work for an established repair shop or appliance dealer. A personal computer fundamentals program also lasts four months or less. This training prepares workers for promotions or makes those seeking employment more marketable. Four months or less is all the time that may be needed to train as a physical therapy aide. Students who graduate from this program could become part of a professional team that provides physical therapy services to clients of physical therapy offices, home health agencies or personal care facilities.
These programs are flexible and offer students the opportunity to complete their studies independently from home. After students are enrolled in their program, instructors send them instruction sets. After an instruction set is complete, students take the corresponding exam. Once students complete all instruction sets for their program and pass all exams, they are eligible to receive their certificate or diploma.
Alva Hanns began writing professionally in 1993. Her publications include "Social Justice isn't a Myth." She blogs on political websites and is the author of "I Remember the Time" and "God Fulfilled His Promise." She has a Bachelor of Science in social work from the University of Indianapolis and a master's degree in social work from the University of Alabama.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9602193832397461,
"language": "en",
"url": "https://thenextweb.com/topic/dm",
"token_count": 363,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.25390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d2bcdb1f-7dd5-4da0-b9f2-89bd45f8ac60>"
}
|
The Deutsche Mark (German mark, abbreviated "DM") was the official currency of West Germany (1948–1990) and unified Germany (1990–2002) until the adoption of the euro in 2002. It is commonly called the "Deutschmark" in English but not in German, where speakers typically say "Mark" or "D-Mark". It was first issued under Allied occupation in 1948, replacing the Reichsmark, and served as the Federal Republic of Germany's official currency from its founding the following year until 1999, when the mark was replaced by the euro; its coins and banknotes remained in circulation, defined in terms of euros, until the introduction of euro notes and coins in early 2002. The Deutsche Mark ceased to be legal tender immediately upon the introduction of the euro, in contrast to the other eurozone nations, where the euro and the legacy currency circulated side by side for up to two months. Mark coins and banknotes continued to be accepted as valid forms of payment in Germany until 28 February 2002. However, in 2012 it was estimated that as many as 13.2 billion marks were still in circulation, with polls showing a narrow majority of Germans favouring the currency's restoration. The Deutsche Bundesbank has guaranteed that all German marks in cash form may be changed into euros indefinitely, and one may do so in person at any branch of the Bundesbank in Germany. Banknotes and coins can even be sent to the Bundesbank by mail. On 31 December 1998, the Council of the European Union fixed the irrevocable exchange rate, effective 1 January 1999, for German marks to euros as DM 1.95583 = €1. One Deutsche Mark was divided into 100 pfennig.
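The fixed rate lends itself to a one-line conversion. The sketch below is illustrative only; the rate is the official one quoted above, and the sample amount is the 13.2 billion marks estimated to remain in circulation.

```python
# Irrevocable rate fixed by the Council of the European Union,
# effective 1 January 1999: DM 1.95583 = EUR 1.
DM_PER_EUR = 1.95583

def dm_to_eur(dm: float) -> float:
    """Convert a Deutsche Mark amount to euros at the official rate."""
    return dm / DM_PER_EUR

# The estimated 13.2 billion marks still circulating in 2012 would
# exchange for roughly 6.75 billion euros.
print(f"EUR {dm_to_eur(13.2e9):,.0f}")
```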
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9366094470024109,
"language": "en",
"url": "https://www.eubioenergy.com/2015/09/14/bioenergy-and-europes-quest-for-a-circular-economy/",
"token_count": 1059,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.057861328125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2213e94b-a171-46db-aa27-65d7a9baff76>"
}
|
By Lisa Benedetti, BirdLife Europe
Europe is on the move to become a ‘circular economy’ which is more competitive and resource efficient. The goal is a more circular flow of materials and energy so that Europeans use and consume in a way that creates minimal waste and puts less pressure on natural resources on this continent and other parts of the world. Sounds like a common sense plan…right? Yes, but one important question arises. Why isn’t the Commission including different types of biomass (biological material) as part of the circular economy equation?
At the moment, EU policies mostly encourage the use and consumption of biomass (agricultural and forest products) to produce bioenergy. But this bioenergy flows down a one-way street. Crops and forests are grown to be harvested and chopped down to feed directly into Europe's ever-growing hunger for bioenergy, fuelling our cars and heating our homes, without first being used for higher-value purposes like food, clothing, and building materials. On a continent where natural resources are scarce, this makes us dependent on expensive imports and causes the disappearance of natural places, while ignoring the potential of other materials (crop waste, recycled building materials, etc.) that could otherwise be used to meet this ever-growing demand.
The waste sector, where we have the magic three Rs (reduce, reuse and recycle), is ahead of the energy sector. EU waste legislation dictates a hierarchy in which prevention and reduction of waste come first, followed by reuse and recycling, with energy recovery only after that. But there are parallels between the two sectors, and what is being done for waste could in many cases also be applied to biomass. The term used in the bioenergy world for this hierarchy, the use of raw material in a specific order of priority, is the ‘cascading use‘ principle.
Someone new to the idea of ‘cascading use’ might become a bit confused when looking for a simple definition. In hindsight, the term may not have been chosen wisely, as its meaning has been debated for quite some time, and probably will be for a long while yet. So it is better not to get lost in the finer details. The basic theory behind the principle is similar to that of a ‘circular economy’: improve efficiency and produce as little waste as possible.
Actually, to truly achieve ‘cascading use’ of biomass in a circular economy, products and raw materials would never be used primarily to feed bioenergy demand. Cascading use requires that raw materials first serve higher-value purposes, then be reused and recycled, and only be used to produce energy when the material has no other purpose. It also encourages the use of so-called ‘side streams’ of processed raw materials, like black liquor from the paper and pulp industry, which is often used to produce energy.
Cascading use could mean that fewer forests would be cut down, and that valuable agricultural land would be used to produce crops for food rather than crops for fuel, again, in Europe and elsewhere. It is not efficient to take raw materials from existing forests or agricultural land, or to plant new forests and crops, just to burn them for energy. Yet this is exactly what has been, and is, happening because of the EU's current policies, or lack of policies, for bioenergy. For example, the wood pellet industry, where raw materials taken from forests are turned into pellets to produce electricity, is becoming bigger and more lucrative. The business sector hypes it as a green source of energy, but in circular economy terms this sort of burning and incineration is described as ‘raw material leakage’, because the materials could and should have been used for other purposes first.
How can Europe be true to the ‘cascading use’ of biomass?
A circular economy where there is cascading use of biomass can only begin with a good design. First, the EU must clean up current legislation which distorts what the cascading use of biomass is. Encouraging burning of biomass to produce renewable energy without restrictions on the kind of biomass that should be used is one example of current distorting policies.
Some fear that applying the cascading use principle means that individual forest owners will be told how they should harvest or how they should sell their wood, but this is only fear-mongering. Rather, sticking to ‘cascading use’ will help us identify, for example, which kinds of wood (or other kinds of biomass) and which uses should be publicly subsidized. For sure, EU renewable energy policies should not support, or should at least limit support for, bioenergy fuelled by dwindling primary forests and agricultural resources.
There is a big difference between using forests to build our homes instead of heating our homes. There is a big difference in growing crops for food to eat instead of fuel to drive our cars. In its quest for a circular economy, Europe must hold true to the ‘cascading use’ of biomass.
Photo Credit: Forest for the trees (c) Justin Kern, Flickr Creative Commons
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9402468800544739,
"language": "en",
"url": "http://genus.springeropen.com/articles/10.1186/s41118-020-00100-8",
"token_count": 11872,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.02587890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b0fb2e69-3403-4d00-9251-c9e5fe64d8c2>"
}
|
- Original Article
- Open Access
Population aging and the historical development of intergenerational transfer systems
Genus volume 76, Article number: 31 (2020)
From our evolutionary past, humans inherited a long period of child dependency, extensive intergenerational transfers to children, cooperative breeding, and social sharing of food. Older people continued to transfer a surplus to the young. After the agricultural revolution, population densities grew making land and residences valuable assets controlled by older people, leading to their reduced labor supply which made them net consumers. In some East Asian societies today, elders are supported by adult children but in most societies the elderly continue to make private net transfers to their children out of asset income or public pensions. Growing public intergenerational transfers have crowded out private transfers. In some high-income countries, the direction of intergenerational flows has reversed from downward to upwards, from young to old. Nonetheless, net private transfers remain strongly downward, from older to younger, everywhere in the world. For many but not all countries, projected population aging will bring fiscal instability unless there are major program reforms. However, in many countries population aging will reduce the net cost to adults of private transfers to children, partially offsetting the increased net costs to working age adults for public transfers to the elderly.
An intergenerational transfer is a transfer of money or goods by one person to another of a different age or generation, with no quid pro quo and no expectation of repayment. Private intergenerational transfers include the parental costs of rearing a child or supporting an elderly relative or helping an adult child with the costs of a grandchild or a mortgage. End of life bequests are also important but will not be discussed here. Public intergenerational transfers include paying taxes to fund public pensions, health care, or education. Intergenerational transfers are quantitatively important, amounting to 55% of GDP on average in a collection of rich and developing countries (Lee & Donehower, 2011). Here I will discuss the evolutionary origins of intergenerational transfer behavior, describe intergenerational transfers for countries at different levels of economic development in different parts of the world, and consider how population aging will interact with intergenerational transfer systems in the coming decades. The paper makes sweeping generalizations that are often speculative, but also brings empirical evidence from quantitative studies in many social contexts.
I take a very long view of systems of intergenerational transfers. The human life cycle, as observed today, has an extended period of economic dependency in childhood and another in old age, sustained by intergenerational transfers from the surplus produced at intermediate ages. One might think that old age dependency is rooted in biology while protracted child dependency is created by the need of modern economies for well-educated workers. This view is only partially correct, and it will be useful to take a brief excursion through the evolutionary background on work, dependency, and intergenerational transfers.
The evolutionary background for intergenerational transfers
The strong sociality of human hunter gatherers is expressed in support of children up to age 20 through intergenerational transfers, and by food sharing among kin and non-kin. The roots of such behavior are deep in our past and in more recent times find expression in a variety of culturally moderated practices to be explored later.
First, consider the evolutionary background for human altruism and sociality. Among primates, human offspring grow exceptionally slowly and have a very long period of dependence. While various explanations have been advanced, there is evidence that slow growth reflects the heavy energetic requirements of the growing brain, which at birth requires more than 50% of resting metabolic energy (RME) and in childhood requires about two thirds of RME. The energetic demands of the brain are inversely correlated with the rate of weight gain of the developing child (Kuzawa et al., 2014). The brain is at the center of the human evolutionary strategy, and the investments it requires lead to a period of nutritional dependence that extends up to age 18 or 20 according to anthropological studies of hunter-gatherer groups over the past half century (Howell, 2010; Kaplan, 1994; Lee, 2000).
Because children were dependent for so long, their mothers typically had multiple simultaneously dependent children to care for and provision, setting them apart from other primates which wait until an offspring can forage for itself before reproducing again. It would not have been possible for human mothers to manage without very substantial assistance from other family members and even unrelated helpers (Hrdy, 2009). This is a form of cooperative breeding (Hill & Hurtado, 2009; Sear & Mace, 2008) featuring allomaternal care provided by the father and grandparents of the children, but also by cousins, aunts, uncles, and non-kin. A study by Burkart et al. (2014) found such cooperative breeding or allomaternal care to be the best predictor of prosocial “hyper-cooperative” (altruistic) behavior across species.
The long period of development required by the human brain is made possible by intergenerational transfers from many adults. The participation of non-kin is made possible by human altruism and sociality. An adult in one generation feeds and cares for her child in the next generation. The child never repays the adult. Instead, the child herself grows up to become an adult and a mother and feeds and cares for her own children in the subsequent generation, and so on. Social sharing is another evolved behavior related partly to the need for help beyond the family (Hrdy, 2009) and partly to the high variance of returns to big game hunting which required risk pooling for survival, with hunters in many groups successful on only 3 to 25% of their outings (Hill & Hurtado, 2009). In hunter-gatherer groups, there can be asymmetry in sharing, where a family with more dependent children and a higher dependency ratio is systematically helped (Gurven, 2004; Kaplan & Gurven, 2005).
These points are illustrated by the age profiles of consumption and labor income for hunter-gatherers shown in Fig. 1, which also shows other profiles to be discussed later. For hunter-gatherers, labor income is measured as the caloric value of food brought into camp, and consumption is calculated based on the calories available in each sharing group (typically several households), allocated to individuals in proportion to their caloric needs (Howell, 2010; Kaplan, 1994; Lee, 2000). All the age profiles shown in the figure have been standardized by dividing by the average labor income at ages 30–49 to make their shapes visually comparable. The profiles in Fig. 1 are the average of data for the !Kung in Botswana (Howell, 2010) and the average of the Ache, Piro, and Machiguenga, all groups in the Amazon Basin.[1]
There are two striking features of the age profiles. First, the youth do not become nutritionally self-sufficient until around age 20, consistent with the earlier discussion. Calculations show that the net cost to achieve one child that survives to maturity is 10 or 12 years of average adult consumption (Lee, Kaplan, & Kramer, 2002). Second, the elderly are still net producers at age 70, the last age observed in the data. From this, we see that parental and grandparental provisioning of children is not repaid. On average, adults produced more calories than they consumed at all ages observed and transferred the surplus to children. There was no retirement, and no extended period in which a child could have repaid them.
Why did parental investments in offspring evolve?
In a sense, all reproduction is an intergenerational transfer, since it involves the transfer of some resources in the form of an egg or a seed or a new body, a transfer that will never be repaid to the parent. But in the case of some species, this process of parental investment in offspring continues following birth, perhaps for months or years. Why would such apparently altruistic behavior have evolved?
Consider the young offspring of some species acquiring food energy and then allocating it among growth, survival, and reproduction. An additional calorie of energy would have some marginal impact on the offspring’s lifetime reproductive fitness. An additional calorie would also have a marginal impact on the fitness of the mother of this offspring. If the marginal fitness gain from the mother’s consumption of the calorie is less than one half the marginal fitness gain for her offspring (one half, because the mother shares only half her genes with her offspring), then the mother could raise her reproductive fitness by transferring the calorie to her offspring rather than consuming it herself.[2] In this way, natural selection could lead to the evolution of intergenerational transfers to offspring in some species, but in other species, the criterion might not be met (for a detailed discussion see Lee, 2014; Chu & Lee, 2006, 2012, 2013).
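This marginal condition can be written compactly. The notation below is ours, not the cited authors’: let W_m and W_o denote the lifetime reproductive fitness of mother and offspring, and let e be the marginal calorie the mother may either consume or transfer.

```latex
% Condition for the transfer of a marginal calorie to be favored by
% selection; the factor 1/2 is the mother-offspring coefficient of
% relatedness (notation is ours, for illustration only):
\[
\frac{\partial W_m}{\partial e} \;<\; \frac{1}{2}\,\frac{\partial W_o}{\partial e}
\]
% When the inequality holds, the mother raises her inclusive fitness by
% transferring the calorie to the offspring rather than consuming it.
```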
This argument can account for downward transfers from adults to offspring, but what of the reverse, from adult children to the elderly? Chu and Lee (2013) show that in theory this could evolve as part of an efficient division of labor between grandmother and mother to invest in food and care time for the offspring. In general, however, the evolutionary perspective strongly suggests that if intergenerational transfers do occur, they should flow downwards from older to younger ages.
Intergenerational transfers in agricultural settings
With the Neolithic revolution came agricultural practices appropriate to land-abundant settings, such as forest fallow and bush fallow (Boserup, 1965, 1981). At these low population densities, land has little economic value at the margin and individual property rights are not defined, with communal control of usage.
Empirical evidence is scarce, but Kramer (2005) and Lee and Kramer (2002) present estimates of production and consumption by age for an isolated Mayan village in Yucatan practicing forest fallow agriculture. The profiles for both males and females indicate that adults continue to produce more than they consume through the end of observation which, in this study, is in their early 60s. All adults contribute to providing food for the many children in the population. The children themselves produce more than in the hunter-gatherer populations, but adult surplus net production is less.
Children in this setting become productive earlier than hunter-gatherers because many productive tasks are simple and safe enough for them to perform, so the net cost of raising children to maturity is greatly reduced, as are intergenerational transfers to them. Gains from food sharing are also reduced because randomness in production across households is highly correlated, depending more on weather than on the luck of the hunt, so risk pooling is ineffective.
As populations grow and become denser and production becomes more labor intensive, the value of land rises and property rights in land begin to emerge (Boserup, 1965; Domar, 1970). As they do, people tend to accumulate property across the life cycle through saving and inheritance, and older people own assets such as land, structures, and livestock. This means that on the one hand their adult children can be productively employed on the land and on the other hand, the adult children may hope to inherit the assets when their parents die. Ownership of assets gives the elderly new options. The assets they own are productive and account for about a third of self-employment output (mixed income) with two thirds accruing to labor (Gollin, 2002).
In some settings, older adults may withdraw from labor, at least as it is measured in surveys, and they then consume more than their labor produces (Mueller, 1976; Stecklov, 1997) in contrast to older hunter-gatherers and low density agriculturalists. In other settings, the elderly may continue to work hard into old age. In the National Transfer Accounts (NTA) data to be discussed below, both cases are observed. Taking asset income into account, the elderly may be responsible for more production than they consume and may in fact make net intergenerational transfers to younger family members, even if they work little.
In some countries with large agricultural sectors, net consumption by the elderly is funded by their adult children, typically through co-residence, but among countries in the National Transfer Accounts project this is rare as I shall discuss later. More commonly, the elderly make net transfers to younger family members funded in part by their asset income and sometimes by public sector pensions.
Rise of the welfare state
With industrialization came assets beyond agricultural and residential property, in the form of financial assets and nonagricultural capital. Saving and investment outside of family enterprise became an alternative means of providing for consumption in old age. But an equally profound change was the rise of government programs which make intergenerational transfers to children and the elderly. These public transfers involve vertical (intergenerational) redistribution of income as distinct from horizontal (intragenerational) redistribution from rich to poor without an age-specific intent. Much has been written about the growth of these new public intergenerational transfer programs, but for present purposes the theory of Becker and Murphy (1988) is particularly interesting. According to this theory, public education arises because the family, lacking an enforcement mechanism for repayment of parental loans to children, tends to underinvest in human capital. But taxing adults to pay for children’s public education leaves the parental generation worse off than before (they could have spent more on their children’s education had they chosen to). To fix this problem, the state later introduces a public pension program, taxing the newly educated children to compel them to reimburse their parents who were themselves compelled to pay taxes to fund public education without receiving any themselves.
Empirics of the economic life cycle
The National Transfer Accounts project or NTA provides estimates of many of the quantities discussed earlier in this paper (for information on the project, see www.ntaccounts.org, Lee & Mason, 2011b, and United Nations Population Division, 2013). Estimates are based on existing surveys, censuses, and administrative data, which are used to construct average values by age for variables such as labor income, consumption, and private and public intergenerational transfers. Labor income includes self-employment income, salaries and wages, and employer-provided benefits, before taxes, and averaged over the whole population including those with no labor income at all. Consumption includes household consumption expenditures allocated by age, plus public in-kind benefits received such as public education, public health care, and public assistance. Cash public transfers such as pensions are not included. All survey estimates are adjusted up or down proportionately so that, when multiplied by the population age distribution and summed, they produce totals that equal those provided in standard National Income and Product Accounts.
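The adjustment step lends itself to a short sketch. This is a simplified illustration of the proportional “macro control” scaling described above, not code from the NTA project; the function name, variable names, and toy numbers are ours.

```python
import numpy as np

def adjust_to_macro_control(profile, population, macro_total):
    """Scale a survey-based per-capita age profile proportionately so
    that the implied aggregate matches the national accounts total."""
    survey_total = np.sum(profile * population)  # aggregate implied by the survey
    theta = macro_total / survey_total           # uniform adjustment factor
    return profile * theta

# Toy illustration (invented numbers, not NTA estimates): a flat labor
# income profile over ages 20-64 scaled to a national accounts control.
ages = np.arange(0, 91)
population = np.full(ages.shape, 1.0e5)
labor_income = np.where((ages >= 20) & (ages < 65), 30_000.0, 0.0)
adjusted = adjust_to_macro_control(labor_income, population, macro_total=1.5e11)
```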
Figure 1 shows the most basic components of the economic life cycle—consumption and labor income by age. Consumption is private consumption expenditure (household expenditure imputed to individuals) plus in-kind government transfers (such as publicly provided health care, long-term care, and education, but not cash transfers such as pensions). Labor income is wages and salaries plus fringe benefits, plus two thirds of income from self-employment and unpaid family labor.
The age profiles for developing countries are averages for selected countries in Africa, Latin America, and Asia, as indicated on the figure, and those for rich countries are averages for Europe, N. America, and Oceania. There is considerable heterogeneity within each of these groups, but I will focus on the similarities. While older people in hunter-gather societies continue to be net producers at least through age 70, we see that in developing countries they become net consumers after age 57 where the labor income and consumption lines cross. These profiles reflect a mix of agricultural and nonagricultural labor, but Stecklov (1997) using similar methods and looking separately at the rural population of Cote d’Ivoire finds an even earlier point at which older people become net consumers. In rich countries, this occurs at age 60 on average. Children in hunter-gatherer groups begin early to earn labor income, in developing countries are intermediate, and in rich countries start the latest. The peak labor income occurs latest in rich countries, but then it drops very rapidly as people become eligible for public pensions.
There are also important differences in consumption. In developing and rich countries, income is derived from assets as well as labor income which is an important reason why the consumption curves are higher in those countries. At younger ages, the rich countries show a consumption bulge reflecting human capital investment in children, indicated in the figure by the dip in consumption after age 18. Most striking is the rise in consumption across adult ages in the rich countries while it is flat or slightly declining in the developing countries and hunter-gatherer groups.
The growth of the welfare state has a profound effect on the shape of the economic life cycle. Whatever its original rationale, it has come to shape the economic rhythms of our lives. This can be seen in the changing age profiles of consumption in the USA shown in Fig. 2. In 1960, total consumption dips after age 60. In 1981, it is flat or somewhat rising across the adult ages, and by 2011, it rises strongly with age. Closer inspection reveals that the tilt toward higher old age consumption is largely due to increased public provision of health care to the elderly, mainly through Medicare and Medicaid, two programs begun in the mid-1960s. It is also likely that increased coverage and generosity of Social Security pension benefits have been at least partly responsible for the increasing age gradient in private consumption expenditure. By 2011, the ratio of consumption at age 80 to age 20 had more than doubled relative to 1960. Similar changes have been documented in Sweden and (over a shorter time span) in Japan.
Combined with substantial declines in retirement age during the twentieth century in Europe and its offshoots, these changes in the economic life cycle have greatly increased the relative social cost of old age. Simply put, the elderly now consume more and work less, making population aging even more costly. Since the mid-1990s, however, there has been a modest reversal in OECD countries as the mean age at retirement has risen by a year or two.
The interaction of private and public transfers
How would we expect the private behavior of individuals to respond to a social system which attempts to change their pattern of consumption over the life cycle by taxing them during their working years and transferring resources to them in childhood and old age? Here, evolutionary theory and economic theory lead to the same expectation: public transfers to the elderly will be offset, at least partially, by countervailing changes in private transfers. This is a prediction of evolutionary theory as sketched earlier, with its balancing of marginal fitness gains from own consumption and transfers to descendants. It is also a prediction of economic theory under Ricardian Equivalence as elaborated by Barro (1974), on the assumption that individuals care about their own wellbeing but also care about the wellbeing of their descendants. Adults decide how much to consume themselves and how much to help their children and their own parents, striking a balance between their own wellbeing and that of their children, grandchildren and more distant progeny. If the government then taxes the adult children to give the elderly more resources through pensions and health care, this theory predicts that the elders will make private transfers to restore the consumption balance across the generations, rather than consume the government transfer themselves.
This theory applies best to people with sufficient income to leave planned bequests to their children and to invest in their children’s human capital. They can make adjustments at the margin in the response to changes in public programs. But for those with lower and less secure incomes, things might play out differently. We do find some support in NTA for these ideas. Brazil is the country with the most generous public pension system (as we shall see below), and it is also the country with the largest private net transfers from the elderly to their younger family members (Turra, Queiroz, & Rios-Neto, 2011). The system of familial support of the elderly in Japan appears to have been largely neutralized by the public pension and long-term care programs (Ogawa, Matsukura, & Chawla, 2011). The public pension system introduced in South Africa greatly increased the incomes of many older rural people, and a number of studies have documented the use of these pension funds by grandmothers to benefit grandchildren through schooling, health care, and improved nutrition. Again, public transfers to the elderly are offset at least in part by private transfers from the elderly to children or grandchildren.
There are also indications of substitution between public and private transfers to children. In Europe, public education is strong and private expenditures on education are very low, as seen in NTA data. However, in Latin America and East Asia, public spending on education is relatively low, and there we see substantial private spending on education.
Funding the gap between consumption and labor income
The gap between consumption and labor income at a given age can be funded either through use of asset income (that is, by borrowing or by using the part of asset income that is not saved) or by net transfers, that is transfers received minus transfers made to others. Transfers may be public or private. Countries vary greatly in the extent to which the elderly rely on asset income, or public transfers, or familial transfers, to fund their net consumption in old age.
This diversity is depicted in Fig. 3 by plotting the share of old age net consumption (in excess of labor income) funded from each source on a triangle graph. Each point represents a different country, identified using the United Nations’ two-letter code. In a country located at the Asset vertex, the elderly are funded 100% out of asset income and not at all from public or private transfers, as in the Philippines (PH). In countries located at the Public Transfers vertex, the elderly are completely funded by public transfers with no support coming from assets or familial transfers, as in Hungary, Austria, Slovenia, or Sweden. Countries located on the line joining two vertices are funded by a mix of the two respective sources, as in the case of Great Britain (GB), where the elderly are funded half by public transfers and half by assets, or Thailand, where the elderly are funded two thirds by assets and one third by family transfers. In a country towards the middle of the triangle, like Taiwan, China, or S. Korea, net consumption by the elderly is funded roughly equally from all three sources.[3]
There are many countries near the Public Transfers vertex: Austria, Hungary, Slovenia, Sweden, and Brazil, with Germany, Costa Rica, Chile, and Peru funded at least two thirds by public transfers. There are no countries near the Family Transfers vertex, but in Thailand and the three East Asian countries within the triangle the elderly get substantial familial funding. Many countries lie outside the triangle to the right, and in these, the elderly themselves make net transfers to younger family members rather than the reverse. Finally, there are a number of countries where the elderly derive at least two thirds of their support from assets: the USA, Thailand, Philippines, Mexico, and India.
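The accounting behind the triangle graph can be sketched as follows. For ages 65 and over, the lifecycle deficit (consumption in excess of labor income) is funded, by identity, from asset-based reallocations, net public transfers, and net private transfers; a negative private share places a country outside the triangle to the right. The function and numbers are illustrative assumptions, not actual NTA estimates.

```python
def old_age_funding_shares(asset_based, public_net, private_net):
    """Shares of the old-age lifecycle deficit funded from each source.
    The three components sum to the deficit by accounting identity, so
    the shares sum to 1; a negative share means that source is a net
    outflow (e.g., elders making net transfers to younger family)."""
    deficit = asset_based + public_net + private_net
    return (asset_based / deficit,
            public_net / deficit,
            private_net / deficit)

# Hypothetical country: half funded by assets, 60% by public transfers,
# with elders giving back 10% of the deficit to families on net.
print(old_age_funding_shares(asset_based=50.0, public_net=60.0, private_net=-10.0))
# -> (0.5, 0.6, -0.1): plots outside the triangle, to the right.
```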
For this diagram, all people 65 and over were grouped together, which conceals an important pattern: younger old people, say 65–75, tend to make net transfers to their children, and older old people, say above 75, tend to receive net transfers from their children, and in Fig. 3, these counter flows tend to cancel. But as populations age, the share of older old will rise, and so net transfers to the elderly may rise as well. Also, in those countries where the elderly make net transfers to their children and grandchildren, population aging will raise the size of those flows or allow older people to cover the costs of their children (e.g., tuition) with smaller transfers per elder.
Another perspective: the direction of intergenerational flows
In countries at earlier stages of the demographic transition, there are many children and few elderly, and in some of these countries, the elderly continue to work into old age and consume amounts similar to other adults. In these circumstances, income will be reallocated from the older ages at which much of it is earned to the younger ages at which much of it is consumed. This can be seen by calculating the average age at which output is consumed and the average age at which it is earned in a population. The result will be influenced both by the population age distribution and by features of the age profiles of consumption and labor income. If the average age of consumption is Ac and that of earning is Ayl, then the average unit of output flows from Ayl to Ac. If Ac is less than Ayl, then income flows downward from older to younger on average; otherwise, income flows upward from younger to older. From the earlier discussion, this flow and its direction are the net result of first, the use of saving, asset accumulation, and dissaving to move income from younger to older ages, and second, private and public transfers that reallocate income both upward and downward.
We can plot the average ages for each country using arrow diagrams. Age is on the horizontal axis, and the tail of the arrow is placed at Ayl while the head is at Ac. The thickness or width of the arrow is per capita consumption, c, and for visual comparison, this is divided by per capita labor income, yl. The area of the arrow, c(Ac − Ayl), is called “life cycle wealth” which can be positive or negative. For present purposes, life cycle wealth is not important; the interested reader will find a discussion in Lee (1994) or Lee and Mason (2011b).
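These arrow quantities follow directly from the definitions just given. The helper below is our own sketch (not NTA code): it computes A_c and A_yl as flow-weighted mean ages, the arrow width c/yl, and life cycle wealth c(Ac − Ayl) from per-capita age profiles and a population vector.

```python
import numpy as np

def mean_age_of_flow(ages, N, x):
    """Average age of a flow, weighting each age a by the aggregate N(a)*x(a)."""
    flow = N * x
    return np.sum(ages * flow) / np.sum(flow)

def arrow_quantities(ages, N, c, yl):
    """Tail A_yl, head A_c, width c/yl, and area (life cycle wealth)."""
    A_c = mean_age_of_flow(ages, N, c)
    A_yl = mean_age_of_flow(ages, N, yl)
    c_pc = np.sum(N * c) / np.sum(N)    # per capita consumption
    yl_pc = np.sum(N * yl) / np.sum(N)  # per capita labor income
    return {
        "A_c": A_c,
        "A_yl": A_yl,
        "width": c_pc / yl_pc,
        "life_cycle_wealth": c_pc * (A_c - A_yl),
        "direction": "upward (young to old)" if A_c > A_yl
                     else "downward (old to young)",
    }
```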
Results for 40 NTA countries as well as two hunter-gatherer groups are shown in Fig. 4. They are grouped by world region and ranked by per capita income within regions, and the regions are ranked by per capita income as well. Shaded arrows give the averages for each region. We see that hunter-gatherer groups reallocate income strongly downward by 10 or 12 years, from older to younger, as we already knew must be the case based on the age profiles shown in Fig. 1. We see that the same is true for all the countries in Africa, South and West Asia, and Latin America, although the arrows are much shorter in some countries such as Cambodia, Vietnam, or Uruguay. In East Asia, despite very low fertility and substantial familial support of the elderly, China, Taiwan, and S. Korea all have strongly downward transfers. For Japan, however, with a much older population and with generous public transfers to the elderly and high old age consumption, the arrow points upwards. In the “West” region, Slovenia, Italy, Germany, Austria, and the UK all have upward pointing arrows, and the average for the region is only very slightly downward. A small amount of population aging would reverse the arrows for several additional countries and the region as a whole. These reversals of direction are the vanguard of a deep and dramatic shift from societies in which all adults generated a surplus which flowed downward for investments in children, to elder-oriented societies with a high proportion of older people who consume a great deal and work very little (Lee, 2000; Lee & Mason, 2011a). The inclusion of time transfers in this analysis would raise downward transfers to children while leaving upward transfers to the elderly largely unchanged, since the elderly for the most part care for one another (Gál and Vanhuysse, 2018).
The direction of private transfers
Figure 4 portrays the reversal of a pattern that had probably lasted as long as humanity. The reversal is due in part to the aging of the populations which is itself a watershed event, and in part to the changes in the economic life cycle that we discussed earlier, including the rising importance of physical and financial assets. Here our focus is on intergenerational transfers, and we will begin with private or familial transfers which are the most fundamental and were the only kind of transfers for tens of thousands of years.
Private transfers are the sum of interhousehold transfers as reported on surveys and intrahousehold transfers calculated based on the income and consumption of household members. As we have seen, in some countries, the elderly receive substantial net private transfers, but more typically the elderly on net make transfers to younger family members. There are also important variations in transfers to children. These occur in part because in rich countries public education is strong and private spending on education is very low (except in the rich countries of East Asia), while in Latin America, Asia, and Africa, public education is relatively less strong so private spending on education is relatively higher. Variations also derive from differences in fertility which is hyper-low in some countries and high in others.
The direction and magnitude of private transfers can be portrayed using the same design as for the reallocations of Fig. 4. The tail of the arrow is placed at the average age of private transfers made, and the head at private transfers received. The private transfers for hunter-gatherers are identical to the reallocations shown in Fig. 4, because they had neither physical property nor government. The NTA countries are shown in Fig. 5, which tells a simple story: private transfers are strongly downward from older to younger in every country. (Colombia appears to be an outlier with private transfers that are only barely downward. I suspect that this will turn out to be a data issue.) Even in those countries with strong familial support of the elderly, strong transfers to children dominate upward transfers to elderly parents, even with extremely low fertility. In these countries, the arrows, although perhaps somewhat shorter than for other countries, still point strongly downward. The thinner arrows for the West reflect the low private spending on education in that region.
The net private transfer per child is the difference between the total transfer from parents to the child and later transfers from the same child as an adult to the now elderly parents. The parental expectations of this net transfer are the “price” they face for raising the child. The fact that net private transfers are so strongly downward in every one of these 40 countries (except, perhaps, Colombia) at all levels of development suggests that a simple interpretation of Caldwell’s (1976) wealth flows theory of the fertility transition is not consistent with these facts. I say a “simple version” because only the money value of transfer flows is measured here, and it is possible that children’s contributions through insurance value, physical security, and political power might tip the balance.
The direction of public transfers
At earlier stages of development, public transfers are primarily for education and health care. At a later stage, public pensions are introduced, usually initially for the military and for civil servants later extending to the formal sector and then to the whole labor force. At earlier stages of development, public expenditures on health care are higher for children, and then as development proceeds, they become more flat across age and then distinctly favor the elderly as in the rich countries today (Mason & Miller, 2018). Another portion of public transfers is not age-targeted but rather is for social infrastructure, military, police, and other public goods. NTA allocates these public expenditures equally across individuals of all ages.
Using the same design, Fig. 6 portrays the direction and extent of public transfers. The thickness of the arrows represents the volume of public transfers relative to labor income. In Africa and in South and West Asia, the arrows are thin reflecting a small role of public spending, and all point strongly downward. Many Latin American countries adopted European style pension programs at an early stage while keeping a relatively low level of spending on public education, which accounts for the short but fatter arrows in this region, with some pointing upwards: Brazil, Uruguay, and Argentina (barely). In East Asia, public sector transfer programs are relatively small outside of Japan, despite the high incomes of Taiwan and S. Korea. Japan has generous public pensions and publicly funded health care and long-term care, and a fat upward pointing arrow to show for it. All the countries in the West have upward pointing arrows except for the USA (which has a relatively young population, a small public pension benefit, and a strong reliance on asset income).
Public transfers alter the net costs and private incentives for behavior such as childbearing, schooling, saving, retirement, and entering a nursing home. Beyond these incentives, public transfers also create externalities or spillover effects for behavior, and this is particularly true for fertility (Wolf, Lee, Miller, Donehower, & Genest, 2011).
It is also instructive to consider total transfers, the sum of public and private, because as noted earlier these appear to substitute for one another across countries, at least to some degree. We can see the direction and volume of the total transfers in Fig. 7 which plots their arrows. Recall that private transfers were uniformly downwards in all countries, while public transfers were downward in Africa and South/Southeast Asia, mixed in Latin America and East Asia, and upward in Western nations (except for the US). Total transfers are downward by a lot in the lower income regions and by a slight amount in the West (except for Hungary), but as the Western and East Asian populations age the direction of these arrows will in many cases flip upward in coming decades.
Effect of rising longevity
Population aging happens in two ways—through low fertility which reduces the number of workers relative to the number of elderly, and through low mortality which means that the average person spends more years in old age. The economic consequences of the two sources of aging are different, and here I will discuss the consequences of rising longevity from an analytic perspective, drawing on the age profiles of consumption and labor earning (Fig. 8).
As noted in Lee (1994) and Eggleston and Fuchs (2012), as mortality declines, even though individual lives are always lengthened at the end, for the population as a whole, the person-years gained do not occur at the end of life, but rather occur throughout the life cycle. When mortality is high with life expectancies at birth (e0) in the 20s or 30s, the person-years of life gained are mainly in childhood. As mortality decline proceeds, subsequent gains occur mainly in the working ages. Finally, they come to occur mostly in old age, which is the situation in recent decades and will be even more so in the future (Lee, 1994).
It is instructive to plot the age distribution of gains in person years lived against the economic life cycle to assess their interaction, as in Fig. 8. Multiplying together the difference between consumption and labor income on the one hand and the distribution of person years gained (when e0 rises by one year), and summing, gives the proportionate increase in the cost of net lifetime consumption. This can be expressed as a proportion of the present value of lifetime consumption. In the USA, each additional year of e0 costs 1.4% of lifetime consumption. To accommodate the consumption cost of living one additional year, we must reduce consumption at every age by 1.4% or we must work 1.4% more at every age, or we must retire enough later to generate 1.4% more labor income.
With an average initial working life of 40 years, this would mean postponing retirement by a half year for each 1-year increase in e0 (.56 = .014 × 40). All these calculations are done without any discounting, but since a reduction in consumption or increase in earnings earlier in the life cycle would lead to increased savings which could then be invested to pay for a longer retirement, discounting is appropriate. At a 3% real discount rate, the “cost” of longer life falls to .4% of consumption per year going forward, so all the adjustments above would be reduced by two thirds or three fourths, and longer life appears to be much less costly. Adjustments made earlier in the life cycle are more effective than those made later like postponed retirement.
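A stylized rendering of this calculation is sketched below; it is our reconstruction of the computation described in the text, not the author’s actual code, and the age profiles and the distribution of person-years gained would have to come from NTA data and mortality projections.

```python
import numpy as np

def cost_of_extra_year(ages, c, yl, dphi, r=0.0):
    """Cost of a one-year gain in e0 as a share of the (discounted)
    value of lifetime consumption.

    c, yl : per-capita consumption and labor income by age
    dphi  : distribution across ages of person-years gained when e0
            rises by one year (sums to 1)
    r     : real discount rate; r=0 reproduces the undiscounted figure
    """
    disc = (1.0 + r) ** (-np.asarray(ages, dtype=float))
    extra_net_consumption = np.sum((c - yl) * dphi * disc)
    lifetime_consumption = np.sum(c * disc)
    return extra_net_consumption / lifetime_consumption

# The text reports roughly 0.014 for the USA at r=0 and roughly 0.004 at
# r=0.03; with a 40-year working life, 0.014 * 40 = 0.56 years of later
# retirement per added year of life expectancy.
```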
Population aging and public transfers
We can get a different perspective on the economic consequences of population aging by using population projections together with baseline age profiles of public and private intergenerational transfers. Intergenerational transfers are particularly important, because they determine the extent to which the elderly are dependent. We naturally view the elderly as dependent because they consume much more than the little they earn. But output is produced from inputs of both labor and capital. Younger adults supply a lot of labor (and human capital) but not much physical or financial capital. Older adults supply a lot of physical and financial capital but not much labor. The assets of the elderly generate income just as the labor of younger adults does, and we should not disregard it. Arguably, the elderly are dependent only to the extent that they depend on transfers from working age adults for their consumption, transfers that may be public or private. This is why it is so important to consider intergenerational transfers when we think about the economic impact of population aging.
We will begin by considering public transfers. It is well known that public transfers in many countries are fiscally unsustainable as currently structured, in the face of projected population aging. It is less well known that in some other countries population aging may actually be fiscally beneficial, which occurs if the elderly pay more in taxes than they receive in benefits and if transfers are largely to children.
All countries have some level of public education, which means that all can benefit fiscally from lower fertility and declining proportions of children in the population. In low- and middle-income countries, this is typically the main public transfer program. However, all rich countries and many lower- and middle-income countries also have public pension programs and public provision of health care. Depending on the balance of these programs, and on the extent of elder tax payments, population aging may bring fiscal relief or, more often, impose fiscal hardship, based on current program structures. It seems very likely, however, that as incomes rise the public programs of the less wealthy countries will come to resemble more closely those of the rich nations, making the “current program structure” assumption less relevant and making population aging more costly. Nonetheless, I will focus here on the interactions of current program structures with population aging. These should not be interpreted as projections but rather as analytic calculations of the purely demographic component of future changes.
The specific calculation multiplies the projected population age distributions times the current age profiles of benefits minus taxes by age and sums to get the total net cost or surplus for each future year up to 2050. The public transfer load is this sum divided by the total consumption projected for that year based on the initial consumption age profiles. To focus on the projected changes, this public transfer load is standardized to 0 in 2010.
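A minimal sketch of this projection exercise, under the stated assumption that current per-capita profiles are held fixed while only demography changes; the function name and data layout are ours, not the NTA project’s.

```python
import numpy as np

def public_transfer_load_path(pop_by_year, benefits, taxes, consumption,
                              base_year=2010):
    """Demography-only path of the public transfer load, set to 0 in the
    base year. pop_by_year maps a year to its projected population by age;
    benefits, taxes, and consumption are fixed per-capita age profiles."""
    loads = {}
    for year, N in pop_by_year.items():
        net_cost = np.sum(N * (benefits - taxes))  # aggregate benefits minus taxes
        total_consumption = np.sum(N * consumption)
        loads[year] = net_cost / total_consumption
    base = loads[base_year]
    return {year: load - base for year, load in sorted(loads.items())}
```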
The results are shown in Fig. 9. We see that almost all countries are projected to have an increasing public transfer load as populations age. Those with declining loads—India, Indonesia, and the Philippines—have these in part due to their profiles of public transfers but also because they are still early in the demographic transition and will not begin aging for two or three decades. The rest of the countries, even those with relatively small programs for the elderly, show a rising load of public transfers as the population ages. Slovenia, Brazil, and Germany show the greatest increases, followed by Spain, Hungary, Japan, and Sweden. The first group has increases of about 20% in the ratio. If there were no change in these benefits or in government borrowing, then achieving balance would require an offsetting 20% increase in tax revenues. It is far more plausible that governments will enact reforms in the public programs incorporating some combination of benefit cuts and tax increases.
Population aging and private transfers
Public intergenerational transfers are only a part of the story. Private or familial intergenerational transfers are also pervasive, and with NTA data, we can examine the effect of population aging on these as well.
In the triangle diagram of Fig. 3, we saw that the elderly of most nations either make net transfers to younger people or have a net transfer flow close to zero. In only a few East Asian countries and Thailand do families make appreciable net transfers to the elderly. As populations age, there is a private transfer benefit in most countries with more elderly to assist their younger family members. This appears as an increasingly negative transfer cost in Fig. 10 which plots the private transfer load as projected from 2010 to 2050 after being set to 0 in 2010 so that changes can be seen more clearly. Leading examples of countries with these declining loads are Brazil, Mexico, India, Philippines, and Costa Rica. In the USA, the load declines somewhat. In the Asian countries with strong family support of the elderly, the opposite occurs: population aging raises the support cost substantially. Leading examples here are S. Korea and China. In S. Korea, which is already aging, the load initially declines as child dependency drops and then rises very strongly. Surprisingly, however, Japan also has a strongly rising cost. The reason is that Japan, like a number of other countries that are near the zero line in the triangle figure, has strong downward transfers from the elderly from ages 65 to 78, but after that has strong upward transfers from children to provide support for their elderly parents. These opposing flows cancel and are hidden in the triangle figure, but as population ages, the balance shifts toward the oldest old who are net receivers of care, and the net support costs rise. That is what we see happening for Japan in Fig. 10, and also for Slovenia, Spain, and some other European countries.
Population aging and the total transfer load
Countries vary greatly in the extent to which the young and the old rely on public and private transfers, so to get a complete picture, we need to consider both. Adding together the public and the private transfers at each age, we can calculate the total transfer load, set it to zero in 2010 for each country, and observe the changes as shown in Fig. 11.
The most striking change in Fig. 11 is for Brazil. Brazil had the heaviest public transfer load by 2050, but when we take into account the strong net private transfers Brazilian elders make, the total transfer load initially declines and then rises modestly, ending up not much above zero. A number of developing countries would see declining total transfer loads under current public program structures, such as India, Indonesia, Philippines, and Mexico. In the countries with generous public transfers to the elderly, private transfers by the elderly are not enough to change the challenging outlook, particularly for Slovenia, Germany, Sweden, Spain, and Japan. In the USA, the transfer load rises only very modestly.
Conclusions
From our evolutionary past as hunter-gatherers, we have inherited core features: a long period of child dependency, extensive intergenerational transfers to children, cooperative breeding, and social sharing of food, including with non-kin. Older people in that context continued to produce a surplus over their consumption and to transfer it to the young. These practices may have persisted during low-density agriculture. In higher-density agriculture, however, where land and residences were valuable and likely to be owned by older people, labor supply at older ages was sharply reduced and older people became net consumers relative to their labor income, although in some countries they remained net producers once their asset income is taken into account.
In some Asian societies today, the elderly live with their adult children and receive net transfers from them, but in most societies the elderly continue to make net transfers to their children out of asset income or out of public pensions. With the growth of governmental intergenerational transfers, private transfers have been reduced or, at times, have reversed direction so as to offset the government programs. Among the rich nations, a number have experienced a reversal of intergenerational flows from downward, as in our evolutionary and agricultural past, to upward, from young to old. Nonetheless, private transfers remain strongly downward, from older to younger, everywhere in the world.
With these extensive net public transfers, the population aging projected for coming decades will lead to fiscal instability unless there are major program reforms. However, in many countries, the elderly make net private transfers to their children and grandchildren. In this case, population aging means that each child can receive the same net private transfers while each elder contributes less, or that each child can receive larger transfers while each elder contributes as much as before. In these ways, population aging can make private transfers less costly, partially offsetting the increased costs that flow through public transfers.
Availability of data and materials
The data used in this paper are presented in charts. The author will make available the data plotted in the charts on request.
That is, the weights in the average are one half for the !Kung and one sixth for each of the three Amazon Basin groups.
In the case of the father, the additional gain for the child would have to be sufficiently large to overcome the uncertainty about paternity.
In NTA data for more recent years, the importance of familial support for the elderly is much reduced in these three countries (Mason & Lee, 2018).
Abbreviations
- NTA: National Transfer Accounts
- GDP: Gross Domestic Product
- Resting metabolic energy
- e0: Life expectancy at birth
- PPP: Purchasing power parity
References
Barro, R. J. (1974). Are government bonds net wealth? Journal of Political Economy, 82(6), 1095–1117.
Becker, G. S., & Murphy, K. M. (1988). The family and the state. Journal of Law and Economics, 31(1), 1–18.
Boserup, E. (1965). The conditions of agricultural growth: The economics of agrarian change under population pressure. Chicago: Aldine Publishing Co.
Boserup, E. (1981). Population and technological change. Chicago: University of Chicago Press.
Burkart, J. M., Allon, O., Amici, F., Fichtel, C., Finkenwirth, C., Heschl, A., Huber, J., Isler, K., Kosonen, Z. K., Martins, E., Meulman, E. J., Richiger, R., Rueth, K., Spillmann, B., Wiesendanger, S., & van Schaik, C. P. (2014). The evolutionary origin of human hyper-cooperation. Nature Communications, 5, 4747.
Caldwell, J. C. (1976). Toward a restatement of demographic transition theory. Population and Development Review. Reprinted as Chapter 4 of J. C. Caldwell (1982), Theory of fertility decline (pp. 113–180). Academic Press.
Chu, C. Y., & Lee, R. (2006). The co-evolution of intergenerational transfers and longevity: An optimal life history approach. Theoretical Population Biology, 69(2), 193–201.
Chu, C. Y. C., & Lee, R. D. (2012). Sexual dimorphism and sexual selection: A unified economic analysis. Theoretical Population Biology. https://doi.org/10.1016/j.tpb.2012.06.002
Chu, C. Y., & Lee, R. D. (2013). On the evolution of intergenerational division of labor, menopause and transfers among adults and offspring. Journal of Theoretical Biology, 332, 171–180.
Domar, E. (1970). The causes of slavery or serfdom: A hypothesis. Journal of Economic History, 30(1), 18–32. https://doi.org/10.1017/S0022050700078566
Eggleston, K. N., & Fuchs, V. R. (2012). The new demographic transition: Most gains in life expectancy now realized late in life. Journal of Economic Perspectives, 26(3), 137–156.
Gál, R. I., Vanhuysse, P., & Vargha, L. (2018). Pro-elderly welfare states within child-oriented societies. Journal of European Public Policy, 25(6), 944–958. https://doi.org/10.1080/13501763.2017.1401112
Gollin, D. (2002). Getting income shares right. Journal of Political Economy, 110(2), 458–474.
Gurven, M. (2004). To give and to give not: The behavioral ecology of human food transfers. Behavioral and Brain Sciences, 27, 543–583.
Hill, K., & Hurtado, A. M. (2009). Cooperative breeding in South American hunter-gatherers. Proceedings of the Royal Society B. https://doi.org/10.1098/rspb.2009.1061
Howell, N. (2010). Life histories of the Dobe !Kung. Berkeley: University of California Press.
Hrdy, S. (2009). Mothers and others: The evolutionary origins of mutual understanding. Cambridge, MA: Harvard University Press.
Kaplan, H. (1994). Evolutionary and wealth flows theories of fertility: Empirical tests and new models. Population and Development Review, 20(4), 753–791.
Kaplan, H., & Gurven, M. (2005). The natural history of human food sharing and cooperation: A review and a new multi-individual approach to the negotiation of norms. In H. Gintis, S. Bowles, R. Boyd, & E. Fehr (Eds.), Moral sentiments and material interests: The foundations of cooperation in economic life (pp. 75–114). Cambridge, MA: MIT Press.
Kramer, K. (2005). Maya children: Helpers at the farm. Cambridge, MA: Harvard University Press.
Kuzawa, C. W., Chugani, H. T., Grossman, L. I., Lipovich, L., Muzik, O., Hof, P. R., Wildman, D. E., Sherwood, C. C., Leonard, W. R., & Lange, N. (2014). Metabolic costs and evolutionary implications of human brain development. PNAS (early edition). https://doi.org/10.1073/pnas.1323099111
Lee, R. (1994). The formal demography of population aging, transfers, and the economic life cycle. In L. Martin & S. Preston (Eds.), The demography of aging (pp. 8–49). Washington, DC: National Academy Press.
Lee, R. (2000). A cross-cultural perspective on intergenerational transfers and the economic life cycle. In A. Mason & G. Tapinos (Eds.), Sharing the wealth: Demographic change and economic transfers between generations (pp. 17–56). Oxford: Oxford University Press.
Lee, R. (2014). Intergenerational transfers, social arrangements, life histories, and the elderly. In M. Weinstein & M. A. Lane (Eds.), Sociality, hierarchy, health: Comparative biodemography: Papers from a workshop. Washington, DC: National Academies Press.
Lee, R., & Donehower, G. (2011). Private transfers in comparative perspective. In R. Lee & A. Mason (Eds.), Population aging and the generational economy: A global perspective (Chapter 8). Cheltenham, UK: Edward Elgar. (Viewable on the IDRC website: http://www.idrc.ca/EN/Resources/Publications/Pages/IDRCBookDetails.aspx?PublicationID=987)
Lee, R., Donehower, G., & Miller, T. (2011). The changing shape of the economic lifecycle in the United States, 1960 to 2003. In R. Lee & A. Mason (Eds.), Population aging and the generational economy: A global perspective (Chapter 15). Cheltenham, UK: Edward Elgar.
Lee, R., & Kramer, K. (2002). Children's economic roles in the context of the Maya family life cycle: Cain, Caldwell, and Chayanov revisited. Population and Development Review, 28(3), 475–499.
Lee, R., & Mason, A. (2011a). Generational economics in a changing world. In R. Lee & D. Reher (Eds.), Demographic transition and its consequences (special supplement to Population and Development Review, 37, 115–142). PMCID: PMC3143474. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3143474/
Lee, R., & Mason, A. (Eds.). (2011b). Population aging and the generational economy: A global perspective. Cheltenham, UK: Edward Elgar.
Lee, R., Kaplan, H., & Kramer, K. (2002). Children and elderly in the economic life cycle of the household: A comparative study of three groups of foragers and horticulturalists. Paper presented at the Annual Meetings of the Population Association of America.
Mason, A., & Lee, R. (2018). Intergenerational transfers and the older population. In National Academies of Sciences, Engineering, and Medicine, Future directions for the demography of aging: Proceedings of a workshop. Washington, DC: The National Academies Press. https://doi.org/10.17226/25064
Mason, C. N., & Miller, T. (2018). International projections of age specific healthcare consumption: 2015–2060. The Journal of the Economics of Ageing, 12, 202–217.
Mueller, E. (1976). The economic value of children in peasant agriculture. In R. Ridker (Ed.), Population and development: The search for interventions (pp. 98–153). Baltimore: Johns Hopkins Press.
Ogawa, N., Matsukura, R., & Chawla, A. (2011). The elderly as latent assets in aging Japan. In R. Lee & A. Mason (Eds.), Population aging and the generational economy: A global perspective. Cheltenham, UK: Edward Elgar.
Population Division (2013). National Transfer Accounts manual: Measuring and analyzing the generational economy. New York: United Nations.
Sear, R., & Mace, R. (2008). Who keeps children alive? A review of the effects of kin on child survival. Evolution and Human Behavior, 29, 1–18.
Stecklov, G. (1997). Intergenerational resource flows in Cote d'Ivoire: Empirical analysis of aggregate flows. Population and Development Review, 23(3), 525–553.
Turra, C. M., Queiroz, B. L., & Rios-Neto, E. L. G. (2011). Idiosyncrasies of intergenerational transfers in Brazil. In R. Lee & A. Mason (Eds.), Population aging and the generational economy: A global perspective. Cheltenham, UK: Edward Elgar.
United Nations Population Division (2013). World population prospects: The 2012 revision. New York: United Nations.
Wolf, D. A., Lee, R. D., Miller, T., Donehower, G., & Genest, A. (2011). Fiscal externalities of becoming a parent. Population and Development Review, 37(2), 241–266.
I am grateful to members of National Transfer Accounts (NTA) country teams for use of their data and to Gretchen Donehower for assistance. The NTA researchers are identified, and more detailed information is available on the NTA website: www.ntaccounts.org.
Research for this paper was funded by a grant from the National Institutes on Aging, NIA R37 AG025247.
The author declares that he has no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Lee, R. Population aging and the historical development of intergenerational transfer systems. Genus 76, 31 (2020). https://doi.org/10.1186/s41118-020-00100-8
- Population aging
- Intergenerational transfer
- Demographic transition
- Old age
- Support system
- Welfare state
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9429137706756592,
"language": "en",
"url": "https://dailyyonder.com/eveglades-refuge/2011/03/30/",
"token_count": 2177,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.06103515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:65fafb3c-a141-4dd7-8db2-0be033341b09>"
}
|
[Image: Vast palmetto prairie in the Florida Everglades provides habitat for native and endangered species. Photo: Andrew Moore]
Multiple efforts have been advanced in recent years to create and expand an Everglades wildlife refuge, an ambitious and costly goal in such financially uncertain times.
The recession hit Florida harder than most other states. Despite the economic climate, however, the U.S. Fish and Wildlife Service has proposed creating an Everglades Headwaters National Wildlife Refuge in Central Florida. And although Florida Governor Rick Scott has indicated that now is not the time to spend tax dollars on conservation projects, the U.S. Fish and Wildlife Service says this is actually an ideal moment to achieve such goals.
With private land deals slowed by the recession, the federal wildlife agency proposes to “capitalize on the real estate economy to protect biologically important lands.” Wildlife officials realize that if the economy turns around, and now-idling developments are financed, that window of opportunity could close.
Charlie Pelizza, Refuge Manager at Pelican Bay in Florida, says, “There are at least sixty Developments of Regional Impact—residential developments primarily—that are in the [Everglades] landscape that are either in initial stages, or have already been approved. When the economy improves, those could proceed.”
Pelizza says that if those development projects do proceed, the Service would lose the ability to restore water quality for wildlife and for Floridians. In addition, existing wildlife corridors would be further fragmented, limiting the range and viability of Florida’s most endangered species.
Pelizza also cites nascent alternative energy projects — for biofuels, wind, and solar power — that would likely move ahead when the economy improves, significantly altering the landscape.
[Image: The 1.78 million-acre region of the Everglades watershed, now under study as a wildlife refuge. Source: U.S. Fish and Wildlife Service]
The area now under study for the Everglades refuge consists of four counties — Polk, Osceola, Highlands, Okeechobee — totaling 1.78 million acres. The U.S. Fish and Wildlife Service is hoping to identify 150,000 acres for the refuge and conservation area—100,000 acres to be acquired through conservation easements, and 50,000 acres through direct purchase.
The most recent Everglades restoration project in South Florida was forced to be scaled back from an earlier, more ambitious plan. In 2008, Governor Charlie Crist proposed purchasing 187,000 acres from the U.S. Sugar Corporation. The South Florida Water Management District agreed to buy this land for $1.3 billion.
But citing the economic downturn, the state was forced to reduce the project dramatically, twice. The final sale of 26,791 acres, for $197.3 million closed in October of last year.
This land was sought primarily to improve the water quality in South Florida ecosystems by taking a chunk of the Everglades out of heavy agricultural production. The current proposal for a refuge cites similar goals for improving water quality, but rather than restoring an ecosystem, the Service seeks only to preserve existing habitats that are increasingly rare but currently in healthy ecological condition.
[Image: Prairie grazing lands near Lake Marian, in the proposed Everglades Wildlife Refuge. Photo: Andrew Moore]
Florida’s current governor, Rick Scott, spoke out against the U.S. Sugar-Everglades deal during his campaign for office, in August of 2010.
As governor, Scott has designated no funds for the state's conservation land acquisition program, Florida Forever, in his recommendation for the 2011-2012 budget. Scott did, however, recommend $17 million for lower Everglades restoration projects. He has told reporters that his current priority is creating jobs.
Earlier this month, state legislators introduced a bill to allow the development of golf courses on wildlife habitat in state parks. One park named specifically was Jonathan Dickinson State Park in Palm Beach County, home to a federally designated Wild and Scenic River.
Although the bill has since been withdrawn, it indicates the current legislature’s willingness to sacrifice wildlife habitat for projected economic benefits. The bill grew out of talks between Hall of Fame golfer Jack Nicklaus and Governor Scott, according to the St. Petersburg Times.
Yet the Fish and Wildlife Service sees the current economic situation very differently. Because Florida land prices have fallen from the highs of the land speculation and development boom, the Service sees this as an opportune time to purchase lands at more affordable rates. The Service argues that conditions are actually ripe for funding conservation projects.
According to Pelican Bay’s refuge manager Pelizza, royalties collected from offshore oil drilling could provide funding for the proposed Everglades project. A second major source of potential funding comes from the Federal Duck Stamp, required of waterfowl hunters. The Migratory Bird Hunting Stamp Act, signed into law in 1934, generates revenue for wetlands acquisition for what is now the National Wildlife Refuge System.
[Image: Eagles' nest at the Adams Ranch in Osceola County, Florida. Photo: Andrew Moore]
The creation of this refuge is being sought, first of all, in order to conserve habitat for 88 threatened and endangered species, including Florida panthers, Florida black bears and Everglades snail kites. Many endangered species are declining due to continual habitat destruction; the Service is taking into consideration global climate change and rising sea levels as future causes of habitat loss.
But another goal is to protect the water supply for millions of people, including residents of the heavily populated urban areas of South Florida. The study area includes the headwaters of the Everglades and Lake Okeechobee, which is the main source of water for the majority of South Floridians.
In addition, according to the Service, the proposal supports the America’s Great Outdoors program, a national priority of the Secretary of the Interior, “by conserving a rural ranching and agricultural community, as well as the rural character of Central Florida.” This would be accomplished by creating conservation easements on privately held lands, allowing ranchers to continue raising cattle.
The rules of conservation easements vary from state to state, and from project to project, but in most cases the private landowner sells development rights to a public entity–in this case the USFWS–but remains sole owner of the property. A rancher, for example, can continue cattle-grazing operations, but no new developments can occur on the landscape.
Landowners are paid for these development rights at rates determined through property value appraisals. The Wildlife Service would make an offer to willing sellers based on current market values. A landowner would still be able to sell his or her land in the future, but the development rights would remain with the Service.
The family-owned Adams Ranch has partnered with research and wildlife management for decades on their ranch in Osceola County and in St. Lucie County, where the family business is headquartered.
This past November, Adams Ranch entered into a 40-acre easement at their Osceola ranch, adjacent to the Three Lakes Wildlife Management Area. The easement area is a native prairie that the ranch is not permitted to disturb either by turning it into improved pasture or by constructing roads. The ranch is also required to manage the property properly, which includes prescribed burning. Mike Adams, president of Adams Ranch, says his cattle are still free to graze in the area.
But because the current refuge proposal would include a much larger area, Adams says, his company would need to make sure those future easements aren’t as restrictive before making any further agreement.
“We work in a dynamic world now, on a world basis,” Adams says. “What you do today you may not be doing ten years from now.”
The Osceola ranch contains a mix of dense hammocks, cypress domes, palmetto prairie, and open prairie. In addition to cattle, the ranch is home to bald eagles, gopher tortoises, wild turkey, killdeer, and the migrating purple martin, among other species. Adams Ranch shares a property line with Three Lakes Wildlife Management Area, a 62,000-acre preserve of dry prairie. This proximity makes the ranch a highly prized zone for expanding existing wildlife corridors.
Pelizza says that ranching lands are more favorable than heavy agriculture, such as citrus groves.
[Image: Mike and LeeAnn Adams; LeeAnn has led conservation initiatives at Adams Ranch. Photo: Andrew Moore]
“Especially when we’re talking about the ranching community, their needs are similar to ours, and very compatible,” Pelizza says. “Our mission is to provide habitat for wildlife — the ranching community are also interested in providing these opportunities for future generations as well.”
Adams admits that at the height of the real estate and land speculation boom, his ranch received a few offers that were close to the right price. But he asked the interested developer to stop offering, because the ranch was not for sale.
“A lot of the people in the cattle business really enjoy what they do,” Adams says. “It’s not so much for a return on their investment,” Adams says; it’s a way of life, and quality of life, that the ranchers like himself care about most. “Florida without wildlife would be kind of a sad place,” he says.
Adams’s son, Zachary, lives near the Osceola ranch. In total, four full-time cowboys work at the ranch.
“I know we’ve had a good balance of wildlife and productive agriculture operation,” Adams says. “We feel it could be very much sustainable into the future, but you need the ability to do different things.”
Andrew Moore is a writer in Pittsburgh, Pennsylvania.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9507172703742981,
"language": "en",
"url": "https://news.georgmedia.com/category/developing-countries/",
"token_count": 2428,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.267578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a9bbcfa2-47b0-4d90-bd68-9f1bd9d52313>"
}
|
- In December, the world watched as the UK and the US administered their first doses of Pfizer and BioNTech’s COVID-19 vaccines.
- But lower-income countries may have to wait for years before they can vaccinate the majority of their population, researchers have found.
- Cost and availability, combined with transport, storage, and distribution issues pose serious problems – which could threaten global herd immunity.
- Visit Business Insider’s homepage for more stories.
December has been a momentous month in the global fight against COVID-19. Amid a wave of emergency use authorizations, the UK and the US have already begun administering the first shots of Pfizer and BioNTech’s vaccine.
But in lower-income countries, the wait could be much longer.
Governments across the world are negotiating deals to buy COVID-19 vaccines – but this “frenzy of deals” could prevent poorer countries from accessing enough vaccines for most of their population until 2024.
This is according to researchers at Duke University’s Global Health Innovation Center. Scientists at the center’s Launch and Scale initiative have looked into the barriers that could affect access to a vaccine – and found a myriad of factors.
It isn’t just the cost and availability of vaccines that is pricing lower-income countries out. Many of the most vulnerable segments of society also lack the infrastructure to transport, store, and distribute the vaccine.
Earlier this month, Pfizer became the first company to have a COVID-19 vaccine authorized for emergency use in the West, and the first hundreds of shots have already been given out in both the UK and the US.
However, it takes time to manufacture doses.
The leading vaccines use several different technologies, such as mRNA, recombinant protein, and adenoviruses. Each of these has its own complex manufacturing process, meaning the vaccines take a long time to make.
But it could take three to four years to produce enough vaccines to immunize the global population, the researchers from Duke University found. Wealthier countries may be able to issue multiple doses of the vaccine to their populations before the immunization becomes widespread in poorer countries.
Even if drugmakers heavily invest in their manufacturing facilities, “there is a limit to how much global vaccine manufacturing capacity can expand in the next few years,” said Andrea Taylor, the lead analyst for Launch and Scale.
“High-income countries are making deals with major vaccine developers who are in turn reserving the lion’s share of the world’s manufacturing capacity to meet those commitments,” she said.
Experts are also worried about a shortage of glass vials to store the vaccines in.
The vaccine will also be expensive to buy. Pfizer charged the US $19.50 per dose for the first 100 million doses, its partner company BioNTech said. Each person requires two doses of the vaccine, putting its cost at $39 per person.
Moderna, meanwhile, plans to charge from $25 to $37 per dose.
Some drugmakers, however, have promised to guarantee lower-income countries can also have access to the doses.
AstraZeneca is reserving 400 million doses of its vaccine for low- and middle-income countries, and said it would sell its vaccine at cost during the pandemic for between $3 and $5 per dose. But this no-profit guarantee could expire before July 2021.
Johnson & Johnson also said it would not profit from sales of its vaccine to poorer nations, and China said its vaccine would be “made a global public good.”
To prevent wealthier countries from snatching up vital doses of the vaccine, the World Health Organization (WHO), Gavi, and the Coalition for Epidemic Preparedness Innovations (CEPI) launched a scheme called Covax in April.
Countries sign up to access an equal share of successful vaccine candidates, meaning that the doses are shared among richer and poorer countries. The scheme aims to provide lower-income countries with enough doses to cover 20% of their population.
“For lower-income funded nations, who would otherwise be unable to afford these vaccines, as well as a number of higher-income self-financing countries that have no bilateral deals with manufacturers, Covax is quite literally a lifeline and the only viable way in which their citizens will get access to COVID-19 vaccines,” the companies behind the initiative said.
As of November 11, the Duke University researchers had found no evidence of any direct deals made by low-income countries, suggesting that they would be “entirely reliant on the 20% population coverage from Covax.”
Despite being a “phenomenal effort at international collaboration,” Covax is “seriously underfunded,” Ted Schrecker, professor of global health policy at Newcastle University Medical School, told Business Insider.
Some countries, notably the US, haven’t joined. The US could eventually control 1.8 billion doses, the Duke University researchers found, or about a quarter of the world’s near-term supply – and none of this would be shared with lower-income countries via Covax.
“The whole call for global solidarity has mostly been lost,” Dr. Katherine O’Brien, the WHO’s vaccine director, said in internal recordings obtained by the AP.
Gavi also said that the risk Covax will fail its mission is “very high” in a report issued in December, per the AP.
Covax has only managed to secure a total of 200 million doses – just a 10th of the 2 billion it aimed at buying over the next year, according to the publication.
It has also agreed to purchase another 500 million doses, but not in a way that is legally binding. It is $5 billion short of the money needed to buy them, the AP reported.
Furthermore, many wealthy countries which have signed up to the scheme, including the UK, EU, and Canada, have also struck “side-deals” with pharmaceutical companies to guarantee their supply, the Duke University researchers found. Most of these deals were arranged in advance of the vaccines’ approval, whereas Covax has been hesitant to order stocks prior to approval.
“The hoarding of vaccines actively undermines global efforts to ensure that everyone, everywhere can be protected from COVID-19,” Stephen Cockburn, Amnesty International’s head of economic and social justice, said earlier this month, as a coalition of charities spoke out against the unequal global distribution of doses.
“Rich countries have clear human-rights obligations not only to refrain from actions that could harm access to vaccines elsewhere but also to cooperate and provide assistance to countries that need it.”
Distributing the vaccines globally is proving to be a mammoth task.
Cargo airline execs have already warned that getting a COVID-19 vaccine to everyone on Earth could take up to two years, saying that it could be “one of the biggest challenges for the transportation industry.”
Some vaccines require ultra-cold chain storage, which demands significant investment. Pfizer's vaccine, for example, has to be transported at -94 degrees Fahrenheit through a system of deep-freeze airport warehouses and refrigerated vehicles using dry ice and reusable GPS temperature-monitoring devices.
Even when the vaccines do make it to low-income countries, they might lack the transport links and road networks to distribute the doses to everyone in need.
Specially-adapted vehicles may also be needed, Alison Copeland, professor of human geography at Newcastle University, told Business Insider. Lower-income countries may not be able to afford them, however.
When doses do reach local communities, vaccines such as Pfizer's still have to be kept in cold-chain storage. Even some of the most reputable US hospitals, such as Minnesota's Mayo Clinic, lack adequate facilities to store the vaccine, leading to a scramble for hyper-cold freezers – and in lower-income countries, access to ultra-cold freezers is even less likely.
After the shots reach health centers, they can be thawed in a regular fridge – but they have to be injected within five days.
In many low-income countries, only metropolitan areas are well-resourced, Schrecker explained, and some villages and informal settlements may not have a working fridge.
Even if communities are able to afford storage for the vaccine, they may not have working electricity, Copeland explained.
And the various vaccine candidates being developed by drugmakers have different storage needs, making it difficult for countries to know how to prepare and whether to invest in cold-chain facilities.
AstraZeneca’s vaccine, for example, can be stored, transported, and handled at normal fridge temperatures of between 36 and 46 degrees Fahrenheit for at least six months.
Once it reaches its destination, it can be “administered within existing healthcare settings,” AstraZeneca said, rather than requiring investment in expensive ultra-cold storage equipment.
Moderna’s vaccine can also be transported and stored at fridge temperatures, but only for a month.
Pfizer is also looking into alternatives to solve the storage problem. The US drugmaker is looking into developing a second-generation coronavirus vaccine in powder form, which would only need to be refrigerated, not deep-frozen. This could be developed in 2021, Pfizer’s CEO told Business Insider, but it’s currently uncertain.
Health centers and infrastructure
Given that urban areas have the most transport infrastructure, they also have the majority of healthcare infrastructure, too.
Although many African countries improved their health services during the Ebola epidemic, most rural communities remain isolated, Schrecker told Business Insider.
Alongside the vaccine doses themselves, other supplies are needed to carry out the vaccinations. For example, countries need to ensure they have syringes available in time for the arrival of vaccines, Taylor said.
Low-income countries may also have to launch vaccination drives where health literacy is poor. While childhood vaccinations are becoming increasingly common in low-income countries, people of all age groups, especially the elderly, will need the COVID-19 vaccine. This will require the countries to carry out major vaccination education campaigns, Taylor and Copeland both said.
Another challenge is that most vaccines require two shots, including Pfizer’s, which needs two shots injected three weeks apart. In rural parts of India, where people are harder to contact or may live a long way from vaccination centers, some people don’t come back for a second shot, public health experts told Bloomberg.
The country will also have to roll out mass paramedical training to teach healthcare staff how to administer the two-shot doses, Pankaj Patel, chairman of drugmaker Cadila Healthcare, told the publication.
Cause for optimism
Despite the hurdles that lower-income countries face, mass global vaccination is still a possibility.
After their mid-November summit, the G20 states said they will “spare no effort to ensure their affordable and equitable access [to COVID-19 diagnostics, therapeutics, and vaccines] for all people.”
Wealthier countries could also be motivated to provide aid to ensure all countries have access to a vaccine, because global herd immunity depends on it.
“In order to control the virus, we need worldwide herd immunity, so between 60% and 72% of the population need immunizing,” Copeland told Business Insider. “This will hopefully be enough incentive for richer countries to help out.”
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9332543015480042,
"language": "en",
"url": "https://spartaselite.de/ofer7134niqa/future-value-questions-and-answers-pdf-lebo.php",
"token_count": 802,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.16796875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:951033fd-6f85-407b-8ab0-bf65869120d4>"
}
|
The most basic time value of money formula links PV with FV. Example: you are essentially asked to compound $80,000 for 10 years at 10% annual compound interest; regard the question as asking for FV10 = PV × (1 + r)^10. Simple interest example: P = 10,000, R = 10% = 0.10, T = 4 years 9 months = 4 + 9/12 = 4.75 years, and S = P(1 + RT). Practice (simple interest and present value): What is the present value of the annuity if the first cash flow occurs: (a) today: PV of annuity due = $5,772.19; (b) one year from today: PV of ordinary annuity = $5,550.18; (c) two years from today: PV of a deferred annuity = $5,550.18 / 1.04 = $5,336.71; (d) three years from today. If $18,000 is invested at 2.5% for 20 years, find the future value if the interest is compounded the following ways: (a) continuously, (b) simple (not compounded). (Round the answers to the nearest cent.)
From time to time we are faced with problems of making financial decisions. These involve interest and the rate of discount, and the present and future values of a single payment. Solution: the interest charges for year 1 and year 2 are both equal to 2,000 × 0.08.
You are asked to calculate the present value of a 12-year annuity with payments of $50,000 per year; calculate PV for each of the following cases. Nominal and effective interest rates are common in problems where interest is stated in various ways. Published interest tables and closed-form time value formulas can be used, and timelines help in setting up time value of money problems. Solution: the future value of your deposit is FV = $687,436.81 × 1.055. The scale is irrelevant as long as the future value is twice the present value for doubling, three times as large for tripling, etc.
- Questions 155-157 are from the previous set of financial economics questions. Question 158 is new.
- Questions 66, 178, and 187-191 relate to the study note on approximating the effect of changes in interest rates.
- Questions 185-186 and 192-195 relate to the study note on determinants of interest rates.
These questions are representative of the types of questions that might be asked of candidates sitting for the Financial Mathematics (FM) Exam. These questions are intended to represent the depth of understanding required of candidates. The distribution of questions by topic is not intended to represent the distribution of questions on future exams.
Step 1: Find the future value of the annuity due: $1000 × [((1 + 0.0625)^17 − 1) / 0.0625] + $1000 = $29,844.78.
Step 2: Take this amount that you will have on December 31, 2028, and let it go forward five years as a lump sum: $29,844.78 × (1 + 0.0625)^5 = $40,412.26.
PV (present value): PV is the current worth of a future sum of money or stream of cash flows given a specified rate of return. Future cash flows are discounted at the discount rate, and the higher the discount rate, the lower the present value of the future cash flows.
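These formulas are easy to check in code. A small Python sketch (the function names are illustrative) reproduces the annuity figures above:

def fv_lump_sum(pv, r, n):
    # FV = PV * (1 + r)^n
    return pv * (1 + r) ** n

def pv_lump_sum(fv, r, n):
    # PV = FV / (1 + r)^n
    return fv / (1 + r) ** n

def fv_ordinary_annuity(pmt, r, n):
    # Future value of n end-of-period payments: PMT * ((1 + r)^n - 1) / r
    return pmt * ((1 + r) ** n - 1) / r

step1 = fv_ordinary_annuity(1000, 0.0625, 17) + 1000   # -> 29,844.78
step2 = fv_lump_sum(step1, 0.0625, 5)                  # -> 40,412.26
print(round(step1, 2), round(step2, 2))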
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9335294961929321,
"language": "en",
"url": "https://www.len.com.ng/csblogdetail/521/Examples-on-the-Principle-of-Double-Entry-System",
"token_count": 2195,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.03466796875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:76fb548b-5ab7-4cc5-bc41-f8ca0c65efb0>"
}
|
Academic Questions in Accounts
True or false: According to the principles of the dual entry system, an increase in asset is credited.
The following statements are true of the double entry system of accounting except _____.
A. Errors can't be made
B. It can be time consuming
C. It can be used as an accounting reference
D. It can be used to control a company's expenditure
E. There is completion of account transaction
F. It isn't difficult to implement
With regards to bookkeeping and accounting, which of the following statement is incorrect?
A. Accounting covers the entire practice of finance management
B. Bookkeeping is a branch of accounting
C. Bookkeeping is a necessary requirement when making financial statement of account
D. Bookkeepers record financial transaction in a chronological order
E. Accountant earn higher salaries than bookkeepers
F. Accounting reports assist business managers in making a more detailed decision
The debited and credited accounts are written in which column of a journal?
F. None of the above
A journal may also be referred to as any of the following except _____.
A. Book of original entry
B. Book of primary entry
C. Book of first entry
D. Initial book
E. Day book
F. Chronological book
Which of the following statement is true concerning bank wire?
A. Another name for bank wire is cash transfer
B. The term 'wire' in bank wire signifies a computer based programmed message sent to a bank customer in regards to their account information, transaction and other important financial notifications
C. The International Bank Account Number (IBAN) must always be involved when carrying out a bank wire process
D. The term bank wire has no significant relationship with wire transfer
E. Cyber criminals cannot utilize bank wire threats like CSRS, phishing and whaling to achieve their fraudulent acts
F. All the above statements are true
The following are recommendations to follow during the process of balance sheet reconciliation EXCEPT _____.
A. All balance sheet accounts should be reconciled periodically, quarterly or annually
B. Comparing the trial balance of both the payables and receivables with the respective aging schedule
C. Analysis of entries and relocating them to a sub-ledger if need be
D. Comparing the general ledger trial balance of the account to another source; for instance, a bank statement
E. Analysis of the differences in both accounts and making appropriate correction to ensure the correctness of entered information
F. None of the above
Every accounting system is based on the principle of double entry. Before we go further into the examples that reflect this principle (the double entry system of accounting), it helps to first understand the concept. Below is a brief introduction:
Double entry system may simply be defined as the process of keeping an account with a balanced (or an equal) debit (Dr) and credit (Cr) side.
An understanding of both sides (debit and credit) of the double entry is crucial; but for now, think of the debit side as a recording point for 'one that receives' while the credit side accounts for 'one that gives out'.
Just before we look into the examples of the double entry system, a transactional knowledge of what's recorded in the debit and credit sides of the account book is very important. The link below will prove useful.
The instances below will be used to explain the double entry system:
1. Purchase of Phone
When a buyer purchases a phone, he or she automatically has a new asset; and that's the phone. An increase in asset is recorded on the debit side of the entry.
Conversely, the buyer pays out a sum of money to purchase the new phone. The money paid out represents a decrease in asset (from the buyer's perspective), and it's recorded on the credit side of the double entry system.
If this buyer were to record both entries in an accounting system, the newly owned phone would be recorded on the debit side while the money paid out would be reflected on the credit side.
Debit: New phone owned -> Increase in asset
Credit: Money paid out -> Decrease in asset
2. Receipt of Bank Loan
When an individual receives a loan from a bank, such will reflect as an increase in the person's asset. This is true because the money can be utilized (or invested) for its intended purpose(s). Recall that an increase in asset is recorded on the debit side of the dual or double accounting system.
Conversely, the individual who received the loan will have to repay it, along with some interest. The obligation to repay (the loan principal, plus interest as it accrues) is considered an increase in liability, and thus it is recorded on the credit side of the dual accounting system.
Debit: Cash received -> Increase in asset
Credit: Loan payable (money received + interest) -> Increase in liability
3. Payment of School Fees
A part-time student who does other jobs to pay his school fees will consider the paid fee (money) an expense. This is true because the student will need to save or budget some money for this purpose.
Conversely, the training and lectures received by the student will be considered as an increase in equity. Equity in this instance (of education) implies that he enjoys the necessary support needed to become successful in his chosen field of study.
If the student decided to record this transaction in a financial book, the payment of school fees would be recorded on the debit side (because it's an increase in expense) while the lectures received would be recorded on the credit side (because it's an increase in equity).
Debit: School fees -> Increase in expense
Credit: Lectures received -> Increase in equity
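A minimal double entry ledger in code (Python; the account names, amounts and helper function are illustrative) makes the principle concrete: every transaction posts an equal debit and credit, so total debits always equal total credits.

from collections import defaultdict

debits = defaultdict(float)
credits = defaultdict(float)

def post(debit_account, credit_account, amount):
    # Each transaction touches two accounts: one debited, one credited
    debits[debit_account] += amount
    credits[credit_account] += amount

post("Phone (asset)", "Cash (asset)", 50_000)              # Example 1: buying a phone
post("Cash (asset)", "Loan payable (liability)", 200_000)  # Example 2: receiving a bank loan
post("School fees (expense)", "Cash (asset)", 30_000)      # Example 3: paying school fees

# The defining check of the double entry system:
assert sum(debits.values()) == sum(credits.values())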
Alfred Ajibola is a Medical Biochemist, a passionate Academician with over 7 years of experience, a Versatile Writer, a Web Developer, a Cisco Certified Network Associate and a Cisco CyberOps Associate.
Amazing facts in Accounts
It is believed that 'bookkeeping' is the only English word to contain three consecutive sets of double letters. Please read our article on Bookkeeping vs Accounting here.
NOTABLE POINTS IN Accounts
Bookkeeping can be defined as all the activities that have to do with the orderly classification and recording of financial data or business transaction.
Bookkeeping assists the process of accounting through accurate record keeping.
Accounting is a field of study that covers the entire process and practice of managing the finances of an individual or an organization.
In smaller organizations, a bookkeeper’s job may go beyond simple transaction recording, as they may also be involved in the accounting process of the organization. On the other hand, accountants may have to record financial transaction in addition to analyzing financial transaction.
Below are the important functions of bookkeeping in any organization:
It brings accuracy to the recording of daily business transactions.
They provide the information on which financial accounts are prepared.
Aside from its importance in business organizations, it can also be used by nonprofit organizations and individuals.
It can also take record of liabilities, assets and loans. This function of bookkeeping can be crucial for many businesses.
A journal, also called book of original entry or book of primary entry or book of first entry or day book or chronological book is a book where daily transactions are recorded in a chronological order (the order of occurrence).
A journal contains the total record of all transactions made by a company. It can be distinguished into different types which will include:
Sales Journal: For recording inventory and sales.
Cash Receipts Journal: For recording money received from sales or cash inventory.
Purchase Journal: For recording all purchases made by a company.
General Journal and so on.
The content of a journal include the followings:
The date when the transaction occurred.
The description of the transaction.
Bank Wire is a messaging system that allows banks to communicate the various events occurring on a clients’ account.
As an instance, when your account is credited by someone, you may get a notification from your bank with details concerning such transaction.
Such notification(s) from the above instance is a function of bank wire
The term 'wire' in a bank wire signifies the computer-based, programmed messages sent to bank customers regarding their account information, transactions and other important financial notifications. These messages are always secured and may be encrypted in transit.
Balance sheet reconciliation is the process of comparing the amounts in a balance sheet's general ledger accounts to the details making up those balances, so that the two agree or match.
Below are some recommendations to follow during Balance Sheet Reconciliations:
All balance sheet accounts should be reconciled periodically, quarterly or annually so as to verify that all items have been accurately posted to the account.
During the process of account reconciliation, we will have to analyze the differences and make corrections so that the information is correct, complete and consistent in both accounts.
With the balance sheet reconciliations, it is important to compare the trial balances of both payables and receivables with the respective aging schedule balances during reconciliation.
In situations where the trial balance is more than the balance of the aging schedule, it is likely that entries were posted directly to the general ledger instead of the sub-ledger. One will need to analyze these entries and then relocate them to the sub-ledger.
During balance sheet reconciliation, you compare the general ledger trial balance of the account to another source, which could be internal, such as a sub-ledger, or external, such as a bank statement.
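As a small illustration (Python; the data structures, amounts and tolerance are hypothetical), a reconciliation reduces to comparing the general ledger balance against the total of its supporting detail and flagging any difference for investigation:

def reconcile(gl_balance, detail_items, tolerance=0.005):
    # detail_items: amounts from the comparison source, e.g., a sub-ledger,
    # an aging schedule, or the lines of a bank statement
    detail_total = sum(detail_items)
    difference = gl_balance - detail_total
    return difference, abs(difference) <= tolerance

# Example: receivables control account vs. the aging schedule detail
difference, ok = reconcile(12_750.00, [4_000.00, 6_500.00, 2_000.00])
if not ok:
    # A positive difference can mean entries were posted directly to the
    # general ledger instead of the sub-ledger, as noted above
    print(f"Investigate difference of {difference:,.2f}")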
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9612706303596497,
"language": "en",
"url": "http://www.futuredams.org/financial-risks-in-large-private-sector-financed-hydropower-projects/",
"token_count": 1346,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0087890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2321f7d1-cf82-4343-877d-f219a699c16a>"
}
|
Large hydropower projects can cost more than a billion dollars to build. For the private sector, to whom Governments are increasingly turning for infrastructure finance, this represents a significant financial risk in the context of developing countries with weak governance, regulation and institutions.
As the world seeks a zero-carbon future, more and more solar and wind technology is being built – low carbon certainly, but intermittent, as neither sun nor wind is available 24/7. This begs the question of which low-carbon technology can provide grid energy when the sun doesn't shine and the wind doesn't blow. If 2050 global temperature change targets are to be met, the carbon intensity of electricity needs to decline by around 90%, reducing grid intensity from an average of 400-500 g CO2/kWh to nearer 50 g CO2/kWh. Many planners are banking on sustainable hydropower to play this role, by managing the known social and environmental impacts and ensuring an economically productive use of natural resources for growth and development.
Global investment in clean technologies reached $437 billion in 2015, with 68% of that investment provided by the private sector. Developed countries have committed $100 billion annually to address adaptation and mitigation needs in developing countries. So far, climate funds have resisted funding hydropower, owing to the social and environmental risks; rainfall and hydrological uncertainty; and the perception that hydropower is not "transformational", which is a requirement for financing. In addition, the cost of hydro-electricity is seen as quite high compared with that of solar or wind, which has dropped consistently over the last five years and is now as low as 4-5 US cents/kWh in many country auctions.
If private sector investments in sustainable hydropower were to increase in the future, what could this look like? This was the question addressed at a round table meeting recently held by the Cambridge Institute for Sustainability Leadership and IIED under the FutureDAMs research project led by the University of Manchester. The participants, drawn from engineering companies, lenders and developers, discussed the management of risks, which are significant in all hydropower projects. They range from geotechnical risk through to foreign exchange risks, hydrological risks (e.g. climate change or more irrigation upstream) or the risks that government may change and will impose revised contractual arrangements for energy purchase or new regulations. A wide range of risks were identified and discussed. For each risk a range of mitigation measure were discussed and the impact on private financiers was highlighted.
Participants stressed the role of sustainable hydropower as more than just a provider of kWh. It has the capacity to provide grid strengthening services which are vital to the management of electricity supply. While this has long been an undervalued benefit of storage hydropower, it becomes increasingly important as grids include more and more intermittent renewables, and less thermal power. Sustainable hydropower within a grid also provides opportunities for storing any excess energy (e.g. reservoir or pumped storage), as well as rapid ramping and despatch, avoiding the need to keep thermal power stations idling and ready to meet fluctuating demand. Although the cost of lithium-ion batteries is declining, sustainably developed pump storage remains competitive as a large-scale storage option in many countries, particularly over the long term.
In future, hydropower with storage flexibility could ultimately become remunerated largely for its grid management potential rather than as a source of KWh. This would, if well structured, lower the hydrological risk associated with some hydropower plants and encourage better use of their full potential.
Cost remains a substantial barrier to hydropower investment. Contributors to the round table explained that one reason why hydropower is often more expensive than alternatives (per KWh) is that the risks are extensively analysed, quantified, and then compounded through the life of the project. As they are not usually capped, they weigh heavily in the financial assessments, and if they are all crystallised at the outset the costs of offsetting them can constitute as much as 60% of the total cost of the project. Governments tend to expect the private sector to accept all of the risk in a privately led project, but in doing so they are paying a very high risk premium that is incorporated into the construction bids and ultimately the price of electricity. Participants discussed whether models exist that might allow the risks not to be fully crystallised, and for risk management to be dealt with differently.
The risks in hydropower construction are substantial and projects are well known to overrun by an average of 25% despite all the risk mitigation measures taken. This is partly because the costs increase for each risk which occurs, but do not decrease for known risks which do not occur. Currently, as many risks as possible are costed and mitigated (eg through insurance) even though only 10-20% of them may arise in any one project. One possible option is the FELT (Finance, Engineer, Lease and Transfer) model proposed by Mike McWilliams. In countries where there could be many ongoing private sector projects, could the risks (and therefore the costs) be distributed differently as a probability of their occurrence? Governments would essentially spread the risk over four or five projects and carry the risk themselves, rather than expecting the private sector to bear it on a case by case basis.
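A stylized sketch of that idea (Python; every number below is an illustrative assumption, not project data): pricing every risk fully into each bid yields a far higher portfolio cost than carrying the expected cost of the same risks pooled across several projects.

base_cost = 1_000  # $m per project
risks = [          # (probability of occurring, cost impact in $m)
    (0.15, 300),   # e.g., geotechnical surprise
    (0.10, 200),   # e.g., hydrological shortfall
    (0.20, 150),   # e.g., contract or currency renegotiation
]
n_projects = 5

# Private developer pricing: every risk crystallised up front in every bid
crystallised = n_projects * (base_cost + sum(impact for _, impact in risks))

# Pooled (FELT-style) view: government carries the expected cost portfolio-wide
expected = n_projects * (base_cost + sum(p * impact for p, impact in risks))

print(crystallised, round(expected))  # 8250 vs 5475 under these assumptions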
From the developer’s perspective the identification and management of risk is essential in designing and delivering a viable investment. Abandoned hydropower projects in Chile, Myanmar and Brazil have each reportedly cost more than $100 million to their private sector developers so the costs of getting this wrong can be significant. Every country, and every project carries a different risk profile, and a different energy mix in the grid. If we are genuinely to meet the requirement for 50 g CO2/Kwh average emission in energy grids to meet the global change targets, then what role for the private sector and what role for the international climate funds in managing the risks inherent in sustainable hydropower?
This research will continue by further refining the analysis of risk, particularly considering which risks can be mitigated to the satisfaction of the financiers and which are the risks that will always cause financiers simply to walk away. The quantum of funds available from climate finance is, to date, relatively small. The research will consider how such funds could be used to address significant barriers to the private financing of sustainable hydropower.
Note: This article gives the views of the author/academic featured and does not represent the views of FutureDAMS as a whole.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9217440485954285,
"language": "en",
"url": "https://cdkn.org/2020/06/feature-providing-land-access-for-climate-resilient-infrastructure-indias-experience/?loclang=en_gb",
"token_count": 1565,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.02978515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:32459667-c657-412d-8f41-7b11f3de1468>"
}
|
FEATURE: Providing land access for climate-resilient infrastructure – India’s experience
The scarcity of land for development, and resulting high cost of land, is a challenge for developing new, climate-resilient infrastructure in India. Several local government bodies have trialled innovative schemes to work around this, but their successes are – as yet – insufficiently known and replicated. Souhardhya Chakraborty of ICLEI South Asia reports.
Land availability for climate resilience
At the turn of the millennium, India’s urban local bodies saw their roles and responsibilities expand. One of their new responsibilities was ‘climate resilience’. In the two decades since then, the resilient cities movement has also gained momentum rapidly, worldwide.
Unfortunately, land is scarce in cities, and yet it is one of the basic underpinnings for developing climate-resilient infrastructure, particularly new infrastructure. Most urban local bodies in India provide basic services to their citizens in scenarios where land is scarce and extremely expensive.
State enactments to provide access to land
While the demand for land to develop climate-resilient infrastructure is increasing, the supply remains constrained.
The fallout of development on scarce land includes involuntary displacement of residents, loss of livelihoods, inadequate compensation and a lack of distributed benefits from the land value improvements that follow development. To address these issues and the related discontent, various mechanisms for accessing land have been enacted over the years by different State governments in India.
Prominent among such enactments is Gujarat's Town Planning Scheme, under which land owners surrender their land in its entirety to the authorities and in return are entitled to reconstituted land.
The Accommodation Reservation and Transfer Development Rights mechanism of Mumbai follows a similar approach, except that instead of reconstituted land, the land owner receives an area calculated under the equivalent Floor Space Index (FSI).
The Cluster Redevelopment Scheme is rather unique: although the mechanism is the same as in the previously mentioned Mumbai laws, here the beneficiary list comprises a single owner together with multiple tenants. This becomes especially important for the redevelopment of urban cores and city centres, where single buildings typically have multiple tenants and occupants.
Among the various innovative models for accessing land for climate-resilient infrastructure, the best example is the Town Planning Schemes implemented by the Ahmedabad Urban Development Authority. These have developed an urban area spanning 1,866 sq km and covering a population of almost 9 million, as per the city's Revised Draft Development Plan 2021 proposals.
Some of the prominent environment infrastructure projects implemented by the Ahmedabad Urban Development Authority through the various Town Planning Schemes include:
- City-level water supply, sewerage, stormwater drainage and recreational projects (includes parks, open spaces, theme based parks, organised green and water bodies) and
- Neighbourhood-level water supply network, sewerage network, stormwater drainage network, recreational, and lighting and street lighting projects.
In addition, 250,000 units of social housing with improved amenities (water supply, sewerage, electricity connections, etc.) were constructed for 1.2 million slum dwellers and economically weaker families. These provisions encompassed 135 Town Planning Schemes spanning an area of 225.91 sq km, of which only about 33% was used by the city government to implement the provisions; the remaining 67% of reconstituted land was returned to the land owners. Apart from this, a 270 million litres per day water treatment plant, a 240 million litres per day sewage treatment plant, a solid waste management facility, tree plantation and other works were implemented.
Accessing land for climate infrastructure
These innovative, State-led mechanisms have the potential to be replicated elsewhere in India to provide land for developing climate-resilient infrastructure.
In this regard, it is important to note that in India, it is seldom stated outright that land is being accessed for climate-resilient infrastructure. More often than not, climate-resilient infrastructure is masked: it falls under the guise of the usual service provisions or developments that are earmarked in the Master Plan or the Development Plan.
Nonetheless, such simple mechanisms will ensure that the government can access land for its needs and at the same time the land owners can enjoy the benefits of land value gains which accrue after the development of the climate-resilient infrastructure.
It should be noted, however, that Town Planning Schemes are suitable for both greenfield and brownfield contexts, while the Accommodation Reservation, Transfer Development Rights and Cluster Redevelopment Schemes are suitable for brownfield contexts. As State-led mechanisms are specific to the type of land they access – such as brownfield redevelopment, retrofitting, urban periphery and greenfield sites – a hybrid land policy that employs different mechanisms based on contextual needs could be the answer to the search by several States in India for land for climate-resilient urban infrastructure.
However, there ought to be a unified approach to addressing the issue of land access. Such a unified approach is sorely lacking in the Indian context.
Currently, the information pertaining to these innovative mechanisms remains localised, with little spread to other parts of India. As a consequence, the true potential of these innovative State-led mechanisms has not been exploited. Accessing land for the development of climate-resilient infrastructure continues to be a challenge across the country.
Occasionally, CDKN invites guest bloggers to contribute their views to www.cdkn.org These views are not necessarily those of CDKN or its alliance partner organisations.
For more information:
AUDA. 2014. Land Pooling and Land Management through Development Plan & Town Planning Scheme. Retrieved February 28, 2020.
Mathews, R., Pai, M., Sebastian, T., & Chakraborty, S. (2016). State led Innovative Mechanisms to Access Serviced Land in India. Scaling Up Responsible Land Governance: 2016 Annual World Bank Conference on Land and Poverty. Washington DC: The World Bank.
Government of Gujarat. 1976. Gujarat Town Planning and Urban Development Act.
Government of India. 2007. Constitution of India, 1949 (Amended).
Government of Maharashtra. 1966. Maharashtra Regional Town and Country Planning Act.
Government of Maharashtra. 1966. Maharashtra Regional Town and Country Planning Act – “Sanctioned Modifications to Regulation 33(9) of Development Control Regulations for Greater Mumbai, 1991 under Section 37(1AAC)(c) of the Act”.
Adapted from – Mathews, R., Pai, M., Sebastian, T., & Chakraborty, S. (2016). State led Innovative Mechanisms to Access Serviced Land in India. Scaling Up Responsible Land Governance: 2016 Annual World Bank Conference on Land and Poverty. Washington DC: The World Bank.
Reconstituted land can be planned and/or serviced land. Planned land is often the reshaping of irregular land parcels into more regular or rectangular shapes, along with a statutory change in use from agriculture to other urban uses. Serviced land indicates the availability of physical infrastructure such as roads, water supply, drainage, sewerage and electricity. Although the reconstituted land may be smaller in area than what was surrendered, its valuation is typically much higher, owing to the fact that it is returned after development.
Image: Ahmedabad, courtesy jonbrew, flickr
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9448574781417847,
"language": "en",
"url": "https://en.everybodywiki.com/Medium_of_exchange",
"token_count": 2464,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1220703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c4aa36dc-b7cf-4f0e-b8c0-4f9859640974>"
}
|
Medium of exchange
Medium of exchange is one of the three fundamental functions of money in mainstream economics. It is a widely accepted token which can be exchanged for goods and services. Because it can be exchanged for any good or service, it acts as an intermediary instrument and avoids the limitations of barter, where what one party wants has to be exactly matched with what the other has to offer.
Most forms of money can act as media of exchange, including commodity money, representative money and, most commonly, fiat money. Representative and fiat money often exist in digital form as well as in physical tokens such as coins and notes.
Overcoming the limitations of barter
A barter transaction is the exchange of one valuable good for another of equivalent value. William Stanley Jevons described how a widely accepted medium allows each barter exchange to be split into a sale and a purchase. This process overcomes three difficulties of barter; first among them, a medium of exchange eliminates the need for a coincidence of wants.
Want of coincidence
A barter exchange requires finding a party who both has what you want and who wants what you have. A medium of exchange removes that requirement, allowing you to sell what you have and buy what you want from different parties via an intermediary instrument.
Want of a measure of value
A barter market would theoretically require an exchange rate for every possible pair of commodities, which is impractical to arrange and impractical to maintain as the relative value of things changes all the time. If all exchanges go 'through' a common medium, then all goods can be priced in terms of that one medium instead of against every other good. The medium of exchange thus makes it much easier to set and adjust the relative values of things in a marketplace.
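Jevons's point can be made precise: n goods traded by barter require an exchange rate for every pair of goods, n(n-1)/2 rates in all, whereas pricing everything in one common medium requires only n prices. A short illustrative sketch:

```python
# Exchange rates needed with and without a common medium of exchange.
def barter_pairs(n: int) -> int:
    """Every pair of n goods needs its own rate: n choose 2 = n(n-1)/2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n} goods: {barter_pairs(n)} barter rates vs {n} money prices")
# 10 goods: 45 barter rates vs 10 money prices
# 100 goods: 4950 barter rates vs 100 money prices
# 1000 goods: 499500 barter rates vs 1000 money prices
```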
Want of means of subdivision
A barter transaction requires that the held object and the wanted object be of equivalent value. A medium of exchange can typically be subdivided into units small enough to approximate the value of any good or service.
Transactions over time
A barter transaction typically happens over a short period of time, or on the spot. A medium of exchange can be held for a period of time until what is wanted becomes available. This relates to another function of money, the store of value.
Mutual impedance with Store of Value function
The ideal medium of exchange should be spread throughout the marketplace so that anyone with something to exchange can buy and sell. When money also serves the function of a store of value, as fiat money does, there are conflicting drivers of monetary policy, because a store of value becomes more valuable when it is scarce in the marketplace. When the medium of exchange is scarce, traders will pay to rent it (interest), which acts as an impedance to trade and produces a net transfer of wealth from poor to rich.
Medium of Exchange and Measure of Value
A fiat currency's most important and essential function, arguably, is to provide a 'measure of value'. Hifzur Rab has shown that the market measures or sets the real value of various goods and services using the medium of exchange as the unit of measure, i.e., the standard or yardstick for measuring wealth. There is no alternative to the mechanism the market uses to set, determine, or measure the value of various goods and services. Determination of price is an essential condition for justice in exchange, efficient allocation of resources, economic growth and welfare. The most important and essential function of a medium of exchange is to be widely acceptable and to have relatively stable purchasing power (real value). Therefore, it should possess the following characteristics:
- Value common assets
- Common and accessible
- Constant utility
- Low cost of preservation
- High market value in relation to volume and weight
- Resistance to counterfeiting
To serve as a measure of value, a medium of exchange, be it a good or a signal, needs to have constant inherent value of its own, or it must be firmly linked to a definite basket of goods and services. On this view, it should have constant intrinsic value and stable purchasing power. Gold was long popular as a medium of exchange and store of value because it was inert, convenient to move (even small amounts of gold have considerable value), and relatively constant in value.
Some critics of the prevailing system of fiat money argue that fiat money is the root cause of the continuum of economic crises, since it leads to the dominance of fraud, corruption, and manipulation precisely because it does not satisfy the criteria for a medium of exchange cited above. Specifically, prevailing fiat money floats freely, and the market sets its value, which continues to change as the supply of money changes relative to the economy's demand for it. Increasing the free-floating money supply relative to the needs of the economy reduces the quantity of the basket of goods and services to which the market links it and which gives it purchasing power. Thus it is not a unit or standard measure of wealth, and its manipulation impedes the market mechanism by which just prices are set. That leads to a situation where no value-related economic data are just or reliable. On the other hand, Chartalists claim that the ability to manipulate the value of fiat money is an advantage, in that fiscal stimulus is more easily available in times of economic crisis.
Although the unit of account must be in some way related to the medium of exchange in use – e.g., coinage should be in denominations of that unit, making accounting much easier to perform – it has often been the case that media of exchange have no natural relationship to that unit and must be 'minted' or in some way marked as having that value. There may also be variances in the quality of the underlying good, which may not have a fully agreed commodity grading. The difference between the two functions becomes obvious when one considers the fact that coins were very often 'shaved' – precious metal removed from them – leaving them still useful as an identifiable coin in the marketplace, for a certain number of units in trade, but no longer containing the quantity of metal supplied by the coin's minter. It was observed as early as Oresme and Copernicus, and then in 1558 by Sir Thomas Gresham, that bad money drives out good in any marketplace (Gresham's Law states "Where legal tender laws exist, bad money drives out good money"). A more precise definition is this: "A currency that is artificially overvalued by law will drive out of circulation a currency that is artificially undervalued by that law." Gresham's law is therefore a specific application of the general law of price controls. A common explanation is that people will always keep the less adulterated, less clipped, less sweated, less filed, less trimmed coin, and offer the other in the marketplace for the full units for which it is marked. Inevitably it is the bad coins that are proffered and the good ones that are retained.
The fact that a bank or mint has always been able to issue a medium of exchange marked for more units than it is worth as a store of value is, arguably, the basis of banking. Central banking is based on the principle that no medium needs more than the guarantee of the state that it can be redeemed for payment of debt as "legal tender"; thus, all money equally backed by the state is, within that state, equally good money. As long as that state produces anything of value to others, its medium of exchange has some value, and its currency may also be useful as a standard of deferred payment among others, even those who never deal with that state directly in foreign exchange.
Of all functions of money, the medium of exchange function has historically been the most problematic because of counterfeiting, the systematic and deliberate creation of bad money with no authorization to do so, leading to the driving out of the good money entirely.
Other functions rely not on recognition of some token or weight of metal in a marketplace, where the time to detect a counterfeit is limited and the benefits of successfully passing one off are high, but on more stable long-term social contracts: one cannot easily force a whole society to accept a different standard of deferred payment, or require even small groups of people to uphold a floor price for a store of value, still less re-price everything and rewrite all accounts to a unit of account (the most stable function). Thus it tends to be the medium of exchange function that constrains what can be used as a form of financial capital.
It was once common in the United States to widely accept a check (cheque) as a medium of exchange, several parties endorsing it perhaps multiple times before it would eventually be deposited for its value in units of account, and thus redeemed. This practice became less common as it was exploited by forgers and led to a domino effect of bounced checks – a forerunner of the kind of fragility that electronic systems would eventually bring.
In the age of electronic money it was, and remains, common to use very long strings of difficult-to-reproduce numbers, generated by encryption methods, to authenticate transactions and commitments as having come from trusted parties. Thus the medium of exchange function has become wholly a part of the marketplace and its signals, and is utterly integrated with the unit of account function, so that, given the integrity of the public key system on which these are based, they become to that degree inseparable. This has clear advantages – counterfeiting is difficult or impossible unless the whole system is compromised, say by a new factoring algorithm. But at that point, the entire system is broken and the whole infrastructure is obsolete – new keys must be re-generated and the new system will also depend on some assumptions about difficulty of factoring.
Due to this inherent fragility, which is even more profound with electronic voting, some economists argue that units of account should never be abstracted or confused with the nominal units or tokens used in exchange. A medium is just that, a medium, and should not be confused with the message.
- Mankiw, N. Gregory (2007). Macroeconomics (6th ed.), Chapter 2. New York: Worth Publishers. pp. 22–32. ISBN 0-7167-6213-7.
- Krugman, Paul & Wells, Robin, Economics, Worth Publishers, New York (2006)
- Abel, Andrew; Bernanke, Ben (2005). Macroeconomics (5th ed.), Chapter 7. Pearson. pp. 266–269. ISBN 0-201-32789-9.
- William Stanley Jevons (1875). Money and the Mechanism of Exchange, Chapter 1. http://oll.libertyfund.org/titles/jevons-money-and-the-mechanism-of-exchange
- William Stanley Jevons (1875). Money and the Mechanism of Exchange, Chapter 4. http://oll.libertyfund.org/titles/jevons-money-and-the-mechanism-of-exchange
- T.H. Greco (2001). Money: Understanding and Creating Alternatives to Legal Tender. White River Junction, Vt: Chelsea Green Publishing. ISBN 1-890132-37-3.
- Hifzur Rab (2009) 'Freedom, Justice and Peace Possible Only with Correct wealth measurement with a Unit of Wealth as Currency' HIJSE 26:1, 2010
- Hifzur Rab (2006) 'Economic Justice in Islam' AS Noordeen, Kuala Lumpur, Malaysia.
- Jones, Robert A. "The Origin and Development of Media of Exchange." Journal of Political Economy 84 (Nov. 1976): 757-775.
This article, "Medium of exchange", is from Wikipedia; the list of its authors can be seen in its page history.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.946681797504425,
"language": "en",
"url": "https://group.bnpparibas/en/news/credit-rating-agencies-rate-companies-countries",
"token_count": 714,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.083984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ac6a401f-e0f5-46e8-813d-dbdfcfd2843b>"
}
|
It’s big news when a country or major company’s credit rating is downgraded. But much less is said about why and how credit rating agencies operate. What are these agencies? And why do we need them?
What are credit rating agencies?
A credit rating agency is a private company whose purpose is to assess the ability of borrowers, either governments or private enterprises, to repay their debt. To do this, these agencies issue credit ratings based on the borrower’s solvency.
Since 2011, these independent companies have had to obtain certification from the European Securities and Markets Authority (ESMA) in order to operate in Europe. ESMA performs regular inspections to ensure that the rating agencies are following European regulations and the authority can issue sanctions for any infractions.
Who finances credit rating agencies?
Credit rating agencies collect a fee either from the entity seeking to receive a rating (business or government) or from the entity seeking to use and analyze the rating (the financial analysis department of a bank, financial institution, etc.).
How are credit ratings established and used?
To evaluate the solvency of borrowers, rating agencies issue credit ratings corresponding to the credit risk represented by the borrower, or in other words, the risk that the borrower will default on the loan. Credit ratings place this risk on a scale ranging from low risk (investment category) to high risk (speculative category).
Though there is no standard scale, credit ratings are typically expressed by letters corresponding to the potential risk, with the highest rating represented by AAA and the lowest by C or D, depending on the agency. In addition to the letter grade, a credit rating might also include an "outlook" that describes how the rating may change in the future. For example, a credit rating with a negative outlook may indicate a future downgrade.
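As a rough illustration only (actual agency scales differ in their notches and notation, so this mapping is an assumption), the split between the investment and speculative categories can be pictured like this:

```python
# Simplified illustration of a letter-grade scale; real agency scales
# differ in notches and notation (e.g. AA+ / Aa1), so this is an assumption.
INVESTMENT_GRADE = ["AAA", "AA", "A", "BBB"]
SPECULATIVE_GRADE = ["BB", "B", "CCC", "CC", "C", "D"]

def category(rating: str) -> str:
    if rating in INVESTMENT_GRADE:
        return "investment category (lower credit risk)"
    if rating in SPECULATIVE_GRADE:
        return "speculative category (higher credit risk)"
    raise ValueError(f"unknown rating: {rating}")

print(category("AAA"))  # investment category (lower credit risk)
print(category("B"))    # speculative category (higher credit risk)
```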
Each rating agency uses its own method to calculate its ratings. These methods take into account quantitative (financial data), qualitative (business strategy for a company or political stability for a country) and contextual criteria (changes in industry for a company or public finances for a country).
The final rating represents the credit agency’s evaluation of a borrower’s credit risk at a given time. It does not constitute investment advice.
What role do credit ratings play?
Along with other criteria, investors take credit ratings into account to help manage their portfolios. A rating downgrade indicates a greater risk for the lender. Depending on the sensitivity of the market, investors may require a higher return to protect against this risk, which in turn raises financing costs for the borrower.
Many investors give credit ratings a lot of consideration in their investment decisions. This has enabled credit rating agencies to play a central role in financial markets – a role that some economists see as excessive.
Banks are also evaluated by credit rating agencies. BNP Paribas regularly receives high credit ratings. In May 2015, Moody's confirmed BNP Paribas' rating of A1, while Fitch Ratings confirmed its rating of A+, as did Standard & Poor's in July 2015.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9609695076942444,
"language": "en",
"url": "https://mmabbasi.com/2019/01/30/u-s-government-shutdowns-began-in-1790-its-the-american-way/",
"token_count": 2051,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.43359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:4e6c4caf-8ab0-48e6-bf5f-74a80f9660cf>"
}
|
US government shutdown – What is it?
Federal government agencies and programs rely on annual funding appropriations made by Congress. Congress must pass, and the President must sign, budget legislation for the next fiscal year (FY), consisting of 12 appropriations bills, one for each Appropriations subcommittee.
When the federal government’s fiscal year began October 1, Congress had enacted five of the 12 appropriations bills for FY 2019. Lawmakers have not yet passed full-year appropriations for the departments and agencies covered by the other seven appropriations bills. These programs had been running on continuing resolutions (CRs) that extend current funding levels.
A partial government shutdown began after midnight on December 21, the deadline specified in the most recent CR. In a “shutdown,” federal agencies must discontinue all non-essential discretionary functions until new funding legislation is passed and signed into law. Essential services continue to function, as do mandatory spending programs.
Here are the shutdowns (and arguable defaults) throughout United States history.
Some economists argue that the U.S. defaulted in 1790, when the federal government restructured bonds issued to fund the Revolutionary War.
Some economists argue that the U.S. defaulted again in 1933, when Congress passed a bill making it illegal for creditors to demand payment in physical gold.
The government shuts down for ten days, from September 30 to October 11, when President Gerald Ford vetoes a funding bill for the U.S. Department of Health, Education and Welfare (HEW) as well as the U.S. Department of Labor. The reason he gives for the veto is out of control spending. By October 1, Congress, controlled by the Democrats, votes to override the veto. However, it takes until October 11 to agree on a resolution on funding gaps in all parts of the federal government.
The government shuts down for 12 days, from September 30 to October 13, because the fight over abortion in the House and Senate, both controlled by Democrats, creates a funding gap in the Department of Labor and HEW. The House wants to keep the ban on using Medicaid dollars to pay for abortion, except for when the mother’s life is at risk. However, the Senate wants to extend the exceptions to rape or incest. A temporary agreement is made on October 13 so that the shutdown can end while Congress spends more time negotiating.
When the temporary agreement made on October 13 expires, the government shuts down for eight days beginning on Halloween. This shutdown lasts until November 9 when President Jimmy Carter signs a second funding agreement to allow Congress more time to negotiate.
The government shuts down one more time in 1977 when the House refuses to budge on the issue of Medicaid funding abortions for any other reason other than the mother’s life is at risk. This shutdown lasts for eight days, from November 30 until December 9, when finally a deal is made. In the end, the Senate wins and Medicaid is allowed to pay for abortions in cases of rape, incest and if the mother’s health is at risk.
The government shuts down for 18 days, from September 30 until October 18. This shutdown is caused by President Carter vetoing a defense bill and a public works appropriations bill. Carter cites wasteful spending as the reason for his vetoes.
The government shuts down for 11 days, from September 30 until October 12, when the House and Senate are once again at odds over abortion. The House wants to restrict federal spending on abortion to only cases where the mother’s life is at risk. The Senate wishes to keep abortion funding for cases of rape and incest as well as when the mother’s life is at risk. The House also pushes for a 5.5 percent pay increase for senior civil servants and members of Congress, a move the Senate opposes.
The government shuts down for two days, from November 20 to November 23, because President Ronald Reagan vetoes a spending bill that comes two billion dollars short of the cuts he wants. The Democratically-controlled House asks for pay raises for senior civil servants and for Congress. The House also asks for larger cuts in defense. A temporary bill is agreed on so Congress has more time to work out the issues.
The government shuts down for one day on September 30 because Congress passes the needed spending bills a day late. The government re-opens on October 2.
The government shuts down for three days, from December 17 to December 21. Both the House and Senate push for job program funding, but receive opposition from President Reagan. Meanwhile, the House opposes MX missile funding. In the end, Reagan drops the push to fund MX missiles and Congress drops their jobs plan. President Reagan agrees to fund the Legal Services Corporation in exchange for more aid to Israel.
The government shuts down for three days, from November 10 until November 14. This shutdown happens because President Reagan and the Democratic-controlled House are at odds. The House wants defense and foreign aid spending cuts, with increased funding for education. An agreement is made when the House reduces their desired amount of education funding and agrees to MX missile funding. The House gets its foreign aid and defense cuts as well as a ban on oil and gas leasing in federal wildlife refuges. An agreement to prohibit government employee health insurance from paying for abortions is also made.
The government shuts down for two days, September 30 to October 3, when Congress and President Reagan cannot agree on a deal. The House wants a crime-fighting package which President Reagan supports. However, the House also wants a water projects package which President Reagan does not support. A temporary extension is passed.
The government shuts down for one day on October 3 when the temporary extension expires. Congress agrees to drop its water projects package, but the crime-fighting package remains in the deal. Aid to the Nicaraguan Contras is also approved in this deal. The government re-opens on October 5.
The government shuts down for one day on October 16. The Democratic dominated House is once again at odds with President Reagan and the Republican-controlled Senate. The House makes several compromises in order to keep its welfare package in the deal. The government re-opens on October 18.
The government shuts down for one day on December 18 because the House and Senate want to cut funding to the Contras. They also want the Federal Communications Commissions to re-enforce the Fairness Doctrine. Congress drops the Fairness Doctrine issue in order to get non-lethal aid to the Contras. The government re-opens on December 20.
The government shuts down for three days, from October 5 to October 9, when President George H.W. Bush vetoes a continuing resolution because it does not include a deficit reduction package. The House does not override his veto, causing a shutdown. Congress then adds a deficit reduction package to its continuing resolution and the shutdown ends.
The government shuts down for five days, from November 13 to November 19, because President Bill Clinton vetoes a continuing resolution from a Republican-controlled Congress. The shutdown ends when Clinton agrees to a seven-year deadline to balance the budget and 75 percent funding for the next four weeks.
The government shuts down for 21 days, beginning on December 5, 1995 and ending on January 6, 1996. The shutdown occurs when the Republicans ask Clinton to propose his seven-year timetable budget with Congressional Budget Office numbers instead of his Office of Management and Budget numbers. Clinton refuses and eventually passes a compromise budget with Congress. The estimated total cost of the two 1995 shutdowns (26 days total) is more than $1.4 billion. (Adjusted for inflation, that’s $2.1 billion in 2013.)
On October 1, 2013, Congress fails to agree on a budget and pass a spending bill, causing the government to shut down. The failure to pass a bill is largely due to a standoff over the Affordable Care Act, also known as Obamacare. Already feeling pressure from the partial shutdown, Congress begins tense negotiations in an effort to pass a budget by the debt ceiling deadline on October 17, 2013. On October 16, 2013, the night before the debt ceiling deadline, both the House and Senate approve a bill to fund the government until January 15, 2014, and raise the debt limit through February 7, 2014. The last minute bill avoids a default and ends a 16-day government shutdown. It also ends the Republican standoff with President Obama over the Affordable Care Act. The partial shutdown takes $24 billion dollars out of the U.S. economy.
On January 19, 2018, Congress reaches a standstill over immigration that spills into federal budget negotiations. Congressional Democrats insist on addressing the matter of funding for DACA in the budget, while Republicans counter that the deadline on immigration isn’t until March. The shutdown lasts for three days, during which more than half a million government employees are furloughed. Though budget disputes continue for some time, Congress agrees to a short-term funding bill to reopen the government.
Toward the tail end of 2018, Congress approves small-scale appropriations bills to fund most of the government. President Donald Trump, facing pressure to fulfill campaign promises, requests $5.7 billion in funding to begin work on his proposed border wall and says he is willing to shut down the government to secure the funding. Senate Democrats counter with a smaller $1.6 billion for a border fence in a specific area of the border. Despite early speculation that the President would concede, he claims that he won't approve any budget that doesn't contain his requested border wall funding. Following the 2018 midterm elections, Democrats gain control of the House of Representatives in the middle of the standoff, and several public meetings between Democratic leadership and the president end in failure. The shutdown continues for several weeks, becoming the longest in U.S. history on January 12, 2019. During the shutdown, nearly a million federal employees stop receiving pay, and either work for no compensation or are furloughed. The shutdown comes to a close on January 25 with a three-week measure, during which time Congress and the President hope to conclude discussions and avoid another shutdown after the three weeks pass.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9477527737617493,
"language": "en",
"url": "https://www.attac.hu/2012/10/annamaria-artner-global-labour-market-and-profit-trends/",
"token_count": 760,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.30078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ae3646c5-efe7-4ef8-8eef-f630e4d51b90>"
}
|
2012.10.30. 13:28 – Világgazdasági Intézet (Institute of World Economics)
64.1% of the global population aged 15-74 was 'active', i.e. worked or was looking for work, in 2011. Their number decreased by 29 million over the four years of the crisis. The activity rate of the cohort under 25 is even smaller (48.7%). Moreover, this rate has been declining for many years, since long before the crisis. A 'neither-nor' generation (neither studying nor working) has emerged and is growing larger.
On the basis of the ILO definition, the number of unemployed grew by 27 million to 200 million. The rate of unemployment rose from 5.5% to 6%, and there is no chance of a decline before 2016. In addition, more than 400 million people will enter the global labour market in the coming decade and, if they are not employed, the number of people without jobs will jump to 600 million. Furthermore, 900 million workers live with their families below the US$2 per day poverty line, underscoring the need for improved labour standards.
At present, 3.1 billion people are employed, i.e. working at least a one-hour-per-week wage job. At the same time, 1.52 billion of them are living under so-called 'vulnerable' employment conditions, where wages are low and employment terms vague. This means that, according to European norms and values (an acceptable wage and safe employment), we can only speak of about 1.58 billion employed persons. This suggests a global unemployment rate of approximately 52%.
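The article does not show its working, but the 52% figure appears to follow from arithmetic along these lines; this is a reconstruction that treats everyone who is not adequately employed as unemployed:

```python
# Reconstruction of the implied arithmetic (assumption: the 52% counts
# everyone in the labour force who is not adequately employed).
employed = 3.10e9        # at least one hour of wage work per week
vulnerable = 1.52e9      # 'vulnerable' employment: low wages, vague terms
ilo_unemployed = 0.20e9  # unemployed under the ILO definition

labour_force = employed + ilo_unemployed          # ~3.3 billion
adequately_employed = employed - vulnerable       # ~1.58 billion
broad_unemployed = labour_force - adequately_employed

print(f"{broad_unemployed / labour_force:.0%}")   # ~52%
```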
The developed countries have the fastest rate of growth in unemployment and their rate of joblessness is and will be higher than in Asia and Latin-America up through the year 2016. The governments of the G20 have saved or created 21 million jobs up through 2010. But the austerity policies have weakened the resources available for employment and economic growth and led to the ‘austerity trap’. In the past decades the polarization of wages and the increase in ‘atypical’ or ‘precarious’ forms of employment have also gained ground in the developed countries and a now massive amount of poverty exists there, albeit to a much lesser extent than in the developing world.
During the crisis, the structural problem of production manifests as a structural problem of employment, or of skills and qualifications, on the job market. Rising labour skills should make quick adaptation to new professional, vocational and organizational requirements possible. However, the education of this skilled labour force should have begun many years ago – assuming, of course, that production was planned. This situation is reflected in the slow but steady growth of the share of long-run unemployment, in particular among the low-skilled.
In contrast, profits were rising rapidly in the developed countries prior to the crisis. After declining for two years in 2008-2009, the bulk of profits soared again in 2010-2011. The share of employee compensation in GDP, however, has been declining for decades. The direction of the distribution of new value is reflected even more strongly in the decline of the share of wages relative to net national income.
The profit-producing economy sheds live labour continuously; this is required for competitiveness. Since employment per unit of production is therefore decreasing, if we want employment to increase in the market economy we have to increase production at a continuously growing pace in order to keep creating more and more jobs.
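The standard growth-accounting approximation behind this argument is that employment growth is roughly output growth minus labour-productivity growth; the identity below is a textbook one, not taken from the article:

```python
# Textbook approximation: employment growth ~= output growth minus
# labour-productivity growth (growth rates as decimal fractions).
def employment_growth(output_growth: float, productivity_growth: float) -> float:
    return output_growth - productivity_growth

# If output per worker rises 2% a year, output must grow faster than 2%
# just to keep employment from falling.
for g in (0.01, 0.02, 0.03):
    print(f"output {g:.0%} -> employment {employment_growth(g, 0.02):+.0%}")
# output 1% -> employment -1%
# output 2% -> employment +0%
# output 3% -> employment +1%
```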
This continuously accelerating dance leads inevitably from time to time to crisis, the depreciation of capital and skills, unemployment, and increasing poverty.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9565893411636353,
"language": "en",
"url": "https://www.cfachicago.org/blog/cfa-society-chicago-book-club-blockchain-revolution-how-the-technology-behind-bitcoin-is-changing-money-business-and-the-world-by-don-tapscott-and-alex-tapscott/",
"token_count": 1434,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.134765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e2a6f604-dee1-4e6e-980e-3a2ec153abd2>"
}
|
Blockchains are simultaneously feared as a disruptive threat and lauded as a technological panacea, often with little understanding of how they actually work and often with little practical consideration of how they might be implemented. Don Tapscott and Alex Tapscott (father and son, respectively) assist the layperson in understanding how blockchains work and how they could be used in Blockchain Revolution (2016). The authors also, unfortunately, further delude the technological utopians by proposing seemly endless possible uses of blockchain technology while failing to address some of the practical considerations of implementation.
Starting with the positive, Blockchain Revolution is one of the first resources to both explain blockchain technology and to fully explore its potential uses beyond the now somewhat familiar bitcoin. Bitcoin is the digital currency created by “Satoshi Nakamoto” in 2009. Satoshi Nakamoto was the name that was used in internet chat rooms and the like by a person or group of persons who claimed credit for creating the cryptocurrency. Soon after creating bitcoin, Satoshi Nakamoto disappeared and the identity or identities behind the name never have been revealed. Replete with a dubious creation story, bitcoin maintains a religious, cult-like following despite scant uptake and usage. The history of bitcoin has been told elsewhere, including in Paul Vigna’s and Michael J. Casey’s The Age of Cryptocurrency (2016), which was the book of the month for the CFA Society of Chicago’s Book Club in February 2016.
What hasn't been told widely until now are the other possible applications of the technology underlying bitcoin, the blockchain. A blockchain is nothing more than a ledger for recording transactions. The double-entry bookkeeping system that forms the foundation of modern accounting is widely attributed to Luca Pacioli, a Franciscan monk and mathematician who lived in the 14th and 15th centuries. There are earlier claims to the discovery, which probably have some merit. There are probably undiscovered cave scribbles that merchant cavepeople used to record exchanges of spears and mastodon parts. As long as there's been commerce, there's been the need to record exchanges, and ledgers in some hasty form have probably served the purpose from time immemorial. The difference between blockchains and most previous ledgers is that previous ledgers resided with a trusted central party to the transaction, whereas blockchain ledgers are distributed, meaning that every member of a network retains a copy of the ledger. When there is a new transaction in the blockchain, members of the network that maintain the ledger verify the authenticity of the new transaction, append it to the chain of all previous transactions, and transmit the updated chain to the network.
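A minimal sketch of such a hash-chained, append-only ledger might look like the following. This is illustrative only: it omits the consensus rules, digital signatures and peer-to-peer distribution that real blockchains add on top:

```python
# Minimal hash-chained ledger: each block stores the hash of its predecessor,
# so tampering with any earlier block breaks every link after it.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "prev_hash": "0" * 64, "tx": "genesis"}]

def append_tx(tx: str) -> None:
    prev = chain[-1]
    chain.append({"index": prev["index"] + 1,
                  "prev_hash": block_hash(prev),
                  "tx": tx})

def verify() -> bool:
    """Check that every block references the hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

append_tx("Alice pays Bob 5")
append_tx("Bob pays Carol 2")
print(verify())                        # True
chain[1]["tx"] = "Alice pays Bob 500"  # retroactively alter a transaction
print(verify())                        # False - the tampering is detected
```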
That distributed feature is what poses the disruptive threat to numerous businesses that are based on intermediating markets. For example, the Uber business model is based on a central party that sits between drivers and passengers, links the two, and takes a slice of the profits in transaction fees. Similarly, Airbnb disrupted the hotel industry by intermediating the market for lodging by linking people who have spare capacity in their homes with travelers looking for a place to stay. Blockchains could further disrupt the disruptors by allowing those parties to transact directly and take out the middleman. The Tapscotts mention several other less obvious areas where blockchains could be used to intermediate markets or keep records, such a land and property deeds, personal medical and financial information, stock and bond offerings, contracts, and wills. The authors even argue that intellectual property such as music and other artwork could benefit from blockchains by allowing artists to control access to their works and charge a royalty fee directly to end users when they access them.
The oldest and largest business based on intermediation is, of course, banks. The primary function of banks always has been to intermediate the market of lenders and borrowers. Without banks, potential borrowers could find themselves having to go door-to-door, pleading for loans and negotiating the amount and the terms of the loans with each potential borrower. Banks have always done that legwork primarily by taking deposits and issuing those deposited funds as loans. Add credit and debit cards, foreign exchange, settlement, custody, and clearing to the mix and banks make considerable profits just by sitting between market participants, recording transactions, and taking fees. The Tapscotts and other blockchain utopians contend that all such businesses based on market intermediation will become unnecessary and disappear due to blockchain technology.
The CFA Society Chicago Book Club members who met to discuss Blockchain Revolution during their March 2017 meeting agreed that the range of possible blockchain uses was enlightening but found the tone of the book overly optimistic and the treatment of implementation challenges lax. Take contracts, for example. The authors seem to presume that blockchains will obviate the need for traditional contracts and courts to enforce them. A hypothetical blockchain contract might look as follows: A stadium owner engages a vendor to fix the plumbing in his stadium. When a credible party who has access to verify that the work has been completed confirms successful completion of the work in the blockchain, payment is automatically distributed. But what if the stadium owner contests the quality of the work? Was the vendor merely to fix the plumbing so that it didn't leak, or was the vendor supposed to restore the plumbing to like-new status? What if the stadium owner was relying on the repairs being completed by a certain time so that he could host a concert? If the vendor doesn't complete the repairs, is it liable for the foregone revenue due to the stadium owner's inability to host the concert? These are not far-flung hypotheticals. Contract law deals with those issues constantly. It's not clear how blockchain-based contracts will be any better than paper-and-pencil contracts in terms of interpretation and adjudication.
Safety and security of blockchains is given similarly little treatment. Assuming for the sake of argument that the double key encryption technology that blockchains use makes them impenetrable to hackers, they could always just access blockchains using stolen passwords. And unlike when someone fraudulently uses a credit card, there is no legal department at bitcoin to contest the fraudulent transaction or IT department to reset the password.
Blockchain usage will undoubtedly increase. Even without blockchains, market intermediation for a variety of products and services has become increasingly automated, and that trend will continue whether by blockchains or by other means. The consequence for banks and financial institutions is that they won't be able to rely as much on the simple act of intermediation for revenue and will instead have to compete increasingly on knowledge and customer service. Even now, customers no longer need banks to purchase a variety of financial products and services, but retail and commercial customers still come to banks and financial institutions for sound advice and financial planning – and to reset their passwords.
Despite its shortcomings, Blockchain Revolution is an important contribution to understanding rapidly evolving blockchain technology, and hopefully others will step up to fill in the missing parts of the puzzle concerning how blockchains will be implemented and administered.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9729989171028137,
"language": "en",
"url": "https://www.greenoptimistic.com/nuclear-power-cost-renewables/",
"token_count": 329,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.275390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b0dfd5ec-0790-4297-8aeb-d3718e5f6dbd>"
}
|
Global investments in nuclear are now an order of magnitude less than those put into renewable technologies, and at least eight countries derive more energy from renewable sources than nuclear power.
It was predicted back in the 1990s that nuclear power would become the primary global energy source, but the unforeseen costs of maintaining and running giant nuclear power centers have impeded its adoption as a mainstream energy source. Smaller units called small modular reactors, thought at first to be easier to manufacture and install, were not developed within the originally promised time frame. Two companies were chosen by the Department of Energy to develop small modular reactors, and one of them has already cut its spending on the project.
Many nuclear facilities are reaching the last quarter of their forty-year lifespan and the costs of extending that lifespan range from $1bn to $5bn for each reactor. The 60 reactors that are currently being built have all fallen years behind schedule and gone over budget. Five of the building sites have been under construction for thirty years.
Meanwhile, eight countries now draw more power from renewable sources than they do from nuclear. Even Japan, a country strongly associated with nuclear power, has decided the cost is too high and in 2015 is not using nuclear at all, for the first time in four years.
Japan’s renewable energy sources are primarily non-hydro, like wind and solar energy. Germany, Brazil, India, Mexico, the Netherlands, and Spain also use more non-hydro renewables than nuclear power.
China, which did invest $9bn in nuclear power last year, spent even more on renewables to the tune of $83bn.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9542471766471863,
"language": "en",
"url": "https://www.mamamia.com.au/teaching-children-value-of-money/",
"token_count": 419,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1806640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:de5bb32e-f55b-4034-812c-2b2a6e80dd5a>"
}
|
In a world where we pay for things by tapping a plastic card (or even our phones), where apps help us manage everything from paying our bills to splitting dinner with our friends, and new currencies are popping up that don’t even have real notes or coins (hello bitcoin), how do we teach kids the value of money – when they can’t physically see it?
I’ve always been an advocate for open conversations about money in general, but now more than ever it’s essential that we start educating our kids when they’re young – where it comes from, how it works, what it does. So here are eight simple lessons to help you raise money-savvy kids:
1. It’s important to talk about money.
Recently another mum from my mothers' group told me that her five-year-old son thought they got paid to shop (I wish)! She would often take cash out while making purchases, for example at Woolies or when buying a Happy Meal – so for her son, he saw that they would turn up to the store, collect their items and then get given cash. A reasonable assumption, really! As parents, we know that kids interpret what they see (or don't see) literally, so openly talking about money can help to dispel those accidental misconceptions.
2. Money doesn’t grow on trees.
Instead of a regular allowance, reward your kids with money for doing chores or helping around the house. That all-important lesson – that money is earned, not just given – will lay the foundations of their working lives. And if you can, at least to start with, pay them with physical cash. A jar or piggy bank (preferably a clear one!) is a great starting point.
So maybe this is why kids are so clueless about money. One in seven Aussie kids think cash from the ATM is free money. On This Glorious Mess, we discuss why they just don’t seem to understand. Post continues after audio.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9574706554412842,
"language": "en",
"url": "https://www.reidcurry.net/cities-are-different/",
"token_count": 995,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:fcca7008-03ea-4798-8a76-d7ac856f70cd>"
}
|
“One number above all other metrics suggests a housing affordability and infrastructure emergency is pending. In New York City, one emergency is around 40,000 people living permanently in shelters, with a growing percentage of emotionally distressed and mentally ill people in the population. The number alone is less telling than realizing how and why it lasts for decades.
Homelessness has become a production function of cities.
In NYC, an additional 35,000 people, by official estimates, are homeless as transient or invisible. There are no rules or initiatives to stop these numbers from exponential growth." – Rex L. Curry
The history of cities is about how problems are defined and solved. The political skill of the dense city is different than other places. The city is regularly expected to create change that people will believe in, even though combinations of corruption and inspiration determine each change. The effectiveness of either or both is fixed in the experience of communities and demonstrated in neighborhoods. Inexplicably, is this what makes the celebration of cities so unique and important in advancing human thought? Here is one example.
From the 1960s to the early 90s New York City experienced rapid cultural and physical changes unlike any other. Initially, it confronted wholesale infrastructure deterioration coupled with a profound housing crisis, population loss, racism, double-digit inflation, a significant recession, and a nation embroiled in a foreign war. The city responded with improvements in race relations, education, and training. There was just enough of a federal response to prevent catastrophic collapse. Why? People with disadvantages and other people with extraordinary power found themselves face-to-face with the problem of being face-to-face.
The financial control board appointed to oversee the NYC credit crisis lasted a decade, ending in the mid-1980s. You know the old story: borrow $5,000 from a bank and don't pay it back, and you are in trouble; make that $500,000 and run into trouble, and you have a new partner. The concept of leverage is thematic in urban development. It includes knowing the power in the phrase, "people united can never be defeated."
The agreement struck was to build equity through housing rehabilitation, rent stabilization, education, and good employment. Community control of schools and ideas on creating neighborhood government matured along with the creation of community-based development corporations in partnership with charitable foundations and city agencies. They had one purpose: confront the issues directly before them and create a better city. It worked, but new problems without easy solutions dug into the city's flesh, as irreversible displacement and permanent homelessness became continuous, like a tide.
Displacement and Homelessness
The examination of the causes of displacement summarized in the UC Berkeley presentation offers some solutions and remedies at its conclusion. Zoning is not one of them. In fairness to Mike Bloomberg, his comment on the issue was, "Hey, this was the only game in town, so you're either in or out." To this extent, he is correct: the federal response to urbanization continues to allow the market to have its way until it doesn't, and the Great Recession of 2008 was not far off.
What is poorly understood is how low- and moderate-income people find housing in the suburbs for work and affordability by combining unrelated individuals and families in shared housing arrangements, staying as far under the radar as possible. The irony is shocking: zoning is used in the dense urban environment to include low- and moderate-income families in town, and used in the suburbs to keep them out.
Evidence of the failure to implement remedies for ongoing home displacement is in the number of individuals and households (largely women with children) estimated to be in distress. A detailed look at this is given in a brief article entitled A New America. It describes the beginnings of a federal role in housing production, infrastructure, and economic mobility due to the rise of displacement and of formal and informal homelessness in America. Here is a brief excerpt:
“When violent change hits a community, the question turns to the first responder’s capacity, then speed, followed by when (or if) the full weight of federal support occurs. If the change is massive but slow, as if following the logic of a cancer cell, a long-term sense of resilience is essential. Leverage for needed change will be found in these fast and slow forms of damage. The “small fires” response to sudden catastrophes in the national context continues to produce quality emergency management skills. Service providers and communication systems reach deeply from federal to local levels. The service of a national post-trauma framework is building strength because it is vital, but first-response systems are quickly overwhelmed without front-end steps in mitigation that can pull its people out of trouble at a steady and reliable pace along with outright prevention.”RLC
By Kevin L. Kliesen
Just as the proverbial piggy banks of our youth offered the promise of future riches (or a more enjoyable trip to the toy store!), a country that saves a relatively high percentage of its income will usually find that its living standards improve over time. Why? Because saving finances business investment, which is a key building block of long-term economic growth.
But with the nation's households currently saving only about 1 percent of their after-tax personal income (personal saving rate), many policy-makers are increasingly concerned about our future economic prospects. Indeed, if this low rate persists, it could lead to much lower growth rates of labor productivity and real incomes, which would mean slower growth of living standards over time.
Many people view the nation's total saving rate in terms of the personal saving rate. In reality, though, gross national saving is the sum of saving done by the three major economic sectors: households, businesses and the government.
Throughout most of the postwar period (1947-1999), gross private saving-the sum of household and business saving-remained at about 17.25 percent of GDP because increased business saving roughly offset the declining saving rates of households. Business saving averaged about 69 percent of gross private saving from 1947-1999, but rose to about 93 percent between 2003 and 2004.
The third component, government saving, usually was positive during the postwar period. That's only because state and local governments tend to run budget surpluses, while the federal government usually runs deficits. Although government saving at all levels is less today than, say, 30 or 40 years ago, government saving also tends to be a relatively small percentage of the gross national saving rate-even during periods of budget surpluses.
By adding these three components, we find that gross national saving (GNS) averaged a little more than 15 percent of GDP between 2000 and 2004. Although 15 percent is a modest fall from the nearly 17 percent average rate seen between 1983 and 1999, it is more than five percentage points lower than the 20.3 percent average GNS rate that prevailed between 1947 and 1982.
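As a back-of-the-envelope illustration of this identity, here is a minimal Python sketch. The sector shares below are made-up numbers, not the article's data; they are chosen only so the components sum to roughly the 15 percent average cited for 2000–2004.

```python
# Gross national saving as the sum of its three sectors.
# Hypothetical shares, each expressed as a percent of GDP.
household_saving = 1.5
business_saving = 14.0
government_saving = -0.5

gross_national_saving = household_saving + business_saving + government_saving
print(gross_national_saving)  # -> 15.0, near the 2000-2004 average share of GDP
```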
Foreign saving recently has become an important source of investment funds for our economy, helping to keep gross national investment rates nearly constant (as a share of GDP) throughout most of the postwar period. From 2000 to 2004, net foreign capital inflows (for example, foreign purchases of U.S. stocks or bonds) averaged 4 percent of GDP.
Some commentators are alarmed by this development; they believe U.S. residents must save more and rely less on foreign sources of investment funds. To others, the upsurge in foreign-capital flows to the United States is a measure of a fundamentally sound economy that offers high (risk-adjusted) rates of return. Regardless, foreign purchases of U.S. dollar-denominated assets have helped to lower long-term interest rates, which have been a boon to the U.S. housing industry and other producers of interest-sensitive products, e.g., cars and trucks.
Low interest rates are only one reason why American households have been saving less. Another reason is that household wealth has increased in recent years, arising largely from the rising values of stocks, bonds, and house prices. Evidently, many households also have viewed this increased wealth as permanent and have decided to spend part of it by saving less out of their current wage income.
Ultimately, it's hard to escape the conclusion that current U.S. saving rates are low by historical standards and may need to be raised significantly. Why? Because the United States and most of the world's developed countries will soon be in a situation where the percentage of retirees-those who are drawing down their accumulated saving-will begin to rise relative to workers. Without sharp increases in taxes and/or reductions in benefits, it is likely that government budget deficits also will rise sharply, further lowering the national saving rate.
Modelling the costs and benefits of Agroforestry systems
15 Oct 2020
CSIRO, November 2018 - Daniel Mendham
Integrating trees into agricultural production systems can provide a range of benefits that may add substantial value to farm enterprises. Here, the Imagine bioeconomic model was applied to agroforestry systems at four sites in Tasmania.
The aim was to understand the costs and benefits of a range of P. radiata agroforestry system configurations (2-row belt, 5-row belt, 10-row belt, and 2 x 2-row belts) integrated with a livestock grazing enterprise, in comparison with the returns from either full pasture or full trees.
The benefits from the trees that are accounted for by the Imagine model include the timber, amenity and carbon values, as well as the additional shelter benefits that trees provide to the adjacent agriculture.
The battle between original pharmaceuticals and their generic counterparts has always been about balancing public interest with a rigorous national intellectual property regime.
This balancing act is particularly important but difficult for developing countries, such as Malaysia.
Original pharmaceutical drugs are generally protected by patents, which are essential to ensure a return on investment (in research and development, clinical studies, clinical trials), through a fixed-term market monopoly. On average, it can take between seven and 10 years for original drugs to get from the laboratory to the market. By comparison, a granted Malaysian patent confers a non-extendable 20-year monopoly, calculated from the filing date of the patent application.
Patent owners argue that, due to delays in patent prosecution and the process of regulatory approvals, this 20-year monopoly is insufficient compensation for the substantial investment committed to bring original drugs to the market. Unlike in the US or Europe, there are no provisions in Malaysian patent law for the grant of either supplementary protection certificates (SPC) or patent term extensions (PTE) to patents.
Keywords: pharmaceuticals, generic drugs, Malaysia Patent Act
Lesson Objectives:
- How to purchase using the perpetual inventory system
- Types of purchase transactions
- Recording journal entries for purchases
First let's take a look at what type of transactions are associated with purchases:
Purchases - This is the straightforward transaction of purchasing inventory.
Shipping costs - The seller or purchaser pays a shipping carrier for the item to be delivered.
Purchase return - When inventory is returned for defects, missing pieces or other issues.
Discounts - The company gives customer a discount on the purchase price.
What each of these transactions has in common is a connection to the merchandise inventory account, which will come up in each example we review. Keep in mind that these entries apply only under the perpetual inventory system.
Let's say a company purchased inventory costing $4,000 (with 6 percent tax included) and had it delivered by overnight shipping. The merchandise inventory account would be debited and accounts payable credited for $4,000.
When you get into intermediate and advanced accounting courses, you will find that the taxes are recorded as a separate transaction, but we will keep it simple for the purpose of introducing the concept.
After the journal entry for the merchandise is recorded, we must record the shipping costs. The two types of payment methods for shipping include FOB shipping and FOB destination. The acronym FOB stands for free on board.
When the purchaser pays for the shipping costs, it is considered FOB shipping terms. In contrast, when the seller pays for shipping, it is FOB destination.
The best way to remember these terms is that when the seller pays for shipping costs, they are trying to get the merchandise to the destination. Normally, when a customer purchases merchandise, they specify in their purchase order or terms and conditions document which shipping terms they want to use.
Let's say the shipping cost for a shipment of goods is $80. Under FOB shipping terms, the $80 would be recorded as a debit to merchandise inventory and a credit to accounts payable. Under FOB destination, the purchaser records no journal entry for the shipping because the seller bears the cost.
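Here is a minimal sketch of the purchaser's side of this FOB logic. The (account, side, amount) tuple layout is an illustrative convention for this lesson, not a standard accounting API.

```python
def purchaser_shipping_entry(cost, terms):
    """Journal entry the purchaser records for freight, if any.

    Under FOB shipping the purchaser capitalizes the freight cost into
    inventory; under FOB destination the seller pays, so no entry is made.
    """
    if terms == "FOB shipping":
        return [("Merchandise Inventory", "debit", cost),
                ("Accounts Payable", "credit", cost)]
    return []  # FOB destination: no purchaser entry

print(purchaser_shipping_entry(80, "FOB shipping"))
print(purchaser_shipping_entry(80, "FOB destination"))
```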
Now let's review the journal entry for purchase returns, when the company is returning inventory to the seller.
This is one of the easiest transaction types to journalize as it is the reverse of a purchase entry. Instead of debiting the merchandise inventory and crediting the accounts payable, a return is recorded by debiting the accounts payable and crediting merchandise inventory.
Finally, the last type of purchase entry is discounts which are broken down into two types: quantity discounts and purchase discounts.
- Quantity discounts are given for purchasing inventory in bulk. Normally, a percentage discount is offered when the company purchases a certain amount of the product.
- Purchase discounts come up more frequently and apply when payment is made early, in cash. An example would be the seller offering a 3 percent discount if the full balance is paid within 10 days. This type of discount is often written in the form 3/10 n/30, where n/30 means the full (net) price is due within the standard 30 days.
For example, suppose a discount of $40 was offered on the inventory purchase that had a full price of $4,000. First, the purchase entry would be recorded as a debit to merchandise inventory and a credit to accounts payable for $4,000. To show the discount, the accounts payable would be debited for $4,000 and cash credited for $3,960. You can't leave the entry unbalanced, so merchandise inventory would need to be credited for $40.
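Tying the purchase and discount entries together, here is a small sketch that replays the $4,000 example and verifies that each entry balances. It reuses the same illustrative tuple convention as the shipping sketch above.

```python
# Initial purchase on account
purchase = [("Merchandise Inventory", "debit", 4_000),
            ("Accounts Payable", "credit", 4_000)]

# Payment within the discount period: $40 off, so $3,960 in cash,
# with the discount credited against Merchandise Inventory
payment = [("Accounts Payable", "debit", 4_000),
           ("Cash", "credit", 3_960),
           ("Merchandise Inventory", "credit", 40)]

def balanced(entry):
    """Debits must equal credits in every journal entry."""
    debits = sum(amt for _, side, amt in entry if side == "debit")
    credits = sum(amt for _, side, amt in entry if side == "credit")
    return debits == credits

assert balanced(purchase) and balanced(payment)
print("Both entries balance.")
```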
We've covered the four types of purchasing transactions and examples of how they are recorded as journal entries. In the next lesson, we will talk about journal entries for different types of sales.
The Concept of Digitalization and Its Impact on the Modern Economy
A U Mentsiev, M V Engel, A M Tsamaev, M V Abubakarov, R S-E Yushaeva
Available Online 17 March 2020.
- DOI: https://doi.org/10.2991/aebmr.k.200312.422
Abstract: Imagining life in the past, without the facilities available to people today, survival seems almost too hard. We live in an era where our dependency upon technology in routine tasks has driven us to a point where we take most of its gifts for granted. Technology has transformed our way of living, including but not limited to food, education, communication, transportation, entertainment, and medical care. Our favorite grocery stores and restaurants are available to provide us food of our choice at all times. Virtual classrooms and the huge amount of content available online have made attaining an education convenient. Our friends and family may be far away from us, but they are only one click away. We book a cab from home for commuting instead of walking down the street to catch one. We carry a complete entertainment package with us in our pockets and bags. Wearable gadgets assist us in medical attention and care. While the list goes on and on, there is one common factor: digitalization. This paper reveals the concept of digitalization and its impact on the modern economy.
- Open Access
- This is an open access article distributed under the CC BY-NC license.
Cite this article:
Mentsiev, A U; Engel, M V; Tsamaev, A M; Abubakarov, M V; Yushaeva, R S-E (2020). "The Concept of Digitalization and Its Impact on the Modern Economy." In: International Scientific Conference "Far East Con" (ISCFEC 2020). Atlantis Press, pp. 2960–2964. https://doi.org/10.2991/aebmr.k.200312.422
Using data from the Job Openings and Labor Turnover Survey, this article takes a unique, simultaneous look at job openings, hires, and separations for individual industries and then categorizes industries as having high or low job openings and high or low hires. Studying the data items in relation to each other helps point out the differences among industries: some have high turnover, some have low turnover, some easily find the workers they need and hence have few job openings at the end of the month, and some need more workers than they can find. The author also includes fill rates and churn rates by industry and looks briefly at earnings by industry. The analysis of labor turnover patterns by industry may prove useful to jobseekers and career changers as well as employers.
Where should new graduates look for jobs? What about career changers? In what direction should career counselors and job placement programs direct clients? Which statistics can government officials use to help determine how to stimulate job growth? How do employers know if their turnover and worker demands are typical? Industries differ in employee turnover patterns, demand for workers, and ability to hire the workers they need. Understanding the labor turnover characteristics of the different industries may help jobseekers, those assisting them, employers, and government officials better focus their efforts.
Each data element in the Job Openings and Labor Turnover Survey (JOLTS)—job openings, hires, and separations—provides information about the labor market. However, when all three data elements are studied together, an even more informative picture emerges. The job openings data tell us about the unmet demand for workers; the hires and separations data provide information about the flow of labor. Industries with high turnover and low job openings, such as construction, are easily able to hire the workers they need. But industries with high turnover and high job openings, such as professional and business services, still have open jobs at the end of the month despite their hiring efforts during the month. Those industries with consistently moderate turnover and high unmet demand for labor, such as health care, may be a good option for career changers and students selecting a major, and officials who develop training programs and guide people into them can benefit from knowing which industries these are. Hence, analyzing the demand for and flow of workers by industry could prove helpful both to people looking for work and to those trying to help or hire them.
Studying job openings relative to hires reveals substantial differences among the industries. In some cases, hires (measured over the course of a month) are much greater than openings (on the last day of the month); in other cases, the gap between them is small.1 For a few select industries, openings exceed hires. Comparing industries by analyzing the number of openings or hires yields little information because industries vary greatly by size. Converting the number of hires and openings to rates—by dividing the number of hires or openings by the number of people employed in the industry—allows for meaningful cross-industry comparison. Figure 1 presents the hires and job openings rates by industry. For the United States (total nonfarm industries), the job openings rate averaged about 91 percent of the hires rate in 2014. In several industries, the hires rate far exceeded the average job openings rate: construction; arts, entertainment, and recreation; and retail trade. In several industries—for instance, mining and logging, professional and business services, and accommodation and food services—the hires rate exceeded the job openings rate to a lesser degree. The exceptional industries in which the job openings rate exceeded the hires rate were information; finance and insurance; health care and social assistance; federal government; and state and local government.
This first glance at the industries raises many questions. Why is there a large difference between the hires rate and job openings rate for some industries but not for others? What does a gap of any size mean, and is a gap good, bad, or neutral for the labor market and economy? Why do so few industries have a higher job openings rate than hires rate? Will a person looking for a job or looking to change fields have better success targeting an industry with high openings or with high hires or where openings exceed hires? Some of these questions can be answered rather easily, but others require further analysis. Before we can answer any questions, some definitions and background are needed.
The Bureau of Labor Statistics (BLS) has published JOLTS estimates for job openings, hires, quits, layoffs and discharges, other separations, and total separations by industry and region for each month from December 2000 forward.
For JOLTS to consider a job “open,” three requirements must be met: a particular job must exist, work can start within 30 days whether or not a suitable candidate is found, and the job must be actively advertised outside the establishment. The requirements reflect the survey’s goal of measuring current job demand in which a person seeking a job from outside the establishment has an opportunity to be hired. Job openings are a stock measure, with the count taken on the last business day of the month. Therefore, the job openings measurement represents positions that hires did not fill during the month.
The hires data are designed to capture all employer–employee relationships established during the month. A hire occurs each time an employer brings on any worker, including part-time, full-time, and seasonal. Also included are rehires of people who had previously worked for the same establishment. The hires count is a flow measure that sums all hires that occurred during the month.
Separations data are similar to those of hires in that separations include all instances in which an employer–employee relationship ended during the month. JOLTS breaks out separations into voluntary quits, involuntary layoffs and discharges, and other separations (retirements, transfers, and separations due to death or disability).
For hires and separations, we convert the levels (counts) to rates by dividing the level by the employment and multiplying by 100.2 Therefore, the rates show hires or separations during the month as a percentage of employment. The job openings rate is calculated slightly differently, with the job openings level divided by the sum of job openings and employment, times 100. The job openings rate indicates what percentage of all potential jobs—filled or unfilled—remained unfilled at the end of the month.
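To make these rate definitions concrete, here is a minimal sketch of the calculations in Python. The function names and the sample levels are illustrative assumptions chosen to roughly match the 2014 national averages discussed below; they are not JOLTS microdata or BLS code.

```python
def hires_rate(hires, employment):
    """Hires during the month as a percentage of employment."""
    return hires / employment * 100

def separations_rate(separations, employment):
    """Separations during the month as a percentage of employment."""
    return separations / employment * 100

def job_openings_rate(openings, employment):
    """Unfilled jobs as a percentage of all potential jobs, filled or unfilled."""
    return openings / (openings + employment) * 100

# Illustrative monthly levels in thousands (assumed, not published figures)
employment, hires, openings = 138_000, 4_830, 4_560
print(round(hires_rate(hires, employment), 1))           # -> 3.5
print(round(job_openings_rate(openings, employment), 1)) # -> 3.2
```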
The above definitions and reference periods already answer one question: Why is it unusual for job openings to exceed hires? Given that the job openings level is a count of jobs left unfilled on the last day of the month, yet hires is a cumulative count of all employees hired throughout the month, openings outnumbering hires is noteworthy. Until 2014, only two industries had a higher job openings rate than hires rate; in 2014, however, 10 of the 18 industries had a higher job openings rate than average hires rate.
This paper focuses on the year 2014, the most recent full year for which data are available. Because JOLTS does not seasonally adjust the data for every industry, this article uses not seasonally adjusted data and calculates monthly averages for each year by industry. For the remainder of this article, “rate” will be used as a succinct way to refer to the average monthly rate for the year 2014 unless otherwise noted.
For the United States (total nonfarm industries), the average hires rate for 2014 was 3.5 percent and the average job openings rate was 3.2 percent. The individual industries vary widely around these averages. Those industries which differ most noticeably can be grouped into four categories: (1) high hires and high job openings, (2) low hires and high job openings, (3) high hires and low job openings, and (4) low hires and low job openings. Figure 2 graphically represents the hires rate and job openings rate by industry along with the employment level of each industry. The hires rate is along the horizontal axis, the job openings rate is along the vertical axis, and the size of each industry bubble reflects the level of industry employment.
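The quadrant groupings in figure 2 can be reproduced mechanically. The sketch below, using rates quoted in the text and the 2014 total nonfarm averages as thresholds, is one plausible way to classify the industries; the cutoffs and the short industry list are assumptions for illustration.

```python
US_HIRES, US_OPENINGS = 3.5, 3.2  # 2014 total nonfarm average rates

# (hires rate, job openings rate) pairs quoted in the text
industries = {
    "Professional and business services": (5.3, 4.4),
    "Accommodation and food services": (5.8, 4.5),
    "Health care and social assistance": (2.7, 3.9),
    "Construction": (5.1, 2.0),
}

for name, (h, o) in industries.items():
    quadrant = (("high" if h > US_HIRES else "low") + " hires, " +
                ("high" if o > US_OPENINGS else "low") + " openings")
    print(f"{name}: {quadrant}")
```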
High hires and high job openings. Industries with a high hires rate and a high job openings rate in 2014 were professional and business services (5.3, 4.4)3 and accommodation and food services (5.8, 4.5). The simultaneous high rates indicate that, in spite of strong hiring, even more employees are needed.
The professional and business services sector comprises services such as legal, accounting, architecture, engineering, computer, and temporary help agencies. The professional and business services industry is considered by economists to act as an early warning sign of an upcoming recession or as an early indicator of recovery.4 At the beginning of a recovery, when employers need more workers but are not ready to commit to hiring new staff, they may hire temporary workers.5 Employment services—which includes temporary help firms—was about 18 percent of professional and business services employment in 2014. Average monthly employment in 2014 in employment services was 38 percent higher than in 2009, which is when the recession ended. With employment of over 19 million and a high job openings rate, the professional and business services industry provides vast opportunities for jobseekers.
The accommodation and food services industry has a high turnover of workers and is affected by changes in both the season and the business cycle. The high hires reflect replacement hiring due to the high turnover, as well as seasonal hiring, and also expansion with the improving economy. The high job openings in accommodation and food services indicate an industry that is experiencing modest growth, with employment rising by just over 3 percent from 2013 to 2014.
Low hires and high job openings. These industries need workers but are not hiring them for one reason or another: information (2.8, 3.6), finance and insurance (2.2, 3.7), and health care and social assistance (2.7, 3.9). These industries may not be able to find qualified workers or they might not be offering a wage high enough to attract new employees. These industries may be of interest to jobseekers with the right skills and to job training programs preparing people for available jobs.
The reasons companies in the information industry and the finance and insurance industry need workers are not immediately obvious. The information industry includes broadcasting (radio and television), motion pictures and video, publishing (magazines, books, and newspapers), software publishing, and telecommunications. The JOLTS sample size does not allow for a finer level of industry detail to see which sections of the information industry have unmet demand. However, according to the BLS Occupational Outlook Handbook,6 many computer-related occupations are projected to grow faster than average. Particularly in the information industry, employment in computer occupations is projected to rise in software publishers and other information services. Finance and insurance includes banking (including mortgage processing), financial investment, insurance, and trusts and funds (pensions, trusts, and estates). As baby boomers age, they will need these services even more, and boomers employed in these careers will need to be replaced as they retire.7 Looking again at the Occupational Outlook Handbook, we find that the numbers of financial analysts and personal financial advisors are projected to grow faster than average and much faster than average, respectively, in 2012–22.
The health care industry had an especially high demand for workers, with employment of over 18 million and an average monthly job openings rate of 3.9 percent in 2014. The Bureau of Labor Statistics projects 5.0 million new jobs in health care between 2012 and 2022. The compound annual rate of change, 2.6 percent, is tied only with that of construction for highest of all industries. (See table 1.) Health care workers will be needed because of the aging of the population: the number of people needing health care will increase, as will the number of workers needed to replace retiring workers. Many of these jobs provide good pay, job security, and also job portability. The BLS Employment Projections program estimates that over 296,000 physicians and surgeons and over 1 million registered nurses will be needed in the 2012–22 timespan to fill jobs because of occupational growth and replacement hiring. Many organizations, including the federal government, are offering college scholarships and grants in order to recruit people into the field of nursing. Many jobs in the health care industry require a doctoral or professional degree, such as pharmacists and surgeons. Because these occupations require many years of education, even if more people begin training, the supply may lag the demand. However, not all upcoming jobs related to health care require a 4-year college degree or professional degree. Dental hygienist and nuclear medicine technologist jobs typically require only an associate’s degree for entry. Phlebotomist and dental assistant positions typically require only some postsecondary study or on-the-job training for entry. Personal care aides and home health aides do not even need a high school diploma for entry, yet 1.3 million new jobs and jobs due to replacement are predicted for aides in the 2012–22 timeframe. Students selecting a field or career changers looking for retraining would find plentiful opportunities in health care.
Table 1. Job openings, fill, hires, separations, and churn rates, and projected employment change, by industry, 2014

| Industry | Job openings rate | Fill rate = hires/job openings | Hires rate | Separations rate | Churn rate = hires rate + separations rate | Projected annual rate of employment change, 2012–22 |
|---|---|---|---|---|---|---|
| Mining and logging | 3.1 | 1.2 | 3.8 | 3.4 | 7.2 | 1.2 |
| Durable goods manufacturing | 2.3 | 0.9 | 2.0 | 1.8 | 3.8 | -0.3 |
| Nondurable goods manufacturing | 2.4 | 1.0 | 2.4 | 2.3 | 4.7 | -0.8 |
| Transportation, warehousing, and utilities | 3.1 | 1.1 | 3.5 | 3.2 | 6.7 | 0.5 |
| Finance and insurance | 3.7 | 0.6 | 2.2 | 2.1 | 4.2 | 0.8 |
| Real estate and rental and leasing | 2.9 | 1.1 | 3.2 | 3.0 | 6.2 | 1.2 |
| Professional and business services | 4.4 | 1.2 | 5.3 | 5.0 | 10.3 | 1.8 |
| Health care and social assistance | 3.9 | 0.7 | 2.7 | 2.5 | 5.3 | 2.6 |
| Arts, entertainment, and recreation | 3.2 | 2.0 | 6.7 | 6.5 | 13.2 | 1.1 |
| Accommodation and food services | 4.5 | 1.2 | 5.8 | 5.5 | 11.3 | 0.9 |
| State and local government | 2.0 | 0.7 | 1.4 | 1.4 | 2.8 | 0.5 |

Source: U.S. Bureau of Labor Statistics.
High hires and low job openings. Two different scenarios describe industries with a high hires rate and low job openings rate: an industry could have a lot of turnover (separations with replacement hires) and an easy time finding new employees to fill open jobs so that few jobs are left open by the end of the month, or an industry could be expanding but is able to find the needed workers to fill the open jobs by the end of the month. These expanding industries could have any rate of turnover, from low to high. Three industries are of the high-hires-and-low-job-openings nature: construction (5.1, 2.0), retail trade (4.8, 3.1), and arts, entertainment, and recreation (6.7, 3.2).
Construction is the one industry in which hires are always high and job openings are always low. Turnover is high because workers can move from site to site and employer to employer. For example, construction workers who are trained in framing a house or operating construction equipment can apply that skill either at new worksites their employer moves them to or at worksites for different construction companies as they change employers. Unfilled openings are low because of employers’ ability to quickly find the workers they need. As already mentioned, construction has a 2.6 percent compound annual rate of change for 2012–22. According to BLS projections, the most rapid growth occupations in construction are mechanical insulation workers; helpers of brickmasons, blockmasons, stonemasons, and tile and marble setters; and segmental pavers. The construction industry has a wide variety of occupations. Even though construction is an average-sized industry, employing 6.1 million people on average in 2014, construction may provide jobs for many jobseekers given its predicted high rate of growth in the near future.
Two other industries with high hires and relatively low job openings are retail trade (4.8, 3.1) and arts, entertainment, and recreation (6.7, 3.2). Because retail experience can be applied at any number of retail establishments that are hiring, high separations in retail trade are quickly followed by high hires; the result is a low number of open jobs at the end of the month. Jobseekers tend to have success finding a job in retail because retail trade is a very large industry—with an employment level exceeding 15 million people on average in 2014—and high turnover generates a large amount of replacement hiring. Arts, entertainment, and recreation has the highest hires rate among all industries because of the high turnover and resulting replacement hiring. Also, the high hires cause arts, entertainment, and recreation to have the largest difference between the hires rate and job openings rate of any industry.
Low hires and low job openings. A number of industries fell into the category of low hires and low job openings in 2014. Very little labor market activity occurred in the following industries: durable goods manufacturing, nondurable goods manufacturing, wholesale trade, educational services, federal government, and state and local government. Although these industries employed over 37 million workers per month in 2014, very little hiring occurred and open jobs were scarce. Although workers with particular skills may find employment within these industries, workers seeking employment or career changes would likely not target these industries on the basis of the 2014 data.
In the federal government and state and local government, the situation is slightly different from the private sector because their job openings rates were higher than their hires rates. Even though the openings rate is low for the public sector, the fact that openings outnumber hires indicates a need for workers.
In public education (which is a subset of state and local government), the larger number of job openings than hires at first appears to support the claim that teachers are in demand. However, with the recession and slow recovery, this very large industry posted few openings relative to its average 10 million employees in 2014. The lack of posted openings reasonably can be attributed to declining tax revenue at the national, state, and local levels, resulting in budget cuts affecting school budgets in many states. According to a 2010 survey by the American Association of School Administrators, 77 percent of school districts experienced a cut in state and local funds between the 2009–10 and 2010–11 school years.8 A lack of adequately trained teachers may help explain why some jobs go unfilled at the end of the month. However, budget cuts and lack of trained teachers may not be the full story. The National Center for Education Statistics estimated 3,377,900 teachers were in public elementary and secondary schools in the United States in the 2011–12 school year, the most recent year for which statistics are available. Of those 3.4 million teachers, 8 percent left the profession the following year. Of those teachers who left the field, only 38 percent retired. That means 62 percent left the teaching field to find other work or exited the labor force for reasons other than retirement. These numbers indicate the demand for teachers, albeit relatively low, is due primarily to a high rate of departure from the occupation, although positions vacated by retiring baby boomers also contribute to the openings.9
The educational services industry includes private schools of all levels as well as tutoring establishments. As with state and local education, teacher turnover and rising student populations create the need for teachers and tutors. However, private schools depend on the limited number of families that can afford often costly tuition payments, and tutors can be too costly for families struggling to pay a mortgage. The low turnover likely indicates that employees are staying put because tight private-school budgets translate into few postings of openings for potential job changes.10
The appearance of the federal government in this category of low hires and low openings but with openings outnumbering hires reflects that qualified applicants are difficult to find for some positions. The government jobs website www.USAJobs.gov, which posts all federal jobs, lists the following as what it calls “highly targeted careers:” medical officers, attorneys, administrative law judges, senior executives, and federal cybersecurity careers. This varied list reflects both the need for health care workers in the federal government as we saw in the private sector and the need for senior executives because of a retiring workforce.
Both durable goods manufacturing and nondurable goods manufacturing appear in this low-hires-and-low-job-openings category. This could be due to a long-term trend of decreased U.S. manufacturing employment and suppressed production during and following the Great Recession; together, these factors result in fewer jobs being posted and fewer job-changing opportunities.
Average hires and average job openings. The remainder of the industries fell around the averages for hires, job openings, or both: mining and logging; transportation, warehousing, and utilities; real estate and rental and leasing; and other services. In these industries, the hires and the vacancies were not especially high or low compared with the other industries. Almost all the industries with an average rate of job openings or hires had fewer job openings than hires, indicating employers in these industries were able to find the workers they need. One industry—wholesale trade—had an openings rate slightly higher than the hires rate but only by one-tenth of a percentage point.
Figure 3 summarizes in which of the hires-and-job-openings quadrants industries fell. Not shown are the industries that were average in hires and openings.
Other measures allow us to further explore the industries—the fill rate, the churn rate, and employment projections data. The fill rate and churn rate are created from the JOLTS data, and we have already touched upon employment projections data earlier in this article.
Fill rate. The fill rate is the hires level divided by the job openings level; it is a ratio, not a percentage, so a fill rate above 1.0 means more hires during the month than jobs left unfilled at its end. The rate is a measurement of how much hiring is occurring relative to how many openings remain at the end of the month. The interpretation is slightly complicated by the fact that hires is a flow measure, capturing all hires during the month, and job openings is a stock measure, capturing only jobs remaining open at the end of the month. The fill rate is still useful, however, because it provides another way to visualize the differences among the industries.
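As a quick check of the definition, here is a minimal sketch; the levels are the same illustrative assumptions used in the earlier rate sketch, not published data.

```python
def fill_rate(hires, openings):
    """Hires during the month per job still unfilled at month's end."""
    return hires / openings

print(round(fill_rate(4_830, 4_560), 1))  # -> 1.1, hires slightly exceed openings
```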
Figure 4 shows the average monthly fill rate in 2014 was 1.1 for total nonfarm industries, indicating just slightly more hires during the month than unfilled jobs remaining at the end of the month. The fill rate for the industries ranged from 0.5 for the federal government to 2.5 for construction in 2014.
Recalling the full-month reference period for hires versus the 1-day reference period for job openings, we find industries with a fill rate less than 1.0 noteworthy because they have more job openings than hires. In 2014, 6 industries had a fill rate less than 1.0, 3 had a fill rate of exactly 1.0, and 10 (including total nonfarm industries) had a fill rate greater than 1.0. Note that a fill rate close to 1.0 indicates the hires and job openings levels are close together but does not indicate if the individual rates are high or low. The fill rate can be close to 1.0 when both hires and openings are high or when both are low. A fill rate of less than 1.0 (more job openings than hires) is historically unusual; the unfilled jobs indicate a labor market with excess demand for workers. A later section of this article looks at the labor demand and turnover patterns of the industries across the years.
The high-demand industries are toward the bottom of the figure and have the lower fill rates; jobseekers may best focus their efforts on these industries. The industries with the lowest fill rates are those which had both low hires and high job openings or low hires and low job openings, as shown in figure 3.
Toward the top of figure 4, industries with the highest fill rates are construction; arts, entertainment, and recreation; and retail trade, which had high hires and low openings. These industries are still a good source of jobs because of the vast number of hires taking place. Table 1 provides data on job openings, hires, and fill rates by industry and also includes churn rates, which are discussed next.
Churn rate. One thing missing from this analysis so far is separations. Without separations, we do not know if the hires are for expansion of an industry or for replacement hiring following separations within the industry. To fully understand industries’ labor turnover, we need to consider separations as well as hires. When hires exceed separations, the industry is expanding. When hires and separations are at about the same level, industry employment is steady and we can deduce that the hires are mainly replacement hires. When hires are below separations, the industry is contracting.
The “churn rate” is defined in this article as the sum of the hires rate and the separations rate. Therefore, a high churn rate indicates an industry with high hires or high separations or both. A low churn rate indicates an industry with little turnover—that is, with low hires and low separations. As we did with the other data series in this article, we calculated the churn rate using the average monthly hires rate and average monthly separations rate for each industry. Figure 5 provides the average churn rate and average job openings rate by industry for 2014.
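Before turning to figure 5, here is a minimal sketch of the churn calculation, using two pairs of rates from table 1. Beyond the sum stated above, the function is illustrative rather than a BLS definition.

```python
def churn_rate(hires_rate, separations_rate):
    """Total turnover: hires plus separations, as a percent of employment."""
    return hires_rate + separations_rate

print(round(churn_rate(5.8, 5.5), 1))  # accommodation and food services -> 11.3
print(round(churn_rate(1.4, 1.4), 1))  # state and local government      -> 2.8
```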
Figure 5 shows that the industries with the highest churn rates are arts, entertainment, and recreation; accommodation and food services; professional and business services; construction; and retail trade. These are high-turnover industries. Not surprisingly, the industries with the lowest churn rates are the federal government and state and local government. Not many people separate from government jobs and not many people are hired into government jobs. In 2014, almost all of the industries had fairly equal hires and separations rates with hires slightly exceeding separations, indicating employers generally were comfortable with replacement hiring plus a little more. None of the industries grew or shrank notably, but the mining and logging industry and the construction industry had the largest gap between average hires and separations rates for 2014, with a 0.4-percentage-point difference, indicating slight growth. The gap between hires and separations parallels the slight increase in employment in 2014.11 For both federal government and state and local government, the average hires and separations rates were equal in 2014, indicating replacement hiring but no expansion.
Combining the churn rate with the job openings rate provides additional perspective. For example, the churn rate in construction, at 9.9 percent in 2014, was very high relative to that of other industries. With a job openings rate of only 2.0 percent—one of the lowest job openings rates among the industries—construction establishments have many employees coming and going, but the businesses can easily hire needed workers. Two of the industries with high churn (arts, entertainment, and recreation; and retail trade) are mostly able to fill open positions by the end of the month, resulting in a low job openings rate. In contrast, professional and business services and accommodation and food services both have high churn and higher than average job openings rates, indicating they need more workers in addition to replacement hires.
Among the industries with low churn, several have higher than average job openings rates, including information, finance and insurance, and health care and social assistance. These industries have few employees separating, but they also have few employees to hire and they have a considerable need for workers. The remaining industries fall somewhere between with moderate churn and moderate openings.
We can see in figure 5 that, in 2014, the same industries that had low hires and high openings and some of the lowest fill rates also had below-average churn with job openings nearly as high as the churn. These industries were information, finance and insurance, and health care and social assistance. Both federal government and state and local government also had low hires and a job openings rate nearly as high as the churn. For these establishments, separations are low, hires are low, and open jobs are left unfilled. See table 1 for a full list of industries with their corresponding rates for job openings, hires, separations, fill, and churn.
A faster way to compare the industries on the basis of job openings and churn is to create a combined rank for both; a small sketch of this ranking follows the table. Table 2 shows that accommodation and food services; professional and business services; arts, entertainment, and recreation; and mining and logging had the highest combined labor market activity in 2014 with regard to job openings and churn. The lowest-activity industries in this regard are toward the bottom of the table, with federal government and state and local government being the lowest. The industries in the middle of the table have either low openings and high churn, high openings and low churn, or medium values of both.
Table 2. Industries ranked by job openings rate and churn rate, 2014

| Industry | Rank by job openings rate | Rank by churn rate | Combined rank |
|---|---|---|---|
| Accommodation and food services | 1 | 2 | 1 |
| Professional and business services | 2 | 3 | 2 |
| Arts, entertainment, and recreation | 6 | 1 | 3 |
| Mining and logging | 7 | 6 | 4 |
| Health care and social assistance | 3 | 11 | 5 |
| Transportation, warehousing, and utilities | 8 | 7 | 6 |
| Real estate and rental and leasing | 10 | 9 | 7 |
| Finance and insurance | 4 | 15 | 7 |
| Nondurable goods manufacturing | 13 | 14 | 10 |
| Durable goods manufacturing | 15 | 16 | 11 |
| State and local government | 18 | 17 | 13 |

Source: U.S. Bureau of Labor Statistics.
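One plausible way to compute such a combined rank is sketched below with a handful of rates from table 1. The subset of industries and the tie-handling are assumptions for illustration, not the article's methodology.

```python
# (job openings rate, churn rate) from table 1, 2014 averages
data = {
    "Accommodation and food services": (4.5, 11.3),
    "Professional and business services": (4.4, 10.3),
    "Arts, entertainment, and recreation": (3.2, 13.2),
    "Health care and social assistance": (3.9, 5.3),
    "State and local government": (2.0, 2.8),
}

def ranks(values):
    """Rank each value within the list, 1 for the largest."""
    ordered = sorted(values, reverse=True)
    return {v: ordered.index(v) + 1 for v in values}

openings_rank = ranks([v[0] for v in data.values()])
churn_rank = ranks([v[1] for v in data.values()])

for name in sorted(data, key=lambda k: openings_rank[data[k][0]] + churn_rank[data[k][1]]):
    print(name)  # highest combined activity first
```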
So far, the analysis of industries has looked only at the year 2014. One question left to answer is whether the labor demand and turnover characteristics within industries changed over the course of the business cycle and with structural changes in the economy. In short, these characteristics did not change, for the most part, over the period for which we have JOLTS data, 2001 through 2014. Each industry retained its characteristics regarding rates for job openings, hires, separations, fill, and churn.
Although all the industries were affected by the Great Recession of December 2007–June 2009, the basic characteristics of the industries did not change across the business cycle. Figures 6 and 7 show bubble charts for 2005 (before the recession) and 2009 (the last year of the recession), which can be compared with figure 2 for 2014. In all 3 years, the same group of industries is on the right side of the figure, which indicates relatively high hires rates. Likewise, no changes occurred in which of the industries appear in the left part of the graph, indicating relatively low hires rates. For example, construction maintained a high hires rate and low job openings rate across the years, while health care and social assistance maintained a low hires rate and high job openings rate, and state and local government maintained low hires and low openings.
The one industry to experience some change over time is the federal government. In most years, it had very little labor market activity with low and nearly equal hires and openings, but in the years 2008, 2009, and 2010, job openings exceeded hires. The rise in job openings in 2009 and 2010 was due to increased labor demand and hiring for the preparation and administration of the 2010 Decennial Census. The higher job openings rate for the federal government in 2009 can be seen in figure 7.
Fill rates over time. Fill rates across the years from 2001 through 2013 have similarities to those for 2014. The industries with the highest fill rates year after year are construction; retail trade; arts, entertainment, and recreation; and accommodation and food services. These are mostly the industries with both high hires levels and high hires relative to openings. The industries with the lowest fill rates year after year include information; finance and insurance; health care and social assistance; federal government; and state and local government. These are the same industries that fell in the low-hires-and-high-openings or low-hires-and-low-openings categories. The exception, as mentioned before, is the federal government because of the 2010 Decennial Census. Table 3 provides the fill rates by industry across the years from 2001 through 2014.
Table 3. Fill rates by industry, 2001–2014

| Industry | 2001 | 2002 | 2003 | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Mining and logging | 1.6 | 2.4 | 3.0 | 2.6 | 1.9 | 1.5 | 2.0 | 1.5 | 2.0 | 1.5 | 1.1 | 1.7 | 1.4 | 1.2 |
| Durable goods manufacturing | 1.1 | 1.5 | 1.7 | 1.5 | 1.2 | 1.0 | 1.0 | 1.2 | 1.9 | 1.2 | 0.9 | 0.8 | 0.9 | 0.9 |
| Nondurable goods manufacturing | 1.6 | 1.7 | 1.7 | 1.5 | 1.4 | 1.3 | 1.2 | 1.2 | 1.9 | 1.8 | 1.5 | 1.0 | 1.0 | 1.0 |
| Transportation, warehousing, and utilities | 1.2 | 1.5 | 1.5 | 1.9 | 1.5 | 1.2 | 1.0 | 1.3 | 2.4 | 1.7 | 1.2 | 1.4 | 1.2 | 1.1 |
| Finance and insurance | 0.7 | 0.8 | 0.8 | 0.7 | 0.6 | 0.6 | 0.8 | 0.8 | 0.8 | 0.6 | 0.6 | 0.6 | 0.6 | 0.6 |
| Real estate and rental and leasing | 1.4 | 1.7 | 1.6 | 2.0 | 1.4 | 1.3 | 1.4 | 1.2 | 1.7 | 1.7 | 1.4 | 1.1 | 1.3 | 1.1 |
| Professional and business services | 1.4 | 1.6 | 1.3 | 1.3 | 1.3 | 1.3 | 1.2 | 1.2 | 1.6 | 1.5 | 1.4 | 1.3 | 1.3 | 1.2 |
| Health care and social assistance | 0.6 | 0.7 | 0.8 | 0.7 | 0.8 | 0.7 | 0.7 | 0.7 | 0.9 | 0.9 | 0.8 | 0.7 | 0.8 | 0.7 |
| Arts, entertainment, and recreation | 2.2 | 2.3 | 2.9 | 2.5 | 2.0 | 1.9 | 1.9 | 2.2 | 4.6 | 3.2 | 2.6 | 2.3 | 2.1 | 2.0 |
| Accommodation and food services | 1.5 | 2.0 | 2.0 | 1.9 | 1.7 | 1.6 | 1.5 | 1.7 | 2.3 | 2.2 | 1.8 | 1.6 | 1.5 | 1.2 |
| State and local government | 0.8 | 0.8 | 0.8 | 0.9 | 0.8 | 0.8 | 0.7 | 0.8 | 0.9 | 0.9 | 0.8 | 0.8 | 0.8 | 0.7 |

Source: U.S. Bureau of Labor Statistics.
Although the nature of the industries relative to other industries remained the same across the business cycle with regard to the fill rate, differences in the labor market can be seen by comparing the fill rates from 2005 (prerecession), 2009 (end of recession), and 2014. In 2005, the fill rate for total private industries was 1.3, with 5 industries having a fill rate of less than 1.0 (openings outnumbering hires) and 12 industries having a fill rate greater than 1.0 (hires outnumbering openings). In contrast, in 2009 at the end of the recession, the fill rate for total private industries had risen to 1.7, which reflects the decline of hires but even steeper decline of openings during the recession. In 2014, a year when we had mostly recovered from the recession, the numbers resemble 2005 with a fill rate of 1.1 for total private industries, with nine industries having a fill rate greater than 1.0 and six industries having a fill rate of less than 1.0.
Churn rates over time. As with the other measures, the churn rates by industry for the years 2001–13 are similar to the 2014 churn rates regarding the labor turnover characteristics of the industries. However, they also illustrate the effect of the business cycle. As seen in table 4, the industries with consistently high churn rates year after year are construction; retail trade; professional and business services; arts, entertainment, and recreation; and accommodation and food services. The lowest churn rates each year are for the federal government and state and local government. The effect of the business cycle can be seen in the lower churn rates during the recession, specifically as hires and quits slowed in 2008 and 2009. Churn has increased steadily since the end of the recession as both hires and quits have risen but has not yet recovered to prerecession levels. As of 2014, the churn rate for total nonfarm industries was 6.9 compared with the 7.4 prerecession rate, and total private churn measured 7.6 in 2014 compared with 8.3 before the recession. The one different industry is construction, in which churn did not fall much during the recession. Construction is especially prone to layoffs, and the industry’s rise in layoffs counteracted its falling quits to keep churn steady through the recession.
Table 4. Churn rates by industry, 2001–2014

| Industry | 2001 | 2002 | 2003 | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Mining and logging | 7.4 | 6.9 | 7.2 | 7.6 | 7.2 | 6.7 | 7.2 | 7.3 | 5.7 | 5.9 | 6.1 | 7.3 | 6.5 | 7.2 |
| Durable goods manufacturing | 5.1 | 5.2 | 5.0 | 5.3 | 5.3 | 5.0 | 5.1 | 4.7 | 4.5 | 4.1 | 3.8 | 3.8 | 3.7 | 3.8 |
| Nondurable goods manufacturing | 6.6 | 5.8 | 5.2 | 5.4 | 5.3 | 6.1 | 6.3 | 5.5 | 5.4 | 5.1 | 4.8 | 4.4 | 4.3 | 4.7 |
| Transportation, warehousing, and utilities | 6.5 | 6.0 | 5.9 | 6.8 | 7.3 | 6.9 | 6.0 | 6.0 | 6.2 | 5.4 | 5.6 | 6.2 | 6.1 | 6.7 |
| Finance and insurance | 5.3 | 4.6 | 4.3 | 4.6 | 4.6 | 5.0 | 5.5 | 4.6 | 3.7 | 3.9 | 3.4 | 4.0 | 4.4 | 4.2 |
| Real estate and rental and leasing | 7.3 | 7.3 | 7.9 | 8.4 | 7.7 | 8.2 | 7.9 | 7.3 | 7.3 | 6.0 | 5.9 | 6.4 | 6.7 | 6.2 |
| Professional and business services | 11.9 | 12.1 | 11.2 | 11.0 | 11.5 | 10.9 | 10.4 | 9.7 | 8.6 | 9.1 | 9.8 | 9.6 | 9.6 | 10.3 |
| Health care and social assistance | 6.4 | 5.9 | 5.6 | 5.5 | 5.7 | 5.8 | 5.6 | 5.5 | 5.1 | 4.8 | 4.6 | 4.9 | 5.1 | 5.3 |
| Arts, entertainment, and recreation | 16.2 | 13.7 | 14.8 | 14.2 | 13.8 | 13.0 | 13.3 | 12.3 | 10.6 | 11.2 | 12.6 | 12.7 | 12.5 | 13.2 |
| Accommodation and food services | 15.1 | 13.3 | 13.0 | 13.5 | 13.8 | 13.9 | 13.6 | 12.2 | 10.0 | 9.5 | 10.0 | 10.4 | 10.7 | 11.3 |
| State and local government | 3.2 | 2.9 | 2.7 | 2.9 | 2.8 | 3.0 | 2.9 | 2.6 | 2.4 | 2.6 | 2.6 | 2.7 | 2.7 | 2.8 |

Source: U.S. Bureau of Labor Statistics.
No discussion of labor market data would be complete without at least mentioning how much the workers in the industry earn. Are high openings or high turnover due to low earnings? Or is the market more complicated than that? An analysis of earnings by industry or occupation as it relates to labor activity could be a whole article itself, but table 5 provides a quick look at job openings, churn, and average hourly earnings by industry. The earnings data are from the BLS Current Employment Statistics survey.12 The values range from an average of $13.03 per hour in the accommodation and food services industry to $34.01 per hour in the information industry. Not surprisingly, earnings are lowest in the accommodation and food services industry, which has the second highest churn rate and the highest job openings rate. However, professional and business services has one of the higher earnings rates yet has high churn and openings rates similar to accommodation and food services. The highest earnings are in the information industry in which the churn is modest and the openings rate is high. The construction industry has the sixth highest earnings but has high churn and very low openings. This combination of wages, churn, and openings suggests a more complicated labor market than one influenced simply by supply and demand and employee earnings. Much more analysis would be needed to sort through the interactions of these variables as well as to compare earnings, wages, and total compensation, all of which are different measurements.
Table 5. Job openings rate, churn rate, and average hourly earnings by industry, 2014

| Industry | Job openings rate | Churn rate | Hourly earnings |
|---|---|---|---|
| Accommodation and food services | 4.5 | 11.3 | $13.03 |
| Professional and business services | 4.4 | 10.3 | $29.28 |
| Health care and social assistance | 3.9 | 5.3 | $24.98 |
| Finance and insurance | 3.7 | 4.2 | $33.00 |
| Arts, entertainment, and recreation | 3.2 | 13.2 | $19.47 |
| Mining and logging | 3.1 | 7.2 | $30.78 |
| Transportation, warehousing, and utilities | 3.1 | 6.7 | $24.21 |
| Real estate and rental and leasing | 2.9 | 6.2 | $23.65 |
| Nondurable goods manufacturing | 2.4 | 4.7 | $22.40 |
| Durable goods manufacturing | 2.3 | 3.8 | $26.18 |

Source: U.S. Bureau of Labor Statistics.
Macroeconomic indicators such as employment, job openings, hires, separations, and earnings are essential for understanding the state of the economy and the labor market specifically. The data at the total nonfarm- or total private-industries level are quite informative. We learn even more by studying the data by industry. But analysis using only one or two of these data items misses much of the story. Studying more data items at once in relation to each other uncovers a much more complicated story of how different the industries within our economy are from each other. Some have very high turnover (arts, entertainment, and recreation), while some have very low turnover (government). Some easily find the workers they need and have few openings (construction), some need more workers than they can find (health care and social assistance), while others do not have much labor market activity at all (manufacturing). Because each industry is different, users of these labor data can benefit from studying the labor activity characteristics of the industries. Jobseekers and career changers can use the data to guide their education or job search. Job counselors could use the data to assist their clients. Employers might use the data to adjust benefits, wages, or on-the-job training opportunities if they are having trouble hiring. Government officials can learn from the data where to spend money, provide grants, develop training programs, or institute new policies. All of these people and entities are invested in the labor market and can benefit from the data series discussed in this article, all of which are readily available from the Bureau of Labor Statistics.
Charlotte Oslund, "Which industries need workers? Exploring differences in labor market activity," Monthly Labor Review, U.S. Bureau of Labor Statistics, January 2016, https://doi.org/10.21916/mlr.2016.1.
1 The JOLTS job openings count excludes jobs to be filled only by internal transfers, promotions, demotions, or recall from layoffs; jobs with start dates more than 30 days in the future; jobs for which employees have been hired but have not yet reported for work; and jobs to be filled by employees of temporary help agencies, employee leasing companies, outside contractors, or consultants.
2 The employment levels used in calculating JOLTS rates at the estimation level are from the Current Employment Statistics program at the Bureau of Labor Statistics.
3 For this article, the data will be written as the ordered pair (hires rate, job openings rate).
5 Stephen D. Simpson, “The 6 signs of an economic recovery,” Investopedia, August 16, 2010, http://www.investopedia.com/financial-edge/0810/the-6-signs-of-an-economic-recovery.aspx.
7 Lisa Baron, “Demand for financial advisors to grow,” Benefitspro, February 24, 2014, http://www.benefitspro.com/2014/02/24/demand-for-financial-advisors-to-grow.
8 Noelle M. Emerson, “Surviving a thousand cuts: America’s public schools and the recession,” American Association of School Administrators economic impact study series, December 2010, http://www.aasa.org/uploadedFiles/Policy_and_Advocacy/files/AASAThousandCutsFINAL121610.pdf.
9 Rebecca Goldring, Soheyla Taie, Minsun Riddles, and Chelsea Owens, “Teacher attrition and mobility: results from the 2012–13 teacher follow-up survey, first look,” National Center for Education Statistics, U.S. Department of Education, September 2014, http://nces.ed.gov/pubs2014/2014077.pdf.
10 The National Center for Education Statistics provides teacher demand estimates for public and private schools, among other statistics, from the Schools and Staffing Survey, http://nces.ed.gov/surveys/sass/.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9197933673858643,
"language": "en",
"url": "https://www.cece.eu/industry-and-market/trade-policy",
"token_count": 593,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.08251953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a9eec38d-9bb2-4bf5-9f97-307a213e6667>"
}
|
EU Position in World Trade
- The EU is the largest economy in the world. Although growth is projected to be slow, the EU remains the largest economy in the world with a GDP per head of €25 000 for its 500 million consumers.
- The EU is the world's largest trading bloc. The EU is the world’s largest trader of manufactured goods and services.
- The EU ranks first in both inbound and outbound international investments.
- The EU is the top trading partner for 80 countries. By comparison the US is the top trading partner for a little over 20 countries.
- The EU is the most open to developing countries. Fuels excluded, the EU imports more from developing countries than the USA, Canada, Japan and China put together.
The EU benefits from being one of the most open economies in the world and remains committed to free trade.
- The average applied tariff for goods imported into the EU is very low. More than 70% of imports enter the EU at zero or reduced tariffs.
- The EU’s services markets are open and the EU has arguably the most open investment regime in the world.
- Trade under existing EU trade agreements keeps growing.
More information can be found here.
EU Trade Policy
The EU manages trade and investment relations with non-EU countries through the EU's trade and investment policy.
Trade policy is an exclusive power of the EU. The EU makes laws on trade matters and conclude international trade agreements.
The EU trade policy covers:
- trade in goods and services;
- foreign direct investment;
- public procurement;
- the commercial aspects of intellectual property, such as patents.
Trade policy is set out in Article 207 of the Treaty on the Functioning of the European Union (TFEU).
The European Commission negotiates a trade agreement with a trade partner after receiving a mandate from the Council. The Council and the Parliament then approve the proposed trade agreement submitted by the Commission.
CECE and the EU Trade Policy
The free flow of trade and investment is the lifeblood of modern manufacturing. Through international trade, construction equipment manufacturers gain access to foreign markets, global supply chains, and raw materials.
CECE advocates for free trade and open markets. The European construction equipment manufacturers have been traditionally export-oriented and this is even more true today, as the demand for construction equipment has shifted to China, South America and India, with European demand representing only 20% of the global demand.
The construction equipment manufacturers in Europe have a turnover of EUR 40 billion, of which EUR 26 billion stems from sales beyond national borders, both in the Single Market and outside the EU.
This figure also underlines the importance of international trade for our sector.
Under focus: CECE and the EU-UK trade READ MORE
Under focus: CECE and the EU-US trade READ MORE
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9220608472824097,
"language": "en",
"url": "https://www.excelappraise.com/glossary/what-is-absorption-to-appraisers/",
"token_count": 1403,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.02197265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f14af1e8-1fc8-480f-88c6-4ee08df0ed78>"
}
|
What is absorption? Appraisers, brokers and other real estate professionals want to know how long it takes for a particular number of homes to sell in an area. For that answer, we look to the rate of absorption. Appraisers calculate this figure and assume that no other homes are being built during the particular period.
How To Calculate The Rate?
You may employ two formulas, depending on how you wish to express absorption in the appraisal process. To arrive at a percentage rate, you divide the number of homes sold in a month by the number of homes available on the market. The formula looks like this:
Absorption Rate = Number of homes sold per month / Number of homes available for sale
Suppose your city has 2,400 homes listed for sale. If an average of 300 have been sold per month over some period of time, you calculate the rate as follows:
Rate = 300 homes sold / 2,400 for sale = 0.125, or 12.5 percent. That means buyers absorb 12.5 percent of the homes for sale (just shy of 13 of every 100) each month.
Remember that absorption tells appraisers how long it takes to sell a given number of homes on the market. And that helps determine the value of your home. You can use this formula to calculate the months:
Months to absorb inventory = Number of homes available for sale / Number of homes sold in a month
Using our above example of 2,400 homes for sale and 300 sold on average:
Months = 2,400 homes for sale / 300 sold per month = 8 months to sell all the homes on the market.
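Expressed in code, the two formulas above are one-liners. This is a minimal sketch; the function names are our own, and the example reuses the 2,400-listing, 300-sale market from the text.

```python
def absorption_rate(sold_per_month: float, homes_for_sale: float) -> float:
    """Share of the available inventory absorbed each month."""
    return sold_per_month / homes_for_sale


def months_to_absorb(homes_for_sale: float, sold_per_month: float) -> float:
    """Months needed to sell the current inventory at the current sales pace."""
    return homes_for_sale / sold_per_month


print(absorption_rate(300, 2400))    # 0.125 -> 12.5 percent per month
print(months_to_absorb(2400, 300))   # 8.0 months to clear the market
```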
How Do You Get Sales Information?
The Multiple Listings Service (MLS) provides real estate brokers and appraisers details on the number of active listings and sales in an area. Appraisers can access MLS through a membership with the National Association of Realtors (Realtor.com).
Why Do We Use Absorption In Appraisals?
Appraisers render opinions of value to aid buyers, sellers, agents and lenders. The buyer (or buyer’s agent) wants to avoid overpayment, while the seller or seller’s agent guards against underpricing the home. Banks and mortgage companies require that the subject property appraise at a particular value as a condition of the loan. Fannie Mae and Freddie Mac have set loan-to-value ratios in which the amount borrowed cannot exceed a particular percentage of the value.
What Goes Into An Appraisal Generally?
Appraisers typically consider specific aspects of the home in arriving at value. These factors include the home’s square footage, the number and types of rooms (bedrooms, bathrooms, bonus rooms, kitchen, living room or den); the floor plan; landscaping, and colors and styles of cabinets, flooring and paint on walls. Whether the home has cracks in the ceiling, foundation or driveways and the condition of the roof also influence appraisals.
Uniform standards control how appraisers perform their work. Depending on the lender, though, your appraisal may have stricter requirements. Commonly, loans guaranteed or granted through the Federal Housing Administration (FHA) or Veterans’ Administration (VA) may require appraisers examinations and inspections not needed for a conventional loan.
Why Do Appraisers Use Absorption?
Supply and demand help set the prices of most things in the economy. Real property values are not exempt from the forces of supply and demand. All other things being equal, prices (and values) rise with higher demand or lower supply. With lower demand or greater supply comes lower property values. These principles underlie the theory of absorption in regards to appraisals.
What Does The Rate Tell Us?
With the calculations used in absorptions, appraisers and other real estate professionals can tell you if your real estate market is a “buyers’ market” or “sellers’ market”:
Buyers’ Market: In a buyers’ market, the supply of homes outpaces the demand. You will likely find that homes sell more slowly in such a market. With more sellers than buyers comes lower home prices across the board. Buyers may even exact certain concessions from sellers to motivate them, such as paying for or sharing in closing costs.
A rate south of 15 percent generally signals a buyers’ market. In our illustration above, you would find yourself in such an environment.
Sellers’ Market: You will experience a sellers’ market with rates greater than 20 percent. A sellers’ market translates to higher home prices because demand exceeds the supply of homes. In such a market, homes tend to sell more quickly.
To achieve a sellers’ market with 2,400 available homes, you would need to see more than 480 homes being sold per month.
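Building on the sketch above, the buyers'/sellers' market thresholds quoted here translate directly into a simple classifier. Note that the text leaves the 15 to 20 percent band unlabeled, so treating it as "balanced" below is our assumption.

```python
def market_type(rate: float) -> str:
    """Classify a market by its monthly absorption rate, using the thresholds
    quoted in the text; the 'balanced' label for 15-20% is an assumption."""
    if rate < 0.15:
        return "buyers' market"
    if rate > 0.20:
        return "sellers' market"
    return "balanced market"


print(market_type(300 / 2400))  # 12.5% -> buyers' market
print(market_type(481 / 2400))  # just over 20% -> sellers' market
```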
What Affects Absorptions In Appraisals?
The conditions that affect the absorption rate and the supply of and demand for homes range from local and regional to global. On a local scale, an influx of employers and accompanying job opportunities, quality schools, or recreational and entertainment opportunities can drive demand upward. You can expect elevated demand for real estate in scenic areas such as beaches, mountains and lakeside communities.
Lending strongly influences supply and demand. Higher interest rates, stricter underwriting standards, weak economies characterized by high unemployment and low wages suppress loans granted by banks and mortgage companies. Fewer mortgages mean fewer home buyers and lower prices. Conversely, lowering interest rates help spark the demand for the loans that fund home purchases.
Why Might Lenders Be Interested In Absorption Rates?
Mortgage companies and banks rely on accurate or reliable appraisals and absorption rates to guide decisions on the amount and payment terms of loans. To that end, the lender does not merely rely on a negotiated or agreed upon price as proof of the value.
Factoring supply and demand into an appraisal helps banks catch potential overpricing. That is, the home might actually not be worth what solely the buyer and seller think or have selected as the price. Where the buyer and seller may have narrowed their focus to the appearance and condition of the home, an appraiser (and the lender) look at how quickly homes sell in a community. If an appraisal seems too high for the overall demand for homes in the area, an appraiser might offer as explanations upgrades to the home or interest from other prospective buyers. The latter may be especially convincing depending on the prices in those offers.
Whether you need an appraisal for a mortgage or to know if you might be under pricing the home or a seller is overpricing it, contact our team for professional appraisal services. We offer free quotes when you fill out a request form or by emailing us directly.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9488441348075867,
"language": "en",
"url": "https://www.ianfairlie.org/news/uk-electricity-renewables-and-the-problem-with-inflexible-nuclear/",
"token_count": 2507,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.26171875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5f6450a8-23a0-409e-83ed-6fd047991d42>"
}
|
In recent years, the share of the UK’s electricity supplied by renewable energy (RE) sources has increased substantially to the point that RE is now the second largest source after gas: It now supplies 20% to 25% of our electrical needs. This is greater than the amount supplied by nuclear – about 15% to 18%. Coal, hydroelectric, and mainly gas (~40%) constitute the other sources. See chart for Britain’s electrical power supplies in 2019.
Why are AGR reactors inflexible?
Before his untimely death in 2018, the nuclear engineer, John Large, explained that while advanced gas reactors (AGRs) were considered safe and reliable once they were up and running, they were difficult to control, ie less safe, when ramping up and down, especially in comparison to pressurised water reactors (PWRs). (PWRs were originally designed for flexible use in US nuclear submarines.)
For this reason, in the 1970s and 1980s, the former CEGB set the pattern of a nuclear “base load” in which its Magnox and AGR reactors operated flat out most of the time. Essentially this pattern is still adhered to today by the National Grid. However, in recent years a fundamental change in system economics has happened: nuclear has been undercut by the renewables. So the AGR control issue didn't use to be an economic problem, but now it is.
Difficult situation re nuclear and RE
The National Grid keeps supply and demand balanced in real time to prevent blackouts, such as occurred on August 9, 2019, when a million UK homes were cut off. But when demand is low – as in recent months during the pandemic – it is difficult to have all nuclear and all RE sources running at the same time, as the Grid would end up with too much electricity. To avoid this, the Grid requests utilities to shut off their supplies and makes “constraint” payments to those who do so. A complicated reverse auction system exists for these payments, in which operators bid as low as they feel able in order to secure them.
Because of the inflexibility of the AGRs, RE suppliers are shut off first. This is explained in a recent report by the newly-formed pressure group, 100percentrenewableuk, which explains that the inflexible nature of nuclear power is instrumental in forcing the National Grid to turn off large amounts of wind power (ie in the jargon to be ‘constrained’) in Scotland when there is too much electricity on the network. https://realfeed-intariffs.blogspot.com/2020/06/nuclear-report-published-today-by-newly.html
This means nuclear reactors are also mainly responsible for the large constraint payments paid by the National Grid to wind farms to be turned off. These compensation payments are eventually paid for by all electricity consumers in the fixed element of their electricity bills.
The problem is that these constraint payments are now very large. For example, National Grid ESO, the UK system operator expects to spend an additional £500 million to balance the grid over the course of the 2020 summer, much of it in payments to wind farms to stop generating. In total, National Grid expects to spend £826 million to balance the grid in 2020. https://www.thetimes.co.uk/article/blackout-risk-as-low-demand-for-power-brings-plea-to-switch-off-wind-farms-xv36v575x
This appears nonsensical, as the Grid is turning off cheap renewables to preserve expensive nuclear, and then making large compensation payments to the renewable operators for doing so. One wonders what OFGEM makes of this. As pointed out by the National Audit Office, this problem will get even worse if Hinkley C were ever allowed to finish construction and to operate.
The situation has recently become so problematic that the Grid has been forced to request EDF Energy to shut half the generating capacity of its nuclear reactor at Sizewell in Suffolk. See https://www.thetimes.co.uk/article/big-is-not-so-beautiful-in-grid-talks-to-power-down-8w0qxbtgg “More than” £50 million is to be paid to EDF just to reduce the output from Sizewell to avert the risk of blackouts this summer.
One surmises that the reason the Grid and EDF chose the Sizewell reactor to restrict is its large capacity (1200 MW). It is by far the largest reactor in Britain (the remaining AGRs are about 460 MW), and this presents a problem. If it failed or quickly went off-line (eg it scrammed its control rods for safety reasons), the sudden loss of 1200 MW in supply would present severe problems for the grid. It could result in a drop in frequency triggering other plants to fail. This happened after the simultaneous failures of two (non-nuclear) power stations in August 2019. One shudders to think what would occur if Hinkley C (2 x 1600 MW) were ever to operate and it failed. To avoid this problem, the National Grid keeps what is called “spinning reserve” on line, but this redundant reserve is expensive and all electricity consumers have to pay for it.
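To make the mechanics concrete, here is a toy merit-order sketch of how an inflexible nuclear block forces wind curtailment when demand is low. All of the figures below are illustrative assumptions, not National Grid or EDF data.

```python
def curtailed_wind_mw(demand_mw: float, nuclear_mw: float, wind_mw: float) -> float:
    """If nuclear cannot ramp down, any generation surplus must come out of wind."""
    surplus = nuclear_mw + wind_mw - demand_mw
    return max(0.0, min(surplus, wind_mw))


demand = 22_000   # low overnight demand in MW (assumed)
nuclear = 6_000   # inflexible nuclear running flat out in MW (assumed)
wind = 18_000     # available wind output in MW (assumed)

curtailed = curtailed_wind_mw(demand, nuclear, wind)
constraint_price = 70  # GBP per MWh paid to switch off (assumed for illustration)
print(f"Curtailed wind: {curtailed:,.0f} MW")  # 2,000 MW in this example
print(f"Constraint cost for one hour: GBP {curtailed * constraint_price:,.0f}")
```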
Can we manage the intermittency of renewables and attain 100% renewables?
Yes. In fact, many ways are possible, including
- improved resource and weather forecasting
- interconnecting the grid over larger UK regions
- digitally-controlled smart grids giving better control of demand
- power storage, in the form of pumped hydroelectric dams, dedicated batteries and electric car batteries
- the increased use of the many existing interconnectors with Europe
- the increased use of smart wind turbines, and
- the use of heat pumps, heat batteries, liquid air batteries and hydrogen fuel cells.
Interestingly, in June 2020, several large power companies, including Centrica and E.ON, sent an open letter calling on National Grid to accelerate the deployment of smart electric vehicle (EV) charging infrastructure, energy storage and other flexibility services in order to manage the Grid more rationally. The utilities’ letter stated that a number of options existed to reduce its current reliance on curtailing renewables, from long-duration storage to industrial-scale demand response. They stated that EVs, smart electric heaters and home solar batteries “could all be providing services at this time if the right signals and instructions were being administered”. They added “flexible technologies and storage assets will be needed to integrate a higher level of renewable generation into the system to produce carbon savings. Harnessing the potential of these technologies is critical to ensuring green energy supply isn’t unnecessarily wasted”. https://www.greentechmedia.com/articles/read/smart-flexibility-could-slash-uk-coronavirus-curtailment-costs
Indeed, throughout the UK, local authorities and local companies are in fact steaming ahead with their own initiatives. See box below. In addition, the recent UK pressure group, 100percentrenewableuk, was also set up to press for these developments. www.100percentrenewableuk.org
|Box. Some examples of innovative flexible RE technologies
1. An Edinburgh company, Gravitricity, is planning to use disused coal mine shafts in Scotland to store renewable energy using heavy weights. Surplus electricity at night would be used to lift weights to the tops of mine shafts. When electricity is needed, the weights are allowed to drop under gravity, turning turbines to generate power.
2. Flexitricity, partnered with Gresham House Energy Storage Fund, is operating a 75 MWh battery storage site in Yorkshire. The lithium-ion battery storage site is trading in wholesale markets using the National Grid ESO’s balancing mechanisms. Energy Voice 15th May 2020 https://www.energyvoice.com/otherenergy/240736/uks-largest-battery-to-help-keep-the-nations-lights-on/
3. South Somerset District Council has built a 30 MW battery energy storage system. It works with a local company Opium Power to sell flexibility services to the grid generating income for the Council. Solar Power Portal 25th Oct 2019 https://www.solarpowerportal.co.uk/news/somerset_council_owned_battery_to_be_boosted_to_30mw
4. A Virtual Power Plant in West Sussex streamlines how low-carbon energy is generated, stored, traded and consumed. The £31m SmartHubs Smart Local Energy Systems project last year received £13m of funding through the Government’s Industrial Strategy Challenge Fund. The project acts as a demonstrator to facilitate the decarbonisation of heat, transport and energy across social housing, transport, infrastructure and private residential and commercial properties in West Sussex. Project partners include ITM Power, Moixa Technology, ICAX, PassivSystems, Newcastle University, West Sussex County Council and Connected Energy. Edie 27th May 2020 https://www.edie.net/news/8/West-Sussex-s–31m–smart–local-energy-system-to-progress-during-lockdown/
5. Orkney already has an operational smart grid generating more than 100% of its electricity demand via renewable energy sources. It is integrating a new Demand Side Management system with the existing grid to provide intelligent control and aggregation of electric heating systems in homes, businesses and council buildings, as well as EV charging points and hydrogen electrolysers. A distinctive aspect is that demand response services are delivered by a new local energy company, a consortium of local generators and other stakeholders. The system specifications and operating parameters are approved by the Grid’s DSO, which retains final oversight of the system, but day-to-day management is by the local company and its contractors. https://www.h2020smile.eu/the-islands/the-orkneys-united-kingdom/
For the future, at least two additional technologies below could also be implemented,
6. Heat pumps in conjunction with thermal storage to be operated when RE generated electricity is plentiful and demand is low. Denmark is looking at using CHP plants in conjunction with heat pumps and additional heat storage capacity to store surplus energy on windy days. Their district heating systems could absorb large quantities of surplus wind-generated electricity by using heat pumps and electric heaters for heating water. When demand for electricity is high but the wind is low, CHP plants could sell their electricity. http://www.pfbach.dk/firma_pfb/forgotten_flexibility_of_chp_2011_03_23.pdf
7. The same principle applies to electric vehicles using vehicle to grid technology. Central and local governments across UK have fleets of ~75,000 vehicles. If these were EVs, they could come back to depots with an estimated average 50% charge which could be sold back to the grid during the peak (red zone) period of 4.30 pm to 7.00 pm. They could then be recharged in the small hours ready for morning duties. (http://projects.exeter.ac.uk/igov/wp-content/uploads/2013/10/Lockwood-System-change-in-a-regulatory-state-paradigm-ECPR-Sept-13.pdf)
The UK situation contrasts with that of France, which obtains about 75% of its electricity from nuclear, so EDF must ramp most of its reactors up and down to follow diurnal demand patterns. This is a risky practice; for safety reasons, most of the older 900 MW French PWR reactors are restricted to low burnup regimes (under 25,000 MW-days per tonne, against a design value of 33,000 MW-days per tonne).
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8970625400543213,
"language": "en",
"url": "https://www.medgadget.com/2020/03/intracranial-pressure-monitoring-market-to-reach-usd-1-97-billion-by-2026-reports-and-data.html",
"token_count": 1466,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.283203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5c9d6c90-c355-4f23-af4f-45b9d6172646>"
}
|
According to the current analysis of Reports and Data, the global Intracranial Pressure Monitoring market was valued at USD 1.19 billion in 2018 and is expected to reach USD 1.97 billion by 2026, at a CAGR of 6.5%. Intracranial pressure (ICP) monitoring involves measuring the pressure in the skull by placing a small probe inside the skull, attached at the other end to a bedside monitor. The device senses the pressure inside the skull and sends the measurements to a recording device, where they can be compared with the normal range of pressure inside the skull. Monitoring of intracranial pressure is used in treating severe traumatic brain injuries, neurodegenerative diseases, and other conditions.
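As a quick sanity check on the headline figures, the stated 6.5% CAGR over the eight years from 2018 to 2026 does reproduce the projected market size; the short Python sketch below shows the arithmetic.

```python
# Compound the 2018 base at the stated CAGR to check the 2026 projection.
start, years, cagr = 1.19, 2026 - 2018, 0.065
projected = start * (1 + cagr) ** years
print(f"Projected 2026 market size: USD {projected:.2f} billion")  # ~1.97

# Equivalently, back the growth rate out of the two endpoints:
implied_cagr = (1.97 / 1.19) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.2%}")  # ~6.5%
```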
Request free sample of this research report at: https://www.reportsanddata.com/sample-enquiry-form/2653
Key growth factors include the rising prevalence of neurological disorders and traumatic injuries, which are anticipated to propel the market for intracranial pressure monitoring devices over the forecast period. For instance, according to the World Health Organization (2018), an estimated sixty-nine million individuals worldwide sustain a traumatic brain injury each year. Furthermore, escalating cases of brain infection, aneurysm, and meningitis would result in an amplified requirement for ICP monitoring.
Increases in individuals' spending capacity for health care are also expected to boost market growth over the forecast period. For instance, according to World Health Organization data (2019), two years into the Sustainable Development Goals era, global spending on health continued to rise, reaching USD 7.8 trillion in 2017 (about 10% of GDP, or USD 1,080 per capita), up from USD 7.6 trillion in 2016. However, stringent regulation and growing product recalls are anticipated to hamper market growth in the coming years.
Further key findings from the report suggest
- According to the World Health Organization (2018), an estimated sixty-nine million individuals worldwide sustain a traumatic brain injury each year. The growing rates of traumatic injuries are expected to propel the market in the near future.
- Stringent market regulation will shape market growth over the forecast period. For instance, in May 2019 the U.S. Food and Drug Administration (FDA) announced a Class I recall of Integra LifeSciences’ LimiTorr Volume Limiting Cerebrospinal Fluid (CSF) Drainage System and its MoniTorr Intracranial Pressure (ICP) External CSF Drainage and Monitoring Systems. A Class I recall is the most severe type of recall, in which the recalled product poses a grave risk of harm or death to the patient. Such stringent oversight should help high-quality products grow in the forecast period.
- The key players in this sector are focusing more on technological advancements. For instance, Raumedic recently launched an intracranial pressure monitoring device for home use. The device, called Raumed Home ICP, is a telemetric catheter that measures the pressure inside the cranium – the intracranial pressure, or ICP. The product recently received CE marking and was developed primarily for people who suffer from hydrocephalus.
- Recent innovations in intracranial pressure monitoring devices include bioresorbable optical sensor systems, which use millimeter-scale, bioresorbable Fabry-Perot interferometers and two-dimensional photonic crystal structures, enabling accurate, continuous measurements of pressure and temperature.
- Among the key market players, Branchpoint Technologies in 2018 announced that the United States Food and Drug Administration (FDA) had granted 510(k) clearance for its AURA ICP Monitoring System. It includes a fully implantable and wireless intracranial pressure (ICP) sensor, which enables mobile ICP monitoring in brain-injured patients. The AURA system is entirely wireless in both the power and transmission of patient data directly to a bedside monitor. AURA enables telemetric monitoring of parenchymal ICP, including continuous ICP waveforms, and eliminates the need for additional capital equipment investments.
- The global Intracranial Pressure Monitoring market is highly fragmented with major players like Medtronic Plc. (Ireland), RAUMEDIC Inc. (Germany), Integra LifeSciences Corporation (US), DePuy Synthes (US), Codman and Shurtleff , Inc., Vittamed, Spiegelberg GmbH & Co. KG (Germany), Sophysa SA (France), Orsan Medical Technologies, Boston Neurosciences (US), Terumo Corporation (US), and Natus Medical Incorporated (US).
Order Your Copy Now: https://www.reportsanddata.com/checkout-form/2653
For the purpose of this report, Reports and Data has segmented the Intracranial Pressure Monitoring market on the basis of techniques, applications, end-use and region:
Techniques Outlook (Revenue in Million USD; 2016–2026)
- External Ventricular Drainage
- Microtransducer ICP Monitoring Devices
- Fibre Optic Devices
- Transcranial Doppler Ultrasonography
- Tympanic Membrane Displacement
Applications Outlook (Revenue in Million USD; 2016–2026)
- Intracerebral Hemorrhage
- Traumatic Brain Surgery
- Subarachnoid Hemorrhage
- CNS Infections
- Cerebral Edema
End-Use Outlook (Revenue in Million USD; 2016–2026)
- Trauma Centers
- Ambulatory Surgical Centers
- Specialty Centers
To identify the key trends in the industry, click on the link below: https://www.reportsanddata.com/report-detail/intracranial-pressure-monitoring-market
Regional Outlook (Revenue in Million USD; 2016–2026)
- North America
- Rest of Europe
- Asia Pacific
- Rest of Asia-Pacific
- Middle East & Africa
- Latin America
About Reports and Data
Reports and Data is a market research and consulting company that provides syndicated research reports, customized research reports, and consulting services. Our solutions focus on your purpose to locate, target, and analyze consumer behavior shifts across demographics and industries, and help clients make smarter business decisions. We offer market intelligence studies ensuring relevant and fact-based research across multiple industries, including Healthcare, Technology, Chemicals, Power, and Energy. We consistently update our research offerings to ensure our clients are aware of the latest trends in the market. Reports and Data has a strong base of experienced analysts from varied areas of expertise.
Head of Business Development
Reports And Data | Web: www.reportsanddata.com
Direct Line: +1-212-710-1370
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9685020446777344,
"language": "en",
"url": "https://www.msek.com/publication/paul-millus-authors-uber-drivers-employees-independent-contractors-nassau-lawyer/",
"token_count": 2638,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.45703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:91130333-e50f-4893-a143-b3fb61cbe68a>"
}
|
There was a time when everyone knew the difference between an employee and an independent contractor. An employee went to the office or factory, worked his eight hours for an employer (and only one employer), had his taxes deducted from his paycheck, and was paid two weeks’ vacation. The classic independent contractor was the plumber who came to the customer’s home (or business) in his own truck. The plumber told you when he chose to come, arrived when it was convenient for him, wholly dictated the price, used his own tools and waited to be paid on the spot. He then left, never to be seen again until the next leaky pipe.
The Rise of the Alternative Worker
The determination as to who is an employee and who is an independent contractor has become less clear over the years, mainly due to the expansion of the “alternative workforce” versus the employee workforce. This expansion was partly caused by the way businesses ran their operations to stay competitive in the global marketplace. In the 1970’s and 1980’s, recessions led to the downsizing of employee-rich bureaucracies leading companies to rethink their business models to include temporary workers, who may have been employed by someone, but were not employees in the place where they worked – they were part of an independent contractor force.
The next shoe to drop was globalization. The rise of technology and less costly transportation methods led to offshore production. Businesses simply could not afford a large employee workforce, and hiring workers on an ad hoc basis lowered their bottom lines and increased their profitability.[i] As of 2010, more than 10,000,000 workers, comprising 7.4 percent of the U.S. workforce, were classified by the Bureau of Labor Statistics as independent contractors, and another 4,000,000 worked in alternative work arrangements in which they were legally classified as independent contractors for one or more purposes. In that year, “alternative” workers, as they were called, accounted for approximately $626 billion in personal income, or about one in every eight dollars earned in the U.S.[ii]
The Common Law Tests
So, what is the law as it pertains to the employee versus independent contractor conundrum? In 1926, the U.S. Supreme Court opined regarding who could be identified as an independent contractor in Metcalf & Eddy v. Mitchell. In that case, the Court used well-established common law as its guide. In examining the performance of the contract at issue, the Court looked to whether (i) the performance of the contract involved the use of judgment and discretion on the part of the worker; and (ii) the worker was required to use his best professional skill to bring about the desired result. Thus, the Court concluded, if the worker enjoyed “liberty of action,” it “excludes the idea that control or right of control by the employer which characterizes the relation of employer and employee and differentiates the employee or servant from the independent contractor.” [iii] The key factor in these cases was the level of control exerted by the putative employer.
New York courts apply the same common-law right-to-control test to determine whether a worker is an employee in several contexts.[iv] In Bynog v. Cipriani Group, Inc., the New York Court of Appeals identified five factors “relevant to assessing control, includ[ing] whether the worker (1) worked at his own convenience; (2) was free to engage in other employment; (3) received fringe benefits; (4) was on the employer’s payroll; and (5) was on a fixed schedule.”[v]
Then, there is the “economic reality test,” which is applied in connection with Fair Labor Standards Act (“FLSA”) cases and focuses on “the totality of the circumstances.” In those cases, the “ultimate concern …[is] whether, as a matter of economic reality, the workers depend upon someone else’s business for the opportunity to render service or are in business for themselves.”[vi] The courts rely on several factors that are relevant in determining whether individuals are employees or independent contractors. These factors are derived from the Supreme Court’s decision in United States v. Silk and include (1) the degree of control exercised by the employer over the workers; (2) the workers’ opportunity for profit or loss and their investment in the business; (3) the degree of skill and independent initiative required to perform the work; (4) the permanence or duration of the working relationship; and (5) the extent to which the work is an integral part of the employer’s business.[vii]
Uber Drivers: Misclassified Employees?
In this complex world, it is impossible to make a snap determination as to who is an independent contractor and who is an employee. Thus, misclassification lawsuits have grown at a record pace. As of 2015, the number of wage and hour cases filed in federal court rose to 8,871, up from 1,935 in 2000, most pertaining to misclassification, including misclassifying workers as independent contractors when they are later found to be employees.[viii] That correlates to an increase of 358 percent, compared to the federal judiciary’s overall intake volume, which rose only a total of about seven percent over the same period.
Nowhere is the trend toward expanding misclassification litigation more apparent than when it comes to a company such as Uber. At first blush, Uber would seem to have a classic independent contractor relationship with its drivers. Let’s look at the basic facts: An Uber driver drives his/her own vehicle, obtains his/her own insurance, maintains that vehicle, drives when and where and for how long he/she desires. The driver is not issued any equipment by Uber and uses his/her own cell phone to access customers. Moreover, an Uber driver can drive for its competitor, Lyft, at any moment the driver wishes. It would seem the Uber driver has “liberty of action,” noted by the Court in Metcalf, and, thus, would not be considered an employee.
However, some courts and administrative agencies have ruled otherwise. In Berwick v. Uber Technologies, Inc., the first California decision to hold that Uber misclassified drivers as independent contractors, the California Labor Commissioner ruled that the Uber drivers bringing a class action were employees and not independent contractors.[ix] The Commissioner’s focus was on control.
Contrasting the factors listed above that would seem to contradict such control, the Commissioner found that Uber was involved in virtually every aspect of the operation. First, drivers can only avail themselves of Uber’s customers by utilizing Uber’s app. Next, Uber conducts driver background checks, sets the drivers’ compensation, and monitors drivers’ performance through customer reviews. Finally, Berwick held the work performed by the drivers was “integral” to the regular business of Uber – which is axiomatic.
Likewise, in June 2017, the New York State Unemployment Insurance Appeal Board held that three complainants were employees, stating, “Uber exercised sufficient supervision and control over substantial aspects of their work as Drivers,” similar to the analysis and holding in Berwick.[x] One of the factors considered by the Commissioner was that “Uber did not employ an arms’ length approach to the claimants” that the Commissioner believed would be present in a typical independent contractor relationship.
This raises interesting questions. Yes, Uber set the rates that could be charged and set certain conditions for drivers to follow, but one must assume some rules are necessary to establish consistency of the business model to attract and maintain customers for Uber and the drivers. Uber could not exist if it simply provided a means for drivers to pick up a passengers and left it to them to figure out the price of the service. However, what is an element of control, and sometimes what constitutes “control,” can be in the eye of the beholder.
Other Courts: Drivers Are Not Employees
There have been decisions to the contrary. In McGillis v. Department of Economic Opportunity, the Third District Court of Appeal of Florida upheld an administrative decision finding drivers were not employees.[xi] On the issue of “control” the court acknowledged that “both employees and independent contractors ‘are subject to some control by the person or entity hiring them. The extent of control exercised over the details of the work turns on whether the control is focused on simply the result to be obtained or extends to the means to be employed.’” Citing authorities, the court reasoned that if control is confined to results only, there is generally an independent contractor relationship, and if control is extended to the means used to achieve the results, there is generally an employer-employee relationship.
In Saleem v. Corporate Transportation Group, the Second Circuit addressed black car drivers in New York who were asserting claims against owners of black car “base licenses” and affiliated entities, pursuant to the FLSA. Like Uber, the black car drivers “possessed considerable autonomy in their day-to-day affairs.”[xii] They could determine when and how often to drive, without providing any notice to the Defendants, and they were at liberty to—and did—accept or decline jobs that were offered. In the end, the court found that the drivers were independent contractors, noting “[w]hile Defendants did exercise direct control over certain aspects of the CTG enterprise, they wielded virtually no influence over other essential components of the business, including when, where, in what capacity, and with what frequency Plaintiffs would drive.”[xiii]
What is the difference between the black car drivers in Saleem and the cases where Uber has been found to be an employer? The answer is very little. However, the law, like life, is nuanced. If the question is what constitutes control for purposes of making such a determination, one small factor could turn the tide either way. The real question is: have the economy and technology so changed the normal paradigms we all thought we understood regarding the nature of work and what it means to be “employed” that a new way of looking at such concepts is in order, one way or the other?
[i] The Rise of the Supertemp, Jody Greenstone Miller and Matt Miller Harvard Business Review, May 2012.
[ii] The Role of Independent Contractors in the U.S. Economy, Jeffrey A. Eisenach, American Enterprise Institute; NERA Economic Consulting: December 1, 2010.
[iii] Metcalf & Eddy v. Mitchell, 269 U.S. 514, 522 (1926).
[iv] Smith v. CPC Int’l, Inc., 104 F.Supp.2d 272, 275 (S.D.N.Y.2000) (“[T]he common law test of agency discussed in Darden is the same test applied by New York courts in addressing a variety of employer-employee relationships.”).
[v] Bynog v. Cipriani Group, Inc., 1 N.Y.3d 193, 198 (2003).
[vi] Brock v. Superior Care, 840 F.2d 1054, 1059 (2d Cir. 1988); see also Goldberg v. Whitaker House Coop., Inc., 366 U.S. 28, 33 (1961) (“‘[E]conomic reality’ rather than ‘technical concepts’ is to be the test of employment.” (quoting United States v. Silk, 331 U.S. 704, 713 (1947)).
[vii] United States v. Silk, 331 U.S. 704 (1947).
[viii] Why Wage and Hour Litigation is Skyrocketing, Washington Post, November 25, 2015.
[ix] Berwick v. Uber Technologies, no. 11-46739 EK, 2015 WL 4153765 (Cal. Dept. Lab. June 3, 2015).
[x] In the Matter of AK, JH and JS v. Uber, ALJ case No. 016-23858, New York State Unemployment Insurance Appeal Board (June 9, 2017).
[xi] McGillis v. Department of Economic Opportunity, 210 So.3d 220 (FDCA 3d Dist. 2017).
[xii] Saleem v. Corporate Transportation Group, Ltd., 854 F.3d 131 (2d Cir. 2017).
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9478618502616882,
"language": "en",
"url": "https://www.sec-landmgt.com/flood-facts-and-figures.html",
"token_count": 683,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:4567393b-4ad4-4ef2-aa81-464cdb4e22cd>"
}
|
FACT SHEET – FACTS & FIGURES
Unfortunately, many people do not know the basics about flooding or flood insurance. It is important that consumers have the facts about their flood risk and an understanding of flood insurance so that they can make informed decisions. The following are important facts and figures that provide a good picture of the risk of flooding, its impact and options for protection.
- Floods are the number one natural disaster in the United States.
- Just an inch of water can cause costly damage to property.
- Everyone is at risk — due to weather systems, land development runoff or regional events.
- Most homeowners insurance doesn’t cover flood damage.
- More than 50 percent of properties in high-risk areas remain unprotected by flood insurance; all properties in high-risk areas need to be protected with flood insurance.
- Twenty to 25 percent of all flood insurance claims are filed in low- to moderate-risk areas.
- New construction can increase flood risk, especially if it changes natural runoff paths.
- More than 5 million Americans are protected with flood insurance, but millions more are unaware of their personal risk for property damage — or options for protection.
- There is a 26 percent chance of flooding during a 30-year mortgage, compared to a 9 percent chance of fire for buildings in high-risk flood areas (see the calculation sketch after this list).
- In the South and West, approximately 60 percent of homeowners in high-risk areas, or Special Flood Hazard Areas (SFHAs), are covered by flood insurance. However, outside of the high-risk areas, 1 percent of homeowners in non-SFHAs have purchased flood insurance (Source: RAND Corporation).
- In Northeast and Midwest SFHAs, the flood insurance coverage is significantly lower than in other areas of the United States. More than 70 percent of Northeast residents and nearly 80 percent of Midwesterners lack financial protection in case of a flood (Source: RAND Corporation).
- Property owners, renters and businesses can purchase flood insurance if their community is among the more than 20,300 communities that participate in the National Flood Insurance Program.
- It typically takes 30 days after the purchase of flood insurance for the policy to take effect.
- The average premium for a yearly flood insurance policy is approximately $500.
- People in low- to moderate-risk areas may be eligible for the Preferred Risk Policy with flood insurance premiums starting as low as $112 a year.
- Consumers can visit FloodSmart.gov or call 1-800-427-2419 to learn how to prepare for floods, how to purchase a flood insurance policy and what the benefits are of protecting their homes and property against flooding.
- Flood losses in the United States averaged $2.4 billion per year for the last decade.
- The National Flood Insurance Program (NFIP) paid nearly $16 billion in flood insurance claims to policyholders during the 2005 hurricane season (as of August 31, 2006).
- Federal disaster assistance is usually a loan that must be paid back with interest – and is only available when a disaster has been federally declared.
- In the last 50 years, nearly 1,000 flood events have been designated as federally declared disasters.
- Nearly 75 percent of all federally declared disasters over the past five years involved flooding.
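The 26 percent figure cited above follows from the standard definition of a high-risk zone as one with a 1 percent annual chance of flooding, compounded over a 30-year mortgage and assuming independent years. The fire probability below is an assumed value chosen only to reproduce the quoted 9 percent.

```python
annual_flood_chance = 0.01   # standard 1%-annual-chance (100-year) flood zone
annual_fire_chance = 0.003   # assumed; chosen to reproduce the ~9% figure
years = 30

# Probability of at least one event = 1 - probability of zero events.
p_flood = 1 - (1 - annual_flood_chance) ** years
p_fire = 1 - (1 - annual_fire_chance) ** years
print(f"Chance of at least one flood in {years} years: {p_flood:.0%}")  # ~26%
print(f"Chance of at least one fire in {years} years: {p_fire:.0%}")    # ~9%
```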
Information provided by
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9449036121368408,
"language": "en",
"url": "https://www.swissre.com/risk-knowledge/risk-perspectives-blog/averting-collision-course-with-climate-change.html",
"token_count": 1522,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.130859375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:0d36a2c7-309a-4cdd-ab11-7b5373b4364d>"
}
|
Averting a collision course with climate change
Don't let the jargon fool you. 'Secondary peril' may sound like something trivial, but it's anything but. In our industry, the term usually refers to natural hazards that are moderately severe and occur fairly regularly: heatwaves, landslides, torrential rainfall or localised flooding. They're often a side effect of larger 'primary' natural catastrophes like hurricanes and earthquakes, the type of events that capture most of the headlines.
Yet this distinction is becoming increasingly obsolete. At worst it’s dangerously misleading. Consider last year's wildfires in California, the floods in India or the droughts in Europe – by definition secondary catastrophes, extreme weather events like these are happening more frequently. And they are increasingly responsible for most of the damage. As the Swiss Re Institute points out in its newest sigma report, secondary perils caused over 60% of all insured natural disaster losses in 2018. Even when mega events like Hurricanes Harvey, Irma and Maria broke new records the previous year, over half of all insured losses were actually driven by secondary perils.
So what's behind this trend? Much of it is the result of two clashing phenomena: the local occurrence of more extreme weather due to climate change and the relentless sprawl of urban centres in precisely those areas most affected, such as coastal regions or the urban-wildland interface. The collision of climate change and urban growth is adding a new twist to an old story about the risk of weather-related disasters and underinsurance.
Building climate resilience
The costs of natural catastrophes have been rising for years. And most are not covered by insurance. As a result, millions of households and businesses face a large and widening protection gap. Globally, this gap amounted to an annual average of USD 129 billion over the last ten years. That means about two-thirds of yearly catastrophe losses were not insured between 2009 and 2018. The main factors behind the widening protection gap are population growth and urbanisation in disaster-prone regions while the provision and purchase of adequate insurance solutions is lagging behind. They expose many more people, businesses and assets to the risk of catastrophic losses.
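As rough back-of-the-envelope arithmetic (using only the two numbers quoted above, not any further Swiss Re figures), those figures imply the totals below.

```python
uninsured = 129.0        # USD billions per year, annual average from the text
uninsured_share = 2 / 3  # "about two-thirds" of losses were uninsured

total = uninsured / uninsured_share
insured = total - uninsured
print(f"Implied total annual catastrophe losses: USD {total:.1f} billion")   # ~193.5
print(f"Implied insured annual losses: USD {insured:.1f} billion")           # ~64.5
```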
What's new is that climate change is fuelling the risk that communities face from extreme weather and related secondary perils. Rising temperatures and heavier precipitation are likely to increase the damage caused by wildfires, drought, heatwaves, torrential rain and flooding in many locations around the world. If unmitigated, some of these risks may become uninsurable in the future. This would widen the protection gap even further and severely inhibit our ability to help people get back on their feet after a disaster.
In the face of all this, we’ve reached a defining moment as an industry. More than ever, our long-standing efforts to narrow the gap in insurance coverage for natural catastrophes must converge with our broader actions on climate change and our support for more sustainable business models. This also applies to our own underwriting and asset management practices.
Clearly, a challenge of such magnitude as climate change requires strong collaboration between insurers, their reinsurance partners, as well as clients and partners from industry and the public sector. It's about working together to make our world more resilient – whether it's protecting households, businesses, critical infrastructure or supply chains.
Underwriting risks sustainably
In this respect, tackling the protection gap provides us with an important opportunity. Until recently, historic loss data may have been enough to map, price and underwrite risks. But in a world of expanding urbanisation and a changing climate, this approach is unlikely to be effective in the future. As the rising costs of secondary perils show, underwriting catastrophe business profitably means not just looking at peak risks associated with hurricanes and earthquakes. It also means considering forward-looking trends linked to rising temperatures and heavier precipitation, which are strongly magnified by the continued expansion of cities in regions such as the wildland-urban interface, former floodplains and coastal stretches.
For us in the insurance industry, this is a wake-up call to develop more robust and effective modelling tools that capture climate patterns and environmental changes in real-time rather than in hindsight. Making use of the latest technology available, we can now develop regionalised models that help to assess the local risk posed by weather-related secondary perils. Such insights should give us the confidence to offer a greater product range and make targeted distribution for catastrophe covers a viable option.
For example, when it comes to protecting households against increased levels of flooding, we can now use satellite imagery and cutting-edge algorithms to be more accurate in our flood modelling than ever before. With this approach, we've helped a local insurer in Florida to offer flood insurance protection to people who had never previously been covered. Likewise we work with governments and city planners to make sure everyone has adequate levels of flood insurance, and that flood defences are cost-effective and sustainable.
Together, our industry can also build resilience through our investment decisions, particularly by funding more sustainable infrastructure projects – whether that's transport networks, green buildings, smart grids or offshore windfarms. According to the Swiss Re Institute, the global re/insurance industry has total assets under management of about USD 30 trillion – that's roughly three times the size of China's economy. Even a small part of this sum could unlock a significant amount of capital for infrastructure projects that both protect against the worst climate impacts and support the transition to a low carbon economy.
To speed up such investments, however, we need to lower the barriers for private sector funding. At Swiss Re, we have been investing in infrastructure debt since the early days of our company's founding. But infrastructure projects remain notoriously difficult to access. A transparent and standardised infrastructure asset class on a global level would help unlock the global USD 80 trillion institutional investor capital that is available. This will only happen with more collaboration and open sharing of data about the performance of infrastructure assets.
As climate change is fast rising to the top of our industry's agenda, related sustainability considerations must become a number one priority too. For us at Swiss Re and for many of our clients, this has meant stepping up efforts to advance the transition to a low-carbon economy.
On the investment side of our business, we therefore consciously channel part of our fixed income portfolio into green bonds and allocate a portion of our infrastructure investments to renewable energy operations. In our underwriting, sustainability means we will no longer support projects that harm our planet. For example, we adopted a thermal coal policy last year which commits Swiss Re to no longer underwriting any business with more than 30% thermal coal exposure. Instead, we're looking for new growth opportunities elsewhere, such as supporting the renewable energy industry or partnering with clients and governments to develop scalable solutions to mitigate and adapt to climate change.
Despite these steps, our climate will continue to change and further exacerbate the impact of extreme weather events on local communities. More frequent occurrence of flooding, drought, wildfire and other weather-related catastrophes will remain a reality in many places around the world. This is why strengthening industrywide collaboration and partnerships with the public sector is critical to foster climate-smart innovations and continue to offer protection against the risk of natural catastrophes.
Together, we have the knowledge, technology and capabilities to make the world more resilient to the effects of climate change. Let's step up our collective effort to shift mindsets, business practices and actions to make it happen!
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.938708484172821,
"language": "en",
"url": "https://www.wallstreetmojo.com/fund-management/",
"token_count": 2320,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0206298828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:257852cb-efc8-491c-8291-2e248b97da25>"
}
|
What is Fund Management?
Fund management is the process in which a company takes the financial assets of a person, company, or another fund management company (generally these will be high-net-worth individuals) and uses the funds to invest in companies that use them as an operational investment, financial investment, or any other investment in order to grow the fund. The returns are then returned to the actual investor, with a small portion held back as profit for the fund.
Fund management is associated with managing the cash flows of a financial institution. The responsibility of the fund manager is to assess the maturity schedules of the deposits received and loans given to maintain the asset-liability framework. Since the flow of money is continuous and dynamic, preventing asset-liability mismatches is critically important. The financial health of the entire banking industry depends on it, which in turn affects the overall economy of the country.
For example, Fidelity manages $755 billion in U.S. equity assets under management.
Fund Management also broadly covers any system which maintains the value of an entity. It applies to both tangible and intangible assets and is also referred to as Investment management.
Types of Fund Management
Fund management can be classified by investment type, client type, or the method used for management. The various types of investments managed by fund management professionals include:
When classifying management of a fund by client, fund managers are generally personal fund managers, business fund managers, or corporate fund managers. A personal fund manager typically deals with a small quantum of investment funds, and an individual manager can handle multiple lone funds.
Offering Investment management services includes extensive knowledge of:
- Financial Statement Analysis
- Creation and Maintenance of Portfolio
- Asset Allocation and Continuous Management
Who is a Fund Manager?
A fund manager is essential to the management of the entire fund under all circumstances. The manager is fully responsible for implementing the fund's chosen strategy and for its portfolio trading activities. Finding the right fund management professional usually requires trial and error, combined with specific aid from investors in a similar position.
Generally, the investor will permit a fund manager to handle a limited fund for a specified period to assess and measure the success in proportion to the growth of the investment property.
Fund management makes its decisions using 'portfolio theory' as it applies to various investment situations. A fund manager may apply several such theories when managing a fund, especially if the fund includes multiple types of investments. The managers are paid a fee for their work, calculated as a percentage of the overall 'assets under management'.
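As a rough illustration of how an AUM-based fee works, here is a minimal sketch; the 1% rate and the £50m figure are invented for the example and are not industry benchmarks.

```python
# A minimal sketch of a management fee charged as a percentage of
# assets under management (AUM). The 1% rate and the AUM figure are
# illustrative assumptions only.

def annual_management_fee(aum: float, fee_rate: float = 0.01) -> float:
    """Fee earned in one year on a given level of AUM."""
    return aum * fee_rate

print(annual_management_fee(50_000_000))  # 500000.0, i.e. £500,000 on £50m
```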
The qualifications required for a position in a fund management institution consist of high-level educational and professional credentials, such as the Chartered Financial Analyst (CFA) designation, accompanied by appropriate practical experience in investment management, generally in portfolio decision making. Investors are on the look-out for consistent, long-term fund performance, and for managers whose tenure with the fund matches its performance period.
Responsibilities of the Fund Manager
The fund manager is the heart of the entire investment management industry responsible for investing and divesting of the investments of the client. The responsibilities of the fund manager are as below:
#1 – Asset Allocation
The exact classes of asset allocation can be debated, but the standard divisions are bonds, stocks, real estate, and commodities. Each asset class exhibits its own market dynamics and interaction effects, so how money is allocated amongst the various asset classes has a significant impact on the targeted performance of the fund. This aspect is critical because the fund's endurance in challenging economic conditions will determine its efficiency and how much return it can garner over time.
Any successful investment relies on the asset allocations and individual holdings for outperforming specific benchmarks such as bond and stock indices.
#2 – Long-term Returns
It is essential to study the evidence of long-term returns across different assets and holding periods (returns accruing on average over multiple lengths of investment). For example, over very long investment horizons (more than ten years), equities have generated higher returns than bonds, and bonds have generated higher returns than cash. This is because equities are riskier and more volatile than bonds, which in turn are riskier than cash.
#3 – Diversification
Going hand in hand with asset allocation, the fund manager must consider the degree of diversification appropriate to the client's risk appetite. Accordingly, a list of planned holdings must be constructed, deciding what percentage of the fund should be invested in each particular stock or bond. Adequate diversification requires managing the correlation between asset and liability returns, issues internal to the portfolio, and the cross-correlation between returns.
What are Fund Management Styles?
There are various fund management styles and approaches:
#1 – Growth Style
The managers using this style place a lot of emphasis on current and future corporate earnings. They are even prepared to pay a premium for securities with strong growth potential. Growth stocks are generally treated as the portfolio's cash cows and are expected to be sold at ever higher prices.
Growth managers select companies with a strong competitive edge in their respective sectors. A high level of retained earnings is expected of such stocks, as it strengthens the company's balance sheet and attracts investors. This can be coupled with limited dividend distribution and low debt on the books, making such stocks a definite pick for these managers. Stocks in this style tend to have a relatively high turnover rate, since they are frequently traded in large quantities. The returns on the portfolio are made up of capital gains resulting from stock trades.
The style produces stunning results when markets are bullish, but portfolio managers need to show talent and flair to achieve their investment objectives during downward spirals.
#2 – Growth at Reasonable Price
The Growth at Reasonable Price style uses a blend of growth and value investing to construct the portfolio. The portfolio will usually include a restricted number of securities that show consistent performance. Its sector weightings may differ slightly from those of the benchmark index in order to take advantage of growth prospects in selected sectors whose potential can be maximized under specific conditions.
#3 – Value Style
Managers following such an approach thrive on bargain situations and offers. They hunt for securities that are undervalued relative to their expected returns. Securities may be undervalued simply because they are out of favour with investors, for any number of reasons.
The managers generally purchase equities at low prices and tend to hold them until they reach their peak, depending on the expected time frame, so the portfolio mix also stays relatively stable. The value style performs at its best in bearish conditions, although managers do take profits in bullish markets. The objective is to extract the maximum benefit before a stock reaches its peak.
#4 – Fundamental Style
This is the most basic and one of the most defensive styles, aiming to match the returns of the benchmark index by replicating its sector breakdown and capitalization. The managers strive to add value to the existing portfolio. Such styles are generally adopted by mutual funds to maintain a cautious approach, since many retail investors with limited funds expect a basic return on their overall investment.
Portfolios managed according to this style are highly diversified and contain a large number of securities. Capital gains are made by underweighting or overweighting specific securities or sectors, with the differences being regularly monitored.
#5 – Quantitative Style
The managers using such a style rely on computer-based models that track price and profitability trends to identify securities offering higher-than-market returns. Only the necessary data and objective criteria for each security are taken into consideration; no qualitative analysis of the issuing companies or their sectors is carried out.
#6 – Risk Factor Control
This style is generally adopted for managing fixed-income securities which take into account all elements of risk such as:
- Duration of the portfolio compared with the benchmark index
- The overall interest rate structure
- Breakdown of the holdings by the category of the issuer, and so on
#7 – Bottoms-Up Style
The selection of securities is based on the analysis of individual stocks, with less emphasis on economic and market cycles. The investor concentrates their efforts on a specific company rather than the overall industry or the economy, on the premise that the company can exceed expectations even when its sector or the economy is not doing well.
The managers usually employ long-term, buy-and-hold strategies. They develop a complete understanding of an individual stock and of the long-term potential of both the stock and the company. Investors may also take advantage of short-term volatility in the market to maximize profits by quickly entering and exiting positions.
#8 – Top-Down Investing
This approach to investment involves considering the overall condition of the economy and then breaking its various components down into finer detail. Subsequently, analysts examine different industrial sectors to select the stocks expected to outperform the market.
Investors will look at the macroeconomic variables such as:
- GDP (Gross Domestic Product)
- Trade Balances
- Current Account Deficit
- Inflation and Interest rate
Based on such variables, managers reallocate monetary assets to earn capital gains, rather than performing an extensive analysis of a single company or sector. For instance, if economic growth in South East Asia is outpacing domestic growth in the EU (European Union), investors may shift assets internationally by purchasing exchange-traded funds that track the targeted countries in Asia.
Top Fund Management Companies
Here is a list of top fund management companies, ranked by assets under management (AUM). This data has been sourced from Caproasia.com.
| Rank | Company | Country of Origin | Founded | AUM (US$ Billion) |
| --- | --- | --- | --- | --- |
| 1 | BlackRock, Inc | United States | 1988 | 4,737 |
| 3 | UBS Global Asset Management | Switzerland | 2002 | 2,713 |
| 4 | State Street Global Advisors | United States | 1978 | 2,296 |
| 5 | Fidelity Investments | United States | 1946 | 2,110 |
| 6 | Allianz Asset Management | Germany | 1890 | 1,984 |
| 7 | J.P. Morgan Asset Management | United States | 1871 | 1,676 |
| 8 | BNY Mellon | United States | 1784 | 1,639 |
| 9 | PIMCO (Pacific Investment Management Company) | United States | 1971 | 1,500 |
| 10 | Capital Group | United States | 1931 | 1,390 |
This has been a guide to fund management: who a fund manager is, what their responsibilities are, and the various fund management styles.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9375463724136353,
"language": "en",
"url": "https://buildingefficiencyinitiative.org/articles/big-apple-takes-bold-steps-toward-energy-efficiency-buildings",
"token_count": 2882,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1416015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c2ceb17c-e92c-41b2-8e7e-6befab08812c>"
}
|
The Big Apple Takes Bold Steps Toward Energy Efficiency in Buildings
New York City’s Greener, Greater Buildings Plan is designed to “promote cost-effective steps to create significant economic and environmental impacts”1 through efficiency improvements in buildings. By both guiding the market through increased information and enforcing action through code compliance and retrocommissioning, the legislation sets the tone for efficiency initiatives in municipalities around the world. It also has the potential to transform the building efficiency market by expanding and enforcing the rules of the game.
Questions about the program remain. For example:
Will the program give advantages to larger property owners versus smaller companies?
How will energy-efficiency improvements square with efforts to preserve the character of historic landmark buildings?
How will the legislation affect relationships between commercial building owners and tenants?
Those questions aside, the Greener, Greater Buildings Plan has the potential to streamline resource use in a city where three-fourths of total energy demand and carbon emissions come from buildings. It is set to transform and accelerate the pace of energy-efficiency investment in one of the world's largest cities.
The plan has four legislative components that affect commercial buildings in the city, and it is supported by two complementary city initiatives aimed at workforce development and energy efficiency financing. Here is a review and analysis of the plan.
The city adopted the Greater, Greener Buildings Plan (GGBP) in December 2009 as part of PlaNYC 2030, introduced by Mayor Michael Bloomberg on Earth Day 2007. PlaNYC targets a 30 percent reduction in the city’s annual greenhouse gas emissions below the 2005 level. The GGBP is expected to account for about five percent of those reductions, in the process saving New Yorkers $700 million in annual energy costs and creating some 17,800 jobs.2
To demonstrate its commitment to the plan, the city led by example, committing $80 million annually – nearly 10 percent of its annual energy budget – to reducing energy and emissions from municipal buildings. In May 2010, the city also completed the first energy benchmarking of the 2,790 buildings in its portfolio, using the U.S. EPA’s ENERGY STAR® Portfolio Manager tool to benchmark buildings larger than 10,000 square feet. The benchmarking data will be available online by Sept. 1, 2011.
The GGBP’s four legislative components affect all New York City commercial buildings of 50,000 square feet or greater, as well as lots with two or more buildings over 100,000 square feet total. Here is an overview:
New York City Energy Conservation Code: Local Law 85
New York is one of 42 states that use the International Energy Conservation Code as the state energy code. However, New York is the only such state that includes a loophole by which renovations affecting less than 50 percent of a building system are exempt from compliance. Those buildings are essentially grandfathered into the energy code that existed when they were built. The NYC Energy Conservation Code went into effect on July 1, 2010, and affects all buildings. Now, all renovations must meet the Energy Conservation Code regardless of the extent of total renovation. Renovations that include changes to a mechanical system requiring a permit, or that add conditioned space, must also comply.
Energy and Water Benchmarking: Local Law 84
This initiative requires buildings in the GGBP to assess their energy and water use annually, using the ENERGY STAR Portfolio Manager tool. (Certain building types not categorized in the Portfolio Manager tool, such as data centers and trading floors, are exempt.) In January, building owners must ask tenants to provide energy data using a city Office of Long-Term Planning and Sustainability (OLTPS) form, which must be returned to the owner by Feb. 15. The owners must then submit their benchmarking results to the city Department of Finance by May 1 each year, using an OLTPS form. Metrics include:
An energy utilization index
Water use per gross square foot
A rating comparing energy and water use to similar buildings
A comparison of reports for the building over multiple years.3
The first set of annual data is due from building owners on May 1, 2011, and will be posted on the city’s tax assessment website by Sept. 1, 2012.
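To make these metrics concrete, here is a simplified sketch of the first two measures. The formulas are generic illustrations rather than the exact ENERGY STAR Portfolio Manager methodology, and the building figures are invented.

```python
# Simplified versions of two Local Law 84 benchmarking metrics.
# These are generic textbook formulas, not the official methodology.

def energy_utilization_index(annual_energy_kbtu: float, floor_area_sqft: float) -> float:
    """Energy use per gross square foot per year (kBtu/sq ft)."""
    return annual_energy_kbtu / floor_area_sqft

def water_use_per_sqft(annual_water_gallons: float, floor_area_sqft: float) -> float:
    """Water use per gross square foot per year (gal/sq ft)."""
    return annual_water_gallons / floor_area_sqft

# A hypothetical 100,000 sq ft building using 8,000,000 kBtu and
# 1,500,000 gallons of water per year:
print(energy_utilization_index(8_000_000, 100_000))  # 80.0 kBtu per sq ft
print(water_use_per_sqft(1_500_000, 100_000))        # 15.0 gallons per sq ft
```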
Requirements for Lighting and Submetering: Local Law 88
Currently, lighting accounts for about 18 percent of New York City’s energy use and greenhouse gas emissions.4 This creates a great opportunity to target lighting upgrades in the city’s one million existing buildings – 85 percent of which are expected to remain functioning in 2030.5 Local Law 88 mandates that buildings over 50,000 square feet be brought up to lighting standards in the New York City Energy Conservation Code by Jan. 1, 2025.
The lighting upgrade must comply with the code as it exists at the time of the upgrade only – the law does not require upgrades to comply with the code as it will exist in 2025. The law also requires major tenant spaces – larger than 10,000 square feet – to be electrically submetered by Jan. 1, 2025. At that point, building owners must provide tenants with monthly electrical consumption data and charges.
Energy Use Audits and Retrocommissioning: Local Law 87
Building owners must undergo both an energy use audit and a retrocommissioning of their central systems, including the building envelope, HVAC systems, conveying systems, domestic hot water systems, and electrical and lighting systems. Reports must be submitted to the city Department of Finance every ten years, beginning in 2013. The calendar year in which a building owner must file with the city corresponds to the facility’s tax block number.
The energy audit must be an ASHRAE Level II Audit that identifies:
Potential energy-efficiency measures for the building
The cost, savings, and simple payback for those options
A breakdown of energy use by unique systems in the building
Energy impacts from tenant behavior.
The retrocommissioning must ensure that systems and operations are running optimally. Compliance can be demonstrated through efficient calibration and sequencing, cleaning and repair, training, and documentation of maintenance records.
Buildings that are ENERGY STAR rated or LEED-EB certified can waive the audit, and buildings less than 10 years old can apply for a 10-year deferral for reporting. Owners of buildings under financial hardship may also apply for an extension.
The GGBP does not require building owners to make capital improvements to their facilities as a result of their audit and/or retrocommissioning. However, city buildings must undertake improvements if the measure is projected to provide a payback shorter than seven years.
The PlaNYC program offers two complementary initiatives to support the GGBP through workforce development and energy efficiency financing. Here is an overview:
In collaboration with Green Jobs Green New York legislation, the New York State Energy Research and Development Authority (NYSERDA) has created a Working Group for Green Building Workforce Development. This group began meeting in early 2010 to develop an $8 million strategy for providing the workforce needed to fill about 17,800 green construction and development jobs. Over the next five years, the group will oversee the training of a workforce three times the size of the current workforce. The strategy and budget are divided into categories including:
Equipment and Training Infrastructure
Certifications and Company Accreditation
Apprenticeships and Internships
The program is currently under development.
In September 2009, $16 million of funding from the American Recovery and Reinvestment Act was approved as Energy Efficiency Block Grants to aid with financing the GGBP. The funding will be allocated to building owners in two categories:
Project capital for completing energy audits as part of Local Law 87
Project capital for “shovel-ready” retrofit projects.
Loans will meet up to 100% of those costs, and the repayment will come from the energy savings realized from the projects. A revolving fund, the Greener, Greater Building Fund, will be used to ensure that ongoing financing is available as the GGBP continues. An additional component of the funding is tracking of projects to provide data to the private sector about the viability of energy efficiency risk and returns. The city expects funding requests to begin to be filled in late 2010.
In Step With Other Cities
The GGBP builds off other government energy-efficiency initiatives around the U.S. and the globe. For example, the Berkeley, Calif., Commercial Energy Conservation Ordinance (CECO), adopted in 1994, requires commercial building owners to meet current CECO standards:
Upon building sale
Upon renovations valued greater than $50,000
Upon additions greater than 10% of building area.
San Francisco’s Green Building Ordinance requires fundamental commissioning for new large commercial buildings and for existing buildings undergoing major renovations.6 As seen in Figure 1 below, other entities including the District of Columbia and the State of California have initiated benchmarking policies for commercial buildings. However, the GGBP is the first program in the United States to publicly list energy and water benchmarking and to tie the data into municipal tax information. In the European Union, Energy Performance Certificates (EPCs), which rate buildings based on resource use, have been under legislation since 2002, and the legislation will become stricter in the next five years. By increasing the availability of energy information, municipalities are able to make educated decisions about how to do long-term sustainability and infrastructure planning.
Figure 1: Benchmarking Programs in the U.S.
Questions Awaiting Answers
The GGBP represents a major step forward in building energy efficiency. However, questions about the program remain.
For example, the timing and depth of New York City Energy Conservation Code upgrades will have a tremendous effect on the market for goods and services in New York. The city will need to effectively decide when limits need to be pushed – that is, when it is appropriate to raise the standard for energy conservation in order to meet the PlaNYC goals. The city will also need to decide how far the “stretch” will be in changing the targets of those goals in the face of changing technologies. Because of these two factors, some well-intentioned building owners may alter their buildings early on to capture energy savings, but given that renovations may happen only every 20 years or so, there will be missed opportunities to capture additional energy savings through progressively stricter codes. It will require a delicate balance on the city’s part to find the appropriate timing and level of energy conservation in a cost-effective manner for building owners.
Another question is whether renovations may favor building owners with large facilities or with multiple facilities, as they can leverage economies of scale in the processes and materials they use. They may also be able to use their size to influence future legislation or code updates in their favor. Building owners with fewer square feet under management may need to invest proportionally more to satisfy the GGBP requirements, thus putting a further strain on their budgets.
Also to be resolved is how to square the GGBP with historic renovations. According to the New York City Landmarks Preservation Commission, more than 27,000 buildings in the city have been granted landmark status. These buildings are exempt from complying with Local Law 85 to update to the NYC Energy Conservation Code. It is possible that conflict will continue between those who wish to update buildings to modern energy standards and those who want to preserve the historic character of landmark buildings – even though there are several examples of a successful relationship between these two camps, including Johnson Controls’ work at the iconic Empire State Building in Manhattan. It remains to be seen how the GGBP will work with historic preservationists in New York and how effective renovations can be at reducing energy consumption while satisfying the Landmarks Preservation Commission goals.
It will also be interesting to watch how the tenant-building owner relationship evolves with benchmarking and submetering. For example, the GGBP sets out a timeline for building owners to collect energy data from tenants for use in benchmarking. However, it will be up to the owners to decide how to incentivize their tenants to provide energy data, or how to penalize them for non-compliance. In many cases, this requirement may change leases, and it will be up to the legal system to decide how to best incorporate information transfer between the owner and tenant. The verification process for the tenant data is not yet outlined, and therefore any decision-making by the city or by the building owner based on that data may be suspect on some level.
Finally, the question remains whether knowledge of energy information through benchmarking and energy audits will actually lead building owners to take steps toward reducing consumption. The GGBP assumes that because building owners are competitive with each other, benchmarking will motivate them to use less energy and improve their financial positions. However, beyond meeting the NYC Energy Conservation Code requirements in force at the time of renovations, and completing lighting upgrades by 2025, building owners are not required to make any capital improvements based on the energy information generated. While the required actions will surely lead to some incremental energy-efficiency gains, meeting the ambitious goals of PlaNYC may require more force.
A Framework for Change
Through the Greener, Greater Buildings Plan, New York City has begun to outline the steps required to meet its PlaNYC energy goals. While the program is new and details will need to be ironed out as it unfolds, the GGBP creates a framework for substantial improvements in energy efficiency in a large portfolio of commercial buildings.
2. www.sustainablebusiness.com, December 10, 2009.
3. G. Works: The New York City Greener Greater Buildings Plan and How it Affects You. http://www.g-works-group.com/files/Greener-Greater-Buildings-Plan-Full-Summary.pdf, December 10, 2009.
5. New York City Global Partners. Best Practice: NYC Greener, Greater Buildings Plan.
6. San Francisco Planning + Urban Research Association: Ideas and Action for a Better City.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.965120792388916,
"language": "en",
"url": "https://business-papers.com/analysis-of-the-global-tea-market/",
"token_count": 1364,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:18319c56-2ef9-4658-9820-d8acb4f73516>"
}
|
“The process of absorption costing (also known as overhead recovery) is a highly subjective process which can cause organisations to suffer major problems unless the concept is fully understood by management.” From the above statement it should be noted that it is management’s ability to understand and comprehend the workings of absorption costing that is at issue. This paper will attempt to highlight the process itself, as well as point out the areas that could prove problematic in the workings of an organisation.
As well as looking at absorption costing in closer detail, this paper will also discuss variable costing as a point of comparison for absorption costing. This will be followed by a look at one application of absorption costing, process costing, and at how understanding it, along with the whole absorption costing process, will enable management to have a clearer picture of the costing processes in their organisations. Numerical examples will be used to further explain the above processes. Management accounting systems should provide information for various functions.
These functions include internal reporting to managers for cost planning, control and performance evaluation; reporting for decisions on how resources should be allocated and priced in relation to product profitability; and strategic and tactical decision making. The information, especially that produced by absorption costing, will also be used for external reports such as financial statements. With this in mind, the role of the management accountant involves allocating costs between cost of goods sold and closing stock for both internal and external profit reporting.
The management accountant must also provide relevant information to help managers make better decisions, and to support planning, control and performance measurement. Planning turns the company's goals and objectives into actions and resources, looking at both the long- and short-term activities of the company; control is exercised by setting targets and standards, comparing performance and controlling costs, with the aim of improving the efficiency of the organization. The cost data gathered will aid in allocating costs between cost of goods sold and the stock held at the end of the period.
Decision making through the use of relevant data will likewise aid planning, control and performance measurement. Before looking at the absorption costing system, a brief description of costs is needed. This is helpful because the behaviour of these costs affects costing systems in different ways, and that behaviour needs to be well understood by management if they are to produce and understand the full mechanics of the absorption process. In business, overheads are made up of indirect materials, indirect labour and indirect expenses.
Overheads must be shared between all the cost units, as they do not relate to any particular unit of output, and are usually classified by function. The next step after understanding the nature of overheads is to allocate them. Allocation of overheads is ‘the charging to a cost centre of those overheads that have been directly incurred by that cost centre’. Apportionment is the next process, in which cost centres are charged with a proportion of the overheads. Once the above processes have been carried out, the next step is to charge the overheads to the cost units, and this is where absorption costing starts.
What is absorption costing? According to Drury (2000), it is “a system in which all fixed manufacturing overheads are allocated to products”. From this simplified definition, management needs to realise that all manufacturing costs, fixed or variable, should be treated as product costs. Full absorption costing is a traditional method where all manufacturing costs are capitalized in the inventory, i.e., charged to the inventory and treated as assets. This means that these costs do not become expenses until the inventory is sold; in this way, matching is more closely approximated.
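To make the distinction concrete, here is a small worked sketch comparing the product cost per unit under absorption and variable costing; all of the figures are invented for illustration.

```python
# Invented figures illustrating the absorption vs variable costing split.
# Under absorption costing, fixed manufacturing overhead is spread over
# the units produced; under variable costing it is a period expense.

direct_materials = 4.00     # £ per unit
direct_labour = 3.00        # £ per unit
variable_overhead = 1.00    # £ per unit
fixed_overhead = 20_000     # £ per period
units_produced = 10_000

variable_cost_per_unit = direct_materials + direct_labour + variable_overhead
absorption_cost_per_unit = variable_cost_per_unit + fixed_overhead / units_produced

print(variable_cost_per_unit)    # 8.0  -> product cost under variable costing
print(absorption_cost_per_unit)  # 10.0 -> product cost under absorption costing
```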
All selling and administrative costs, however, are charged to expense. Absorption costing is required for external reporting, and the absorption method is also used for internal reporting. The full cost of something can be defined as “the total amount of resources involved with pursuing a particular objective” (Atrill and McLaney, 1994). That objective could be repairing a car, producing a can of baked beans, or building a block of high-rise flats. In short, the purpose of absorption costing is to answer the question ‘how much did it, or will it, cost?’ The reasons for wanting to know the answer to that question will become evident as the mechanics of absorption costing are made clearer. One reason would be for the manufacturer or producer to determine what price to charge customers to ensure that all costs are covered and that the company makes the required and acceptable level of profit. It is important to know that there is no distinction between fixed and variable costs when dealing with absorption costing: all relevant costs used to achieve the particular objective, whether fixed or variable, are taken into account.
Using the information obtained: the information can be used in various areas of accounting. For financial accounting purposes, identifying the cost of production to find the cost of sales figure helps in working out the gross profit figure once sales have been taken into account. Overheads are included in the cost of production as standard. The information is also useful for measuring the profitability of departments or divisions: when comparisons are made between the unit costs of departmental outputs and external purchase prices, absorption costing information is valuable.
Absorption costing is also useful for pricing purposes, because of the need to use all relevant costs in arriving at the cost per unit and then setting a profit margin based on those findings. Some businesses base their selling prices on the costs incurred in producing each unit and are therefore called price makers, because their prices are based on the costs of production. Most, though, have their prices set by the market, depending on the demand for the products or services they supply, and are called price takers.
Absorption costing information does not necessarily have to be useless to price takers since knowledge of the full cost will enable them to make a judgment on whether or not to enter or to remain in a particular market given the price dictated by the market. Though the usefulness of absorption costing can be questioned it is probably fair to say that most businesses involved in manufacturing, or in the provision of services, hospitals and universities use absorption costing to determine costs of their output.
Overheads are treated in two ways to arrive at the final figure. First, the overheads are charged from the cost centre to the cost unit. Second, the cost of each cost unit forms part of the selling price, which means that when management consider what to charge the customer, all the costs have been taken into account and are passed on to the customer. The overhead absorption rate (OAR) can be calculated on three bases: units of output, direct labour hours and machine hours. These are illustrated below.
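A minimal sketch of the three bases, using invented budgeted figures:

```python
# Three common overhead absorption rate (OAR) bases. All budgeted
# figures below are invented for illustration.

budgeted_overheads = 50_000  # £ for the period

# 1. Units of output basis
budgeted_units = 10_000
oar_per_unit = budgeted_overheads / budgeted_units                  # £5.00 per unit

# 2. Direct labour hour basis
budgeted_labour_hours = 25_000
oar_per_labour_hour = budgeted_overheads / budgeted_labour_hours    # £2.00 per hour

# 3. Machine hour basis
budgeted_machine_hours = 12_500
oar_per_machine_hour = budgeted_overheads / budgeted_machine_hours  # £4.00 per hour

# A job that uses 3 direct labour hours absorbs 3 * £2.00 = £6.00 of overhead.
print(oar_per_unit, oar_per_labour_hour, oar_per_machine_hour)
```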
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9430387616157532,
"language": "en",
"url": "https://cattolicaglobalmarketsmagazine.com/2020/03/05/behavioral-finance-influences-investment-decisions/",
"token_count": 683,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.35546875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3ab9549c-14e2-4adb-b845-1332fba09e41>"
}
|
Behavioral finance, a sub-field of behavioral economics, proposes that psychological influences and biases affect the financial behaviors of investors and financial practitioners. Moreover, these influences and biases can help explain many types of market anomalies, particularly anomalies in the stock market such as severe rises or falls in stock prices.
It is an innovative approach complementary to the traditional, descriptive one. It does not assume that individuals are rational, but explains the mistakes we tend to make and how to correct them.
Within behavioural finance it is assumed that the information structure and characteristics of market participants systematically influence individuals’ investment decisions.
Behavioural ideas first appeared in classical economics: Adam Smith, in his book The Theory of Moral Sentiments, described the psychological workings of individual behaviour, and Jeremy Bentham wrote on the psychology of utility. After disappearing from economic thought for over half a century, behavioural economics was reborn in economics research around the 1960s, when, through the development of cognitive psychology, economic models of rational behaviour merged with cognitive models of the decision-making process. In the last fifteen years, three Nobel prizes have been awarded for work on behavioural finance: to the psychologist Daniel Kahneman and the economists Richard Thaler and Robert Shiller.
Most economic theories, for example the efficient market hypothesis (EMH), are based on the idea that individuals act rationally within markets. However, when anomalies such as speculative bubbles occur, investors' behaviour is due not only to information asymmetries or the failure of efficient market theory, as standard finance states, but also to irrational behaviour influenced by strong emotions. Behavioural finance offers a more realistic and humane interpretation of how financial markets work.
Behavioural errors can be cognitive or emotional. Cognitive errors involve our way of reasoning, emotional ones are dictated by emotions.
Loss aversion is one of the most common cognitive errors. It describes the individual's asymmetric treatment of losses versus gains: in other words, the fear of losing €1 is greater than the joy of gaining €1. This results in a heavier weighting of losses, which affects how investment risk is perceived. For this reason, researchers have developed different ways of considering and calculating risk, creating valuation models that take this asymmetry into account.
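This asymmetry was formalised in Kahneman and Tversky's prospect theory value function, sketched below. The parameter values (alpha and beta of 0.88, loss-aversion coefficient of 2.25) are those commonly cited from their 1992 paper; the code itself is only an illustration.

```python
# The prospect theory value function, which formalises loss aversion:
# losses are weighted roughly 2.25 times more heavily than gains.

def prospect_value(x: float, alpha: float = 0.88, beta: float = 0.88,
                   loss_aversion: float = 2.25) -> float:
    if x >= 0:
        return x ** alpha                  # diminishing sensitivity to gains
    return -loss_aversion * (-x) ** beta  # losses loom larger than gains

print(prospect_value(1.0))   #  1.0   -> the joy of gaining 1 euro
print(prospect_value(-1.0))  # -2.25  -> the pain of losing 1 euro
```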
Another cognitive error is called home bias. It is the investors' tendency to invest in domestic securities rather than foreign ones. This is because the human mind prefers solutions recognized as familiar and well known, which leads investors to disregard the diversification benefit generated by foreign securities.
Overconfidence is a cognitive error concerning one's abilities and the awareness of one's own limits; it is fully part of the behavioural traps. This condition can prove risky in an investment choice: forecasts are often wrong because they are based on few, superficial elements such as commonplaces, memories and external reference points.
Besides these cognitive errors, we make many emotional mistakes, such as being optimistic and euphoric when markets do well and panicking when they do badly. This leads us to do the opposite of what we should: in the first case prices rise and we should be more cautious, but emotion leads us to buy; in the second case prices fall and we should be evaluating opportunities to buy, but fear leads us to sell.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9759546518325806,
"language": "en",
"url": "https://me.popsugar.com/celebrity/How-Does-Royal-Family-Get-Money-43869862",
"token_count": 1097,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1904296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9f8bdbac-45e9-44df-a567-0e42a8fc4f04>"
}
|
It might seem like we know the British royal family based on how public their lives are, but when you really think about it, how much do you really know about the lifestyle of the most famous family in England? While we do know what Prince William and Kate Middleton do when it comes to their jobs — like being a part of the Royal Air Force, in William's case — not many people know where the family's money and wealth actually come from.
Spoiler alert: the bulk of the money that Queen Elizabeth II and her family have is inherited. What you might not know is how the family first received, and continue to receive, the bulk of that money. Essentially, there are three different ways in which the queen and her heirs make money every year and retain their wealthy status.
1. Private Income
For all intents and purposes, we're going to focus on how Queen Elizabeth II makes money, because technically it's the same formula for her descendants and her heirs. The queen has an undisclosed amount of earnings that come annually from "inherited private estates" including Balmoral Castle and other properties from her personal investment portfolio.
According to the royal family's website, this inheritance initially came from Her Majesty's father, King George VI, and also consists of a valuable artwork and stamp collection. Last year, her private wealth was estimated at about £340 million ($490 million), according to a Sunday Times report in 2016.
When it comes to this area of her wealth, the queen does pay taxes on any income she privately makes from her different investments, but what she pays has always remained secret. These properties — unless she chooses to sell them — will most likely go to her descendants someday.
2. The Privy Purse
The Privy Purse isn't just a fancy British term for a handbag or money holder, although if you really think about it, that's sort of what it is — a holder of the set money that the queen receives during her time as monarch. The second form of royal income is all about The Duchy of Lancaster, which provides the Queen with a set income — so no work-related expenses like touring the country and making public appearances have to be paid with this, although some are — called the Privy Purse.
Simply put, The Duchy of Lancaster is a "portfolio of land and other assets that have been in the royal family for hundreds of years." As of early 2017, that compilation of land and property came to about 18,433 hectares. The Privy Purse is all of the income generated from those properties (about £17.8 million or $21.7 million for 2015-2016) and it's been around since 1399.
"Its main purpose is to provide an independent source of income, and is used mainly to pay for official expenditure not met by the Sovereign Grant (primarily to meet expenses incurred by other members of the Royal Family)," the official Royal Family website explains.
Side note: Prince Charles, who is currently next in line for the throne, runs his own estate (that again has properties that have been passed down and cannot be sold from back in 1337) called The Duchy of Cornwall. Charles currently resides over this money (like the queen does for The Duchy of Lancaster), and it is responsible for covering all of the personal, and most of the official expenses, for his family line, which would include Prince William and Harry. When Charles ascends the throne, Prince William will become the heir to The Duchy of Cornwall and it will continue to go down the line to the male heirs.
3. Sovereign Grant
Last but not least is the Sovereign Grant, which is handed out by the Treasury and is funded by taxpayer dollars (or pounds in the UK). This is where the majority of the British Royal Family's income hails from and it's used to carry out the royal duties, cover royal travel, pay for the staff, and help maintain Buckingham Palace's upkeep.
In 1760, the Sovereign Grant was set up thanks to King George III, and it's basically an agreement that was made between Parliament and the Royal Family saying they would hand over all of the profits from the Crown Estate to the government in return for a percentage of the profits each year.
What is the Crown Estate, you ask? Well, according to BBC News, it's "an independent commercial property business and one of the largest property portfolios in the UK." It is made up of residential properties, businesses, shops, and more. Basically, it is everything that the Monarchy owns — for the duration of their reign — and the residents of the United Kingdom pay taxes on it, thanks to the government. How the Sovereign Grant works is this: The Crown (in this case the queen) owns the Crown Estate. That estate makes money, which in turn gets paid to HM Treasury, which calculates what 15 percent of the surplus income is. That amount is then paid to the queen in the form of a Sovereign Grant and used for royal expenses. It's a mouthful, but it actually makes sense.
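As a toy illustration of that flow (the £300m surplus below is hypothetical; only the 15 percent rate comes from the description above):

```python
# A toy sketch of the Sovereign Grant flow: the Crown Estate surplus
# goes to HM Treasury, which returns 15% of it as the grant. The
# surplus figure is hypothetical.

def sovereign_grant(crown_estate_surplus: float, grant_rate: float = 0.15):
    grant = crown_estate_surplus * grant_rate
    retained = crown_estate_surplus - grant
    return grant, retained

grant, retained = sovereign_grant(300_000_000)  # hypothetical £300m surplus
print(f"Sovereign Grant: £{grant:,.0f}")        # £45,000,000
print(f"Retained by the Treasury: £{retained:,.0f}")
```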
While living like a royal does sound great, knowing where the family's money comes from is much more complicated than we expected. But, at the end of the day, money is money, and the British royal family has a lot of it. Cheers!
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9537811279296875,
"language": "en",
"url": "https://www.diyinvestor.net/investing-basics-five-simple-investment-rules/",
"token_count": 1696,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.07177734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a362f4b5-d6aa-4492-8c74-730faaf0f52c>"
}
|
Investing Basics: Five simple investment rules
Rule 1: Start right now
Start right now, even if it’s just £1 a day or a week. No matter how small the amount, you’ll see your money grow quickly. That’s because of a really simple but important concept – compound interest.
Another way of explaining compound interest is that you get interest on the interest you get paid, and then you get paid interest on that interest, and pretty soon the interest you earned is bigger than the money you put in there in the first place.
Let’s say you put £100 in the bank. If the interest rate is 5%, you’ll earn £5 in interest in the first year. That means you now have £105 instead of £100.
The next time you receive interest, it will be on your original £100 as well as the interest you received, so £5.25. Now you have £110.25.
And it will keep growing; eventually the interest will be bigger than the original sum you put in the bank. If you did that for 25 years (at the same interest rate), your £100 would become about £338.64.
Now imagine that you saved £100 a year at the same interest rate. After 25 years, you’ll have saved £2,500 but earned an additional £2,273 in interest, bringing your total to almost double the amount you saved.
It’s a bit like a snowball that rolls downhill, gathering snow. By the time it reaches the bottom, the snowball is much larger than it was at the start. That’s how compound interest works, it builds on itself.
The longer you save, the better. After 35 years of saving £100 a year, you would have accumulated £9,032, of which £5,532 would be interest.
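For readers who want to check the arithmetic, here is a small sketch that reproduces the figures above, assuming deposits are made at the end of each year:

```python
# Reproducing the compound interest figures above at a 5% annual rate.

def lump_sum_growth(principal: float, rate: float, years: int) -> float:
    """Value of a single deposit left to compound annually."""
    return principal * (1 + rate) ** years

def annual_saver_growth(deposit: float, rate: float, years: int) -> float:
    """Value of a fixed deposit made at the end of each year."""
    return deposit * ((1 + rate) ** years - 1) / rate

print(round(lump_sum_growth(100, 0.05, 25), 2))       # 338.64
total_25 = annual_saver_growth(100, 0.05, 25)
print(round(total_25 - 25 * 100, 2))                  # interest: ~2272.71
total_35 = annual_saver_growth(100, 0.05, 35)
print(round(total_35, 2), round(total_35 - 3500, 2))  # ~9032.03 and ~5532.03
```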
Rule 2: Don’t pick stocks
First, here’s a quick overview of the stock market. To raise capital (money), companies sell shares of their company on the stock market.
These shares are publicly traded and there is a fixed number available. If someone wants to buy a share, someone else who owns a share must be selling it.
It’s this supply and demand that determines the price of a company’s shares, usually driven by how the company is doing (or predicted to do).
Now that we are all on the same page, let’s get down to the nitty-gritty of why the best investors in the world all say the same thing — do NOT pick and invest in individual stocks!
The Cat’s Whiskers
In 2012, the Observer newspaper pitted professionals Justin Urquhart Stewart of Seven Investment Management, Paul Kavanagh of Killick & Co, and Schroders fund manager Andy Brough against students from John Warner School in Hoddesdon, Hertfordshire – and Orlando, a ginger cat, which selected stocks by throwing his favourite toy mouse on a grid of numbers allocated to different companies. Orlando won (more on the experiment here).
Humans cannot predict the future.
Picking stocks is hard.
No one has a crystal ball to predict which company’s shares will go through the roof and which will tank.
If there was a simple way to reliably predict which shares would outperform the rest, then every stock-picker in every corner of the country would be a billionaire! For every billionaire made by the stock market, there are legions more going broke.
You’re probably wondering why people invest if the stock market is so risky and unpredictable? That’s because the best way to invest is through an investment fund that buys shares in lots of companies on behalf of its investors.
The fund will protect you from some of the risk of picking the wrong stocks. In investments, some stocks will go up and some will go down.
There are two types of funds. The first is the active fund, where the manager uses research, knowledge and experience to decide which stocks to buy.
There is also the index tracker, which buys the stocks in an index as a bundle, for example, the FTSE 100.
Although there is some risk, the stock market as a whole has always gone up over the long term. This means that if your portfolio (investments) reflects the broader stock market, your investment will grow over a long period of time even as some individual stocks go down.
Rule 3: Keep costs low
Your investments could be doing very well, but if the cost of managing those investments is high, then it reduces the long-term returns.
Active funds charge higher fees, then there’s the cost of a financial adviser (if you use one) and the investment platform fees where your investments are hosted. All those costs can add up and erode your earnings, so it pays to pay attention.
Let’s say your investment is making an average return of 2% a year. If your adviser, investment platform and fund are charging you 2% a year, then your returns are wiped out. It’s the opposite of compound interest, those fees can snowball out of control.
Let’s look at two people who each invest £1,000 a month for 30 years and earn the same gross return. One is paying fees of 1.5% a year and one is paying fees of 0.5%. A 1% difference might not seem like a lot, but after 30 years the low-fee investor has accumulated a portfolio of £450,741 while the high-fee investor has accumulated £387,430. That’s a difference of £63,311, which is huge.
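Here is a rough sketch of that fee-drag calculation. The 2% gross annual return is an assumption (the article does not state one) chosen so that net returns of 1.5% and 0.5% reproduce the figures above to within about 0.1%; deposits are assumed to be added once a year.

```python
# Fee drag on a £1,000-a-month (£12,000-a-year) investment plan.
# The 2% gross return and annual compounding are assumptions.

def portfolio_value(annual_deposit: float, gross_return: float,
                    annual_fee: float, years: int) -> float:
    net = gross_return - annual_fee
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + net) + annual_deposit
    return balance

low_fee = portfolio_value(12_000, 0.02, 0.005, 30)   # ~£450,000
high_fee = portfolio_value(12_000, 0.02, 0.015, 30)  # ~£387,000
print(round(low_fee - high_fee))                     # ~£63,000 lost to the extra 1%
```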
If you use an adviser, ask about the impact of fees on your investments. If you look after your own investments, check what you’re being charged for the platform and the fund charges.
Rule 4: Diversify
Investing in stocks and bonds comes with risk. It’s good to have a diversified portfolio which is the same as saying ‘don’t put all your eggs in one basket’.
What if you’d put all your money into Blockbuster? In rule two we explained why stock-picking never works and why funds do. Smart investors spread the risk by buying funds that give exposure to different markets and asset classes.
If you spread your investments across lots of different industries such as pharmaceutical, industrial, internet, green energy etc across lots of different countries, it will protect you against any ups and downs in any part of your portfolio.
We human beings prefer to invest in what we know and trust, but rather than investing in a bunch of UK equity funds that hold the same underlying investments, you can diversify across UK, European, US and Asia-Pacific equity funds, for example, spreading the risk and taking advantage of different economic factors in each of those markets.
You should also diversify with cash, bond and mixed asset funds.
Rule 5: Tune out the noise
Emotion is the enemy of investment and when the markets are up and down it’s easy to get emotional. History has shown that trying to time the market usually leads to worse returns. Don’t listen to commentators telling you to buy or sell, have the conviction to stick to your long-term plan.
Investors who chase performance or who run away from poor performance are doomed. The difference between good and bad investors is that good investors can tune out the noise.
You’re bound to say that the 2008 global financial crisis was hardly just noise (nor is the recession that the coronavirus epidemic we’re living through right now will probably lead to). And you’re right, situations like that can have a huge impact on investors and affect the economy for years.
But the people who panicked and pulled out in 2008 to protect their money crystallised their losses and then lost out on the next 10 years of stellar growth.
The stock market has risen by 84% in that time – that’s a lot to lose out on.
Smart investors stay invested. And we know you’re smart investors.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9302942156791687,
"language": "en",
"url": "https://www.greenprophet.com/2011/07/emefcy-funded-to-make-bacteria-produce-energy/",
"token_count": 560,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0194091796875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:01e70508-7f3c-4962-9227-6aa24b20d36b>"
}
|
Two serial water technology entrepreneurs from Israel, Eytan Levy and Ronen Shechter, who also founded Israel’s AqWise, have come up with another way to put bacteria in wastewater to work for us. Their electrogenic bioreactor generates electricity directly during the process of treating wastewater.
Emefcy uses naturally occurring bacteria in an electrogenic bioreactor to treat wastewater. The organic material in the waste produces power and treated water.
Although exactly how it occurs is a little mysterious, Emefcy claims that the process is not the usual methane harvesting technique. Rather than using conventional energy-intensive aerobic processes or methane-producing anaerobic digestion to treat wastewater, Emefcy claims that they can harvest renewable energy directly from the wastewater.
Because wastewater treatment is itself an energy sink, consuming an estimated 2% of energy worldwide, this breakthrough is significant. Instead of guzzling power, Emefcy can feed power to the grid, creating an energy-positive wastewater treatment plant, transforming wastewater treatment from an energy-intensive, cost-intensive and carbon-intensive process, into an energy-generating and carbon-reducing process.
The primary initial applications would be for wastewater treatment in the food, beverage, pharmaceutical and chemical industries, with total market potential of US$10 billion annually. Emefcy’s simple modular equipment can be used “out of the box”.
The investment comes from a consortium of VC groups, which indicates the degree to which this innovation is taken seriously. Part comes from Pond Venture Partners, Plan B Ventures and Israel Cleantech Ventures, all VC groups already developing cutting-edge technologies in Israel, which is fast becoming the Silicon Valley of clean-tech water innovation.
In this round of funding the group was joined by Energy Technology Ventures, a joint venture put together between GE (NYSE: GE), NRG Energy (NYSE: NRG), and ConocoPhillips (NYSE: COP) in order to develop next-generation energy technologies.
Energy Technology Ventures is not a purely clean tech fund, because it does invest in nuclear, oil, coal and natural gas as well. But it aims to promote venture-and growth-stage energy technology companies in the renewable power generation, smart grid, energy efficiency, emission controls, water and biofuels sectors as well, focusing mainly in Europe, Israel and the US.
The group takes an agnostic approach to energy investment. So this investment is perhaps an indication that the time has come for alternative energy innovations like Emefcy’s to be considered as just another form of energy.
With Peak Oil looming, smart energy companies are looking for more energy, of every kind. Even energy made by bugs.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9516527652740479,
"language": "en",
"url": "https://www.tobj.ca/news/personal_finance/2021/02/19/2732-instead-of-a-universal-basic-income-governments-should-enrich-existing-social-programs.html",
"token_count": 558,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.466796875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:98d09109-cdce-4681-bf76-a62722dc8db6>"
}
|
Instead of a universal basic income, governments should enrich existing social programs
Amid the COVID-19 pandemic, the idea of a universal basic income (UBI) has been touted by those across the political spectrum as a prospective model of social security that would provide guaranteed cash to citizens.
But while UBI is desirable in principle, it’s not a magic solution to the intricate and perennial problems of poverty and income inequality. Furthermore, its implementation in Canada is not financially, administratively, politically or constitutionally feasible.
Within emerging literature on the implications of the COVID-19 pandemic on employment and earning levels, UBI has been elevated to the status of a panacea that could ease all the social and economic ills that societies are encountering during the crisis.
Ardent advocates of UBI have argued that it has the potential to reduce poverty, narrow income inequality gaps, address automation, eradicate the stigma associated with collecting government assistance, enhance the social well-being of citizens, diminish dependency and streamline existing complex and fragmented social transfer programs and public services.
The appeal of UBI in Canada has become so strong that several Liberal MPs have asked Prime Minister Justin Trudeau to elevate UBI to the top of his policy agenda.
From CERB to a universal basic income?
Some advocates of UBI contend that the gradual conversion of the CERB (Canada Emergency Relief Benefit) into UBI is a logical progression.
However, if UBI is set at a monthly, $1,000 unconditional benefit for every adult Canadian, the total net annual cost would be $364 billion. Obviously, that’s not only financially unsustainable, it’s also politically suicidal.
On the other hand, according to a report released by the Office of the Parliamentary Budget Officer in 2020, the estimated cost of a watered-down version of UBI — called a guaranteed basic income — covering only low-income, working-age Canadians (estimated at 9.6 million Canadians between the ages of 18 to 64) would be in range of $47.5 billion to $98.1 billion for a six-month period.
Under this attenuated version of UBI — similar to the Ontario basic income pilot project introduced by the former provincial Liberal government in 2017 and later abandoned by Doug Ford’s government — individuals and couples would receive an annual income of $18,329 and $25,921 respectively.
The projected cost range depends on how much of the benefit is clawed back from recipients when any other income increases above an established threshold.
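As a back-of-the-envelope sketch of this arithmetic (the 30.3 million adult population is an assumption that makes the $364-billion figure work out, and the clawback parameters are purely hypothetical):

```python
# Rough UBI cost arithmetic. The adult-population figure is an
# assumption; the clawback example is purely hypothetical.

def gross_ubi_cost(monthly_benefit: float, recipients: int) -> float:
    return monthly_benefit * 12 * recipients

print(gross_ubi_cost(1_000, 30_300_000) / 1e9)  # 363.6 -> about $364 billion a year

def clawed_back_benefit(full_benefit: float, other_income: float,
                        threshold: float, clawback_rate: float) -> float:
    """Benefit reduced at a given rate on income above a threshold."""
    excess = max(0.0, other_income - threshold)
    return max(0.0, full_benefit - clawback_rate * excess)

# e.g. an $18,329 benefit clawed back at 50% above $10,000 of other income:
print(clawed_back_benefit(18_329, 20_000, 10_000, 0.5))  # 13329.0
```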
Even under this trimmed version of UBI, however, there could be pressure to significantly raise taxes to pay for it, which could inflict colossal costs on the economy.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9536089301109314,
"language": "en",
"url": "http://ageconmt.com/who-is-average/",
"token_count": 652,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0216064453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3f2b4aa4-a3bd-44ae-84b1-1e7f7eb4e0f5>"
}
|
A keynote speaker at a recent Risk Management Conference explored the question “What does the average tell us?” The speaker presented opposing findings from two very well respected researchers. One researcher essentially made the point that, based on farm lending data, the agriculture industry is doing fairly well: debt-to-asset ratios and other indicators are below or near long-term averages. Another researcher pointed out that a recent rise in defaults and other financial ratios indicates that agriculture is entering a crisis. Is it possible that both are correct? Or is one of these researchers wrong? His response was to take a closer look at the data, the conclusions and the different types of agricultural operations that comprise the data.
Agricultural operations in Montana are quite diverse in several important aspects. Here is a short list of some of the differences:
- Crops vs. Livestock
- Size of Operation
- Type of Livestock: Cattle, Sheep, Horses, Goats
- Type of Crops: Wheat, Pulses, Sugar Beets, Barley, Oilseeds, Hay
- Irrigated vs. Non-Irrigated
- Off-Farm Income, Equity in the Operation, Land Tenure, etc.
Because of these differences, examining the average may not be the best way to summarize the industry. It is possible cattle producers are doing well while wheat producers are struggling, or maybe they are both facing challenges due to a common factor (drought, for example) that impacts them both. Some agricultural producers have several income sources (including off-farm income) while for others their agriculture income is their only income source.
This example might highlight how the average can mislead us at times. Let’s assume that we have four ranchers who produce 25 calves each year and who each have a job in town paying $40,000 annually. Let’s also assume we have one rancher who produces 300 calves each year but does not work off the ranch. We’ll further assume that all the calves sell for $1,000 and that each ranch has $800 of costs to produce each calf, leaving a profit of $200 per calf. Total income for each of our part-time ranchers is $45,000 ($40,000 in wages and $5,000 from calves), and our full-time rancher has income of $60,000. The average ranch-derived income in this example is $16,000 ($80,000 of calf profits spread over five ranches), which doesn’t represent either type of operation very well. Now let’s change the example to reflect that calf prices fall to $850, cutting the profit per calf to $50. Total income has fallen by 75% (to $15,000) for our full-time rancher and by about 8% (to $41,250) for each part-time rancher. Average total income has fallen by 25% (from $48,000 to $36,000). Neither of these averages tells us anything very useful about the situation of the individual ranches.
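A quick script makes the arithmetic easy to verify; the figures are exactly the ones assumed in the example above.

```python
# Incomes under the example's assumptions: four part-time ranchers
# (25 calves each plus a $40,000 town job) and one full-time rancher
# (300 calves, no off-ranch income).
def incomes(calf_price, cost=800, wage=40_000):
    profit = calf_price - cost
    part_time = wage + 25 * profit   # each of the four part-time ranchers
    full_time = 300 * profit         # the single full-time rancher
    average = (4 * part_time + full_time) / 5
    return part_time, full_time, average

before = incomes(1_000)   # (45000, 60000, 48000)
after = incomes(850)      # (41250, 15000, 36000)

print("full-time drop: {:.0%}".format(1 - after[1] / before[1]))  # 75%
print("part-time drop: {:.0%}".format(1 - after[0] / before[0]))  # 8%
print("average drop:   {:.0%}".format(1 - after[2] / before[2]))  # 25%
```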
Does this mean we should ignore data on averages? Not necessarily. The real message from this exercise is that we should be careful consumers of information. Do we understand the data, the nature of the industry and the conclusions that are being presented? Does the data support the conclusion? If not, we should dig a little deeper and further our understanding of the issues.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9377458691596985,
"language": "en",
"url": "http://pridenews.ca/2018/11/23/africa-set-become-massive-free-trade-area/",
"token_count": 1790,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.189453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:83513461-c7b2-49ab-ba56-bf22ab689fcf>"
}
|
By Kingsley Ighobor
UNITED NATIONS, New York November 23, 2018 (IPS) — Following the unveiling of the African Continental Free Trade Agreement in Kigali, Rwanda, in March 2018, Africa is about to become the world’s largest free trade area: 55 countries merging into a single market of 1.2 billion people, with a combined GDP of $2.5 trillion.
The shelves of Choithrams Supermarket in Freetown, Sierra Leone, boast a plethora of imported products, including toothpicks from China; toilet paper and milk from Holland; sugar from France; chocolates from Switzerland; and matchboxes from Sweden.
Yet many of these products are produced much closer — in Ghana, Morocco, Nigeria, South Africa, and other African countries with an industrial base.
So why do retailers source them halfway around the world? The answer: a patchwork of trade regulations and tariffs that make intra-African commerce costly, time wasting and cumbersome.
The African Continental Free Trade Agreement (AfCFTA), signed by 44 African countries in Kigali, Rwanda, in March 2018, is meant to create a tariff-free continent that can grow local businesses, boost intra-African trade, rev up industrialization and create jobs.
The agreement creates a single continental market for goods and services, as well as a customs union with free movement of capital and business travellers. Countries joining AfCFTA must commit to removing tariffs on at least 90 percent of the goods they produce.
If all 55 African countries join a free trade area, it will be the world’s largest by number of countries, covering more than 1.2 billion people and a combined GDP of $2.5 trillion, according to the UN Economic Commission for Africa (ECA).
The ECA adds that intra-African trade is likely to increase by 52.3 percent by 2020 under the AfCFTA.
Five more countries signed the AfCFTA at the African Union (AU) summit in Mauritania in June, bringing the total number of countries committing to the agreement to 49 by July’s end. But a free trade area has to wait until at least 22 countries submit instruments of ratification.
By July 2018, only six countries — Chad, Eswatini (formerly Swaziland), Ghana, Kenya, Niger and Rwanda — had submitted ratification instruments, although many more countries are expected to do so before the end of the year.
Economists believe that tariff-free access to a huge and unified market will encourage manufacturers and service providers to leverage economies of scale; an increase in demand will instigate an increase in production, which in turn will lower unit costs.
Consumers will pay less for products and services as businesses expand operations and hire additional employees.
“We look to gain more industrial and value-added jobs in Africa because of intra-African trade,” said Mukhisa Kituyi, secretary-general of the UN Conference on Trade and Development, a body that deals with trade, investment and development, in an interview with Africa Renewal.
“The types of exports that would gain most are those that are labour intensive, like manufacturing and agro-processing, rather than the capital-intensive fuels and minerals, which Africa tends to export,” concurred Vera Songwe, executive secretary of the ECA, in an interview with Africa Renewal, emphasizing that the youth will mostly benefit from such job creation.
In addition, African women, who account for 70 percent of informal cross-border trading, will benefit from simplified trading regimes and reduced import duties, which will provide much-needed help to small-scale traders.
If the agreement is successfully implemented, a free trade area could inch Africa toward its age-long economic integration ambition, possibly leading to the establishment of pan-African institutions such as the African Economic Community, African Monetary Union, African Customs Union and so on.
A piece of good news
Many traders and service providers are cautiously optimistic about AfCFTA’s potential benefits.
“I am dreaming of the day I can travel across borders, from Accra to Lomé [in Togo] or Abidjan [in Côte d’Ivoire] and buy locally manufactured goods and bring them into Accra without all the hassles at the borders,” Iso Paelay, who manages The Place Entertainment Complex in Community 18 in Accra, Ghana, told Africa Renewal.
“Right now, I find it easier to import the materials we use in our business—toiletries, cooking utensils, food items—from China or somewhere in Europe than from South Africa, Nigeria or Morocco,” Paelay added.
African leaders and other development experts received a piece of good news at the AU summit in Mauritania in June when South Africa, Africa’s most industrialised economy, along with four other countries, became the latest to sign the AfCFTA.
Nigeria, Africa’s most populous country and another huge economy, has been one of the holdouts, with the government saying it needs to have further consultations with indigenous manufacturers and trade unions. Nigerian unions have warned that free trade may open a floodgate for cheap imported goods that could atrophy Nigeria’s nascent industrial base.
The Nigeria Labour Congress, an umbrella workers’ union, described AfCFTA as a “radioactive neoliberal policy initiative” that could lead to “unbridled foreign interference never before witnessed in the history of the country”.
However, former Nigerian president, Olusegun Obasanjo, expressed the view that the agreement is “where our [economic] salvation lies.”
At a July symposium in Lagos organised in honour of the late Adebayo Adedeji, a onetime executive secretary of the ECA, Yakubu Gowon, another former Nigerian leader, also weighed in, saying, “I hope Nigeria joins.”
Speaking at the same event, Songwe urged Nigeria to get on board after consultations, and offered her organisation’s support.
Last April, Nigerian president, Muhammadu Buhari, signalled a protectionist stance on trade matters, while defending his country’s refusal to sign the Economic Community of West African States-EU Economic Partnership Agreement. He said then, “Our industries cannot compete with the more efficient and highly technologically-driven industries in Europe.”
In some countries, including Nigeria and South Africa, the government would like to have control over industrial policy, reports the Economist, a UK-based publication, adding, “They also worry about losing tariff revenues, because they find other taxes hard to collect.”
While experts believe that Africa’s big and industrialising economies will reap the most from a free trade area, the ECA counters that smaller countries also have a lot to gain because factories in the big countries will source inputs from smaller countries to add value to products.
The AfCFTA has also been designed to address many countries’ multiple and overlapping memberships in Regional Economic Communities (RECs), which complicate integration efforts. Kenya, for example, belongs to five RECs. The RECs will now help achieve the continental goal of a free trade area.
Many traders complain about RECs’ inability to execute infrastructure projects that would support trading across borders. Ibrahim Mayaki, head of the New Partnership for Africa’s Development (NEPAD), the project-implementing wing of the AU, says that many RECs do not have the capacity to implement big projects.
For Mr. Mayaki, infrastructure development is crucial to intra-African trade. NEPAD’s Programme for Infrastructure Development in Africa (PIDA) is an ambitious list of regional projects. Its 20 priority projects have been completed or are under construction, including the Algiers-Lagos trans-Saharan highway, the Lagos-Abidjan transport corridor, the Zambia-Tanzania-Kenya power transmission line and the Brazzaville-Kinshasa bridge.
The AfCFTA could change Africa’s economic fortunes, but concerns remain that implementation could be the agreement’s weakest link.
Meanwhile African leaders and development experts see a free trade area as an inevitable reality.
“We need to summon the required political will for the African Continental Free Trade Area to finally become a reality,” said AU Commission chairperson, Moussa Faki Mahamat, at the launch in Kigali.
*This article first appeared in Africa Renewal which is published by the United Nations.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9531370997428894,
"language": "en",
"url": "https://candider.com/question/how-blood-banks-manage-their-data-in-india-what-is-the-difference-between-the-blood-bank-system-of-india-and-developed-countries",
"token_count": 267,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.33984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c94bb9d1-94ac-4dce-a590-887d68d40ac8>"
}
|
How blood banks manage their data in India? What is the difference between the blood bank system of India and Developed Countries?
25th Feb 2020 11:31 am
The following answer is based on expert opinion:
In India, the data of blood banks is badly managed. A few efforts are being made, with initiatives like e-Raktkosh, to publish live inventory data for each blood bank, but these don't work well due to a lack of credibility.
In developed countries, a person who needs blood doesn't have to run from pillar to post to find it: hospitals are expected to arrange blood. In India, by contrast, the patient's attendants usually need to arrange blood or find blood donors themselves.
The main reason is that developed nations run highly successful voluntary blood donation campaigns with highly transparent systems, giving donors complete visibility of the blood cycle and thereby earning their trust. In India, we mostly rely on replacement blood, which is why there is always a huge scarcity of blood everywhere. The system is so opaque that it's very difficult for a donor to see the beneficiary of their blood, which leads to a trust deficit among blood donors.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9700350165367126,
"language": "en",
"url": "https://msmoney.com/overview/kids-parents-and-money/",
"token_count": 437,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1025390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2e45e834-c525-4904-87fe-64aa5e37be87>"
}
|
Kids, Parents, and Money
When parents were asked to specifically describe what they have done to teach their kids about financial matters: 56 percent of parents can name only one example; 31 percent cite two examples; and 8 percent say “nothing” or “don’t know.”
Teaching your child about managing money is an ongoing process; one or two examples of money management are not sufficient to raise financially responsible children. If parents don’t teach their children about money, who will? Statistics show that only about a quarter of these kids will learn financial information in school, and half will ask advice from their friends. Who is to say that their friends will serve as good role models? It is no wonder that most kids fail a basic financial literacy test.
If parents want to be guaranteed that their children will be financially literate, they must step in and take control.
Ironically, 81 percent of parents who feel they do a fair or poor job of managing their money still consider themselves effective in giving their kids financial advice. A new survey by the TIAA-CREF (Teachers Insurance and Annuity Association – College Retirement Equities Fund) Institute finds that 55% of parents said they roll over credit card debt every month, and fewer than 45% said they make a budget and stick to it.
So it appears that parents need to not only educate themselves about financial planning, they must also put into practice what they learn. Only then will they be the most qualified to teach their children about financial empowerment. The legacy of financial security they will pass to their children will be well worth their effort. If they don’t take control, the consequences can be devastating for their children.
Keep in mind that you don’t have to be the parent of a child to help with their fiscal education. You could be the aunt, grandparent or close friend of the family and still have a tremendous impact on their lives. Often the parents put their child’s fiscal education on the back burner because they are not comfortable managing their own money, so a little nudge from a relative or friend might be just the impetus to get things going.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9292047619819641,
"language": "en",
"url": "https://pressbooks.library.ryerson.ca/ohsmath/chapter/7-2-bayes-formula/",
"token_count": 1524,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.1279296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:993ba9da-22c2-492c-a6e6-8f5928939f7e>"
}
|
In this section, we will develop and use Bayes’ Formula to solve an important type of probability problem. Bayes’ formula is a method of calculating the conditional probability P(F | E) from P(E | F). The ideas involved here are not new, and most of these problems can be solved using a tree diagram. However, Bayes’ formula does provide us with a tool with which we can solve these problems without a tree diagram. We begin with an example.
Suppose you are given two jars. Jar I contains one black and 4 white marbles, and Jar II contains 4 black and 6 white marbles. If a jar is selected at random and a marble is chosen:
a. What is the probability that the marble chosen is a black marble?
b. If the chosen marble is black, what is the probability that it came from Jar I?
c. If the chosen marble is black, what is the probability that it came from Jar II?
Let JI be the event that Jar I is chosen, JII be the event that Jar II is chosen, B be the event that a black marble is chosen and W the event that a white marble is chosen. We illustrate using a tree diagram.
a. The probability that a black marble is chosen is P(B) = 1/10 + 2/10 = 3/10.
b. To find P(JI | B), we use the definition of conditional probability, and we get

P(JI | B) = P(JI and B) / P(B) = (1/10) / (3/10) = 1/3

c. Similarly, P(JII | B) = P(JII and B) / P(B) = (2/10) / (3/10) = 2/3
In parts b and c, the reader should note that the denominator is the sum of all probabilities of all branches of the tree that produce a black marble, while the numerator is the branch that is associated with the particular jar in question.
Generalizing this calculation gives a statement of Bayes' formula.

Bayes' Formula: Let S be a sample space that is divided into n partitions, A1, A2, . . ., An. If E is any event in S, then for each i = 1, 2, . . ., n:

P(Ai | E) = P(Ai) P(E | Ai) / [ P(A1) P(E | A1) + P(A2) P(E | A2) + . . . + P(An) P(E | An) ]
A department store buys 50% of its appliances from Manufacturer A, 30% from Manufacturer B, and 20% from Manufacturer C. It is estimated that 6% of Manufacturer A’s appliances, 5% of Manufacturer B’s appliances, and 4% of Manufacturer C’s appliances need repair before the warranty expires. An appliance is chosen at random. If the appliance chosen needed repair before the warranty expired, what is the probability that the appliance was manufactured by Manufacturer A? Manufacturer B? Manufacturer C?
Let events A, B and C be the events that the appliance is manufactured by Manufacturer A, Manufacturer B, and Manufacturer C, respectively. Further, suppose that the event R denotes that the appliance needs repair before the warranty expires.
We need to find P(A | R), P(B | R) and P(C | R).
We will do this problem both by using a tree diagram and by using Bayes’ formula.
We draw a tree diagram.
The probability P(A | R), for example, is a fraction whose denominator is the sum of all probabilities of all branches of the tree that result in an appliance that needs repair before the warranty expires, and the numerator is the branch that is associated with Manufacturer A. P(B | R) and P(C | R) are found in the same way. We list all three as follows:

P(A | R) = 0.030 / (0.030 + 0.015 + 0.008) = 0.030 / 0.053 ≈ 0.566
P(B | R) = 0.015 / 0.053 ≈ 0.283
P(C | R) = 0.008 / 0.053 ≈ 0.151
Alternatively, using Bayes' formula:

P(A | R) = P(A) P(R | A) / [ P(A) P(R | A) + P(B) P(R | B) + P(C) P(R | C) ]
= (0.50)(0.06) / [ (0.50)(0.06) + (0.30)(0.05) + (0.20)(0.04) ] = 0.030 / 0.053 ≈ 0.566
P(B | R) and P(C | R) can be determined in the same manner.
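For readers who like to check such computations programmatically, here is a minimal sketch in Python (the language choice is ours, not the textbook's) that reproduces the numbers above with the general formula:

```python
# Posterior P(manufacturer | repair) for the department store example.
priors = {"A": 0.50, "B": 0.30, "C": 0.20}        # share of appliances bought
repair_rates = {"A": 0.06, "B": 0.05, "C": 0.04}  # P(repair | manufacturer)

# Denominator of Bayes' formula: the total probability of a repair.
p_repair = sum(priors[m] * repair_rates[m] for m in priors)

# Each numerator is one branch of the tree diagram.
posteriors = {m: priors[m] * repair_rates[m] / p_repair for m in priors}

print(round(p_repair, 3))  # 0.053
print(posteriors)          # {'A': 0.566..., 'B': 0.283..., 'C': 0.150...}
```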
There are five Jacy’s department stores in San Jose. The distribution of number of employees by gender is given in the table below.
[Table: the number of employees and the proportion of women employees at each of the five stores; Total = 1,000 employees]
If an employee chosen at random is a woman, what is the probability that the employee works at store III?
Let k = 1, 2, …, 5 be the event that the employee worked at store k, and W be the event that the employee is a woman. Since there are a total of 1000 employees at the five stores,
P(1) = 0.30 P(2) = 0.15 P(3) = 0.20 P(4) = 0.25 P(5) = 0.10
Using Bayes’ formula,
For certain problems, we can use a much more intuitive approach than Bayes’ Formula.
A certain disease has an incidence rate of 2%. A test is available for the disease, but it is not perfect. The false negative rate is 10% (that is, about 10% of people who take the test will test negative, even though they actually have the disease). The false positive rate is 1% (that is, about 1% of people who take the test will test positive, even though they do not actually have the disease). Compute the probability that a person who tests positive actually has the disease:
Imagine 10,000 people are tested. Of these 10,000, 200 will have the disease; 10% of them, or 20, will test negative and the remaining 180 will test positive. Of the 9800 who do not have the disease, 1% of them, or 98, will test positive. These data can be summarized in a table as follows:
| | Test positive | Test negative | Total |
| Have disease | 180 | 20 | 200 |
| Do not have disease | 98 | 9,702 | 9,800 |
| Total | 278 | 9,722 | 10,000 |
So of the 278 people who test positive, 180 will have the disease. Thus:

P(disease | positive test) = 180 / 278 ≈ 0.647
So about 65% of the people who test positive will have the disease.
Using Bayes’ formula directly would give the same result:
1. Jar I contains five red and three white marbles, and Jar II contains four red and two white marbles. A jar is picked at random and a marble is drawn. Draw a tree diagram and find the following probabilities:
a. P (Marble is red)
b. P (The marble came from Jar II given that a white marble is drawn)
c. P (Red marble | Jar I)
2. The table below summarizes the results of a diagnostic test:
[Table of counts: have disease / do not have disease versus test positive / test negative]
Using the table, compute the following:
a. P (Negative test | disease positive)
b. P (Disease positive | test positive)
3. A computer company buys its chips from three different manufacturers. Manufacturer I provides 60% of the chips, of which 5% are known to be defective; Manufacturer II supplies 30% of the chips, of which 4% are defective; while the rest are supplied by Manufacturer III, of which 3% are defective. If a chip is chosen at random, find the following probabilities:
a. P (The chip is defective)
b. P (The chip came from Manufacturer II | it is defective)
c. P (The chip is defective | it came from manufacturer III)
4. The following table shows the percent of “Conditional Passes” that different types of food premises received in a city during their last public health inspection.
[Table: type of food premise, the number of premises of each type, and the proportion that received a conditional pass; Total = 5,000 premises]
If a premise is selected at random, find the following probabilities:
a. P (Received Conditional Pass)
b. P (Received Conditional Pass | Restaurant)
c. P (Grocery Store | Received Conditional Pass)
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9032992720603943,
"language": "en",
"url": "https://qainfotech.com/myths-about-cloud-computing/",
"token_count": 649,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.01422119140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8354f875-af54-495c-a2a1-721affc4a140>"
}
|
Today “Cloud computing” is one of the hottest catch word in IT domain. Every organization irrespective of size is jumping on the cloud computing bandwagon.
As with any new technology or process, cloud computing is also subject to misconceptions and myths. These myths probably arise from a poor understanding of the technology or the capabilities of the providers.
The best way to begin to appreciate the potential for cloud computing is through a definition of the term:
“Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” – The National Institute of Standards and Technology, U.S. Department of Commerce; October, 2009 (*1*)
Let us look at some of the myth associated with Cloud computing:
Myth 1: Cloud security and compliance is vulnerable.
Cloud computing security is no different than any secured network service. Cloud computing in itself does not introduce any new or unforeseen vulnerabilities or weaknesses.
Myth 2: All clouds scale on demand.
Not all cloud vendors have the resources or architecture to adequately scale applications and traffic on demand. While all try to maintain a certain number of extra resources to accommodate fluctuations, many cannot dynamically scale operations when demands exceed predicted thresholds.
Myth 3: Performance is worse in the cloud.
If the cloud infrastructure and applications are poorly managed and deployed, this might be true. But when properly configured, most users notice no difference when using cloud-based applications. In some cases, cloud computing provides noticeable improvements in performance, as better-provisioned machines with access to more resources can handle more complex processes. The most significant potential bottleneck for cloud computing is access to the network itself.
Myth 4: Virtualization is equal to cloud computing.
Virtualization makes dynamic, scalable cloud computing possible, but does not constitute a cloud architecture on its own. Virtual machines deployed without intelligence or dynamic scalability can be nearly as inefficient and costly as physical resources they replace.
Myth 5: Cloud computing is only good for low end applications and software as a service.
Many vendors have jumped into the cloud computing market with simple software applications and declared themselves “cloud computing” experts. Cloud computing is the backbone on which businesses worldwide can perform thousands of transactions a second, transfer massive amounts of data across the globe. The most robust, secure, and scalable business applications available today can operate using cloud computing.
Myth 6: Cloud computing is less reliable than in-house systems.
Some of the most secure and reliable installations in the industry are cloud computing data centers. The best cloud computing centers are built from the ground up with multiple layers of redundant components and power, as well as physical and cyber security measures.
Through this article, I have tried to bust some of the myths associated with cloud computing. Comments, suggestions and corrections are welcomed with an open heart.
1. Badger and Grance. “Cloud Computing Synopsis and Recommendations.” National Institute of Standards and Technology, Information Technology Laboratory, U.S. Department of Commerce; May 29, 2012.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9483489394187927,
"language": "en",
"url": "https://smallbusiness.chron.com/differences-between-net-gross-income-business-22702.html",
"token_count": 622,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.12109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3e5a503a-9cfa-48d0-b7e3-fd6cab0df61f>"
}
|
The Differences Between Net & Gross Income for a Business
Your financial statements are an essential part of your business, and are needed for keeping track of your performance, communicating with lenders, investors and shareholders and preparing tax returns. When you prepare an income statement for your business, you must calculate both gross and net figures, so it is important to be clear on the difference between these two fundamental accounting terms.
Gross income includes all of the income your business earns during the year, while net income includes only the profit you earn after subtracting business expenses.
What Does Gross vs. Net Mean?
Gross income includes all of the income your business earns during the year, while net income includes only the profit your business earns after you subtract business expenses and other allowable deductions from your gross income. If you have a million dollars in sales (and no other sources of income) then your gross income is one million dollars. But your net income must account for costs like rent, salaries, benefits and so on, as well as deductible expenses.
Calculating Gross Income
To calculate your gross income, you must combine the total of all cash, checks, credit card charges, rental income, interest and dividends, canceled debts, promissory notes, kickbacks, damages and lost income payments your business received during the year. Even if your business routed the money to a third party, you must still claim it as income. You shouldn't deduct any expenses when calculating your gross income.
Calculating Net Income
To calculate your net income, you must deduct business expenses from your gross income. Business expenses may include the cost of goods sold, advertising expenses, automobile operation costs, funding of employee benefit programs, insurance, mortgage interest, legal fees, office expenses, repairs, maintenance, supplies, wages paid to employees, utilities, travel, taxes or rental payments.
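As an illustration, here is a minimal sketch of the two calculations; all of the figures are made up for the example.

```python
# Hypothetical figures for one year of operations.
revenue_items = {
    "sales": 1_000_000,
    "interest": 5_000,
    "rental_income": 24_000,
}
expense_items = {
    "cost_of_goods_sold": 550_000,
    "wages": 220_000,
    "rent": 48_000,
    "advertising": 15_000,
}

gross_income = sum(revenue_items.values())              # no expenses deducted
net_income = gross_income - sum(expense_items.values())  # expenses deducted

print(f"Gross income: ${gross_income:,}")  # $1,029,000
print(f"Net income:   ${net_income:,}")    # $196,000
```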
What Does it Mean?
Calculating your gross and net income allows you to identify your largest expenses, as well as the most lucrative facets of your business, thus allowing you to make improvements. If you are soliciting investors, they will typically request a copy of your income statement before deciding to invest.
You must also list your gross income and net income on your federal tax return. If your net income is positive, then your business may have reportable capital gains. If your net income is negative, your business may have a deductible capital loss. There are special rules for home businesses. If you use a portion of your home for business purposes, you may be able to deduct a portion of your home expenses, such as mortgage interest and home maintenance, as a business expense. The IRS rules for this deduction are stringent, so be sure to discuss home deductions with your accountant.
Finally, if you need to borrow money for your business, lending institutions will review your gross and net incomes before granting you a loan.
Amanda McMullen is a freelancer who has been writing professionally since 2010. She holds a bachelor's degree in mathematics and statistics and a second bachelor's degree in integrated mathematics education.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8963791728019714,
"language": "en",
"url": "https://thebusinessprofessor.com/banking-lending-credit-industry/float-banking-defined",
"token_count": 1818,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.1298828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f98c2416-27ee-43c0-aa0c-6c260acd25aa>"
}
|
Float (Banking) - Definition
Float, in the banking system, refers to money briefly counted twice because of delays in check processing. Float is created as soon as a check is deposited: the recipient's bank credits the customer's account, but the payer's bank takes some time to send payment on the check. Until the payer's bank clears the check, the check amount appears in both the payer's and the recipient's accounts.
A Little More on What is Float
Because available funds are counted twice, the amount of float in the system affects the money supply by causing inflation and hindering effective monetary policy implementation. Float fluctuates over certain time periods; for instance, float is higher on Tuesdays because of the backlog of checks over the weekend. Based on these trends, the Federal Reserve forecasts float levels and uses them to make monetary policy. The Federal Reserve has defined two types of float: holdover float, which occurs due to institutions' processing delays, and transportation float, which happens due to weather and air-traffic issues.

The formula to calculate float is:

Float = Firm's Available Balance - Firm's Book Balance

You can also measure float as:

Average Daily Float = (total value of checks in the collection phase during a specific period) / (number of days in that period)

Conversely, the total value of checks in collection is calculated by multiplying a float amount by the number of days outstanding. For example, if your business has a $15,000 float outstanding for the first 14 days of the month and $19,000 for the last 17 days of the month, then you can calculate the average daily float as:

= [($15,000 x 14) + ($19,000 x 17)] / 31
= ($210,000 + $323,000) / 31
= $533,000 / 31
= $17,193.55
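The same calculation is easy to express as a small sketch, using the figures from the example above:

```python
# Average daily float: float balances weighted by the days they are outstanding.
def average_daily_float(periods):
    """periods: list of (float_amount, days_outstanding) tuples."""
    total_days = sum(days for _, days in periods)
    weighted = sum(amount * days for amount, days in periods)
    return weighted / total_days

month = [(15_000, 14), (19_000, 17)]          # the 31-day month above
print(round(average_daily_float(month), 2))   # 17193.55
```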
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9192750453948975,
"language": "en",
"url": "https://www.12manage.com/description_strategic_management.html",
"token_count": 535,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1357421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:fda23853-df1e-48a2-a350-67a69ed6c4ed>"
}
|
What is Strategic Management? Meaning.
Strategic management is the field within business administration and Management concerned with the way organizations can determine and execute their strategy to realize their long-term goals effectively and efficiently.
Strategic management does not focus on one functional area of business, like Finance, HRM, or Marketing, but concerns the entire corporation, business unit or organization as a whole. It is a holistic approach to the big picture of how the organization should be run.
Strategic management involves the formulation (formation) and execution (implementation) of the Strategic Vision, major goals and strategic initiatives by a company's top management in order to create value for the owners (Shareholders) and Stakeholders, based on consideration of the Resources, Strengths and Weaknesses of the organization, the Industry Structure and Macro Environment in which the organization operates and competes.
Strategy formulation (formation) involves analyzing the environment in which the organization operates (Strategic Analysis), then making strategic decisions about how the organization will compete (Strategic Decision-making). Formulation ends with a series of goals or objectives and measures for the organization to pursue (CSFs and KPIs).
Note that instead of such Deliberate Strategy formulation (also called Formal Planning), strategy is increasingly viewed as an ongoing process of constant learning, experimentation and risk-taking; an adaptive, incremental and complex learning process (Emergent Strategy).
Strategy execution (implementation) involves decisions regarding how the organization's Resources (e.g., people, processes and systems) will be aligned and mobilized towards the objectives. Implementation results in the Organizational Structure, Leadership arrangements, Control Systems, Strategic Communication, Executive Compensation, Incentives, and Performance Management to track progress towards objectives, among others.
Note that while the two processes (formulation and execution) above are described sequentially, they are nowadays typically iterative and each provides input for the other.
Schools of Thought on Strategic Management
In his book "Strategy Safari", Henry Mintzberg gives an excellent overview of the entire field of strategic management by describing "Ten Schools of Thought".
Concepts and Methods used in Strategic Management
Go here for an extensive list of business and corporate Strategy Models and Methods.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9648736119270325,
"language": "en",
"url": "https://www.business.org/finance/accounting/how-to-read-a-financial-statement/",
"token_count": 380,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.018310546875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e4f396a9-e1fe-4491-b7d6-87de34a3091a>"
}
|
A balance sheet covers three essential financial categories: a business's assets, liabilities, and equity.
- A company's assets include the cash and receivables generated by its revenue as well as the amount it would earn from liquidating physical assets like machinery, property, and excess inventory. Assets also include the company's copyrights, investments, and earned interest.
- A company's liabilities include whatever it owes to non-shareholders. The amount could include loans, unpaid wages, income taxes, rent, and interest payments.
- A company's shareholder equity refers to what its shareholders would earn after the company liquidated its assets and paid all its bills.
A balance sheet lists the company's assets on one side (usually the left half) and its liabilities and equity on the other (usually the right half). The two halves of the sheet must equal each other for the sheet to be balanced.
The asset side of the sheet lists assets by how quickly they could be liquidated, starting with current assets like cash and inventory. Current assets also include anything that could either be liquidated or yield returns within a year, such as short-term investments and accounts receivable.
The sheet then lists non-current assets like long-term investments, intangible assets like copyrights, and fixed assets that would take over a year to sell and liquidate—for instance, warehouses or heavy machinery necessary to daily operations.
The liability side of the sheet lists liabilities by how soon each payment is due, starting with current liabilities that are due within a year. Long-term liabilities, which come due more than a year after the balance sheet is created, are listed next.
Shareholders' equity is listed beneath liabilities on the same side of the sheet. This section includes retained earnings, which is income the company reinvests for growth and uses to pay down debt. It should also show the stock invested in the company.
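The balancing requirement is easy to express in code. This sketch uses made-up figures purely to show the accounting identity, assets = liabilities + shareholders' equity.

```python
# A toy balance sheet; all figures are hypothetical.
assets = {
    "cash": 120_000,
    "inventory": 80_000,
    "accounts_receivable": 40_000,   # current assets
    "equipment": 200_000,            # fixed (non-current) assets
}
liabilities = {
    "accounts_payable": 60_000,      # current liabilities
    "long_term_loan": 180_000,       # long-term liabilities
}
equity = {
    "retained_earnings": 110_000,
    "common_stock": 90_000,
}

total_assets = sum(assets.values())                              # 440,000
total_claims = sum(liabilities.values()) + sum(equity.values())  # 440,000
assert total_assets == total_claims, "balance sheet does not balance"
print(f"Assets: ${total_assets:,} = Liabilities + Equity: ${total_claims:,}")
```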
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.956301212310791,
"language": "en",
"url": "https://www.business.org/finance/accounting/the-difference-between-bookkeeping-and-accounting/",
"token_count": 272,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.08447265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e416acef-723b-4384-bccf-bcca4a6526a2>"
}
|
As you can imagine, there are quite a few differences between bookkeepers and accountants, including the level of education each job requires.
Bookkeepers are responsible for maintaining your business’s financial records. They need solid math and organizational skills, plus a working knowledge of accounting software. As per the Bureau of Labor Statistics, bookkeepers usually have a postsecondary degree, though not necessarily in bookkeeping.1 And most bookkeepers make around $40,000 a year.1
Accountants are responsible for assessing your business’s finances and making financial recommendations that keep your business in the black. They can also prepare financial statements and record financial information, so accountants should have solid bookkeeping skills. Most accountants have, at minimum, a bachelor’s degree, though it might not be in accounting. Most accountants make around $70,000 a year.2
And a Certified Public Accountant, or CPA, is an accountant who has taken a test called the Uniform CPA Examination and met your state’s requirements for state certification. While CPA licensing requirements vary from state to state, they usually include a bachelor’s degree in accounting and at least a year’s worth of on-the-job experience. To maintain their license, CPAs have to continue taking courses throughout their careers.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9492799043655396,
"language": "en",
"url": "https://www.halvotec-digitalexperts.com/blog/definition-and-fundamentals-of-data-mining-what-is-it-how-does-it-work",
"token_count": 1423,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0225830078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:bb322b1a-334f-442c-9618-345137d89ba2>"
}
|
Big data, business intelligence, data mining and many other similar terms have been the talk of the town for some time. But what exactly does the term data mining actually mean? In this article, you will learn about the benefits and challenges of data mining, what it can achieve and how it is typically used in projects.
Data mining refers to the systematic and computer-aided application of statistical algorithms in order to recognise correlations, patterns, trends and connections in very large data sets (big data/large data sets) and in a highly automated manner. The results are then transferred into usable data structures and made available for further processing.
In a narrower sense, data mining is the analysis step of the "knowledge discovery in databases" (KDD) process, which is aimed at identifying new relationships in existing data sets. In practice, however, these terms are often used interchangeably to describe not only the actual analysis but also the preparation of the data (e.g. via warehousing/data warehouses), as well as the evaluation and interpretation of the results.
Data mining is a branch of business intelligence (BI) and is also closely linked to predictive analytics, i.e. the prediction of future situations based on existing data.
Data mining is mainly used to analyse existing data sets, to recognise patterns and to make decisions based on the results.
The aim is to make practical predictions about the future, to recognise emerging trends early, to confirm or disprove assumptions about correlations and to improve business processes.
Specific use cases include determining the creditworthiness of customers, calculating available credit limits, discovering purchase patterns and trends (shopping basket analysis such as "product Y is often bought together with product X"), evaluating the connection between diseases and the effectiveness of treatments in drug development, or detecting fraud, for example based on the patterns of credit card transactions.
Depending on the application and the task at hand, data mining software tools employ different algorithms, machine learning and AI to extract information from the data. In practice, a distinction is made between the following mining methods, each of which pursues a specific goal:
Anomaly detection: This method is aimed at detecting unusual data records, such as outliers or data errors, that require further investigation. Where possible, data errors or unusable anomalies that would impair the results are then excluded from further analysis. In some cases, however, it is precisely these outliers that need to be identified (e.g. when detecting fraud).
In cluster analysis, the aim is to group data records on the basis of their similarities without knowing the underlying data structures/relying on any known structures.
Classification means the allocation of data to certain higher-level classes, e.g. the classification of emails as spam or the division of customers into risk groups according to their creditworthiness.
Association rule learning is used to identify connections and dependencies in the data. One such example is the classic shopping basket analysis, i.e. identifying which products are often purchased in combination with another.
The purpose of regression analysis is to identify relationships between data sets, such as the influence of price and customer purchasing power on sales volume.
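As a small illustration of one of these methods, the sketch below groups customers with k-means clustering. It assumes the scikit-learn library is available, and the customer records are made up for the example; the point is that the grouping emerges from the data alone, with no class labels supplied.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer records: (annual spend, number of orders).
customers = np.array([
    [200, 2], [250, 3], [220, 2],   # occasional buyers
    [5_000, 40], [5_500, 45],       # heavy buyers
    [1_200, 12], [1_000, 10],       # mid-range buyers
])

# Group the records into three clusters based on similarity alone --
# the absence of predefined labels is what distinguishes cluster
# analysis from classification.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)           # cluster assignment for each customer
print(kmeans.cluster_centers_)  # the "typical" customer of each cluster
```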
As a rule, the data mining process is based on the so-called cross-industry standard process for data mining (CRISP-DM), which a number of well-known industrial companies developed in the framework of an EU-funded project. The aim was to create a standardised process model for data mining that could be used to search and analyse any data stock.
The process model is based on six phases, some of which have to be run several times:
Phase 1 – Business understanding: This phase involves the definition of the objectives and business requirements, in order to determine the specific goals and how they are to be achieved.
Phase 2 – Data understanding: Once the objectives and the procedure have been determined, the existing data can be analysed. In addition, this phase includes an examination of the data quality and an assessment of whether the quality is sufficient for the stated objectives. Should this not be the case, the objectives and requirements may need to be revised.
Phase 3 – Data preparation: As soon as the objectives and the data are available, the data can be prepared for evaluation. Data preparation is usually the phase that takes the most time.
Phase 4 – Modelling: Based on the prepared data, one or more data models can be created by selecting and applying one or more data mining methods. During the modelling phase it often becomes apparent that the preparation of the data needs to be adapted in order to apply the selected methods.
Phase 5 – Evaluation: After the data models have been created, they are evaluated to determine whether the stated objectives have been achieved. Either the most suitable model is selected or – if the results prove unsatisfactory – phase 1 is repeated to revise the objectives and requirements.
Phase 6 – Deployment: At the end of the process, the findings are processed and made available in a suitable format.
Decision support:
The evaluation of the data as well as the correlations and insights that have been obtained can be used to discover trends, predict future developments and thus support management in making decisions.
Competitive advantage and cost reduction:
Efficient analysis of large amounts of data and the information extracted from it can be used to gain a competitive advantage, while the detection of process errors and issues leads to cost reductions.
Business process improvements:
Data mining can be used to confirm or disprove assumptions about problems in business processes and to uncover process weaknesses. Over the years, this has given rise to the special field of process mining, which focuses specifically on the analysis and optimisation of business processes.
Highly qualified data mining experts are required:
Having powerful tools is one thing – using them properly is another. In order to obtain valuable and accurate results with data mining, it is essential that the relevant software is operated by specialists who need to understand the source data to be able to prepare them correctly. Similarly, they also need to be able to assess whether the patterns, connections, interrelationships and results provided by the software are generally accurate and relevant.
Poor data quality:
As with all evaluation methods, the quality of the data is a decisive prerequisite for obtaining valid results. Any error or incomplete data set inevitably leads to a deterioration in the quality of the results, which at worst may prove entirely false. Relying on such poor results in sequence may result in the wrong decisions.
Privacy & security:
The collection of large amounts of data inevitably comes with privacy and security risks. The data sets may contain a lot of user-related data that should not be used or linked to one another. On the other hand, the process also creates opportunities for identifying security risks and breaches and subsequently remedying them.
For companies, data mining can bring about a significant improvement in their operations: they can gain new insights by evaluating the growing volumes of data they collect. Many software products already incorporate BI and thus also data mining, and yet companies worldwide often use them without harnessing the full potential for improvement that they offer. The data mining trend is set to continue, especially in connection with process mining, which gives companies the opportunity to dramatically optimise their business processes and achieve enormous cost savings.
|