meta (dict) | text (string, lengths 224 to 571k)
---|---
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9417429566383362,
"language": "en",
"url": "http://www.trade.education/intro-to-charts/",
"token_count": 770,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0927734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:878ecd98-e50c-41a5-a216-39cc7db70d02>"
}
|
Introduction to Stock Charts
Becoming familiar with stock charts and learning how to read them is an important step toward becoming a better trader and investor. Learning how to interpret a stock chart will help you better understand a stock’s price movement.
A stock chart can provide you with a wealth of knowledge as long as you know and understand what you’re looking at. Basic charting knowledge combined with other stock indicators can immensely improve your trading skills.
What many people don’t realize is that technical analysis is, at its core, simply the analysis of stock charts. So, let’s take a look at the stock chart.
What is a Stock Chart
A stock chart is simply a graphical representation of a stock’s price over a set period of time, showing the past prices of a particular stock. Stock charts are used to help traders and investors make decisions about buying or selling stocks.
Stock charts essentially show the history of a stock and where it has traded. Depending on the time frame you are viewing, the price movement can be plotted minute by minute, hour by hour, day by day, or over other intervals.
If you have a brokerage account then chances are you have seen a variety of stock charts.
If you are new to stocks, there are a variety of websites where you can get free stock quotes and view quality charts. Here are a few examples:
How to Read Stock Charts
Reading a stock chart is simple and straightforward. Once you have viewed a few stock charts, you’ll start understanding how to read them.
Let’s pull up a chart and go over some of the main areas. For our example we will be using the chart of Apple. When you first look at a chart there are 3 key areas you want to become familiar with:
1. Time scale (X axis)
2. Price scale (Y axis)
3. Volume
Time scale (X axis):
The time scale or the “X” axis is the bottom portion of the graph, running horizontally from left to right. It’s the portion of the graph that shows the time frame you are looking at.
The current time frame for the chart above is set at 1-year. The time frame can easily be adjusted to a shorter or longer period of time. The most frequently used time scales are intraday, daily, weekly, monthly, quarterly and annually. Any adjustments of the time frame will give you a different perspective on the stock.
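To see how one time scale maps onto another, here is a minimal Python sketch (with made-up sample bars, not real Apple data) that aggregates intraday bars into a single daily OHLC bar:

```python
# Aggregate intraday (minute) bars into one daily OHLC bar.
# The sample data below is invented purely for illustration.

def to_daily_bar(minute_bars):
    """minute_bars: chronological list of dicts with open/high/low/close/volume,
    all from the same trading day."""
    return {
        "open": minute_bars[0]["open"],           # first trade of the day
        "high": max(b["high"] for b in minute_bars),
        "low": min(b["low"] for b in minute_bars),
        "close": minute_bars[-1]["close"],        # last trade of the day
        "volume": sum(b["volume"] for b in minute_bars),
    }

bars = [
    {"open": 100.0, "high": 101.0, "low": 99.5, "close": 100.5, "volume": 1200},
    {"open": 100.5, "high": 102.0, "low": 100.2, "close": 101.8, "volume": 1500},
    {"open": 101.8, "high": 101.9, "low": 100.8, "close": 101.0, "volume": 900},
]
daily = to_daily_bar(bars)
print(daily)  # {'open': 100.0, 'high': 102.0, 'low': 99.5, 'close': 101.0, 'volume': 3600}
```

Aggregating the same bars into weekly or monthly bars works the same way, which is why changing the time frame changes your perspective without changing the underlying data.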
Price scale (Y axis):
The price scale or the “Y” axis is the right side of the chart, running vertically. This portion of the graph has the price action where it shows a stock’s current price and compares it to past data points.
So, from the Apple stock chart above we can see that the stock has traded in a price range between $400-700 in the past year.
For traders, volume is another important piece of data to look at. A stock’s volume is the number of shares that have been exchanged or traded within a specific period of time. It’s essentially a measure of how much buying and selling was going on within that period.
You could think of volume as the heart of a stock, because it is what moves the stock higher or lower. When you see large spikes in volume, it means many traders were involved in that movement.
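One simple way to flag such volume spikes is to compare each period’s volume against a trailing average. The threshold (2x) and the sample numbers below are arbitrary choices for illustration, not a trading rule:

```python
# Flag periods whose volume exceeds a multiple of the trailing average.
# Window, threshold, and sample data are illustrative assumptions.

def volume_spikes(volumes, window=3, threshold=2.0):
    spikes = []
    for i in range(window, len(volumes)):
        trailing_avg = sum(volumes[i - window:i]) / window
        if volumes[i] > threshold * trailing_avg:
            spikes.append(i)  # index of the spiking period
    return spikes

vols = [1000, 1100, 900, 1050, 5000, 1200]
print(volume_spikes(vols))  # [4] -- 5000 shares vs. a trailing average near 1017
```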
Now that you have become familiar with the basic key areas of a stock chart let’s go over the 3 basic chart types that are used.
There are three main types of charts that are used by investors and traders.
The chart types are:
- line chart
- bar chart
- candlestick chart
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9350644946098328,
"language": "en",
"url": "https://beginnerdetail.com/what-is-sensex/",
"token_count": 1164,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.05126953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:07791302-4b26-4e2b-9f1d-e3a3296effcc>"
}
|
Do you know what the Sensex is and how it is constructed? You have often read or seen the word Sensex on TV or in the newspaper, and you may have noticed reports that the Sensex rose by so many points that day.
Likewise, you must have seen that the Sensex dropped by so many points on another day. Whenever you think about investing in the share market, you will come across the Sensex.
But you may not know what these words mean, because you don’t know what the Sensex is. So in today’s post we will explain what the Sensex is, how it is constructed, and what it does.
In an earlier post we explained what the Nifty is. In today’s post, we are talking about the Sensex. The Sensex is similar to the Nifty, but unlike the Nifty, only 30 companies are listed in the Sensex.
The Nifty is also called the Nifty 50, because 50 companies are listed in it. Let’s see what the Sensex is and how it is constructed.
Full Form of Sensex
The full form of Sensex is “Sensitive Index”.
What is Sensex?
The word Sensex is a blend of the words “sensitive” and “index”. It literally means a sensitive index.
The Sensex is the benchmark index of the Indian stock market. It tracks the rise and fall in the prices of shares listed on the BSE (Bombay Stock Exchange). Through it, we get information about the performance of the 30 largest companies listed on the exchange.
The Sensex is the oldest stock market index in India; it was launched in 1986.
The Sensex is a stock market index, and its most important function is to track the prices of the shares of the companies listed in it.
It then gives us a value at the end of each trading day, so that we can easily see whether the prices of the listed companies rose or fell.
The Bombay Stock Exchange (BSE) is the oldest stock exchange in India. A total of 30 major Indian companies are covered by the Sensex, and they are selected according to market capitalization.
These companies are very large: together they account for about 37% of India’s total GDP, and in a way they set the trend for the Indian market. To put it simply, the index created to assess the share prices of India’s big companies, which keeps an eye on the rising and falling prices of those shares, is called the Sensex.
How is Sensex made?
The Sensex is constructed by a committee of the stock exchange, which selects the top 30 companies from 13 different sectors of the BSE.
These top 30 are chosen on the basis of their share transactions: how many of each company’s shares have been bought and sold in a year.
Thousands of companies are listed on the BSE. The selection process is ongoing, and from time to time companies are removed from the Sensex and others are added.
The Sensex gives you a snapshot of the gains and losses on the BSE stock exchange.
Because it covers India’s top thirty companies, it gives us an indication of booms and slowdowns in the Indian market.
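The BSE computes the Sensex using the free-float market capitalization method. The sketch below shows the basic idea: sum each constituent’s free-float market cap, divide by the corresponding sum in a base period, and scale by the base index value. All the prices, share counts, and the base figure here are invented for illustration:

```python
# Free-float market-cap-weighted index, the method behind the Sensex.
# All numbers below are invented for illustration.

def free_float_cap(price, shares_outstanding, free_float_factor):
    # Only freely tradable shares count (promoter holdings are excluded).
    return price * shares_outstanding * free_float_factor

def index_value(constituents, base_market_cap, base_index=100.0):
    total_cap = sum(
        free_float_cap(c["price"], c["shares"], c["free_float"])
        for c in constituents
    )
    return base_index * total_cap / base_market_cap

constituents = [
    {"price": 2500.0, "shares": 6_000_000, "free_float": 0.50},
    {"price": 1400.0, "shares": 5_000_000, "free_float": 0.75},
]
# Suppose the total free-float cap in the base period was 10 billion.
print(round(index_value(constituents, base_market_cap=10_000_000_000), 2))  # 127.5
```

The same mechanics explain why the index "keeps an eye" on share prices: when constituents’ prices rise, the numerator rises and the index goes up proportionally.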
Top 30 Companies
1) Adani Ports and Special Economic Zone Ltd.
2) Asian Paints
3) Axis Bank Ltd.
4) Bajaj Auto Ltd.
5) Bharti Airtel Ltd.
7) Coal India Ltd.
8) Dr. Reddys Laboratories Ltd.
9) HDFC Bank Ltd
10) Hero MotoCorp Ltd.
11) Hindustan Unilever Ltd.
12) Housing Development Finance Corporation Ltd.
13) ICICI Bank Ltd.
15) Infosys Ltd.
16) Kotak Mahindra Bank Ltd.
17) Larsen & Toubro Ltd.
19) Mahindra & Mahindra Ltd.
20) Maruti Suzuki India Ltd.
21) NTPC Ltd.
22) Oil & Natural Gas Corporation Ltd.
23) Power Grid Corporation Of India Ltd.
24) Reliance Industries Ltd.
25) State Bank Of India
26) Sun Pharmaceutical Industries Ltd.
27) Tata Consultancy Services Ltd.
28) Tata Motors
29) Tata Motors – DVR Ordinary
30) Tata Steel Ltd.
31) Wipro Ltd.
Benefits of Sensex
The advantages of the Sensex are explained below:
1. It consists of 30 major companies of the country, so when the Sensex rises, it is a sign that Indian companies are growing.
2. Seeing this growth, investors from abroad invest in those companies, which helps the companies grow further.
3. As a result, jobs and employment increase, and the companies’ production also rises.
4. When foreign investment in Indian companies increases, the rupee strengthens against foreign currencies, which lowers the cost of money.
5. The Sensex has only 30 companies, so it is much easier to track than the Nifty.
We hope that you liked today’s post on what the Sensex is. If you have any doubt related to this post, please leave a comment.
We have done our best to cover all the information about the Sensex, but if you notice any deficiency in our post, please point it out in the comments and help us rectify it. Thanks.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.955579400062561,
"language": "en",
"url": "https://www.ictd.ac/blog/sierra-leone-ebola-epidemic-impact-local-tax-public-services-coronavirus-developing-countries/",
"token_count": 2071,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:752a8538-0013-40ba-b69a-52007627cae7>"
}
|
Fill the gaps, feel the pain: Insights from Sierra Leone on an epidemic’s impact on local taxation, public services, and the poor
With over 16,000 deaths globally and the number of cases growing each day, the calamitous impacts of the novel coronavirus (COVID-19) are already being felt. As the virus continues to spread, the human impact in the Global South has the potential to be catastrophic as it strains weak and underfunded health systems. The economic impacts will also be devastating, with preliminary estimates predicting an income shortfall of $220 billion in developing countries (excluding China) and a GDP decrease of at least $25 billion in Africa.
While secondary to the public health emergency, it is important to consider how the pandemic will affect tax revenues in the Global South, given that these are critical to providing public services and achieving the Sustainable Development Goals. While governments anticipate losing revenue from a range of sources, local taxation is particularly vulnerable and affects a larger proportion of taxpayers in low-income countries than direct central government or trade taxes.
Sierra Leone’s catastrophic experience of the Ebola epidemic—which killed more than 3,900 people in Sierra Leone alone—offers an important case study of the impacts on local government revenues and the fiscal burdens of ordinary citizens. The indirect economic effects of the crisis were significant, with individuals losing livelihoods, governments losing the capacity to fund basic services, and individuals and communities bearing a greater direct cost of public goods provision.
Although no cases of COVID-19 have been reported in Sierra Leone yet, there are cases in neighbouring Guinea and Liberia and, as President Bio declared in a national address last week, “It is no longer a question of whether the Corona Virus will come to Sierra Leone, it is a question of WHEN.”
Anatomy of an epidemic’s effect on local government revenue
In Sierra Leone, local government tax revenues unsurprisingly fell during the Ebola epidemic. Although the revenue losses were fairly small in absolute terms, their impact was significant given the low baseline. Indeed, from 2005 to 2017, local governments collected tax revenue amounting on average to only 17 cents per capita — so little that they could barely do more than pay for basic administrative costs. And the fiscal situation is much worse outside the relatively high-revenue Western area (the capital, Freetown, and the surrounding Western Area Rural district). In the provinces, where 79% of the population lives, local tax revenue amounted to on average only 6 cents per capita per year over the same period. With this baseline, any revenue loss can have devastating impacts on local governance and service delivery.
Local taxes and revenues decreased for three key reasons:
- Some local revenues were simply no longer available due to economic lockdowns and social distancing containment measures. For example, markets were closed during the crisis, leaving the government unable to collect market fees and rents in some areas. This was significant as market dues made up on average 23% of total local revenues from 2005 to 2016. Due to COVID-19, the Sierra Leonean and sub-national governments are currently taking preventative containment measures that may similarly limit economic and taxable activity, including restricting market and street trading hours and banning gatherings of more than 100 people. These containment and social distancing measures are likely to only get more stringent.
- With schools, markets, and many businesses closed during the Ebola crisis, many citizens faced extreme economic hardship. In response, local governments issued an effective amnesty on the most commonly paid tax at the local level—the local (poll) tax that is levied on all adults. In my interviews with local government officials, I found that this was based in part on moral economy justifications that it was simply not right to make people pay. Similar justifications were used to explain the non-collection of other local taxes, with leniency of enforcement being a normal, if ad hoc and informal, response to the economic situation.
- As a result of the lack of online infrastructure for tax assessment and payment, many local taxes became too risky to collect. For property taxes, for example, tax bills are delivered by hand to individual properties and all payments must be made in person at banks or the local government office. Face-to-face interaction goes from being an inconvenience and accountability risk in normal times to a major public health risk during times of contagion.
A public crisis increases individual burdens
While local government tax collection efforts diminished significantly during the Ebola crisis, citizens were not relieved of their fiscal burdens. Indeed, Sierra Leoneans contribute considerably to public goods provision and community mobilization efforts through user fees and informal taxation—and have to do so to a greater extent when local services are underfunded.
Sierra Leone’s public health system is woefully underfunded, with a significant proportion of financing coming from user fees. Not only does this discourage low-income individuals from accessing healthcare, it means that individuals bear a significant part of the cost of financing the health system. Public clinics often have nurses and staff who are not on government salaries or whose salaries are delayed, which results in the common practice of charging extra fees, with bribes or non-monetary “gifts” effectively required in order to access treatment and essential medicines. In a survey of taxpayers in eastern and northern Sierra Leone conducted in 2017, for example, I found that more than a fifth of individuals had made informal payments to doctors or nurses in the previous year, and more than a fifth had contributed labour for the construction or maintenance of public health facilities.
At the same time, donor funding during and immediately after the Ebola epidemic shifted from funding public services like education to emergency humanitarian and public health needs. While evidently necessary, this shift in aid funding resulted in shifting the burden of financing local public goods and services— including schools, teachers, and water wells—onto individuals and communities.
The additional burden on citizens is regressive
Shifting the financial responsibility for essential services, governance, and public infrastructure from the state onto citizens results in inequitable outcomes: the degree and quality of access to public goods depends on the relative wealth of a particular area, and access to essential goods depends on the ability to pay. Moreover, as user fees and informal taxes are generally charged at a flat rate, they are overwhelmingly regressive, and my research further shows that the burden of user fees and informal taxes is significantly higher for women.
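The regressivity of flat-rate fees follows directly from arithmetic: a fixed charge consumes a larger share of a smaller income. A quick sketch, with invented incomes and an invented fee:

```python
# A flat fee takes a larger share of a smaller income, i.e. it is regressive.
# The incomes and the fee below are invented for illustration.

def effective_rate(flat_fee, income):
    return flat_fee / income

incomes = [500, 2000, 10000]   # annual incomes in some currency unit
fee = 50                       # the same flat user fee for everyone

rates = [effective_rate(fee, y) for y in incomes]
print(rates)  # [0.1, 0.025, 0.005] -- 10% of the poorest income, 0.5% of the richest
```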
Communities also shouldered much of the burden of the crisis response
While there was a massive increase in international aid during the Ebola crisis, individuals and communities contributed substantially to crisis response efforts. Traditional authorities were central in containment efforts, introducing chiefdom level by-laws to restrict movement and activity; community tracers were critical in identifying at-risk individuals; and community patrols were vital to preventing the movement of people within and across the most affected areas. Though in some areas chiefs received substantial international support, they also relied on communal labour and contributions to support local mobilization efforts. As explained to me by a District Council Chairman during my research, much of the effort was “self-funded, through… the people.”
The burden was exacerbated by the unfortunate fact that millions in emergency funds went missing, leaving frontline officials and response teams unpaid—and thus reliant on community support and contributions to get by. Such contributions, while they may have added to the burden of regular people during a difficult time, were a central part of the community mobilization that allowed the country to eventually overcome the epidemic.
Lessons for COVID-19
While the current global pandemic is unique in many respects, in the Global South it is also likely to have significant direct health effects, in terms of loss of life, and indirect economic effects, including individuals losing livelihoods and governments losing the capacity to fund basic services. While public health and security concerns are of course paramount, the Ebola epidemic in Sierra Leone demonstrates how the negative effects for vulnerable populations are compounded when local governments are forced to cut back on services and individuals are effectively made to bear a greater cost of public goods provision through user fees and informal taxes.
While international policymakers’ first priority is to contain the pandemic, they can also learn valuable lessons from West Africa’s experience with Ebola in order to minimise the real costs for citizens of developing countries in the short and long-term:
- Urgently provide crisis response financing: The international response to the Ebola crisis was slow to arrive, after the epidemic had already spiralled out of control. While the IMF and World Bank have announced $50 billion and $14 billion in global financing, respectively, far more will be needed—and quickly—in grants and low- and no-interest loans, to not only address the health emergency but provide critical cushioning for the economic impacts.
- Measure impacts to tailor the response: Systematic crisis monitoring is crucial for enabling governments to target support and mitigate the most severe effects for vulnerable groups. During the Ebola epidemic for example, researchers made use of existing sampling frames to conduct mobile phone surveys, gathering almost real-time data on which households and businesses were experiencing the greatest health and economic hardships. The willingness and capacity of donors and researchers to similarly be agile and adapt programmes to meet new and pressing data collection objectives will be particularly important to systematically measure and respond to the impacts of COVID-19.
- Invest in health systems now: Unfortunately, the lessons from Ebola did not translate into the required investments in public health systems and pandemic preparedness. Only 3% of international recommendations for financing preparedness efforts have been achieved by the international community, with 50% seeing little to no progress. A review by the World Bank estimates that it would cost on average $1.69 per capita to finance appropriate pandemic preparedness in low- and middle-income countries. In the long-term, this implies that countries need to strengthen equitable taxation in order to ensure sustainable financing for public health and other essential goods in the future. Otherwise, my research shows that where there are financing shortfalls, individuals and communities will fill the gaps and feel the pain.
Rhiannon McCluskey, ICTD Research Uptake and Communications Manager, provided invaluable contributions to the writing of this blog.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9398073554039001,
"language": "en",
"url": "https://www.wildsalmon.org/projects/lower-snake-river-waterway/",
"token_count": 1945,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2216796875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:52c61b28-cdae-46b5-bb75-94959a6d1ba5>"
}
|
The lower Snake River dams were originally conceived to establish a 140-mile shipping corridor that would connect to the Columbia River and create an inland seaport in Lewiston, Idaho. The dams’ energy capacity was added late in the planning process by the Army Corps of Engineers to increase the project’s overall economic benefit and improve the chances of Congressional approval - and appropriations.
While the dams’ anticipated impacts on salmon and steelhead populations were well understood - they were opposed at the time by all the Northwest states’ fish and game departments - the net positive economic benefit asserted by the Army Corps went largely unchallenged in the 1960s. More recently, however, steeply declining salmon populations, a series of expensive, ineffective and illegal federal salmon plans, a determined lack of transparency by the federal dam agencies, and rapidly changing market forces in the energy and transportation sectors have attracted new scrutiny of these four dams’ overall costs and benefits. Independent observers and a series of reports in recent years make it increasingly difficult to justify further investment in these high-cost, low-value dams on the lower Snake.
Built last century, the four lower Snake River dams produce less than 1,000 aMW of electricity each year - about 4 percent of the Northwest’s supply. More recently, the cost of wind and solar has plummeted, the capacity from renewables expanded, and the regional electric grid is rapidly evolving. We’ve become much smarter about how we generate, consume, and manage electricity.
Our region, for example, has recently developed 2,500+ average megawatts (aMW) from wind, solar, geothermal and biomass energy - with more currently under construction. And we’ve saved 5,500+ aMW of electricity in the last several decades through smart investments in energy efficiency. While the cost of renewables has plunged, the cost of maintaining and operating these four aging federal dams is steadily rising.
Trends on the transportation corridor are similar: decreasing demand and increasing costs. Shipping on the lower Snake River has declined by 70% in the last two decades. Private/public investments are expanding rail networks locally and helped facilitate a shift by many farmers and other businesses to transport their products by train rather than barge.
Our greatest asset is our ingenuity and ability to adapt. We don't have to choose between wild salmon, affordable low-carbon energy and reliable transportation. Working together, the people of the Northwest and the nation can craft a lawful, scientifically- and economically-sound plan that restores our wild salmon and meets the energy and transportation needs of the region’s communities.
By Rocky Barker
October 07, 2017
STANLEY, ID. What is the future of the Columbia River and its salmon? Look to 2015.
That year’s extraordinary combination of overheated river water and low flows killed hundreds of thousands of returning sockeye salmon, devastating a run that had rebounded from near-extinction.
Millions of new sockeye and steelhead smolts migrating the opposite way, to the Pacific, died throughout the river system; only 157 endangered sockeye made it back to the Sawtooth Valley this year.
By the middle of this century, scientists suggest, the temperatures we saw in 2015 will be the norm. The low snowpack and streamflows were examples of what the Pacific Northwest should expect at the end of this century due to rapid climate change caused by the burning of fossil fuels, climatologists say.
“2015 will look like an average year in the (2070s) and there will be extremely warmer years than that,” said Nate Mantua, a NOAA atmospheric scientist in Santa Cruz, Calif.
Scientists, politicians and energy officials have argued for decades over the best way to restore troubled salmon runs along the Columbia and Snake. Their focus has largely been on the dams and human development that reshaped the rivers. But regardless of what other steps we take for the fish, climate change could catch up with them in the coming decades and pose a major threat.
Already, scientists have seen regional snowmelt reach rivers an average of two weeks earlier than historical records indicate. The average temperature of the Columbia River and its tributaries has risen more than 1 degree Fahrenheit since 1960.
Climate modelers at the University of Washington’s Climate Impacts Group predict <https://cig.uw.edu/resources/special-reports/> that the Pacific Northwest’s average annual temperatures will rise a total of 4 to 6 degrees Fahrenheit by 2050. High estimates suggest the increase could exceed 8 degrees, said Joe Casola, the group’s deputy director.
Salmon and steelhead that migrate in the summer and those that spawn and rear in lower-elevation tributaries to the Columbia may not survive these temperatures. In water of just 68 degrees, salmon will begin to die.
READ THE FULL STORY here.
Save Our wild Salmon is leading a coalition of conservation, fishing, clean energy, orca and river advocates to protect and restore abundant, self-sustaining populations of wild salmon and steelhead in the Columbia-Snake River Basin for the benefit of people and ecosystems. Our coordinated legal, policy, communications and organizing activities focus on holding the federal government accountable by requiring the Northwest dam agencies (Bonneville Power Administration, Army Corps of Engineers) and NOAA to craft and implement a legally valid, science-based Salmon Plan (or Biological Opinion/”BiOp”) for the Columbia-Snake Basin.
Since 1998, SOS has led a dynamic campaign to restore a natural, freely-flowing lower Snake River in southeast Washington State, expand spill on the federal dams that remain, and other necessary measures, based on the law and best available science.
The removal of the four lower Snake dams must be a cornerstone of any lawful salmon restoration strategy in the Columbia Basin. Lower Snake River dam removal will restore a 140-mile river and 14,000+ acres of riparian habitat and bottomlands. It will cut dam-caused salmon mortality by at least 50% and restore productive access for wild salmon and steelhead to 5,500+ miles of contiguous, pristine, protected upriver habitat in northeast Oregon, central Idaho and southeast Washington State. Much of this immense spawning/rearing habitat above the lower Snake River is at high elevation and thus provides a much-needed coldwater refuge, a critical buffer against a warming climate. Restoring a freely-flowing lower Snake River will deliver tremendous economic, ecological and cultural benefits to the tribal and non-tribal people of the Northwest and the nation.
Climate change increases the urgency to remove these four dams and restore this river. Harmful high water temperatures in the lower Snake River’s four reservoirs are now routine. Their frequency, duration and intensity have been steadily growing over the last several decades, with increasingly devastating impacts on out-migrating juvenile fish and adults returning from the Pacific Ocean to spawn. In 2015, for example, just 1% of the 4,000 adult Snake River sockeye that entered the Columbia’s mouth reached their Idaho spawning gravels; the others perished in warm reservoir waters impounded by federal dams on the lower Snake and lower Columbia Rivers. A restored lower Snake will dramatically lower water temperatures and again offer the diverse habitats found in living rivers, including coldwater refugia currently lost to these reservoirs.
Increased spill at all federal dams is needed today as an immediate, interim measure to buy time for these endangered populations until a more effective and a lawful strategy is in place. Spill – water releases during the juvenile salmon out-migration to the ocean in the spring and summer - increases juvenile survival by reducing migration time, exposure to warm waters, predation and the overall numbers of barged (artificially transported) fish. Increased juvenile survival boosts adult returns in subsequent years – benefiting marine/terrestrial/freshwater wildlife and coastal/inland fishing communities.
These policies will substantially increase fish populations with corresponding impacts on the 125+ species that benefit from salmon. They will increase resilience for wild salmon and steelhead, the ecosystems they inhabit, and human communities they impact. And they will deliver critical economic, recreational and cultural benefits to the communities of the Northwest and the nation.
Our coalition recognizes that the removal of the four federal dams on the lower Snake River will affect the communities that currently use them – especially the communities of Lewiston (ID) and Clarkston (WA) and the energy, commercial and irrigation sectors. Based on the significant data on these dams, their modest services and the availability of efficient, cost-effective alternatives, salmon advocates are ready to sit down with both sovereigns and stakeholders to craft a responsible plan that removes these costly dams and replaces their services with alternatives.
A lawful federal salmon plan must restore a freely flowing lower Snake River by removing its four costly dams and increase water releases or ‘spill’ over the dams that remain.
In 2016, a federal judge rejected the federal dam agencies’ latest plan for protecting Columbia-Snake River salmon. This is the fifth plan rejected now by three judges over two decades. Our government has spent $15B+ but has yet to recover a single population. It’s past time for a new approach. A lawful, science-based plan must include the removal of the four costly federal dams on the lower Snake River. We need a Northwest plan that works for the region’s ecology and its economy, for fishermen and farmers, for taxpayers and energy bill payers.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9401919841766357,
"language": "en",
"url": "http://www.developmentnews.in/competitiveness-agriculture-will-boost-sectors/",
"token_count": 1088,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0186767578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:201f9e9c-873f-4cf0-8c7b-537d6eba1e0f>"
}
|
By Amit Kapoor
Farmers across the nation are on strike, cutting off vegetable and milk supplies to cities, demanding better prices, loan waivers, power supply and ethanol as fuel - demands drawn from the Swaminathan Commission’s recommendations. It makes one wonder whether the agricultural sector is competitive at all, given its vulnerability to temperature changes and dependence on the monsoon. Are we forgetting agriculture and allied activities in the name of progress?
Agriculture contributes 17.4 per cent to GDP and employs 54.6 per cent of the population. However, the contribution of agriculture to gross value added has been declining since 2014. Recent estimates show that agriculture is expected to grow by 2.1 per cent in fiscal 2018-19, which shows that the wheels are in motion, and the hope is that the competitiveness of the agriculture sector will soar. The cause for concern, however, is that the benefits have not yet reached the farmers. The question remains: how long will the policies take to reach them?
Of course, in traditional economic theory, a nation must undergo a structural transformation, moving away from agriculture and towards industry and services. However, since more than half of the population is still involved in the sector, there is a need to focus on tackling its competitiveness.
So, what will turn agriculture and allied activities into a competitiveness miracle? The answer lies in productivity. Agricultural productivity rose substantially from 2004-05 to 2014-15, with output per hectare rising from 9.1 to 12. But the advance estimates for 2015-16 show a decline to 11.9 output per hectare, indicative of a creeping sluggishness in the agriculture sector. Note that a push towards productivity raises the quantum of produce for farmers, so that even a reduction in prices would be more than compensated for.
How productive can India’s agriculture sector be? In this regard, there is a lot to be learnt from the Netherlands, the second-largest global exporter of food, where farm productivity has risen by leaps and bounds. The country’s farm productivity in 2015 was five times that in the 1950s.
But, how? For one, the sector needs the right kind of physical infrastructure, financial infrastructure, improved communication, innovation within the sector, a focus on administration and, most importantly, the development of human capital.
On that note, we see that agriculture in India has not been totally forgotten, in terms of the infrastructure, especially power supply. Power availability in agriculture increased from about 0.043KW/ ha in 1960-61 to about 0.077 KW/ ha in 2014-15, and the mechanical and electrical sources of power have increased from seven per cent to about 90 per cent from 1960-61 to 2014-15. To sustain innovation, the R&D expenditure in the agriculture sector has been increasing at a compounded rate of 4.2 per cent.
To double farmers’ incomes by 2022, the Indian government has invested in science and technology for farming, coining the term ‘Digital Agriculture’ to emphasise digitisation and the use of electronic devices, tools and digitised systems to enhance productivity and thereby push up farmers’ incomes. To ensure equitable regional penetration, a scheme to bring the Green Revolution to Eastern India addresses the concern of low rice productivity, complemented by initiatives such as the soil health card and ‘per drop more crop’ under the Pradhan Mantri Krishi Sinchai Yojana (PMKSY).
But let’s not forget that the issues farmers face across the nation still exist. It is therefore necessary to ensure the competitiveness of the farmers themselves. The sector needs smooth functioning and an “ease of doing farming activities”, much like the ease of doing business index. This means creating an enabling environment by providing the above-mentioned factors with the least amount of hassle. Apart from boosting the overall productivity of the sector, this can be taken care of by commodity-based organisations and minimum support prices, with an authority that focuses on marketing, storage and processing of the goods, as the Swaminathan recommendations suggest.
To tackle these concerns, the government has set up the Price Stabilisation Fund (PSF) to control prices and the volatility of agricultural products, addressing concerns of inflation. The government has also launched an e-platform called ‘e-NAM’, an electronic agricultural market, to ensure farmers receive remunerative prices for their produce in the market.
Given the policies in action, there is, holistically speaking, a need to speed up to ensure that agriculture and its allied activities show increased productivity, in the hope that a competitiveness boost in agriculture will lift other sectors, nurture food-processing clusters and thereby make India triumphant in competitiveness. The hope is that the policies will work in a way that agricultural productivity shoots up and the farmers are able to benefit from it.
Amit Kapoor is chair, Institute for Competitiveness, India. The views expressed are personal.
Source: The Quint
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9428144693374634,
"language": "en",
"url": "https://clearlinesaudit.com.au/what-is-risk-management/what-is-risk-management-specialist/what-is-risk-management-crisc/",
"token_count": 1370,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.09521484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e4eb8889-f946-4d9e-ac66-31e62bd2377a>"
}
|
What to read first: What is risk management? (supplement for risk specialists) What is risk management?
|For CRISC candidates (ISACA): This series assumes you have specialist interest in risk management theory, and that you have a copy of the CRISC Study Guide.|
Definition of risk
The CRISC definition of risk is
the combination of the probability of an event occurring and the impact the event has on the enterprise. [2.1 page 14]
CRISC candidates should learn the CRISC definition and then forget it after the exam. For reference after the exam, the specific weaknesses of the CRISC definition of risk are:
- Risk is linked only to events, and not to other uncertainties. Therefore, ‘working on wrong assumptions’ may not be recognised as a type of ‘risk’ within the CRISC definition.
- The ‘impact’ is limited to the effect on the enterprise, and therefore may be taken (unhelpfully) to exclude effects on stakeholders outside the enterprise, such as customers or the community.
- There is no concept of objectives as found in ISO 31000 and COSO ERM. Objectives are very important for evaluating ‘impacts’. Objectives are also very important in COBIT, though with different vocabulary. In some better risk management practices, the objectives are the main basis on which risk scenarios are identified.
- The wording suggests that the probability (likelihood) of the event is the same as the probability of the impact on the enterprise. Those two likelihoods can be the same, if risk scenarios are very precisely and carefully defined to have only a single definite impact. In practice, risk scenarios are often written to include a range of different possible impacts from an event. Any specific impact, such as the worst, may or may not follow from occurrence of the event. There is a chain of unpredictable mitigation and exacerbation effects in between the event and the final consequences. In that common case, likelihood of the worst impact following is far less than the likelihood of the event.
Associating the event likelihood and the worst impact will systematically overstate the actual level of risk. Other sources recommend rating the scenario impact at the impact level that is ‘most likely’ to follow from the event, but that method ignores the less likely but very grave impacts, and is therefore unsafe.
HB 436 spells out that the relevant ‘likelihood’ is the likelihood of the defined effects on objectives arising from the risk scenario. For an event with a range of possible impacts, there should be different likelihood and consequence values for each possible impact.
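The scale of the overstatement can be sketched numerically. The figures below are purely hypothetical (none of them come from CRISC or HB 436); the point is only the arithmetic of conflating the two likelihoods:

```python
# Hypothetical figures: an event with a 10% annual likelihood whose worst
# impact follows only 5% of the times the event occurs.
p_event = 0.10               # likelihood the event occurs
p_worst_given_event = 0.05   # likelihood the worst impact follows the event
worst_impact = 1_000_000     # loss if the worst impact occurs

# Naive rating: pair the event likelihood directly with the worst impact.
naive_exposure = p_event * worst_impact

# Scenario-specific rating: use the likelihood of that impact arising,
# i.e. the event likelihood times the conditional likelihood of the impact.
p_worst = p_event * p_worst_given_event
scenario_exposure = p_worst * worst_impact

# The naive rating overstates this particular exposure twenty-fold.
print(round(naive_exposure), round(scenario_exposure))  # prints: 100000 5000
```

Under these assumptions the naive pairing rates the exposure at twenty times its scenario-specific level; a separate likelihood would be needed for each possible impact in the range.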
Definition of risk management
According to CRISC, risk management is
the coordinated activities to direct and control an enterprise with regard to risk. The activities with risk management are defined as the identification, assessment and prioritization of risk followed by coordinated and economical application of resources to minimize, monitor and control the probability and/or impact of adverse events or to maximize the realization of opportunities. [Part 1 Domain 1 C 2.1, page 15]
CRISC follows this definition of risk management with a list of principles, parallel to the key principles of ISO 31000 [Part 1 Domain 1 C 2.1, pages 15-16]. There is a reasonable overlap with the ISO 31000 key principles, with the scope limited to ICT risk management.
In ISACA’s framework for governance and management of ICT, COBIT5, risk management is represented as a minor element. It is designated as Process APO12 within the COBIT5 Process Reference Model. This is paradoxical in view of the fact that ISACA and COBIT exist primarily to manage risk in ICT. However, it can make sense within the COBIT approach.
The RACI chart for risk management reproduced in the CRISC guide [Part 1 Domain 1 C 2.1, page 18] does not include any role for a risk specialist. This may be rather surprising to CRISC candidates. However, it is consistent with this blog’s position that risk is actually managed by decision makers (managers) and not by risk specialists, who only support management without making decisions.
I advise CRISC candidates to simply learn the ISACA models for the purpose of passing the exam. A critical view of those models is helpful in the exam only to the extent that it can make the otherwise dry details easier to remember.
Risk IT Practitioner Guide
The Risk IT Practitioner Guide [RiskIT] is another authoritative statement from ISACA. It is only occasionally referenced from the CRISC study guide. The Risk IT Practitioner Guide is available as a download from the ISACA web site at no cost to ISACA members. Members should take a look at some stage, as it contains some interesting material. I don’t advise non-members to bother with it. (It is not available for download to non-members.)
CRISC candidates can wait until after the exam before downloading RiskIT, because everything from RiskIT within the CRISC curriculum is reproduced in the CRISC study guide. It is very difficult and confusing for learners (and probably for everyone else).
RiskIT has its own complicated and difficult model for risk management, shown on Figure 1 page 8. This model does not include definitions for risk or risk management comparable to those in ISO 31000 and COSO ERM.
RiskIT generally assumes that there is a centrally coordinated risk management activity within the ICT organisation, and in subtle ways tends to move the responsibility for risk management away from managers who make decisions, and on to risk specialists. I believe this tendency should be opposed.
RiskIT, taken as a whole, can also leave the impression that risk management is about following prescribed processes, rather than about making good decisions in the real world. Unlike ISO 31000, RiskIT did not distance itself decisively from the system-following paradigm that has undermined risk management globally. Unlike the other sources I’ve quoted, RiskIT does not have anything resembling the ISO 31000 key principles.
RiskIT would have been developed before ISO 31000:2009 was published, and these tendencies can now be regarded as common faults of the times.
I remain unclear as to why ISACA has not formally replaced RiskIT with something more aligned with current risk management thinking. There are newer publications, COBIT5 for Risk and Risk Scenarios: Using COBIT5 for Risk, but these are not a direct replacement.
|Risk specialists||Version 1.0 Beta|
Main article on What is Risk Management?
Index to the series What is risk management?
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9172178506851196,
"language": "en",
"url": "https://clearlinesaudit.com.au/what-separate-activities-are-specific-to-risk-management-everyone/",
"token_count": 1197,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.01409912109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:744a817a-f376-46db-a20d-45b72b8391f3>"
}
|
Activities specific to ‘risk management’ are typical activities specific to ‘management’, with special features. They also have special names, defined in places like ISO 31000.
ISO 31000 defines risk management activity at two levels, the definition and maintenance of a risk management framework (Clause 4, summarised in Figure 1 of ISO 31000) and the execution of the risk management process (Clause 5, summarised in Figure 2 of ISO 31000). The activities described by ‘risk management’ are those within the risk management process.
|This table shows how risk management is simply management, with uncertainty taken into account. The left margin is the ISO 31000 label for the risk management activity, the middle column is my summary of what is involved, and the right column describes the corresponding activities in ‘management’ other than ‘risk management’. This argument is original with Clear Lines on Audit and Risk, so it’s fair game for queries and criticism.|
|ISO 31000 risk management process activity||Risk management process activity||Management process activity|
|Establishing the context||Developing risk criteria, through understanding the stakeholders’ risk appetite and tolerance around a particular activity.||Setting objectives, targets, and budgets, having regard to stakeholder expectations and priorities. Budgets will include spending limits for particular management levels (parallel to risk tolerances).|
|Risk assessment||Identifying, analysing and assessing risk.||Developing a plan for the steps necessary to deliver on the objectives and targets, such as an annual business plan or project plan.|
|Evaluating assessed risk in relation to risk criteria.||Evaluating the feasibility of the business plan or project plan.|
|Risk treatment||Implementing treatment actions for evaluated risk. Treatment actions can include communicating, avoiding, transferring, and monitoring the risk, and re-designing the activity to change the risks involved.||Amending the business or project plan to achieve both feasibility and stakeholder objectives. Deciding the controls that need to be maintained.|
|Implementing risk treatment actions for evaluated risk. Treatment actions can include maintaining controls, and adhering to policies and planned strategies designed to optimise risk and reward.||Executing the business plan or project plan. Maintaining controls. Complying with organisational policies.|
|Monitoring and review||Reviewing and improving particular risk management processes, and the management of particular risks, based on experience. An important type of review is monitoring actual events and comparing those to the forecasts made in risk assessment.||Continuous improvement based on activity tracking and performance assessment. An important type of review is comparing actual outcomes (deliveries, expenditures) to planned outcomes.|
|Communication and consultation; Recording and reporting (2017 revision of ISO 31000)||Communicating and consulting about the overall situation with risk and risk management, particularly with stakeholders and their representatives.||Communicating and consulting about actual business performance or project delivery, forecasts, and plans. Communications and consultation will be with stakeholders and their representatives (e.g. senior manager, project board).|
|Within an organisation, some of these roles are part of management performed by managers, while others may be performed by risk specialists. Work done by risk specialists is done on behalf of decision-making managers at one level or another. Risk specialists are not decision makers.|
Specialities focused on risk management
Different risk specialists assume different boundaries of ‘risk’.
The term ‘risk management’ is often used to describe specific disciplines involving the uncertain potential for trouble, such as security, business continuity, credit, or fraud management. This usage of ‘…risk management’ resembles the way that ‘… science’ or ‘…disorder’ get added when something has doubtful credibility, such as ‘beauty science’ or ‘narcissistic personality disorder’.
But on the whole, this usage is fair and helpful. Activities like security management are an excellent example of risk management, separate from Enterprise Risk Management. Better practices in security management include application of risk management principles consistent with ISO 31000, with some extensions. Standardised extensions for security risk management include asset definition and threat identification based on specific attackers’ capabilities and motivations.
|The thing to watch is that security specialists (for example) tend to use the term ‘risk’, without a qualification, in a very narrow and specific way. By ‘risk’ they do not always mean the total effects of uncertainty in any given activity. Sometimes they will use ‘risk’ in the common (but incorrect) way of referring to a kind of threat, without specifying the effect on any objective. At other times they will refer to an effect on an objective, but only a very narrow kind of effect, such as the potential ‘security’ impact. For example, security folks will often assess a ‘risk exposure’ in terms of an asset’s rated value and the likelihood of its compromise. The rated ‘asset value’ is a fiction that simply differentiates severe and minor consequences, without any real link to effects on organisational objectives as understood at CEO and Board level. This can be a good thing to do, but it is not the same thing as enterprise-level risk management. Apart from its security value, it can also be a useful part of the way in which enterprise-wide risk is understood. In a later article I’ll be exploring ways to join different branches and styles of risk management within an organisation, to create an enterprise view.|
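The security-style rating described above can be sketched as follows. The bands and the multiplicative scoring are hypothetical, not taken from any standard:

```python
# Hypothetical security-style exposure rating: a rated 'asset value' band
# crossed with a likelihood-of-compromise band. Note that the score
# differentiates severity without any link to enterprise-level objectives.
ASSET_VALUE = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}

def exposure(asset_band: str, likelihood_band: str) -> int:
    """Ordinal exposure score from 1 (lowest) to 9 (highest)."""
    return ASSET_VALUE[asset_band] * LIKELIHOOD[likelihood_band]

assert exposure("high", "likely") == 9   # highest-rated exposure
assert exposure("low", "rare") == 1      # lowest-rated exposure
```

An enterprise-level view would replace the fictional ‘asset value’ with an assessed effect on organisational objectives; the scoring mechanics could stay much the same.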
Previous article for Everyone
|Everyone||Version 1.0 Beta|
Main article on What is Risk Management?
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9239826202392578,
"language": "en",
"url": "https://wadadliphones.com/qa/are-taxes-higher-in-canada.html",
"token_count": 301,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.2578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9c97d643-fb81-461d-978a-6d6dc09d68ef>"
}
|
Canada has a higher average practical tax rate than the United States, at 28%. Business Insider reports that after taxes Canadians bring home $35,299 annually on average. In the United States, the practical tax rate is lower, at 18%. The Numbeo Cost of Living Index for the U.S. is 69.91, compared to 65.01 for Canada.
Are income taxes higher in Canada?
According to the Organisation for Economic Co-operation and Development (OECD)’s 2018 report, Canadians pay lower personal income taxes than Americans. Canada’s 2017 debt-to-GDP ratio was 89.7%, compared to the United States at 107.8%.
How much tax do you pay in Canada?
Federal income tax
|2018 Federal income tax brackets*|2018 Federal income tax rates|
|---|---|
|$46,605 or less|15%|
|$46,605 to $93,208|20.5%|
|$93,208 to $144,489|26%|
|$144,489 to $205,842|29%|
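The brackets above apply marginally: each rate applies only to the slice of income within its band. A minimal sketch of that calculation follows; it ignores credits, deductions, provincial tax, and income above the last bracket shown:

```python
# 2018 federal brackets from the table above: (upper bound, marginal rate).
BRACKETS = [
    (46_605, 0.15),
    (93_208, 0.205),
    (144_489, 0.26),
    (205_842, 0.29),
]

def federal_tax(income: float) -> float:
    """Federal tax on `income`, covering only the brackets shown above."""
    if income > BRACKETS[-1][0]:
        raise ValueError("income above the brackets shown in the source table")
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate  # only the slice in this band
        lower = upper
    return round(tax, 2)

# On $60,000: the first $46,605 is taxed at 15%, the remaining $13,395 at 20.5%.
print(federal_tax(60_000))
```

So a $60,000 earner pays roughly $9,737 federally, not 20.5% of the whole amount — the higher rate hits only the income above the bracket threshold.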
Which province has the highest income tax rate in Canada?
Nova Scotia has the highest top marginal income tax rate of 21 percent, which is more than double the lowest top rate in Alberta (10 percent). Quebec is another province with a heavy tax burden at all income levels, especially for lower and middle-income earners.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9365079402923584,
"language": "en",
"url": "https://wiki.trezor.io/Exchange",
"token_count": 633,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1298828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b6fc1803-ec37-43d6-8c85-4fbad7383e1c>"
}
|
A cryptocurrency exchange is a business that allows customers to trade cryptocurrencies or digital currencies for other assets, such as conventional fiat money (state-backed currencies, such as USD) or different digital currencies.
See also Buying and selling
Exchange in Trezor Wallet
For more information about crypto-to-crypto exchanges, see the relevant section of this article below.
Types of exchanges
There are two main types of cryptocurrency exchanges - some exchange fiat currencies for cryptocurrencies, others exchange only cryptocurrencies. Many of the large exchanges combine the two functions.
These exchanges trade fiat currencies for cryptocurrencies and vice versa. Most major exchanges are of this type.
Although a fiat-to-crypto exchange can be a brick-and-mortar business, most function strictly online.
To start using a cryptocurrency exchange, customers have to register with the exchange and go through a thorough verification process to authenticate their identity. This is a result of know-your-customer legislation and anti-money-laundering laws. Once the authentication of a user is complete, an account is opened. The user then has to transfer funds into this account before he or she can start trading. These balances are then used to trade with other customers of the exchange. As long as the exchange itself does not commit fraud or withhold money (and is not hacked), the risk of losing money due to people not fulfilling their part of the deal is significantly lower than in over-the-counter transactions.
Different payment methods can be used for depositing funds, depending on the exchange. These can include bank wires, direct bank transfers, credit or debit cards, bank drafts, or money orders. Making deposits and withdrawals can include a fee, depending on the payment method chosen to transfer funds. The higher the risk of a chargeback from a payment method, the higher the fee. For example, when using PayPal or a credit card, the fiat being transferred can be reversed and returned to the user if requested, so the risk for the exchange is higher.
Additionally, traders may also be subject to currency conversion fees, depending on the currencies used on the exchange.
Some exchanges enable trading cryptocurrencies for other cryptocurrencies only.
Many of these exchanges do not require identity verification or an account to allow transactions. They simply charge a fee for each transaction. This allows cheaper buying and selling as well as increased anonymity. However, due to the increased enforcement of the know-your-customer and anti-money-laundering legislation, many crypto-to-crypto exchanges now require at least some form of user identification.
Risks of keeping cryptocurrency funds in an exchange
When buying crypto funds in an exchange, it is recommended to withdraw them directly to an account protected by your Trezor. There are examples of exchanges being hacked and funds being stolen, the most notorious example being Mt.Gox, where more than 850,000 bitcoins were stolen in 2014 (see this article for more information). It is important to remember that one can only truly claim to own cryptocurrency funds if he or she is the only owner of the private keys enabling access to these funds.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8991978168487549,
"language": "en",
"url": "https://www.antiessays.com/free-essays/Income-Inequality-And-Poverty-327127.html",
"token_count": 1303,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.083984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:871fd869-ae90-43f6-b6f7-98e13b26f3ad>"
}
|
Tax provisions dealing with taxation of corporations encourage economic results. [The structure of federal income tax; p.-14] [types of taxes; p.1-4] [wherewithal to pay; p.1-29] Fill-in-the-Blanks 1. Property or Ad valorem [Ad valorem taxes; p.1-11] 2. proprietorship [proprietorship; p.1-16] value added tax [VAT; p.1-7] 3. 4. severance [severance tax; p.1-14] 5. political [political consideration; p1.-31] 6. equity [equity consideration; p.1-35] 7. FICA (social security and Medicare), FUTA (unemployment) [FICA and FUTA tax; p.1-3] 8. pay-as-you-go [pay-as-you-go; p.1-14] 9. proprietorship, corporations, partnerships, S corporations, and limited liability companies or limited liability partnerships [Income taxation of business entities; p.1-16] 10. tax avoidance; tax evasion [Tax avoidance and tax evasion; p.1-19] 11.
BMCCSu08MicroQuiz2MU Student: ____________________________________________________________ _______________ 1. In the above diagrams for a hypothetical economy, Figure 2 shows the: A. personal distribution of income. B. functional distribution of income. C. microeconomic distribution of income. D. rates of poverty in the United States.
XECO/212 Week 6 Money Train Multimedia Activity Scenario 1 In 150 to 200 words, explain your reasoning for the way you are planning on using Reserve Requirements. Be sure to address the following: 1. How Reserve Requirements affect the economy 2. How your action will affect economic growth 3. Why it is important to increase economic growth 4.
Individual Assessed Questions Leonel Gutierrez ECO/365 December 2, 2013 Emmanuel Ogunji Individual Assessed Questions Prepare your responses to the following questions and submit your write-up by the due date: 1. What is economics? What role does economics play in your personal and organizational decisions? Provide an example of the role of economics in decision making. Economics is the study of how human beings coordinate their wants and desires, given the decision-making mechanisms, social customs, and political realities of society (Colander, 2010).
Higher National Unit specification General information for centres Unit title: Economic Issues: An Introduction Unit code: F7J8 34 Unit purpose: This Unit introduces candidates to fundamental issues in economics with a particular emphasis on the business environment. Candidates will learn about the basic economic problem and how the consumer and other economic agents address this problem. Candidates are introduced to the operation of markets and actions that can be taken to help avoid market failure. The Unit introduces the theory of National Income and the circular flow of income model. On completion of this Unit, the candidate should be able to: 1 2 3 Explain the allocation of resources within the economy.
I will also identify my gender and race and discuss consequences of each as it relates to my current or potential occupational status, wealth, income and restraints that my race may have in regard to access to educational opportunities.
Some of the questions involving risk adjustments are more difficult, and this raises the overall complexity of the case. Ways To Use the Case This case can be used in two different ways. With the introductory and not-very-well prepared second course students, students can be asked to read the case and then to become generally familiar with it. The case can then be used in class in lieu of a lecture to ensure that students understand how to obtain pertinent information from standard financial sources and to understand the issues involved in valuation. When the case is used in this manner, assign the directed version.
The Keynesian Aggregate Expenditure Model Related to Current American Economics Jonah S. Gruner Macroeconomics 201 UMUC European Campus Dr. Ertl July 1, 2012 The Keynesian Aggregate Expenditure Model Related to Current American Economics Much of today’s news focuses on global economic recession, global economic recovery, bailout spending, employment and our struggle to reach a better economic situation. In Europe and the United States, Keynesian economics has generated a greater interest. This paper attempts to relate current developments in American economics to the Keynesian aggregate expenditure model by responding to the following questions: 1. Who was Keynes and what is his basic aggregate expenditure model? 2. How does Keynes relate gross domestic product (GDP) to aggregate spending?
Argument Validity Nicholas Jackson BCOM/275 September 23 2013 George Kelley Argument Validity Week four saw a debate posted by learning Team C arguing for the case of increasing the Minimum wage. There were many arguments presented for both the pro and con side of the debate. The purpose of this paper will be to analyze the validity of the pro-minimum wage increase side of the arguments comparing the presented arguments for the topic against a variety of sources. The pro-minimum wage arguments were based on the assumption that increasing the minimum wage will be beneficial to the economy (increasing economic stabilization). The arguments for increasing the minimum wage included assertions that doing so will
The case concludes with details of eBay's re-entry into the country through a joint venture with Yahoo. TEACHING OBJECTIVES AND TARGET AUDIENCE: This case is designed to enable students to: Examine eBay's international expansion strategy. Understand eBay's operations in Japan. Analyze the reasons for eBay's failure in Japan. This case is aimed at UG/MBA students, and is intended to be a part of the Strategy and General Management curriculum.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9605825543403625,
"language": "en",
"url": "https://www.intellectualtakeout.org/article/what-origins-money-teaches-us-about-spontaneous-order/",
"token_count": 852,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.38671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:be1d969f-0485-4160-8826-00da0942eca9>"
}
|
Money has been around for most of human history. From Mesopotamia (or even earlier), all civilizations have employed some kind of medium of exchange to facilitate transactions regardless of their geographical locations, legal and economic systems, religious beliefs or political structures. Have you ever wondered why? In a brief essay entitled "On the Origins of Money," the nineteenth-century Austrian economist Carl Menger provides an answer to this question. Menger argues that money emerged spontaneously in different times and places to overcome the disadvantages of barter and facilitate the expansion of trade. Which disadvantages?
Imagine Sandy, a farmer in the Midwest, produces wheat, which she expects to exchange for barley. Two problems arise at this point. First, she needs to find a barley producer with whom to barter her products. This problem can be easily overcome if Sandy goes to a market where another farmer (let’s call him Billy) sells barley. Since both products are harvested during the same time of the year, the exchange would easily take place.
But what if we are dealing with products with different life cycles? In this case, Sandy and Billy could only agree on exchanging their products if Sandy accepted the deferral of the payment until Billy’s products have been harvested. This is what economists call deferred barter. Even though deferred barter solves some problems, it has an important limitation: it can only take place within small communities based on mutual trust due to the risks involved for one of the parties. What if Billy decides not to deliver the promised barley? Thus, the use of deferred barter as a system of exchange prevents the expansion of trade beyond the limits of one’s community.
Barter has a second problem. Billy could refuse to trade barley for wheat. He might prefer exchanging his barley for any other commodity or good that better satisfies his needs. This represents another obstacle for the expansion of trade. How did societies overcome these problems?
They did so by using certain commodities as generally-accepted media of exchange, and more specifically precious metals. But why precious metals and not other commodities? According to Menger, gold or silver possess a high degree of saleableness, which he defined as “the greater or less facility with which they may be disposed of at prices corresponding to the general economic situation”. Today we call this property liquidity.
The relative high degree of saleableness of precious metals in relation to other commodities is fundamentally linked to their durability, divisibility, low transport and storing costs as well as the traditional demand for these goods in most places throughout history. The fact that precious metals are more saleable than other commodities implies that it is easier to exchange them for other goods: even though Sandy doesn’t need gold (she wants barley), she will accept it as payment because she knows she won’t have any problem to trade it for barley.
That’s why most civilizations adopted precious metals as money. Since then, money has gone through many changes, some of them spontaneous (e.g. the emergence of paper money) and some induced by the State (e.g. the replacement of commodity standards for central-bank fiat money).
In any case, money emerged as a result of what another Austrian economist, F. A. Hayek, called spontaneous order, that is, as an institution generated not by human design but by impersonal market forces. It wasn’t necessary for a central planner to overcome the limitations of barter. Economic agents through the market process figured out that precious metals could act as universal media of exchange, facilitating the expansion of trade and, thus, the establishment of commercial relationships among communities.
As a human institution that emerged spontaneously from the voluntary interaction of millions of individuals, money should teach us about the power of markets when it comes to providing solutions to economic problems; especially nowadays, when most people look to the government to solve their problems.
Luis Pablo de la Horra is a PhD Candidate in Economics at the University of Valladolid in Spain. He has been published by several media outlets, including The American Conservative, CapX and the Foundation for Economic Education.
In developing the policy for energy, water or waste in a model NewVistas, the overarching goal is to provide citizens with all the resources they reasonably require for their daily goals, while minimizing or eliminating any adverse social, environmental, or economic impacts. Some specific key goals for each area include:
1. Energy. Drastically decrease our consumption while increasing the efficiency of generation and while reducing adverse effects on the environment.
2. Water. Maximize use of rainwater, eliminate opportunities for waste from infrastructure, and increase costs for more excessive or unreasonable uses.
3. Waste. Capture every bit of methane, and any other useful output from waste, including information about its contents.
If the NewVistas model were implemented today in North America, it would likely make significant use of locally sourced natural gas. This is based on the assumption that it will be feasible to eliminate emissions and contamination risks at all points in the natural gas supply chain, and at levels that give deposits a chance to reform. This is feasible in a NewVistas given the level of conservation (1/10th current energy use) the model enables. Intermittent renewable sources, such as solar, hydro and wind, may make sense for the energy profile of some existing communities. But these are unlikely to be a competitive option for any NewVistas, which can justify the expense of safe localized extraction equipment for the fifteen to twenty thousand community members. Alternative sources need to be considered in locations that ban natural gas extraction, locations that allow multiple extraction points in a small region, or in locations that allow excessive extraction. Such locations may be inherently unsustainable, especially if energy needs go up more than anticipated, or the cost of renewable technology increases due to shortages of green tech materials. Where lower impact fossil fuel usage is feasible, use of traditional renewables should be limited. Some materials to produce renewable technology should be limited in case fossil fuels sources become compromised, and for when renewable technology is more advanced.
Excel spreadsheet (NVF, Schiess 2013) summarizes the time it takes to get from the central square to an outer village at normal walking speed and when crowded.
Paper (Ovuoba 2011) explains cost analysis of the system and compares the cost of the NewVistas System to conventional methods. Costs are as follows: Projected NewVistas system - $438 million, Conventional System - $13 million, annual maintenance - $0.46 million.
Sketchup model (NVF) shows layout of slow sand filters, potable and non potable water storage, and energy generator for a district.
Incorporation in General
A corporation is constituted under a statute (see: Canada Business Corporations Act. R.S., 1985, c. C-44, and the Companies Act R.S.Q. C-38) and exists as a distinct legal entity from its shareholder or shareholders. Because it is a distinct legal entity, and is not susceptible of being an “Indian” according to the Indian Act, certain benefits may actually be lost through incorporation.
The goal of a corporation is to operate a business for profit and to distribute the profits among the shareholders.
The following are some of the characteristics of a corporation:
- A corporation exists on an ongoing basis, until such time as it is wound up.
- A corporation can be set up under the authority of either a federal or a provincial statute. If you intend to operate your business solely in Québec, it might be advisable to incorporate under a provincial statute. However, the corporate name of a federally incorporated entity is protected throughout Canada.
- A corporation has exclusive ownership of all property (whether money or personal property) transferred to it by its shareholders in exchange for shares of the corporation.
- A shareholder’s liability for the corporation’s debts is limited to his or her investment, unless the shareholder provided personal guarantees for a loan to be invested in the corporation’s business.
Liability of Directors
If the corporation fails to remit an amount payable to the Ministère, the corporation and the directors serving at the time of the omission are jointly liable for the amount in question, as well as any penalties and interest. However, directors are not liable if they acted with reasonable care, dispatch and skill under the circumstances, or if it was impossible for them to be aware of the omission.
Advantages and Disadvantages of Federal and Provincial Incorporation
The next decision is whether to incorporate your company federally or provincially. If you incorporate federally (a “Corporation”), your business will be empowered to conduct business throughout Canada. Although your “corporation” will still be subject to provincial regulations, and will have to pay a license or registration fee in some provinces, no province will be able to prevent your company from conducting business under its corporate name. A provincially incorporated company, (a “Company”) on the other hand, may not be able to operate under the same name in another province, if another corporation with a similar name already exists in that province.
One disadvantage of federally incorporating your company is the required disclosure of financial records. A private corporation’s financial statements must be made public if a federal corporation has gross revenues for a fiscal period in excess of $10 million, or has total assets in excess of $5 million as of the last day of any fiscal period. These gross revenues and total asset figures include those of affiliated companies and the parent company (Canadian Business Guide).
Also, to federally incorporate, the composition of your company’s board of directors must meet the requirements of the Canada Business Corporation Act. Under this Act, a majority of the directors of a federally incorporated company must be resident Canadians, unless “a holding corporation earns in Canada directly or through its subsidiaries less than five per cent of the gross revenues of the holding corporation and all of its subsidiary bodies corporate together, then not more than one-third of the directors of the holding corporation need be resident Canadians” (“…Incorporation Kit, Industry Canada).
Finally, corporations provide more security for minority shareholders than do companies. So when a client is in a minority shareholding position, we suggest incorporating at the federal level.
Industry Canada’s Small Business Guide to Federal Incorporation provides detailed information on how to federally incorporate your company. Federal incorporation costs $500.
If you incorporate your company provincially, you’ll have to register and license your company through the appropriate provincial Registrar in each province and territory you wish to do business in, outside of the original incorporation jurisdiction. So if you incorporate your business in Ontario, and then want to operate in New Brunswick as well, you’ll have to register your business with the New Brunswick Registrar as well, and pay the appropriate additional fees. Incorporation fees vary from province to province, but generally, provincial incorporation costs about half as much as federal incorporation.
Posted in: Small Business Services
As posted on FosterEDU blog
After donning the cap and gown, what comes next for high school graduates?
Completing high school is a major milestone for students, but a high school diploma is no longer the academic finish line. For today’s competitive job market, a high school diploma is merely a prerequisite for the next educational endeavor. Today’s high school graduates have several options for higher education outside the typical four-year college.
Skyrocketing college tuition and fees within the past few decades, referred to as the “college cost crisis,” have caused American students and families to question whether studying at a traditional university and earning a bachelor’s degree is the only viable route to take. Hiring freezes, employer budget cuts and the unemployment epidemic following the Great Recession have left recent college grads jobless or underemployed with colossal student loan debt.
Earlier this year, USA Today called the job outlook for 2014 college grads “puzzling.”1 Young people continue to face a slow economic recovery, despite reports of increased job availability and drops in unemployment rates. The Economic Policy Institute released a report in May that read, “The class of 2014 will be the sixth consecutive graduating class to enter the labor market during a period of profound weakness.”
Value of the Skilled-Trades Worker
Amid this uncertain job outlook, the skilled-trades sector is struggling to find qualified workers, particularly for jobs in manufacturing and construction. According to a report from the ManpowerGroup, this skills gap is predicted to become even more acute as an aging labor force faces retirement. Who will fill these in-demand jobs?
High school graduates may not envision a future as professional tradesmen, yet 49 percent of Americans work in a grey-collar position. States like North Dakota, Arizona and Texas have prevalent grey-collar jobs as well as high job growth. Grey-collar jobs are on the rise in states like Texas, California and Illinois as well.2 Iowa identified a gap in qualified workers to fill middle-skilled jobs; a 2013 report by the Iowa Workforce Development found middle-skilled trades make up 55 percent of Iowa jobs, whereas only 33 percent of Iowa workers actually possess the skills needed to perform these jobs.3
This discrepancy calls for a new generation of tradesmen and women, and promotion of the skilled trades. The poor perception of these types of jobs—that they are somehow less valuable and esteemable than careers that require a four-year degree—has crippled both the industry as a whole and the local companies lacking qualified candidates to fill their open positions.
Educators can help dispel the stereotypes associated with grey-collar jobs and create awareness about these available and secure professions, which include electrical technicians, machinery operators, welders and machine tool setters. Steven Schneider, a school counselor at Sheboygan South High School and former American School Counselor Association board member, agrees; he told USA Today that schools need to emphasize the other paths and options that are available today, besides traditional four-year colleges. If school professionals can convey to students how workforce training creates employability and a competitive advantage in the labor market, they can help mitigate the skills gap, employ a population of non-traditional students and support the national economy.
Job Training & Skilled-Trades Preparation
Career colleges, technical institutes and apprenticeship programs are designed to provide students with the specific training and knowledge to work at local companies as skilled tradesmen. Educators should foster awareness around this in-demand employment sector to pique interest in high school students who are unsure about their future pursuits. Unless school professionals advocate grey-collar professions, students don’t know they can pursue these lucrative careers. The employment sector especially needs qualified workers with specialized training and skill sets in the following five careers:
– Civil construction and technology: Our country’s cities and towns never stop changing, which is why skilled workers in infrastructure design and construction are essential. Civil engineering technicians are involved with the development of highways, buildings, structures, bridges, water systems and sewage systems. Trainees should enroll in a rigorous program offering practical applications. Classroom and real-world experience will prepare trainees to work with construction, engineering and architectural firms.4
– Electrician: Electricians comprise the largest skilled-trades group, with more than 600,000 U.S. jobs, according to EMSI.5 Electricians also rank as number 13 on Forbes’ list of 20 high-paying blue-collar jobs. The average annual salary of an electrician is $52,910, and the average hourly wage is $25.44. The top 10 percent can earn an annual pay of $82,680.6 With such employment opportunity and solid income, the electrician profession is a highly promising career choice.
– Allied Health: Allied health professionals work directly or indirectly, independently or as part of a health care team, to evaluate and assess patient health needs. Allied health professions can encompass about 200 careers, including dental hygiene, dietetics, health administration, occupational therapy, respiratory therapy and more. The field spans two broad categories: technicians (or assistants) are trained in less than two years to perform procedures under the supervision of therapists or technologists, according to ExploreHealthCareers.org.7 Therapists or technologists undergo more intensive educational training in patient evaluation, diagnosis of conditions and treatment plans.
– Cosmetology: Cosmetology and hairdressing professions have a projected job growth of 13 percent from 2012 to 2022 as the demand for personal-service jobs remains stable. In addition, there are 220,600 annual hairdresser, hair stylist and cosmetologist job openings, and by 2022, 688,700 jobs will exist in the hairdressing and cosmetology industries in the U.S., according to information shared by Toni & Guy Hairdressing Academy. And not only is cosmetology a secure field, it’s also an ideal industry for creative expression.8,9
– Culinary Arts: Culinary art professionals specialize in cooking and arranging palatable food, also seen as edible art. Culinary artists, culinarians and chefs create food that’s both pleasing to the tastebuds and aesthetically pleasing. Along with food preparation, the profession encompasses menu planning, restaurant management and overseeing a restaurant’s operation. Aspiring culinary artists and chefs refine cooking skills and earn a degree at a culinary art institute or even online culinary school.10
Noteworthy trade programs initiating and inspiring momentum to bolster the skilled-trades sector include:
– The Hobart Institute of Welding Technology11: The Hobart Institute of Welding Technology is a nine-month program that teaches students about structural steel and pipe welding. Undertaking hands-on training, students spend more than 1,000 hours practicing metal fusing. Students also work with complicated alloys and can enroll in an advanced pipe layout class. One of the top welding programs in the nation since 1930, the school graduates about 300 students annually, and 83 percent graduate with a job.
– Pipeline Programs: A pipeline program is designed for students pursuing careers in medicine and medical research, according to the Association of American Medical Colleges.12 Hostos Community College devised the Allied Health Career Pipeline Program to help more than 900 low-income individuals receive public assistance to pursue a healthcare career. The program operates enhanced allied health training and an internship program to help trainees become health professionals such as patient care or pharmacy technicians, certified nurse assistants and community health workers.13 The Hostos pipeline program even provides supportive services like childcare, transportation assistance and tutoring to help students achieve their long-term career goals.
– Commercial Diving Academy: Commercial diving earned the number seven spot on Forbes’ list of highest paid blue-collar jobs with an average annual salary of $58,640. Commercial divers specialize in industrial construction, including the inspection, repair, removal, installation and testing of underwater equipment and structures. The Commercial Diving Academy (CDA) Technical Institute, the premier dive school in America, invites prospective divers to learn more about a career as a commercial diver with a virtual story adventure that users navigate on the institute’s website. This innovative online tool provides informational details about commercial diving careers as an entertaining, interactive comic.14,15
These three trade programs exemplify how implementing real-world work experience into classroom curriculum, designing a healthcare career program targeted to low-income individuals and attracting prospective trainees with an innovative online tool can engage students uncertain about a career path.
Not only do skilled trades provide career certainty and opportunity, trade industries are extensive and varied. Skilled trades can range from cosmetology school to a commercial truck driving apprenticeship. For example, about 44 percent of hair stylists, beauticians and cosmetologists are self-employed, which creates minimal risk and job stability while building a loyal clientele and following a creative passion.16
On the other end of the spectrum, trucking jobs are in high demand as trucking companies face qualified driver shortages, reports CNNMoney. David Heller, director of safety and policy for the Truckload Carriers Association, said that in 2012, there were 200,000 long haul trucking job openings. And in addition to the 1.5 million professional drivers on the road in 2012, the U.S. Bureau of Labor Statistics predicts 330,100 more trucking jobs (20 percent increase) will be added between 2010 and 2020. Not to mention, the top 10 percent of truck drivers can earn more than $58,000 annually.17
From Career College to Long-Term Career
To match in-demand skilled-trade jobs with highly qualified workers who have the proper skill set, academic leaders and influential educators need to showcase the level of opportunity for nontraditional students embarking on higher education. Debunk the myths and dislodge the perceptions of grey and blue-collar jobs and skilled trades.
Students who engage in skills-based training and hands-on learning for a specific career can then shape themselves into appealing job candidates with immediate employment opportunities. Not only are these types of jobs available, but they provide a steady and sufficient income and a long-term career. And it all starts with a high school diploma while providing students with exciting, suitable options to choose from.
🔥 What are cryptocurrencies? All information about cryptocurrency in our guide!
Cryptocurrencies, blockchain, digital money, decentralized, volatility - terms you encounter everywhere, yet the basic explanations are often missing. What are cryptocurrencies, exactly? What possibilities does blockchain technology offer, and why is volatility both a curse and a blessing?
In the following guide you will find countless information about cryptocurrencies and the technology behind them as well as outlooks and tips and tricks on how to deal with cryptocurrencies. Because crypto currencies are virtual money, but are also seen as objects of speculation. And if you look at the price histories of some cryptocurrencies, it quickly becomes clear why more and more speculators are concerned with Bitcoin and Co. If you buy or sell again at the right time, extraordinarily high profits are possible here.
What is a cryptocurrency?
A cryptocurrency, also referred to as a cyber currency or digital or virtual money, is a currency that exists only on the Internet. That means there are no bills or coins. The name is derived from cryptography - the science of information encryption - and cryptocurrencies are built on this principle: the data on owners and transactions is stored in encrypted form. Cryptocurrencies also score with their decentralization, because nothing is stored on a single server; instead, every transaction is recorded on thousands of servers at the same time. That is also the reason why transactions cannot be forged - they are stored on countless servers simultaneously. This structure behind (almost) all cryptocurrencies is called the blockchain.
Fans and supporters see cryptocurrencies as the answer to classic, traditional finance. Digital money no longer depends on any bank or government - the owners of the digital coins become a kind of financial institution themselves. Since there is no central body that controls the currency or the flow of money, buyers retain control, but they are also responsible for keeping their assets safe.
How do cryptocurrencies work?
As already mentioned, cryptocurrencies have a decentralized structure. This means that there is no central authority, such as a central bank or government, that issues the coins of a cryptocurrency or (financially) backs them. All cryptocurrency transactions are conducted solely over a network of thousands upon thousands of computers. Note that, unlike classic fiat money, cryptocurrency balances are recorded in a blockchain. If a user wants to transfer units of a cryptocurrency, the transaction takes place between wallets, which are digital purses. A transaction counts as completed as soon as it has been verified and added to the blockchain in the course of the so-called mining process.
As already mentioned, cryptocurrencies are only available online. This means that you cannot get bills or coins and use them to pay in the store, but only pay with crypto currencies if this option is accepted by the operator of the online shop. Of course, a transfer between two people is possible - in this case one also likes to speak of the peer-to-peer currency.
What are tokens and what are coins?
Coins are, colloquially, units of cryptocurrencies. Note that there are also altcoins - a combination of the words "alternative" and "coins". Strictly speaking, only the units of Bitcoin are referred to as coins; all other digital coins are altcoins, each running on a blockchain of its own.
The token differs from the coin in several ways. A token does not have its own blockchain. In addition, tokens are issued via an ICO (Initial Coin Offering) or an airdrop on a blockchain shared with other coins. Furthermore, tokens do not necessarily represent a product, a share in a company or proof of ownership. The example of the EOS coin is instructive here: EOS initially started as a token, as it ran on the Ethereum blockchain. At a later stage, however, the developers built their own blockchain, so that EOS became an altcoin.
The fact is: a coin or altcoin is a means of payment, while a token is a product with broader functionality. The purpose of a coin or altcoin is to be treated like money: coins serve as a unit of account, are used for transfers or are stored as a store of value. Tokens may also have value, but they are not perceived as money, usually because they are hosted on another project's blockchain.
What is the blockchain?
The blockchain is the technology that stands behind the cryptocurrencies, so to speak. The unique security features available here cannot be compared to any ordinary computer file. This is exactly what makes the security precautions when trading cryptocurrencies so unique.
Note that the blockchain file is not stored on a single computer in the network, but on countless computers. For this reason, the blockchain remains transparent and is protected against attacks by hackers. Furthermore, the blockchain is resistant to system errors and to errors caused by humans.
The individual blocks in the blockchain are linked using a cryptographic hash. Any attempt to change the data breaks the existing links between the blocks, so an attempt at fraud is detected immediately.
The blockchain is ultimately the database in which all transactions are stored. The transactions can be viewed by anyone, but there is no information about the real-world identities of the parties between which the transactions took place.
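The hash-linking described above can be illustrated with a minimal sketch. This is a toy model, not the real Bitcoin data structures: the block layout and transaction strings are invented for illustration, but the core idea - each block stores the hash of its predecessor, so tampering breaks every later link - is the same.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Link a new block to the chain via the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain: list) -> bool:
    """Verify every stored link against a fresh hash of the predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, ["Alice -> Bob: 1.0"])
append_block(chain, ["Bob -> Carol: 0.5"])
assert is_valid(chain)

# Attempted fraud: rewriting history invalidates the chain immediately.
chain[0]["transactions"][0] = "Alice -> Bob: 100.0"
assert not is_valid(chain)
```

Because the same file is replicated on countless nodes, an attacker would have to rewrite not one copy but the majority of them, which is what makes forgery impractical in practice.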
How does mining work and who are miners?
So-called crypto mining is a process in which the last cryptocurrency transactions carried out are checked and then added to the blockchain as new blocks.
The mining computers select outstanding transactions from a pool and then check whether the sender has enough credit to complete the transaction at all. Furthermore, the transaction details are checked against the transaction history. The second check confirms that the sender has authorized the transaction with his private key. The transaction is finally completed when a cryptographic link is created between two blocks. The creation of a new block, i.e. an extension of the blockchain file, is rewarded by the network; the reward is currently 6.25 Bitcoin. However, the reward is halved at every Bitcoin halving, which means that after the next halving (expected in 2024) it will be only 3.125 Bitcoin.
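The halving schedule can be computed directly from Bitcoin's published parameters: an initial subsidy of 50 BTC that halves every 210,000 blocks. A short sketch:

```python
def block_reward(height: int, halving_interval: int = 210_000) -> float:
    """Bitcoin's block subsidy: 50 BTC at genesis, halved every 210,000 blocks."""
    return 50.0 / (2 ** (height // halving_interval))

assert block_reward(0) == 50.0          # genesis era, 2009
assert block_reward(210_000) == 25.0    # after the first halving
assert block_reward(630_000) == 6.25    # after the May 2020 halving
assert block_reward(840_000) == 3.125   # after the next halving
```

Since the interval is roughly four years at one block per ten minutes, the subsidy shrinks geometrically toward zero, which caps the total supply at about 21 million Bitcoin.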
Anyone can work as a miner. Because a decentralized network has no central instance to delegate the necessary tasks, cryptocurrencies need a mechanism that prevents abuse by any controlling group from the start. The system would collapse, for example, if a group created thousands of peers, forged transactions and spread them through the network. For this reason, Satoshi Nakamoto, the inventor of the cryptocurrency Bitcoin, introduced the search for a so-called hash. This means that the miner has to find a hash - the output of a cryptographic hash function - that connects two blocks to one another. This proof of work is based on the SHA-256 algorithm. It is not necessary to understand the internal details of SHA-256 to follow the idea.
At this point, however, it is fair to mention that the difficulty of these encryption puzzles keeps increasing - today such powerful hardware is necessary that, for a private individual, mining Bitcoin is hardly worthwhile in terms of effort and return. An alternative are mining pools, which you join, contributing your computing power and receiving a proportionate share of the reward.
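The proof-of-work search can be sketched as follows. This is a toy model: real Bitcoin applies SHA-256 twice to an 80-byte binary header and compares the result against a numeric target, and the header string here is invented for illustration - but the principle of trying nonces until the hash falls below a threshold is the same.

```python
import hashlib

def mine(block_header: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Low difficulty so the search finishes quickly; each extra leading zero
# multiplies the expected number of attempts by 16.
header = "prev_hash|merkle_root|timestamp"
nonce = mine(header, difficulty=4)
digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
assert digest.startswith("0000")
```

Finding a valid nonce takes many attempts, but any peer can verify the result with a single hash - that asymmetry is what makes forging the chain expensive while keeping verification cheap.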
It all started with Bitcoin - how cryptocurrencies came about!
In order to know what cryptocurrencies are all about and, above all, to be able to answer the question of how Bitcoin and Co. could develop in the foreseeable and distant future, it is important to deal with the beginnings of digital currencies.
How long have cryptocurrencies existed?
The history of Bitcoin begins with the history of cryptocurrencies. For this reason, Bitcoin is often referred to as the mother of all cryptocurrencies. Because Bitcoin is the first crypto currency - and even today, more than ten years later, Bitcoin is the most famous digital currency in the world. At this point it must be mentioned that the idea of creating a virtual or digital currency that works on the basis of cryptography was created at the turn of the millennium. Because in 1998 Nick Szabo published the idea of “bit gold”.
However, ten years had to pass before Satoshi Nakamoto first developed a concept for the cryptocurrency Bitcoin. The Bitcoin network didn't start until a year later. In January 2009, 50 Bitcoin were created for the first time as part of the mining process.
Over time, other cryptocurrencies have seen the light of day. Today there are around 3,000 different currencies available on the crypto market.
- 2009: Bitcoin
- 2011: Litecoin
- 2012: Bytecoin
- 2013: Ripple and Dogecoin
- 2014: Dash
- 2015: Ether
The history of Bitcoin, which began in 2009, is today representative of the enormous success of the entire market. Because anyone who invested in Bitcoin in 2009 could be a millionaire - if not a billionaire - in December 2017.
The 2014 hype
While Bitcoin was only a kind of toy for nerds until mid-2013, slowly but surely an unstoppable dynamic developed. After Bitcoin crossed the 1,000 US dollar mark for the first time at the end of 2013, more and more investors and speculators began to take an interest in the cryptocurrency. In 2014, Bitcoin attracted attention, but the price fell back towards 250 US dollars. Suddenly the first critical voices declared that the bubble had burst. After the price even briefly fell below 200 US dollars in early 2015, Bitcoin had already been written off. In November of the same year, the comeback could suddenly be observed: the price headed towards 400 US dollars - and at the beginning of 2017 it climbed back over the 1,000 US dollar mark.
From January 2017 to December 2017, the price rose to just under US $20,000. A correction followed a little later, however: at the end of 2018, Bitcoin traded at only about US $3,000. But in 2019, the repeatedly declared-dead Bitcoin took off again and marched towards US $12,500. A correction drove it back towards US $6,500 (November 2019), before it soared again to over US $10,000 in February 2020.
The coronavirus crash also caught up with Bitcoin, however, driving the price down to just under US $5,000. Things then went up again: from March to early November 2020, the Bitcoin price rose to over US $15,500. And if the forecasts are to be believed, Bitcoin could still top its 2017 all-time high in 2020.
What is the point of cryptocurrencies and the projects behind them?
It was the groundbreaking properties of cryptocurrencies that made them successful, a success that not even the inventor would have believed possible. The inventor's identity has still not been clarified: it remains unknown who is hiding behind the pseudonym Satoshi Nakamoto. There have been some theories, but none have been confirmed so far.
The fact is: although new digital payment systems have been launched again and again, Nakamoto managed to arouse genuine enthusiasm and fascination with Bitcoin. Today, Bitcoin and the blockchain are regarded as almost more of a revolution than merely an additional option for paying with digital money alongside euros and US dollars. Cryptocurrencies are often referred to as digital gold, but the underlying technology, the blockchain, may be applicable in far more areas and industries than anyone thought at the beginning. Above all, Bitcoin is convincing because of its protection against inflation, and it is often referred to as a crisis currency, even if a sharp correction was briefly observed in the course of the corona crisis. After all, there is no other means of payment in the world that can be used everywhere, is anonymous and incurs hardly any fees.
With regard to blockchain technology, there are always new projects. One innovative use of the technology is smart contracts, that is, intelligent contracts: classic "if-then" programming that can save time and money. In the leasing and insurance sector in particular, smart contracts could be the key to a new system: if the leasing rate or insurance premium is not paid, the key does not unlock the car; if the rate is paid, the vehicle is released again. It all happens automatically.
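The "if-then" logic described above can be sketched in ordinary code. The following is a toy illustration only, with a hypothetical class name and figures, not real on-chain smart-contract code:

```python
from datetime import date

class LeaseContract:
    """Toy model of the 'if-then' rule a leasing smart contract encodes."""

    def __init__(self, monthly_rate: float):
        self.monthly_rate = monthly_rate
        self.paid_through = None  # last month covered by a full payment

    def pay(self, amount: float, month: date) -> None:
        # The contract only registers a payment covering the full rate.
        if amount >= self.monthly_rate:
            self.paid_through = month

    def unlock_vehicle(self, today: date) -> bool:
        # If the current month is paid, the car unlocks; otherwise it stays locked.
        if self.paid_through is None:
            return False
        return (today.year, today.month) <= (self.paid_through.year,
                                             self.paid_through.month)

lease = LeaseContract(monthly_rate=400.0)
print(lease.unlock_vehicle(date(2020, 11, 15)))   # False: nothing paid yet
lease.pay(400.0, date(2020, 11, 1))
print(lease.unlock_vehicle(date(2020, 11, 15)))   # True: November is covered
```

On a real blockchain, the same conditions would be enforced by the network rather than by a single program, which is what removes the need for a trusted intermediary.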
It should be noted that cryptocurrencies are also perceived as pure objects of speculation. The price history observed with Bitcoin, for example, as well as forecasts that sometimes see the cryptocurrency at over US $20,000, particularly attract risk-conscious and opportunity-oriented investors and savers who no longer want to accept the European Central Bank's seemingly never-ending zero interest rate policy.
What cryptocurrencies are there?
A selection from the top 100 cryptocurrencies (in alphabetical order):
- Aave et
- Aave link
- Aave USDC
- Band protocol
- Basic Attention Token (BAT)
- Binance Coin
- Bitcoin Cash
- Bitcoin Gold
- Bitcoin SV
- Celsius Network
- Crypto.com Coin
- Energy web token
- Enjin Coin
- Ethereum Classic
- FTX Token
- Hedera Hashgraph
- Huobi Token
- Kyber Network
- LEO Token
- Nexus mutual
- Ocean Protocol
- OMG Network
- Paxos Standard
- Reserve Rights Token
- Synthetix Network Token
- Tether gold
- Theta Network
- USD Coin
- Wrapped Bitcoin
How many cryptocurrencies are there currently?
At the current time (as of early November 2020) there are around 3,000 cryptocurrencies.
Which factors influence the price of cryptocurrencies?
The cryptocurrency market primarily responds to supply and demand. Because it is a decentralized market, economic and/or political events are hardly decisive. While US dollars, euros and other fiat currencies react to political decisions and events, and economic development also plays a decisive role for them, Bitcoin and co. are comparatively stable here. Even if experts are not yet entirely sure which factors cryptocurrencies react to, there are at least a few clues:
- Supply: The total number of coins, as well as the rate at which coins are issued, destroyed or lost
- Integration: Can the crypto currency already be used in many online shops or is it an unknown currency that only insiders know about?
- Market capitalization: The user's perception of price movements and the value of all coins actually in circulation
- Press: The presentation of the market and the importance and staging of the topic in media reporting
- Key events: Economic setbacks, security breaches or updates by supervisory authorities are so-called main events that have an enormous impact on the crypto market
How can I buy cryptocurrencies?
Anyone who has dealt with the opportunities and risks and has come to the conclusion that it is now time to invest their money in crypto currencies must first find an appropriate platform in order to then be able to buy coins. The crypto exchange eToro is particularly recommended here. More information on buying cryptocurrencies can be found in the article “Buying cryptocurrencies - Everything you should know!”.
Conclusion: cryptocurrencies simply explained!
Cryptocurrencies are digital money. Whether Bitcoin, Ether or Ripple, cryptocurrencies can be used to pay in online shops, and there is also the option of speculating with them.
As long as you are aware of the risk, there is nothing to prevent you from working with cryptocurrencies in whatever way.
What exactly is the difference between a token and a coin?
Probably the biggest difference between a coin and a token is that coins are independent cryptocurrencies that require no external platform to run, whereas tokens are created on top of existing platforms. A token is therefore better described as a digital asset than as a currency.
Which cryptocurrency is the best?
The cryptocurrency that, according to the experts, has the greatest potential is Bitcoin. It was ultimately also Bitcoin that set the most impressive all-time high to date at the end of 2017, with a price of almost US $20,000.
What and where can I pay with crypto?
Ultimately, you can pay for all products with crypto money, provided the seller accepts the digital money.
Are cryptocurrencies the money of the future?
Yes. There are some predictions that Bitcoin is already so advanced in society that a disappearance of the cryptocurrency is considered unlikely.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9343462586402893,
"language": "en",
"url": "https://financeyouinternational.com/tips-and-tricks-for-closing-the-books/",
"token_count": 702,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.01556396484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b55a46b9-c888-4375-be81-9e87d8689341>"
}
|
Closing the books is what we accountants live for. For those non-accountants, it is the process where the accounts are checked for accuracy at the end of each period. Closing the books can refer to either the month-end or year-end close. Both are essentially the same process, although the year-end close typically requires greater precision and analysis as the financial reports that are prepared are distributed to a wider audience, often subject to audit. The following discussion focuses on the month-end close.
The purpose of the monthly close is to provide accurate financial reports and key performance indicators to assist management in overseeing the organization. The close helps drive performance, increases accountability and strengthens internal controls. The close involves examining both Balance Sheet and Profit and Loss accounts to ensure that these are correct. Accuracy is ensured through reconciliations, analysis, and comparisons.
The Balance Sheet component of the close typically requires five key processes:
- All cash accounts are reconciled to bank accounts;
- All control accounts are agreed to their subledgers. Control accounts typically include Accounts Receivable, Inventory, Fixed Assets, and Accounts Payable;
- Prepaid balances are reviewed to ensure that amortizations are correct;
- Payroll clearing accounts are cleared; and,
- Sundry debits and credits are analyzed to ensure these are appropriate and necessary accruals are recorded. Accruals can be required for bad debts, severance, claims and other liabilities.
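As a minimal sketch of the second step above, agreeing a control account to its subledger amounts to comparing two totals. The account names and amounts below are hypothetical:

```python
# Toy reconciliation of an Accounts Receivable control account to its subledger.
control_balance = 125_400.00           # AR balance per the general ledger

subledger = {                          # open invoices per the AR subledger
    "INV-1001": 40_000.00,
    "INV-1002": 55_400.00,
    "INV-1003": 30_000.00,
}

subledger_total = sum(subledger.values())
difference = control_balance - subledger_total

if abs(difference) < 0.01:
    print("Control account agrees to subledger.")
else:
    # Any difference must be investigated and corrected before the close.
    print(f"Out of balance by {difference:,.2f}; investigate before closing.")
```

The same comparison applies to the other control accounts (Inventory, Fixed Assets, Accounts Payable), each against its own subledger.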
Statement of profit and loss
The Profit and Loss component of the close requires that line items are compared to both prior year and budget with explanations being obtained for significant differences. If the organization is a for-profit entity then changes in Gross Margin or Gross Profit percentages should also be reviewed and understood.
An important part of the close is to review cut-off to ensure that transactions are recorded in the correct period. This also involves matching revenues with expenses.
Roll forwards should be undertaken for changes in Balance Sheet accounts that have Profit and Loss impact. These roll forwards reconcile the changes in accounts with their corresponding Profit and Loss impact. Examples include depreciation, bad debt and severance expenses.
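A depreciation roll forward of this kind can be reduced to one line of arithmetic. The figures below are hypothetical:

```python
# Toy fixed-asset roll forward: the movement in accumulated depreciation on the
# Balance Sheet, adjusted for disposals, should tie to the P&L expense.
accum_depr_opening = 480_000.00
accum_depr_closing = 492_500.00
disposals_accum_depr = 2_500.00   # accumulated depreciation removed on disposals

expected_expense = accum_depr_closing - accum_depr_opening + disposals_accum_depr

depreciation_expense_per_pl = 15_000.00   # expense actually posted to the P&L
assert abs(expected_expense - depreciation_expense_per_pl) < 0.01
print(f"Roll forward ties: depreciation expense {expected_expense:,.2f}")
```

Analogous roll forwards can be built for the bad debt allowance and the severance accrual.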
Each controller or finance professional will develop their own best practices through trial and error and their own experience within their organization. Some of these include:
- A month-end checklist should be developed to ensure that all necessary steps are completed. The checklist includes the tasks to be performed, when each should be completed, who is to complete it, and an area for initials to acknowledge completion. This checklist should be maintained for each close.
- Assessment of both financial and non-financial key performance indicators (KPIs). Non-financial indicators could include volume and the number of full-time equivalents.
- The month-end close should be completed on a timely basis. Timeliness helps drive performance, identify trends and helps provide for quick resolution of performance issues. To ensure timeliness the month-end close should actually start before month-end. Typically payroll entries can be entered into the accounting system before the end of the month. Preliminary account reviews can identify posting errors and provide for early correction.
The end product of the monthly close is a report to management which includes financial statements, key performance indicators and a narrative explaining significant variances from budget or prior period. The management report is a value add that accountants provide to the organization, assisting other members of the organization to make informed business decisions.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9742128849029541,
"language": "en",
"url": "https://studentfintech.com/repaying-private-student-loans/",
"token_count": 518,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.189453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8117da4c-7725-4909-8c68-90d07992959c>"
}
|
One of the problems with borrowing money is that it has to be paid back sooner or later. This may seem like an obvious statement. But many people fail to consider this concept when they are excited about going to college. They seldom think about repaying private student loans when they are first signing the paperwork. They think of the payback period as a long way in the future.
But time flies faster than people think. Before you know it, the four years are done, graduation happens, and it’s time to pay the loan back. There is often a grace period before payback begins, but this only buys a little time.
As with shopping for a new car or home, salespeople sometimes use the emotional excitement of new borrowers to get them to sign an agreement before they are ready. It's important to take one's time when considering all of the options.
When considering repaying private student loans, borrowers should think about how they will work the payments into their budget. This is sometimes difficult since students don’t know exactly how much money they will make after graduation. But it should inspire them to do the best they can to get enough education to be successful.
As a general rule, students should never take out more money than they can afford to repay every month. But don't think about it only in terms of a monthly payment; the long-term cost should also be considered. Experts in the financial field recommend never borrowing more than someone can comfortably pay back in about 10 years.
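To see what the 10-year guideline implies, the standard loan amortization formula can be applied. The rate and amounts below are hypothetical:

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortization formula: payment = P*r / (1 - (1+r)**-n)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical: $30,000 borrowed at 6% APR, repaid over 10 years.
payment = monthly_payment(30_000, 0.06, 10)
print(f"Monthly payment: ${payment:,.2f}")   # about $333 per month
```

Multiplying the payment by 120 months shows the total repaid (roughly $40,000 here), which makes the long-term cost of the loan visible, not just the monthly figure.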
Why Borrowers Get Into Trouble With Repaying Student Loans
According to financial experts, student borrowers often get into trouble with their student loan debt because they delay paying it for so long that a small amount becomes a colossal one. Since interest accrues over time, the amount that must be paid back increases each year, and during each deferment in which you choose not to pay.
Most federal lending institutions allow students to defer payments for periods of six months or one year at a time. During this time, however, interest continues to accrue unless you pay it off during or at the end of the deferment period.
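Most US federal student loans accrue simple daily interest, so the cost of skipping payments during a deferment is easy to estimate. The loan figures below are hypothetical:

```python
def deferment_interest(principal: float, annual_rate: float, days: int) -> float:
    """Simple daily interest, the convention used for most US student loans:
    interest = principal * (annual_rate / 365) * days."""
    return principal * (annual_rate / 365) * days

# Hypothetical: $20,000 unsubsidized loan at 5%, deferred for one year.
accrued = deferment_interest(20_000, 0.05, 365)
print(f"Interest accrued during deferment: ${accrued:,.2f}")

# If the accrued interest is not paid, it may be capitalized (added to the
# principal), so future interest accrues on the larger balance.
new_principal = 20_000 + accrued
print(f"New principal after capitalization: ${new_principal:,.2f}")
```

This is why paying at least the interest during a deferment keeps a small balance from snowballing.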
If a student returns to school while they are paying back their student loan, they may be able to defer payments if they are in school for at least half time.
One thing to remember is that, with Direct Subsidized federal loans, the government pays the interest that accrues while the borrower is under an in-school deferment.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8511285781860352,
"language": "en",
"url": "https://www.canada.ca/en/financial-consumer-agency/services/financial-toolkit/fraud/fraud-1/15.html",
"token_count": 332,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2373046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:be521ba6-7d6f-4337-be53-79f9b84e5ed6>"
}
|
12.1.14 Summary of key messages
- 12.1.1 Fraud awareness quiz
- 12.1.2 Types of fraud
- 12.1.3 Mass marketing fraud
- 12.1.4 Investment fraud
- 12.1.5 Payment scams
- 12.1.6 Credit card and debit card fraud
- 12.1.7 Video: Debit and credit card fraud
- 12.1.8 Other frauds
- 12.1.9 Why we fall for fraud
- 12.1.10 Case study: Affinity fraud
- 12.1.11 Detect fraud and scams
- 12.1.12 Signs of frauds and scams
- 12.1.13 How to spot fraud
- 12.1.14 Summary of key messages
- Fraud can target anyone.
- There are many different types of fraud. Educate yourself and your family about them.
- Be aware of the techniques that fraud artists use, such as peer pressure and a sense of urgency.
- Never send money or participate in a financial offer unless you are sure it is legitimate.
- Never give out your personal financial information unless you are sure the person or organization is legitimate. If in doubt, contact your financial institution, securities regulator or the Better Business Bureau.
At the end of the module, you will find an Action plan. This is a tool that you can use to track your progress in protecting yourself against financial fraud in the future. Use the action plan as a roadmap for financial action!
Report a problem or mistake on this page
- Date modified:
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9374545812606812,
"language": "en",
"url": "https://www.financestrategists.com/finance-terms/noi/",
"token_count": 238,
"fin_int_score": 5,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.01031494140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2635a1cb-c2f2-43a7-a51a-397528101bb8>"
}
|
Net Operating Income (NOI) Definition
Define NOI In Simple Terms
NOI is calculated by taking the total revenue of a property and subtracting all reasonably necessary operating expenses.
It is a before-tax figure, showing up on the property’s income and cash flow statements, and it excludes payments on loans, capital expenditures, depreciation, and amortization.
The formula for net operating income is as follows:

NOI = RR - OE

where "RR" is real-estate revenues and "OE" is operating expenses.
Sources of revenue included in the NOI calculation may include rental income, parking structures, vending machines, and laundry facilities.
Operating expenses include the cost of maintaining and operating the building, including insurance, legal fees, and utilities.
NOI is used by real-estate investors to determine the capitalization rate of a property, which itself is a measure of the rate of return on a property investment.
For financed properties, NOI is also used to calculate the debt coverage ratio, or DCR, which tells lenders and investors whether or not a property is generating enough income to cover its debts and expense payments.
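Putting the pieces together, the following sketch (all figures hypothetical) computes NOI, the capitalization rate, and the DCR for a single property:

```python
# Hypothetical single-property example tying together NOI, cap rate and DCR.
rental_income  = 120_000.0
parking_income =  10_000.0
laundry_income =   2_000.0
revenue = rental_income + parking_income + laundry_income    # RR

insurance   =  8_000.0
utilities   =  6_000.0
maintenance = 14_000.0
legal_fees  =  2_000.0
operating_expenses = insurance + utilities + maintenance + legal_fees  # OE

noi = revenue - operating_expenses   # NOI = RR - OE (before tax; excludes
                                     # debt service, cap-ex, depreciation)

property_value = 1_400_000.0
cap_rate = noi / property_value      # rate of return on the property

annual_debt_service = 80_000.0
dcr = noi / annual_debt_service      # >1 means income covers debt payments

print(f"NOI: ${noi:,.0f}, cap rate: {cap_rate:.1%}, DCR: {dcr:.2f}")
```

A DCR above 1 indicates the property generates enough income to cover its debt and expense payments; lenders typically look for a cushion comfortably above that.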
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8973793983459473,
"language": "en",
"url": "https://www.webnewswire.com/2018/05/02/aerial-imaging-market-2016-2024-historical-analysis-opportunities-and-strong-growth-in-future/",
"token_count": 831,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.06494140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e4899d45-a3f4-4e35-9cdb-fb13b06fa100>"
}
|
The Global Aerial Imaging Market is projected to reach USD 3.3 billion by 2024. Aerial imaging involves capturing images of the ground using unmanned aerial vehicles (UAVs), helicopters, dirigibles, blimps, kites, and parachutes. It provides useful information for volume calculation, map renovation, planning, and route design.
Factors driving the market are popularity of aerial images and the need for geospatial information in various fields. It derives much of its demand from forestry & agriculture and other commercial sectors. Advances in technology such as LiDAR (Light Detection and Ranging), IMU (Inertial Measurement Unit) and airborne GPS (global positioning system) can boost market growth during the forecast period (2016-2024). Camera technologies such as Microsoft’s Hawk & Eagle and Vexcel Imaging’s UltraCam Osprey can fuel industry demand. But security concerns and operational issues can impede growth.
The worldwide Aerial Imaging Market is segmented according to applications and regions. Insurance, government, military & defense, agricultural & forestry, GIS (geographical information system), civil engineering, commercial, and energy are the key application areas. Military & defense will exceed USD 250 million by 2024. UAVs and Personal Aerial Mapping System (PAMS) are used by the military for reconnaissance missions. These systems are also employed in ground-based commercial applications. PAMS provide detailed photographs of surfaces at affordable costs.
Browse Details of Report @ https://www.hexaresearch.com/research-report/aerial-imaging-market
UAVs can be deployed even in highly dangerous military operations, since these can execute accurate and repetitive commands in unfavorable settings. Farmers use UAVs to monitor their fields and crops. The insurance sector uses it to survey damages after floods or hurricanes and thus settle claims. Energy companies use aerial imagery to inspect transmission lines, gather data from solar & wind power plants, and provide additional security to these plants.
Government applications are estimated to cross over USD 950 million by 2024. The government sector uses drones for urban planning, energy management, environmental planning, and homeland security. Commercial applications could generate maximum revenue during the forecast period. These include advertising and promotional activities. Aerial photography provides land cover maps, soil maps, and vegetation maps with the help of spatial data. Combining this technology with GIS can aid in urban planning.
Regions include Europe, North America, Latin America (LATAM), Asia Pacific (APAC), and Middle East & Africa (MEA). APAC is expected to garner over USD 500 million by 2024. The growth can be attributed to the presence of semiconductor industries in the region. Consumer needs and rapid technological developments will fuel demand. North America will lead the global Aerial Imaging Market over the forecast period. This owes to the growth of the telecommunications industry. Europe will experience modest growth in the aforementioned period.
Prominent players in the global Aerial Imaging Market are Kucera International Inc., EagleView Technologies, AeroMetric Inc., and Google Inc. Some of these companies market their own software & equipment, while others lease them or operate within the industry by collaborating with software providers. For instance, EagleView Technologies offers energy & utilities, construction, real estate, and safety & federal applications. New market entrants can act as data partners to existing companies or as separate imaging companies. Participants undertake initiatives to develop current technologies to gain a competitive edge in the market.
Browse Related Category Market Reports @ https://www.hexaresearch.com/research-category/next-generation-technologies-industry
Hexa Research is a market research and consulting organization, offering industry reports, custom research and consulting services to a host of key industries across the globe. We offer comprehensive business intelligence in the form of industry reports which help our clients obtain clarity about their business environment and enable them to undertake strategic growth initiatives.
Felton Office Plaza
6265 Highway 9
Felton, California 95018
Website – https://www.hexaresearch.com
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9375007152557373,
"language": "en",
"url": "http://bcompetitive.in/u-n-approves-9-million-in-aid-for-crisis-stricken-venezuela/",
"token_count": 580,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.458984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:7f359008-a26b-454b-97a1-ba60217c697d>"
}
|
The United Nations on Monday announced $9.2 million in health and nutritional aid for crisis-stricken Venezuela, where hunger and preventable disease are soaring amid the collapse of the country’s socialist economic system.
The U.N. Central Emergency Response Fund (CERF) will support projects to provide nutritional support to children under five years old, pregnant women and lactating mothers at risk, and emergency health care for the vulnerable.
Venezuela has been in an economic depression for at least half a decade, adding to hyperinflation and mass food shortages. Millions of citizens have left Venezuela to find more opportunity in other Latin American countries.
About the UN Central Emergency Response Fund:
It is a humanitarian fund established by the United Nations General Assembly on December 15, 2005, and launched in March 2006.
CERF's objectives are to: 1) promote early action and response to reduce the loss of life; 2) enhance response to time-critical requirements; and 3) strengthen core elements of humanitarian response in underfunded crises. Through these, CERF seeks to enable more timely and reliable humanitarian assistance to those affected by natural disasters and armed conflicts.
The fund is replenished annually through contributions from governments, the private sector, foundations, and individuals.
The CERF grant element is divided into two windows:
Rapid Responses (approximately two-thirds of the grant element)
The Rapid Response window provides funds intended to mitigate the unevenness and delays of the voluntary contribution system by providing seed money for life-saving, humanitarian activities in the initial days and weeks of a sudden onset crisis or a deterioration in an ongoing situation. The maximum amount applied to a crisis in a given year typically does not exceed $30 million, although higher allocations can be made in exceptional circumstances.
Underfunded Emergencies (approximately one-third of the grant element).
The Underfunded Emergencies window supports countries that are significantly challenged by “forgotten” emergencies.
Hyperinflation is the biggest problem faced by Venezuela. The inflation rate there is expected to reach a stunning one million percent this year, putting it on par with the crises of Zimbabwe in the 2000s and Germany in the 1920s, according to the International Monetary Fund. The government claims that the country is the victim of “economic war” and that the major issues are due to opposition “plots” and American sanctions.
What caused this increase?
The plummeting oil price since 2014 is one of the main reasons why Venezuela's currency has weakened sharply. The country, which has rich oil reserves, depended heavily on them for its revenue. When the oil price dropped drastically in 2014, Venezuela, which received 96 percent of its export revenue from oil, suffered a shortage of foreign currency. This made importing basic essentials like food and medicine difficult.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9707427620887756,
"language": "en",
"url": "https://www.europeanceo.com/home/featured/norways-electric-vehicle-market-is-miles-ahead-of-the-rest-of-the-world-but-can-it-maintain-its-position/",
"token_count": 1584,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.228515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3fea2563-7ef0-43c3-a638-7d71afeeea9c>"
}
|
Author: Charlotte Gifford
27 Mar 2020
In 1995, the pop band A-ha began an electric revolution in their native Norway. Driving around in the country’s first electric car, the group repeatedly refused to pay the fare on toll roads. They weren’t just trying to save pennies: the stunt was part of a plan by the environmental NGO Bellona to pressure Norway into granting exemptions for zero-emissions vehicles. A-ha was fined a total of 17 times on the journey and even had their vehicle impounded, but finally the Norwegian Government lifted the fee. Electric cars have been able to drive on toll roads for free ever since.
Today, Norway has the highest rate of electric car ownership in the world – 58 percent of new cars sold in March 2019 were electric – thanks to a host of financial incentives offered by the government. If demand continues to surge, the country should be on track to reach its goal of phasing out fossil fuel cars by 2025. The question is whether Norway’s electric vehicle market will enjoy continued success when the incentives are inevitably scaled back.
While Oslo is renowned for its clean, quiet streets and plentiful green spaces, it’s no stranger to the environmental issues that plague other European capitals. “Two years ago, if you walked on a pedestrian bridge over the ring roads, the smell was terrible,” said Sture Portvik, the project leader and manager for electromobility in Oslo. “You felt like you might as well have gone smoking.” Between 2006 and 2016, air pollution is thought to have contributed to around 50,000 deaths in the city.
It remains to be seen whether Norway’s electric vehicle market can thrive once the training wheels are taken off
But Norway has made great strides towards cleaner air. According to Portvik, Oslo has seen its CO2 emissions drop by 36 percent in the last four years, as consumers ditched fossil fuel vehicles for emission-free alternatives. In the first six months of 2019, almost half of the cars sold in Norway were electric. As a point of comparison, in the UK, electric cars made up just two percent of vehicle sales. The difference is that Norway offers its electric car owners a range of financial benefits: not only are zero-emission cars able to use toll roads for free, but they also enjoy free parking and are exempt from Norway’s high purchase tax.
Initially, Norway’s incentives had little impact on electric car sales. That’s because when incentives were first introduced in the 1990s, electric cars hadn’t yet won consumers over. Norwegians could only choose between the Think and Buddy models – boxy vehicles with space for no more than two people and a shopping bag. Then, in 2011, Nissan released the Leaf to market. It was a family car with room for four to five people. Not long after that came Tesla’s Model S, followed by the Volkswagen e-Golf. Suddenly, it was fashionable to own an electric car – and, thanks to Norway’s package of incentives, it made financial sense too.
Running out of charge
Norwegian officials knew that in order for the majority of consumers to go electric, it had to be convenient as well as affordable. In 2008, Oslo became one of the first cities in the world to launch a charging infrastructure programme for electric vehicles. As a result, Norway now has more than 10,000 public charging points.
But it still hasn’t been enough to keep up with demand. While people who live in detached houses can install chargers in their parking spaces at home, 60 percent of Oslo’s population reside in apartments and are therefore dependent on public chargers. “There are approximately 1,950 fast-charging stations in Norway today, but 75 percent of electric vehicle owners have regularly experienced charging queues,” said Øyvind Solberg Thorsen, Director of OFV, Norway’s information council for road traffic. “It is estimated that we need an additional 1,200 fast chargers per year to keep up with the increase in demand.”
But building more chargers is not only expensive; it’s an inelegant solution. Space in Oslo is limited, so cars need to be charged as efficiently as possible. Technological advances play a crucial role here: in 2019, city officials announced plans to make Oslo the world’s first city to install wireless charging systems for electric taxis. As part of this system, electric taxis would simply park on charging plates in the ground and power up while waiting for new customers.
Portvik admits that systems like these create a ‘chicken and egg’ problem: “You need a mass market, then you can start earning money,” he told European CEO. “But in order to get a mass market, you need the charging infrastructure.” To get around this dilemma, Norway set up joint ventures with the private sector. Companies are paid by the local government to set up and operate fast-charging stations, and then the net profit is divided between them and the city. “Since 2016, we have tripled the number of fast chargers in the city,” Portvik said. “Already, we have earned more than we ever invested in those fast-charging stations. It’s a good business case.”
Applying the brakes
Until now, Norwegians have enjoyed enviable perks as a result of their electric car ownership. Some can save €350 a month in operating costs by going electric. But the good times can’t last forever. “It has to be scaled back,” said Portvik, “and it’s already started. From March 2019, we actually started to charge a little bit for using public chargers.” Drivers of electric cars now have to pay €1 per hour for curbside charging, as well as a €1 toll on the ring road. It’s clear that electric car owners can no longer expect a free ride from the government.
It remains to be seen whether Norway’s electric vehicle market can thrive once the training wheels are taken off. According to the Norwegian Electric Vehicle Association, about 72 percent of buyers choose an electric car for financial reasons, and only 26 percent for environmental ones. This suggests electric cars might not be so popular once the incentives are scrapped. And there are other challenges, as well: while Solberg Thorsen is optimistic that electric car sales could reach an all-time high in 2020, he recognises that supply problems may put a dent in those forecasts. “There is some concern that the original equipment manufacturers won’t be able to deliver enough cars to Norway, due to increasing demand for electric vehicles in other European countries,” he said. Battery shortages, for example, remain a serious obstacle for carmakers: many electric car brands, including the Nissan Leaf and Volkswagen e-Golf, have long waiting lists due to production constraints and the scarcity of battery metals.
So far, Norway has been Europe’s testing ground for widespread electric vehicle adoption. Admittedly, the conditions of this experiment are difficult to imitate, as – despite being a big producer of oil and gas – Norway gets 98 percent of its domestic energy from renewable sources like hydropower. This means that, unlike many nations, it generates almost no carbon footprint in the manufacturing process. It’s also taken significant government expenditure for Norway to make its electric dream a reality. However, this may simply be the price countries have to pay if they hope to follow Norway down the road to electrification.
In recent years, Indonesia's renewable energy power market has grown rapidly. In 2017, renewable energy accounted for 12.62% of the national power supply, not only exceeding the planned target but nearly doubling its previous share of the country's overall power mix. The Indonesian Minister of Energy and Mines has said the government aims to raise renewable energy to 23% of Indonesia's power supply by 2025. Renewable energy is thus one of the key areas for investment cooperation between China and Indonesia, for the following reasons:
1. There is a big gap in Indonesia’s power supply
Indonesia is one of the countries with the fastest-growing energy consumption in the world. Total installed generating capacity is currently about 50 million kilowatts, and electricity penetration is below 75%. In 2017, Indonesia's per capita electricity consumption was 1,012 kWh, with demand growing by 10–15% per year, and more than a quarter of the population had no access to electricity. Judging from the 2016–2025 power plan, Indonesia's electricity supply will remain short for a long time. In addition, as an archipelago, Indonesia has many remote islands that urgently need off-grid generation facilities and new power storage equipment, which provides huge room for renewable energy development.
2.The Indonesia government encourages the development of renewable energy power generation
To attract foreign capital into the renewable energy market, the Indonesian government offers investors fiscal and other preferential policies. Indonesia's state-run PJB Power Company and the UAE's Masdar have reached an agreement to jointly build the world's largest floating photovoltaic power plant, with a capacity of 200,000 kilowatts, at the Qildada Reservoir in West Java. The project will cost $300 million and is due to enter production in 2020. The Indonesian government will grant the project tax relief to keep construction and operating costs below the regional average, thereby encouraging foreign investors to step up development of the renewable energy market.
In view of Indonesia's policy support, China's new-energy and power-related companies can pursue broader and deeper project cooperation with Indonesia in power generation, power equipment, and grid construction, continuing to extend the industrial chain.
By Elaine Tan, Chief Executive Officer, WWF-Singapore
What do Jackie Chan and basketball legend Yao Ming have in common with thousands of people in Singapore? All have spoken out against consuming shark’s fin.
An increasing number of people here have stopped eating it altogether. In the past month alone, 3,000 pledged to say “no” to shark’s fin on the WWF-Singapore website.
While an increasing number of people in Singapore have reduced or stopped consuming shark fin, this dish continues to be an emotional and divisive issue. This is why a recent debate that emerged on this issue has been so significant.
Last week, we released a report, together with wildlife monitoring network TRAFFIC, that found Singapore to be the world’s second largest trader of shark fin in terms of value. Even as thousands voiced their support to stop the consumption of shark fin in Singapore, local businesses have chimed in with their views too.
Mr Yio Jin Xian from the Marine and Land Products Association (MPA), which represents shark fin traders supplying 70% of the market in Singapore, followed up on the report with claims that most shark products in Singapore are “sustainable”.
This statement was based on the following claims:
(1) Majority of shark fin imported by Singapore are from developed countries such as the US, EU, and Australia.
(2) Fins from sharks caught in federally regulated waters from these developed countries are “sustainable”.
(3) It is more sustainable when the whole shark carcass is utilised, not just the fins.
It is important that we get our facts right on an issue that so many Singaporeans care about and have taken action on. Nine out of ten people here care about sharks going extinct; eight out of ten have stopped consuming shark fin over the past year. Yet, a significant group of people still view shark fin as a part of their culture and tradition.
This is also an issue with global implications. Sharks are an important source of livelihood for many coastal communities. Demand for shark fin is draining the oceans of these key predators, with impacts on the marine environment, a major food source for us.
Is there truly such a thing as shark fin from sustainable sources – or is it pure fiction?
Traceability from source to seller
You cannot know what is sustainable if you do not know where it comes from. This is especially true in the shark trade.
Singapore’s shark fin traders at the MPA claim that most of our fins are from developed countries like the US, EU and Australia. Current import data completely contradicts these claims.
According to Singapore’s trade statistics, Spain, Namibia, Uruguay, Hong Kong and Indonesia are listed as top countries from which Singapore imports our shark fin. With the exception of Hong Kong, which trades shark fin caught elsewhere, these are all source countries with no known sustainable shark fisheries.
Using Indonesia as an example, environmental groups monitoring the fishing industry estimate that in certain fish markets, about two out of ten sharks brought to shore are threatened with extinction.
More importantly, there is no traceability system in place today – in Singapore or anywhere in the world – that can adequately track individual shark fins from source to seller. This means that any businesses that claim to know the source countries of their shark fin will not have the means to verify their own claims.
Legality does not equate sustainability
While certain countries like the US and Australia have been more progressive in regulating shark fishing in their waters, this does not mean that all shark products from these countries are sustainable. Having laws in place helps govern general fishing practices in a country, but not all fisheries are the same. In reality, regulating the types of species caught and preventing overfishing remains a challenge.
Independent third-party certification of a fishery is the only way that we as consumers can be sure that fisheries do not engage in unsustainable practices. These certification bodies, notably the Marine Stewardship Council (MSC), monitor each fishery on an operational level, in order to ensure that they are run and managed responsibly.
Only one shark fishery in the world has been certified sustainable by MSC – for spiny dogfish in the US. It is worth noting that in this fishery, the shark species is mainly caught for its meat, with fins being a low value by-product.
Apart from this fishery, no other shark fisheries have been certified sustainable by MSC.
Sustainability goes beyond “zero wastage”
There is no doubt that shark finning – where carcasses are thrown back into the sea – is a wasteful and senseless practice. More countries now discourage these practices by having regulations that require the whole shark to be brought to land. Yet, it would be a mistake to assume that this alone makes a fishery “sustainable”.
A few things go into determining what is sustainable: healthy populations of a species, management measures to prevent overfishing and the impact of fishing on the environment. Applying the above criteria to shark fishing, it becomes clear why current practices do not meet our sustainability standards.
First, shark populations are in decline as a result of overexploitation. Over 70 million sharks are removed from our oceans every year, equivalent to 8,000 sharks every hour. With such immense demand, shark populations are being raced toward extinction.
This impacts coastal communities too. Shark fishermen talk of how they used to be able to catch sharks close to home but now have to travel farther afield to catch enough sharks.
Unsustainable shark fisheries may also use non-selective fishing gear, often with terrible consequences for non-target species. This further endangers populations of protected sharks, dolphins and turtles.
As sharks play a major role in maintaining ocean ecosystems, removing them from the oceans will have a knock-on effect on the health of our oceans and marine life, a major food source for us all.
A solution lies within our shores
The shark crisis is a problem we all share. With at least 68 countries and territories involved in the trade through Singapore alone, the complexity of this global trade is staggering.
As fins trade hands across countries, information about the source, type of sharks and the numbers fished get muddled and even lost. What is legal cannot be separated from the illegal; sustainable from the unsustainable. As a result, we continue to catch, trade and consume tens of millions of sharks every year, including endangered species that are protected nationally or internationally.
It is time to come together and put a stop to this. A solution and way forward exists, but it requires everyone to work together – from governments, to businesses and people like you and me.
On an international level, we need traceability systems that can track the movement of shark fin and products across the world. This is where nations like Hong Kong and Singapore – the world’s top shark fin traders – come in. Both countries are key transit hubs for shark fin products.
In these countries, customs procedures need to be in place that can allow for better tracking of species and actual trade volumes of shark fins. Hong Kong is already in the process of integrating such procedures, while Singapore still has some way to go. To their credit, customs officials in Hong Kong have busted some major illegal shipments of shark fin in recent months.
With better monitoring of the global shark trade through these measures, businesses – including Singapore’s shark fin traders – can have more confidence about the sources of their products, including basic information on species and legality.
Will we ever be able to fish sharks sustainably? Yes, but in the near future, the volumes of sustainable products are still a tiny fraction of global demand. Until shark fisheries can prove that they can be sustainably managed and track their products to end-consumers, the only way to protect our oceans is to greatly reduce our demand.
From the Chinese government banning the dish from being served at official functions, to the state of California advocating a complete ban on this product, momentum to address this continues to build around the world.
Right now, the solution does not lie with the fishermen, or even outside of our borders – it is in making a drastic reduction in the rate that we are consuming shark fin and shark products. With each and every consumer making the individual choice to say no to shark fin, we can hopefully work together to turn the tide for sharks, and in doing so, ensure healthier oceans.
You can help to say no to shark fin by sharing our video on Facebook.
A promoter is an entity that plans a project or the formation of a new firm and then sells or promotes the plan or idea to others. A promoter is responsible for presenting the features of a product or venture to an audience or client: showing how the goods work, taking questions, and attempting to persuade customers or clients to make a purchase.
Promoters may be classified into the following types.
Professional promoters: Some firms specialize in business promotion, including amalgamation and flotation, before handing the company over to the shareholders or their representatives.
Occasional promoters: These promoters take an interest in floating a few companies but are not engaged in promotion work on a regular basis. Once the promotion is complete, they return to their original occupations. For example, engineers, lawyers, and other professionals may occasionally float companies.
Entrepreneur promoters: They are both promoters and entrepreneurs. They conceive the idea of a new business unit, do the groundwork to establish it, and may afterward become part of its management.
Financier promoters: Some financial institutions, such as investment banks or industrial banks, may take up the promotion of a company with a view to finding opportunities for investment.
When starting your business, there is a long checklist of tasks that you need to complete before opening your doors. After choosing the proper business entity form, registering your business, obtaining your federal tax identification number and determining your state tax obligations, you’ll need to consider what federal and state licenses and permits your business needs.
Among these necessary licenses and permits are those that will ensure you are in compliance with federal environmental regulations. From substances regulated under Resource Conservation and Recovery Act (RCRA) to National Pollutant Discharge Elimination System (NPDES) permits, the alphabet soup of environmental law can be overwhelming!
Environmental Regulations for Businesses
The Environmental Protection Agency and state environmental agencies enforce the environmental regulations that apply to businesses. Although it may be obvious that businesses involved in automotive services, metal work, paints and coatings, agricultural services, and chemical production are subject to environmental regulations, other more innocuous ones, such as dry-cleaners and printers, are as well.
It is best to talk with a lawyer to ensure your business complies with any relevant regulations and avoids environmental legal pitfalls. For example, establishing practices to prevent waste generation or improper disposal is the most cost-effective way to achieve environmental compliance, as you will save your business the expenses associated with tracking waste streams, costly disposal methods, and, in the worst case scenario, significant fines.
This blog post will review the categories of regulations that may be applicable to your business, so you can avoid costly problems down the road.
Commonly Required Federal Permits
Before you can engage in certain regulated activities through your business, such as discharging a pollutant, you may be required to obtain a permit. Permits may be required under the following federal environmental laws:
Clean Air Act (CAA):
Some smaller sources of air pollution are required to obtain operating permits under Title V of the Clean Air Act. These sources include businesses that involve incineration units, chemical manufacturing, glass manufacturing, and various types of metal processing, among others. Most permits are issued by state and local permitting authorities, and are legally-enforceable documents that clarify what facilities have to do to control their air emissions. You can find more information about who has to obtain a Title V permit and how the Clean Air Act works on the EPA’s website. A lawyer can help you decipher what type of source your business is, what emissions threshold you need to meet, and how to acquire any necessary permits.
Clean Water Act (CWA):
If your business emits water pollution or operates near wetlands, you may need to meet specific federal, state, and local permit requirements.
- Section 404 - Wetlands: The Army Corps of Engineers regulates the discharge of dredged or fill materials into U.S. waters. State environmental agencies may also regulate activities affecting water pollution, shoreline management, and forest management. You should also be mindful of any local zoning ordinances regulating your business's proximity to a wetland.
- Section 402 - National Pollutant Discharge Elimination System: If your business discharges wastewater to surface water or a municipal sewer, or if you experience stormwater runoff from your facility during rain events, you may need to apply for an NPDES permit.
You can find more information about the Clean Water Act on EPA's website, but working with an attorney will help you navigate the multiple levels of ordinances and permit requirements that apply to your business's activities.
Endangered Species Act (ESA):
If the activities of your business affect threatened or endangered species, you may need a permit from the U.S. Fish and Wildlife Service (FWS), the National Oceanic and Atmospheric Administration’s (NOAA) National Marine Fisheries Service (NMFS), or your state’s wildlife agency. You can find more information about complying with the ESA on the FWS website. An environmental attorney can also assist you in understanding what locally listed species could mean for your company.
Resource Conservation and Recovery Act (RCRA):
RCRA establishes a federal program to manage hazardous waste from cradle to grave, and includes regulations for generation, transportation, treatment, storage, and disposal of hazardous wastes. If your business involves hazardous waste, you will need a RCRA permit from your state or EPA regional office. For more information about RCRA, consult the EPA guide for small businesses on managing your hazardous waste or EPA’s website on RCRA Guidance, Policy and Resources. Again, a legal expert can guide you in understanding how to implement the appropriate architecture of documentation and record keeping to keep your company in compliance with permit requirements.
Compliance Assistance Resources
The EPA website is a great resource if you have general questions about environmental regulations, or need guidance on how to bring your business into compliance.
However, federal environmental law is extremely complex. An attorney will easily break down dense legal jargon into clear tasks for your company and help you ensure your business’s compliance. Useful EPA references to get you started include:
- EPA’s Laws & Regulations page includes links for more information about laws, regulations, compliance and enforcement, and policy and guidance. You can also search for regulatory information by topic or by sector.
- EPA’s Small Business Programs page provides useful links to support for small businesses, including the Asbestos and Small Business Ombudsman.
- EPA’s Retail Industry Portal provides resources to help prevent and resolve environmental issues at retail establishments.
Remember that you will need to be in compliance with both state and federal regulations. Consult your state’s environmental agency to verify that you have the appropriate permits and have taken any necessary environmental precautions. State departments of the environment, such as New York’s, will provide state-specific guidance and policy documents to help you understand your business’s legal needs. Each state’s regulations may be slightly different, so take care to check your specific state’s requirements.
To be sure you are satisfying every requirement of any applicable environmental regulations, it is best to work with a lawyer with strong experience in this area. Protect your business and get started with a Priori attorney today.
If you’d like to move beyond compliance and investigate opportunities to green your workplace, EPA’s website also provides access to many resources for corporate environmental stewardship. A lawyer can help you formalize your actions into a company sustainability policy, as well as explore various national green certification programs.
Many pundits and commentators like to talk about the dollar strengthening or weakening, and they often cite the DXY without understanding how it is constructed. In this post, we'll explain the history of the DXY, how it is constructed, and how it is used today. We will also present important alternatives like the Federal Reserve's Trade-Weighted Dollar Index (or Broad Index) and Bloomberg's Dollar Index (BDXY).
The DXY Explained
The DXY (pronounced either D-X-Y or "dixie") is the ticker symbol for the US Dollar Index, a measure of the value of the US dollar against a basket of foreign currencies, first instituted in March 1973 at a level of 100. It is by far the most widely used dollar index, primarily because it is the oldest and is a tradeable futures product on ICE (Intercontinental Exchange).
The history of the DXY begins shortly after the US left Bretton Woods and the gold standard in 1971. All the world's fiat currencies were floated against one another and, being on a de facto global US dollar standard, banks and investors needed a new metric by which to measure the dollar's strength and performance. A single weighted index is extremely valuable in foreign exchange markets.
Composition of the DXY
- Euro (EUR), 57.6% weight
- Japanese yen (JPY) 13.6% weight
- Pound sterling (GBP), 11.9% weight
- Canadian dollar (CAD), 9.1% weight
- Swedish krona (SEK), 4.2% weight
- Swiss franc (CHF) 3.6% weight
The DXY composition was forced to change in 1999 with the introduction of the Euro, but it didn't change much: all the previous European currency weights were simply rolled into the weighting for the Euro. That means the formula has essentially not changed since its introduction in 1973, back when it captured the trade and monetary relationships of the US. (For those interested, here are the original European currency weights; it took a while for us to find them, as they have mostly disappeared down the memory hole.)
- Deutsch Mark (DEM), 20.8%
- French franc (FRF), 13.1%
- Italian lira (ITL), 9.0%
- Dutch guilder (NLG), 8.3%
- Belgium franc (BEF), 6.4%
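Because the weights have been fixed since 1973, the index reduces to a single closed-form expression: a constant times a geometric weighted product of six exchange rates, scaled so that March 1973 equals 100. Here is a minimal sketch of the published ICE calculation; the quotes passed in below are illustrative, not live market data.

```python
# Published ICE formula for the US Dollar Index: a constant times a
# geometric weighted product of six USD pairs. EUR and GBP are quoted
# as USD per foreign unit, hence the negative exponents; the others
# are quoted as foreign units per USD.
DXY_CONSTANT = 50.14348112

def dxy(eurusd, usdjpy, gbpusd, usdcad, usdsek, usdchf):
    return (DXY_CONSTANT
            * eurusd ** -0.576
            * usdjpy ** 0.136
            * gbpusd ** -0.119
            * usdcad ** 0.091
            * usdsek ** 0.042
            * usdchf ** 0.036)

# Illustrative quotes only:
print(round(dxy(1.10, 110.0, 1.30, 1.33, 9.50, 0.98), 2))
```

Note how the exponents mirror the weights above: the euro's 57.6% weight dominates, which is why the DXY often trades as little more than an inverted EUR/USD chart.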
Problems with the DXY
The largest criticism of the DXY is that it no longer represents an accurate picture of US dollar trading partners. Over the years, the index became less relevant as an indication of dollar strength based on trade, but it remained useful as an indicator of financial flows. Even as emerging markets, especially China, became a bigger part of global trade, US dollar financing was still done primarily among the US, Europe, and Japan, all part of the DXY.
Even with its new role as an indicator of financial flows, the DXY's usefulness continued to fade as emerging markets rose first in international trade and then in international finance, leaving the DXY with little attachment to actual dollar strength or weakness. Adding insult to injury, the interconnectedness and complex banking relationships among the US, Europe, Japan, and the UK mean that a deep understanding of financial markets is needed to interpret DXY signals. Movements of the dollar against the euro or the yen don't mean what they once did; capital, credit, and currency relationships are blurred by derivatives and funding markets.
Despite all these drawbacks, the DXY is still very important because it remains the most widely used and traded dollar index.
Pros and Cons of DXY
Pros: (1) Liquidity of the futures product itself. Being a highly traded instrument makes its price signal hard to ignore; this also means (2) its price feed is readily available on most charting websites.
Cons: (1) Lack of flexibility and completeness as a measure of dollar strength and weakness. The DXY is widely regarded as "mostly useful against the Euro and Yen", two very important currencies to be sure, but not a full representation of dollar dominance or value. (2) The DXY has never comprehensively updated its formula. Even when the Euro replaced five of its constituent currencies, the formula simply combined their weights. The global economy has changed drastically, and this old formula no longer represents it accurately.
What is the Trade-Weighted Dollar Index?
As the usefulness of the DXY as an economic indicator waned in the 1990's, the Federal Reserve created a more flexible dollar index called the Trade-weighted Dollar or Broad Index in 1998 to capture use of the dollar in international trade. Instead of only six currencies, this index weighs 26 currencies, notably including the Chinese Yuan, Mexican Peso, and South Korean Won. It is also regularly updated according to IMF trade statistics. The most recent update was in December 2019.
Trade-Weighted Dollar formula as of December 2019
- Euro Area - 18.947%
- China - 15.835%
- Mexico - 13.524%
- Canada - 13.384%
- Japan - 6.272%
- United Kingdom - 5.306%
- Korea - 3.322%
- India - 2.874%
- Switzerland - 2.554%
- Brazil - 1.979%
- Taiwan - 1.95%
- Singapore - 1.848%
- Hong Kong - 1.41%
- Australia - 1.395%
- Vietnam - 1.364%
- Malaysia - 1.246%
- Thailand - 1.096%
- Israel - 1.053%
- Philippines - 0.687%
- Indonesia - 0.675%
- Chile - 0.625%
- Colombia - 0.604%
- Russia - 0.526%
- Sweden - 0.52%
- Argentina - 0.507%
- Saudi Arabia - 0.499%
Pros and Cons of the Broad Dollar Index
Pros: (1) A better representation of the dollar use in global trade. (2) It does a decent job of capturing countries' overall relationships in the global economy; since the USD is the global reserve currency, a country's trade with the US is somewhat representative of their total economic activity. (3) The Broad Dollar Index is re-balanced periodically to retain accuracy of what it is supposed to be measuring.
Cons: (1) Lack of consideration for the relative liquidity of currencies. China is a huge global trading partner, but use of its currency is highly limited by strict capital controls; it is given a high weighting in the Broad Dollar Index, and perhaps that weighting should be lower. (2) It is not a tradeable product and is consistently available only on the FRED website. Because its composition changes periodically, even when the weightings are picked up by popular charting websites it is not long before the formula changes, resulting in gaps in availability.
Bloomberg Dollar Index (BDXY)
Introduced in 2004, the BDXY falls in between the DXY and the Fed's Trade-Weighted Dollar, both in number of currencies included and flexibility. The BDXY tracks the dollar versus the exchange rate of 10 other major currencies and is re-balanced each year according to Fed-reported trade flows and BIS-reported Forex liquidity. It is an interesting compromise, but falls just short of being a conclusive dollar measure, as well.
Bloomberg Dollar Index weights as of 2020
- EUR - 32.65%
- JPY - 14.64%
- CAD - 11.94%
- GBP - 11.49%
- MXN - 9.95%
- AUD - 5.15%
- CHF - 4.78%
- KRW - 3.43%
- CNH - 3.00%
- INR - 2.96%
Pros and Cons of the Bloomberg Dollar Index
Pros: (1) It tracks more currencies than the DXY, and (2) tracks those currencies in a more flexible manner. (3) It factors in liquidity, which is very important to getting an accurate representation of dollar strength and weakness.
Cons: (1) Proprietary: access to real-time or slightly delayed quotes is available only on the Bloomberg terminal. It is also published on Bloomberg's markets website, but not conveniently accessible. (2) Still slightly overweight the Euro and underweight emerging markets. The Euro represents 32% of the basket, while the three emerging markets included (Mexico, China, and India) total only half the Euro's weighting, at 16%. Since the dollar standard is global, this measure is still slightly too US-trade-centric.
China and India’s energy development pathways are a frequent focus of international attention. In the climate change arena, the two countries’ current and future energy growth trajectories raise concerns about increasing greenhouse-gas emissions. China recently surpassed the United States as the largest national emitter of greenhouse gases, and India will soon surpass Russia to become the fourth-largest emitter after the European Union. China and India use coal to fuel most of their electricity generation, and both countries have plans to expand their coal power capacity considerably in the coming decade. For these reasons, China and India are perhaps two of the least likely places one might expect to find a burgeoning wind power industry.
While there are many potential benefits to local wind manufacturing, there are also significant barriers to entry into an industry that contains companies which have been manufacturing wind turbines for more than 20 years. In developing countries, limited indigenous technical capacity and quality control makes entry even more difficult. International technology transfers can be a solution, although leading companies in this industry are unlikely to license information to companies that could become competitors.
Nevertheless, India and China are both home to firms among the global top 10 wind turbine manufacturing companies. India currently leads the developing world in manufacturing utility-scale (multi-kilowatt) wind turbines, and China is close behind. Initiatives by domestic firms, supported by national policies to promote renewable energy development, are at the core of wind power innovation in both countries.
Suzlon, an Indian-owned company, emerged on the global scene over the past decade, and is proving itself to be a worthy competitor among more established wind turbine manufacturers. As of 2006, it had captured 8% of market share in global wind turbine sales: a modest share, but one that has been increasing annually. Suzlon is currently the leading manufacturer of wind turbines for the Indian market, holding 52% of the market share in India. Its success has made India the developing country leader in advanced wind turbine technology.
Goldwind recently emerged as the leading Chinese wind turbine manufacturer. The company currently holds 2.8% of market share in global wind turbine sales, reaching the top 10 for the first time in 2006. Within China, it captured 31% of sales in 2006. The company is rapidly expanding production, and has benefited from government policies that promote the utilisation of domestically manufactured wind turbines in Chinese wind farm projects. In 2006, Goldwind installed 442 megawatts, by far its largest annual installation to date.
Suzlon and Goldwind have used similar strategies to access wind power technology from developed-country firms. Although there are several technology transfer models available to a company looking to enter the wind industry, both Suzlon and Goldwind decided to pursue multiple licensing arrangements with established, yet second-tier, companies.
The acquisition of technology from overseas companies is one of the easiest ways for a new wind company to quickly obtain advanced technology and begin manufacturing turbines; however, there is a disincentive for leading wind turbine manufacturers to license proprietary information to companies that could become competitors. This is particularly true for technology transferred from developed to developing countries, where a similar technology potentially could be manufactured in a developing country with less expensive labour and materials, resulting in an identical but cheaper turbine. Consequently, developing country manufacturers often obtain technology from smaller wind power companies that have less to lose in terms of international competition, and more to gain in license fees. The technology obtained from these smaller technology suppliers may not necessarily be inferior to that provided by the larger manufacturing companies, but it may have been used less and will therefore have less operation experience.
Suzlon’s licensing arrangements with Sudwind, Aerpac, and Enron Wind provided it with the base of technical knowledge needed to enter the wind turbine manufacturing business. Building on the knowledge gained through these licenses, Suzlon also formed many overseas subsidiaries. Some overseas partnerships were formed with foreign-owned companies, either to manufacture a specific component, such as its gearbox company in Austria, or to undertake collaborative research-and-development, such as its Netherlands-based blade design centre and its gearbox research centre in Germany. Suzlon also situated its international headquarters in Denmark, which is a major industrial centre for the wind turbine industry.
Goldwind’s licensing arrangements with German wind turbine manufacturer REpower allowed it to jump into the wind turbine industry with little indigenous knowledge. These arrangements provided the transfer of enough technical know-how that Goldwind could innovate upon the transferred technology. It has more recently chosen to also pursue licensing arrangements with Vensys to gain experience related to larger turbine designs.
While Goldwind has relied only on licensed technology to date, Suzlon has expanded beyond the license model, and has purchased majority control of several wind turbine technology and components suppliers. These acquisitions include leading gearbox manufacturer, Hansen, as well as REpower. This combination of licensing arrangements with foreign firms and internationally based research-and-development and other facilities, complemented by the hiring of skilled personnel from around the world, has created a global learning network for Suzlon, customised to fill in the gaps in its technical knowledge base. Suzlon has been able to draw upon this self-designed learning network to take advantage of regional expertise located around the world, such as in the early wind turbine technology development centres of Denmark and the Netherlands. Suzlon differs from Goldwind because it has not restricted its technological learning and innovation networks primarily to India, while Goldwind has remained centred on China.
However, Goldwind’s lack of internationally oriented expansion does not necessarily mean that it has been unable to tap into regional, or even global, learning networks. The company’s origins in northwest China’s Xinjiang autonomous region put it at the centre of wind turbine technology experimentation in China in the early 1990s. As wind development momentum shifted eastward, Goldwind also established manufacturing facilities in east China, including in the new manufacturing hub around Beijing and Tianjin. Popular wind farm sites such as Dabancheng, in Xinjiang, and Huitengxile, in Inner Mongolia, have served as test sites for almost all of the leading global wind turbine manufacturers. Many firms, including Vestas, NEG Micon, Nordex, Bonus, Zond, and Tacke, all installed turbines in China during the 1990s. Consequently, while they tested their designs in China, Goldwind was able to benefit from knowledge that these manufacturers had gained in other wind learning hubs of the world. In addition, Goldwind hired employees trained by foreign-owned firms (often when they were based in China), taking advantage of the small but specialised work force foreign wind power technology firms effectively developed within China.
Since both Suzlon and Goldwind are most successful at home, each company’s outlook for future success is largely focused on its continued ability to thrive in domestic markets. India’s wind power policies, although lacking a clear national direction, are thriving on a regional basis. Goldwind has relied greatly on China’s policies that mandate local content, as well as an unstated preference for Chinese-owned technology. The companies’ continued success will also depend on how their turbine technology stands the test of time in reliability, as well as their ability to continue to design larger and more efficient turbines. If Indian and Chinese manufacturers are able to capture significant cost savings by manufacturing turbines locally, there would be excellent potential for both locations to serve as manufacturing bases for regional export. Suzlon and Goldwind believe that they are able to beat the prices being offered by their foreign competitors by locally sourcing their turbines.
The leapfrog effect
The institutional and other barriers present in large, developing countries such as China and India certainly challenge simplistic notions of energy leapfrogging. But substantial technical advances have been possible in relatively short amounts of time. It took both countries less than 10 years before companies were capable of manufacturing complete wind turbine systems, with almost all components produced locally. This was done within the constraints of national and international intellectual property law, and primarily through the acquisition of technology licenses or purchasing smaller wind technology companies.
Suzlon’s growth model, in particular, highlights an increasingly popular model of innovation for transnational firms, which is based on globally dispersed operations and utilises regional variation in technical expertise and low input costs to its advantage. Expansive international innovation networks allow it to stay abreast of wind technology innovations around the world, which it can then incorporate into its own designs through extensive research-and-development facilities. It has developed this network of global innovation subsidiaries while maintaining control of enough intellectual property rights to remain at the forefront of wind turbine manufacturing and sales around the globe. By contrast, Goldwind has pursued research and manufacturing operations that are primarily China-based, which has limited its interaction with hubs of wind power innovation expertise outside China. However, China is becoming a hub in its own right, with diverse international players actively manufacturing wind turbines, and many in close regional proximity.
These illustrations of energy leapfrogging demonstrate how two developing country firms used a creative blend of strategies to enter new technology markets. Licensing intellectual property, forming strategic technology partnerships, accessing regional and global learning networks, and leveraging regional advantages such as lower labour costs were all important components of each company’s successful business model. As technology development becomes increasingly global, developing country firms can and should take advantage of their increasing access to technological know-how, which was previously developed primarily by and for the developed world. The lessons of Suzlon and Goldwind’s success in harnessing global technology for local – and potentially global – use illustrate new models of technology development in the developing world.
Dr. Joanna Lewis is a senior international fellow at the Pew Center on Global Climate Change and an adjunct professor at Georgetown University’s Walsh School of Foreign Service. Her current research focuses on mechanisms for low-carbon technology transfer in the developing world and options for post-2012 international climate agreements. She has worked extensively in China examining renewable energy technology industry development, and advising the government on renewable energy policy design.
An extended version of this article appeared in Studies in Comparative International Development, Volume 42, Issue 3, December 2007.
Homepage photo by dengski
Operating cash flow or OCF along with other financial metrics proves effective in measuring the financial standing and proficiency of a company. By reviewing the same, investors, creditors and firm owners can make an informed decision about the firm and its future.
What is Operating Cash Flow?
Operating cash flow or OCF can be simply described as the measure of cash a company generates through its core business operations within a specific time. It helps to analyse if a company is capable enough to generate the required amount of cash flow to maintain and expand its existing business operations.
In short, OCF serves as an effective benchmark for determining a company’s financial success concerning its operational activities.
Operating cash flow captures the money generated and spent through activities which include –
- Aggregate sales of goods and services within a given period.
- Payments to goods and service suppliers.
- Pay-outs forwarded to employees or other expenses incurred for production.
Notably, operating cash flow is recorded in the first section of a cash flow statement. The statement also draws a clear demarcation between cash generated through operating activities and cash arising from investing and financing activities.
Methods of Operating Cash Flow
Usually, there are 2 methods of computing operating cash flow or OCF, namely –
1. Direct Method
It is regarded as a simple formula that yields accurate results. However, this operating cash flow formula does not provide much insight to potential investors. As a result, it is mostly used by companies to track their own operational performance.
The formula is expressed as,
Operating cash flow = Total Revenue – Operating Expense
2. Indirect Method
In this method, net income is adjusted for non-cash items and for changes in balance sheet accounts. For instance, depreciation is added back to net income, and changes in accounts receivable and inventory are netted out.
In other words, the indirect method of calculating OCF adds non-cash items back to net income and adjusts for the changes in working capital.
It is further expressed as,
Operating cash flow = Net income (+/-) Changes in assets and liabilities + Non-cash expenditure
Operating Cash Flow Formula and Example
Like discussed, the operating cash flow formula can be given by –
OCF = Net Income + Depreciation + Deferred Tax + Stock-based Compensation + Other Non-cash Items – Increase in Accounts Receivable – Increase in Inventory + Increase in Accounts Payable + Increase in Deferred Revenue + Increase in Accrued Expenses
OCF = Net income + Non-cash expenses – Increase in working capital
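As a rough illustration, both methods can be sketched in Python; the function names and figures below are our own, for demonstration only:

```python
def ocf_direct(total_revenue, operating_expenses):
    # Direct method: cash received from sales minus cash paid out to operate.
    return total_revenue - operating_expenses

def ocf_indirect(net_income, non_cash_expenses, increase_in_working_capital):
    # Indirect method: start from net income, add back non-cash charges
    # (e.g. depreciation), then subtract any growth in working capital,
    # since cash tied up in receivables and inventory has not been collected.
    return net_income + non_cash_expenses - increase_in_working_capital

# Illustrative figures: working capital grew by Rs.45000
# (receivables +50000, inventory +20000, less payables +25000).
print(ocf_indirect(100_000, 10_000, 45_000))  # 65000
```

The direct method would instead total the actual cash receipts and payments recorded in the books.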
Operating cash flow example:
Joe Limited’s financial statements for the financial year 2017 comprise the following information.
- Net income: Rs.100000
- Depreciation: Rs.10000
- Change in inventory: Rs.20000
- Change in accounts receivable: Rs.50000
- Change in accounts payable: Rs.25000
Solution: Using the indirect method of operating cash flow, and treating each change as an increase over the year,
OCF = Net Income + Non-Cash Expenses (+/-) Changes in Assets and Liabilities
= Rs.(100000 + 10000 – 20000 – 50000 + 25000)
= Rs.65000
Significance of Operating Cash Flow
Importance of operating cash flow is as follows –
- A negative OCF indicates that a company does not have sufficient funds to run its core operations and needs to borrow funds to maintain the same.
- A net income that is high relative to operating cash flow may indicate that the firm finds it challenging to collect its accounts receivable.
- It is considered to be among the purest measures of cash sources and offers a transparent insight into a company’s operational performance.
- It serves as a gateway to other reported financial statements.
Operating Cash Flow Ratio
The operating cash flow ratio is essentially a measurement of a company’s capability to cover its current liabilities with the help of the cash generated through its main operations. It is calculated by dividing a company’s total operating cash flow by its current liabilities.
Typically, the ratio proves effective in assessing the liquidity of a company in the short-term. When pitted against net income, operating cash flow is considered to be a more transparent way of measuring a company’s earnings. It is mainly because operating cash flow is more challenging to manipulate.
The operating cash flow ratio formula is expressed as –
OCF ratio = OCF or Operating Cash Flow / Current Liabilities
Suppose Doubtfire Limited has generated an operating cash flow of Rs.250000 and has accumulated current liabilities of Rs.120000. From the given information, ascertain its operating cash flow (OCF) ratio.
As per the information,
OCF ratio = Operating Cash Flow / Current Liabilities = Rs.250000 / Rs.120000 ≈ 2.08
Notably, an OCF ratio higher than 1 signifies that a firm has generated more money than it needs to pay off its current liabilities. On the other hand, a ratio of less than 1 suggests that the firm has not generated enough to meet its current liabilities and needs more capital.
Nonetheless, it must be noted that a low ratio does not always suggest poor financial standing. In fact, it may indicate a fruitful investment opportunity.
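The ratio calculation is simple to express in code; this short Python sketch (illustrative only, using figures like those in the example above) also applies the greater-than-1 interpretation:

```python
def ocf_ratio(operating_cash_flow, current_liabilities):
    # Short-term liquidity check: how many times operating cash
    # covers the liabilities falling due.
    return operating_cash_flow / current_liabilities

ratio = ocf_ratio(250_000, 120_000)
print(round(ratio, 2))  # 2.08
print("covers current liabilities" if ratio > 1 else "needs more capital")
```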
Net Income vs Operating Cash Flow
The basic differences between the two are highlighted in this table below –
|Operating Cash Flow||Net Income|
|It is the cash generated through the core operations of a company.||It is essentially the profit earned within a period.|
|It serves as a measurement of a company’s daily cash inflow and outflow concerning its operations.||Net income serves as the starting point for computing a company’s operating cash flow.|
|It serves as a metric of a company’s capability to pay off its debt in the short-term.||It is a crucial measure of a company’s profitability and a driver of bond valuation and stock pricing.|
|OCF projects a more transparent image of a company’s finances.||In case of net income, there is room to manipulate the figures.|
|High operating cash flow indicates a greater cash inflow than outflow.||A company with a positive OCF can still have negative net income.|
|OCF formula = Net Income (+/-) Changes in Assets and Liabilities + Non-Cash Expenses||Net income formula = Total revenue – Total expenses|
Therefore, investors and financial analysts wishing to obtain a transparent report of a company’s finances should check its operating cash flow (OCF). For a better idea of its proficiency and financial standing, they may use the OCF ratio along with other financial metrics.
As commodity prices and farm incomes have appreciated in recent years, many farmers and agricultural observers have drawn comparisons between current farm prosperity and the prosperity of the 1970s. Ohio State University agricultural economist Carl Zulauf examined those comparisons in a series of articles focusing on crop prices, farm debt, cash income and real estate, and cash farm expenses.
"Both the period of U.S. farm prosperity during the 1970s and the current period of U.S. farm prosperity experienced sizeable increases in crop prices, whether measured in nominal prices, real deflated prices, or relative to crop input prices," Zulauf concluded in his introductory analysis in the series. "However, notable differences exist between the time paths of prices during each period. The differences become more pronounced if crop prices are adjusted for general economic inflation or examined relative to farm input prices."
Zulauf noted that by both 1979 and 2012 (year 7 of each period), the index of prices that U.S. farms received for the crops they raised was approximately double (200%) the index of crop prices at the start of the period. A key difference, however, is that price increases were larger earlier in the 1970s, while price increases have been more consistent over the current period of prosperity, at least through 2012.
One key difference between the two periods is farm-sector debt. By 2012, deflated farm real estate debt had increased 37% compared with the 2001-2005 benchmark period. By 1979 on the other hand, deflated farm real estate debt had increased 55% compared with the 1968-1972 benchmark period.
While Zulauf points out that the current increase in farm real estate debt is not trivial, it is significantly smaller than the increase observed in the 1970s. The major difference between the two time periods, it turns out, is non-real estate debt.
"The time path of farm non-real estate debt is strikingly different in the two periods of farm prosperity," he wrote. "Relative to their respective benchmarks, deflated farm non-real estate debt had grown 83% by 1979 but only 13% by 2012. Thus, deflated farm non-real estate debt has increased substantially less during the current period of farm prosperity."
Perhaps the most important difference between the two periods is the difference in what is driving current farm prosperity. In the 1970s, Zulauf concluded that the primary driver of prosperity was an increase in asset values, namely the value of farm real estate. In the current period, on the other hand, asset values appear to be increasing in response to increasing farm income.
The professor discussed the results of his analysis in a recent Feedstuffs interview. You can hear his comments by listening to Feedstuffs In Focus, the podcast of big news in agriculture.
What does a revaluation do?
A revaluation will not affect the total amount of money that is raised. Generally a revaluation exercise is revenue neutral and simply redistributes the same revenue burden in a different way.
The impact on individual ratepayers of a revaluation depends on how much their individual values have changed since the last revaluation.
Many people are under the mistaken impression that reductions in value since the last revaluation would lead to corresponding reductions in rate bills. This is not necessarily the case. Even when values decline overall the Executive and district councils still need to raise the same amount of money to pay for public services and therefore the tax rate, or rate in the pound, is adjusted accordingly. So, there are winners and losers.
The same principles apply to both sectors.
Why have rates bills continued to rise despite falling property values?
Many people are under the mistaken impression that reductions in rents or property values since the downturn should, or were a revaluation undertaken would, lead to corresponding reductions in rate bills. This is not the case. Even when values decline the Executive and district councils still need to raise the same amount of money to pay for public services.
The Executive and district councils together raise over £1 billion in rates. Even if, for example, property values were to decrease by 50 per cent it does not follow that rate bills would then be half what they were. The Executive and councils could not manage to maintain essential public services by raising £500 million less. Conversely, when values doubled, rates liability and the total revenue raised did not double.
Rate bills are worked out after the total amount that needs to be raised is decided. This is translated into individually assessed rate bills, by sharing the overall burden out in proportion to the estimated value of the property at a fixed point in time. Regardless of shifts in values the rate poundage is adjusted to raise the revenue needed.
A downward trend in values will not affect rate charges until there is a revaluation and even then it merely redistributes the same rates burden; it won’t decrease ‘the overall revenue take’. Lower values at a revaluation would simply result in higher rates in the pound.
At a revaluation those whose property value has reduced in value by more than the average since the last revaluation would end up paying less rates. Those whose property values have reduced by less than the average, or indeed increased, would have an increased rates liability.
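The revenue-neutral mechanics can be illustrated with hypothetical numbers (all figures below are invented purely for the arithmetic): if total values halve, the rate in the pound doubles, so the overall take is unchanged and only the distribution between ratepayers shifts.

```python
required_revenue = 1_000_000_000   # total the Executive and councils must raise
total_value = 20_000_000_000       # sum of all assessed property values

poundage = required_revenue / total_value
print(poundage)                    # 0.05, i.e. 5p in the pound

# Revaluation in which every value halves: the poundage simply doubles.
poundage_after = required_revenue / (total_value / 2)
print(poundage_after)              # 0.1

# A property that fell by only 40% (less than the 50% average) now pays more:
bill_before = 100_000 * poundage        # 5000.0
bill_after = 60_000 * poundage_after    # 6000.0
print(bill_before, bill_after)
```

A property that fell by exactly the average would see no change in its bill, which is the "revenue neutral" redistribution the text describes.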
Does the Executive have any plans to carry out a domestic revaluation?
The Executive has no plans to carry out a domestic capital value revaluation during the current spending review period and life of the current Assembly. Furthermore, carrying one out would be difficult at the moment, as the evidence required to establish the values is not sufficiently reliable, given the low volume of sales and the continued volatility in the housing market.
Broadly speaking residential values have now returned to values that existed in January 2005, the base date for the existing domestic values.
The domestic revaluation works in the same way as a non domestic revaluation insofar as Executive and councils still need the same amount of money out of the system to pay for public services. If a general revaluation of all domestic properties were to take place soon and it was found that all values had decreased below the 2005 levels, the tax rate or rate in the pound would simply have to go up.
The important issue in deciding whether to undertake a revaluation, however, is the extent to which some areas of the market have declined over this period relative to others. Revaluation always creates winners and losers; houses that have reduced in value by more than the average since 1 January 2005 would end up paying less rates, those that have reduced by less than the average, or indeed increased, would have an increased rates liability.
When these relativities get significantly out of line, and the housing market is sufficiently stable and active to provide the underlying evidence, the matter can be reconsidered.
What is the small business rate relief scheme and what support can it give my business?
The small business rate relief scheme was introduced by the Executive in April 2010 to provided help to small business premises across a wide range of sectors.
While the scheme has a general application, certain property types are excluded. These are unoccupied or partially unoccupied properties, ATMs, property used for the display of advertisements, car parks, sewage works, telecommunications masts and properties occupied by public bodies. Further information can be found in this small business rate relief fact sheet.
The relief is awarded automatically, on eligible properties, by Land & Property Services. Small Post Offices get enhanced relief and this element of the scheme may require an application to be made in some cases. Details of this scheme can be found in this factsheet.
In April 2012 the Executive agreed to the extension of the scheme, to be funded through a large retail levy. The extended scheme is providing additional support of around £6 million to up to 8,300 businesses ratepayers with a net annual value of £5,001 - £10,000 for three years.
The small business rate relief scheme was expanded in 2012. A 20 per cent relief is now awarded on eligible small business premises with a NAV of between £5,001 and £10,000. This expansion applied for three years, through to 31 March 2015, when the small business rate relief scheme is due to end.
This also coincided with implementation of the general revaluation of non domestic properties, which redistributed the rating burden. Following revaluation some smaller business may end up paying more but more are expected to pay less. It does of course depend on location and sector, as reflected in relative changes in rental values since the last revaluation in 2003.
On 26 November 2012, the Minister announced his intention to extend the small business rate relief. This change came into effect on 1 April 2013.
In April 2013 as part of the Executive's Jobs and Economy Initiative, the scheme was further extended to provide additional support to a further 3,500 businesses with a net annual value of up to £15,000.
|Net annual value (NAV)||Relief|
|£2,000 or less||50%|
|£2,001 - £5,000||25%|
|£5,001 - £10,000||20%|
|£10,001 - £15,000||20%|
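The banded structure above lends itself to a simple lookup. The Python sketch below encodes the post-April 2013 bands; the function name and the treatment of values above the upper threshold are our own assumptions, not official guidance:

```python
def sbrr_relief_percent(nav):
    # Net annual value (NAV) bands taken from the table above.
    if nav <= 2_000:
        return 50
    if nav <= 5_000:
        return 25
    if nav <= 15_000:  # the April 2013 extension widened the 20% band to £15,000
        return 20
    return 0           # assumed: no relief above the upper threshold

for nav in (1_500, 4_000, 9_000, 14_000, 20_000):
    print(f"NAV £{nav}: {sbrr_relief_percent(nav)}% relief")
```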
On March 10 2015 the Minister announced that the scheme would be extended allowing £20 million worth of support to business ratepayers to continue into 2015/2016. The Minister is currently considering the recommendations of an independent evaluation of the scheme carried out by the Northern Ireland Centre for Economic Policy and will announce her decisions on the future of the scheme later this year following consultation with Executive colleagues.
What other measures, aside from small business rate relief, have been introduced to help businesses in Northern Ireland?
Manufacturing rates continue to be held at 30 per cent through to 2016, saving those businesses around £56 million per year compared to full rates liability. It also provides certainty to our hard pressed manufacturing sector, crucial during the current economic climate.
Non-domestic vacant rating
Locally, rates on unoccupied properties remain at 50 per cent of full liability, compared to 100 per cent in England and Wales, while rates are not applied to vacant factories. In Scotland the level of relief is to be reduced, to 10 per cent, after a three-month period.
Regional rate increase
For 2015-16 the regional rate was frozen in real terms. The continuing freeze in the regional rate means rates are lower than otherwise would have been the case.
Commercial rates package
As well as expanding the SBRR scheme the 50 per cent empty shops rates concession has been extended to 31 March 2016 allowing 50 per cent relief where an empty retail premises becomes occupied. The property must have been empty for at least 12 months.
What help is provided to households with their rates?
Decisions by the Executive on the regional rate, plus the postponement of water charges, have provided much needed help to households across Northern Ireland. In terms of rate bills households are better off than they would have been under direct rule had the previous trend increases continued.
Alongside lower bills the Executive has made significant progress in providing assistance to vulnerable households. The lone pensioner allowance gives a 20 per cent discount to those aged 70 or over living alone. The rate relief scheme helps those on low incomes, or just outside the housing benefit thresholds, with their rate bills.
Combined with housing benefit around a quarter of households have their rates bill paid for them in part or fully.
What does my rates bill consist of?
A rate bill consists of both a regional and district rate. District rates are fixed by each district council to meet its net expenditure on such functions as leisure facilities, economic development and environmental matters. District rates vary from district council to district council reflecting the rateable resources and spending policies of individual councils. The regional rate element is just over half of a typical rate bill and is set by Central Government.
What do rates pay for?
Rates are an unhypothecated tax, meaning that it is not linked directly or ring fenced to the provision or consumption of particular services. Revenue from rates funds a wide range of public services including health, education, economic development, water, main roads and council functions. While a contribution is made by each individual or non-domestic ratepayer towards funding regional public services, such as water and sewerage or education services, there is no specific proportion of any rates bill that can be linked to the availability or usage of any particular public service.
In the case of the regional rate the amount collected from domestic and non domestic rate payers is added to the amounts received from the Treasury to provide a total sum available to the Executive for allocation to the public services for which it is responsible. The same applies to the district rate element. Rates therefore provide an overall contribution to the funding of a variety of local and central government services, for the benefit of all households across Northern Ireland.
The rating system itself is based on an individual assessment of value for each and every privately owned or rented house in Northern Ireland. These individual assessments are based on valuations that naturally take into account all the general and unique advantages and disadvantages that a hypothetical purchaser would take into account if purchasing the house in its current state; at a fixed valuation date. This includes the features of the house itself as well as locational factors, including access to services.
Further details can be found in this guide to rates.
According to The Korea Economic Daily Global Edition,
Graphene is often called a “wonder material” because of the extraordinary properties that come from its nano-scale structure. It is an excellent heat and electricity conductor, is elastic and was referred to by the American Chemical Society as the thinnest and strongest material available – being 100,000 times thinner than paper and 200 times stronger than steel.
South Korea’s Graphene Square Inc. is a pioneer in commercializing graphene material and graphene film. Established in 2012 as a spin-off of chemistry professor Hong Byung-hee’s lab at Seoul National University, the company researches graphene technology and provides production equipment for academic and industrial use.
The company is preparing for an IPO to fuel global expansion. It will select an underwriter this year and plans its Kosdaq listing in 2022 through the Technology Special Listing program, a system introduced in 2005 to allow promising startups to list on the local bourse based on a technology evaluation conducted by Korea Exchange-designated institutions.
Graphene Square’s competitive edge lies in its proprietary chemical vapor deposition (CVD) method used for the mass production of graphene. The method is based on Hong’s own research on the synthesis of large-area graphene in 2009. His CVD research was the first of its kind on graphene, and was published in top academic journals, including Nature.
CVD is a method that transfers graphene, synthesized with a catalyst substrate such as copper at high temperatures using carbon gases, onto the target substrate.
“Simply put, our CVD technology adheres high molecular compounds onto the copper-synthesized graphene, then removes the copper using an etchant, and finally separates the graphene from the molecular compound,” explained Hong.
Graphene Square opened the door for mass production of graphene by combining the CVD technology with the "roll-to-roll" technique also devised by Hong, who still oversees the company as its CEO. The technique, apparent from its name, is just like printing newspapers as every production stage is conducted on a single production line, thus maximizing productivity.
The professor-CEO highlighted that his company’s graphene synthesis and production equipment is particularly sought after by overseas universities and research institutions, driving the vast majority of the company’s current revenue. Its most recent sales were to the Israel Institute of Technology, or Technion, which purchased three sets of CVD equipment earlier this year.
FAST-GROWING MARKET WITH MULTI-INDUSTRY APPLICATIONS
Graphene Square sees almost endless potential in the industrial use of graphene, from electronic devices to electric vehicles (EVs). According to industry forecasts, the global graphene market is expected to grow at a compound annual growth rate (CAGR) of 65% to reach $53.5 billion by 2025.
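To see what a 65% CAGR implies, the growth can be projected forward with a one-line formula. The sketch below is illustrative only: the starting market size and the seven-year horizon are assumptions chosen to land near the forecast figure, not numbers from the article.

```python
def cagr_projection(present_value, cagr, years):
    """Project a value forward at a compound annual growth rate (CAGR)."""
    return present_value * (1 + cagr) ** years

# Assumed starting point: a ~$1.6 billion market compounding at 65% per year
# reaches roughly $53 billion after seven years.
print(round(cagr_projection(1.6, 0.65, 7), 1))  # -> 53.3
```

The same function can be inverted mentally: at 65% a year, a market roughly quadruples every three years, which is why forecasts over even short horizons produce such large figures.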
Among the various potential segments, Graphene Square is currently focusing on the development of transparent heaters for EVs. The heater attaches graphene onto the glass film of the front windshield of the vehicle and produces heat, thereby preventing frost.
The company stated that “there is severe energy waste when EVs make hot air for defrosting. We are currently working closely with a global auto company to manufacture the transparent heaters by 2022.”
The company is also developing pellicles for the extreme ultra violet (EUV) lithography process. EUV lithography in the semiconductor industry uses EUV light that has been penetrated through the patterned EUV "mask" to draw circuits on a wafer. The pellicles are basically thin films that protect the expensive EUV masks by covering their surfaces.
Graphene Square stated that it sees a huge opportunity in the EUV sector as “the pellicle technology used in the EUV sector is not yet fully advanced in the market, despite the global trend of semiconductor manufacturers increasingly considering adopting the use of pellicles to protect the masks.”
Other areas of the company’s research for potential commercialization include applying graphene onto the current collectors of rechargeable batteries for improved capacity and charging speed, and onto bulletproof jackets.
Copyrights The Korea Economic Daily Global Edition. All Rights Reserved.
Reprint or redistribution without permission is prohibited.
Dong-hyun Kim ([email protected])
Daniel Cho edited this article.
Source: The Korea Economic Daily Global Edition (Mar 2, 2021)
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9635325074195862,
"language": "en",
"url": "https://www.kenyacic.org/2017/01/entrepreneurship-can-address-kenyas-drought-crisis/",
"token_count": 765,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.1103515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5f8c394b-cc10-4ef6-9e9d-666ad9493b2b>"
}
|
The drought situation that Kenya is currently experiencing ranges from moderate to severe, especially in the coastal and northern parts of the country. The situation is predicted to get worse in the coming months, up until April when the long rains are expected. In the past two years, however, the rains have tended to arrive late, sometimes not until early May, and have brought very little rainfall.
According to the Food and Agriculture Organization of the United Nations (FAO), Kenya experienced decreased rainfall in 2015 and 2016 and people have barely recovered from the effects. In Nairobi, residents are affected by water rationing and there is a likelihood the cost of electricity will go up in the next few months.
The negative effects of climate change are now being felt across the country. The situation calls for a radical change in the way business has previously been conducted. The evidence is clear that business as usual will not continue to serve this country. The country must adopt a green economy pathway.
It is high time that key players in the society – government, businesses, civil society and academia – change focus to high priority issues like food security and building the resilience of Kenyans to cope with the weather extremes that are increasing in frequency and intensity.
Key to the expansion of any economy is the net expansion of entrepreneurship. Kenya’s green economy model must among other things foster an environment that encourages the flourishing of green enterprises especially small and medium enterprises. Already these small enterprises have come up with innovative products and business models that are helping to mitigate or adapt to climate change.
For instance, Kickstart International now manufactures agro solar pumps that use solar energy to pump water from depths of up to seven meters. These pumps are a good replacement for diesel pumps that many small scale farmers use to irrigate their crops. Their main benefits include no recurrent costs, no noise and no pollution of the environment. These benefits should make every farmer replace their diesel pump with a solar pump, but that is not the case. The cheapest solar pump costs about Ksh40,000 (USD 400) while a diesel pump costs half that amount. Unless there is some intervention at a policy level, farmers will continue to use diesel pumps that are more expensive to maintain and still pollute the environment. There are other companies selling different models of the agro solar pumps like Future Pump, Sunculture, and Davis and Shirtliff. However, the uptake has been slow.
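The upfront-cost gap described above can be framed as a simple payback calculation. In the sketch below, the Ksh 40,000 solar pump price and the half-price diesel pump come from the passage, while the diesel pump's annual fuel and maintenance cost is a hypothetical figure for illustration.

```python
def payback_years(solar_price, diesel_price, diesel_annual_running_cost):
    """Years until a solar pump's avoided running costs cover its extra upfront cost."""
    extra_upfront = solar_price - diesel_price
    return extra_upfront / diesel_annual_running_cost

# Ksh 40,000 solar pump vs Ksh 20,000 diesel pump; assume (hypothetically)
# the diesel pump costs Ksh 10,000 a year in fuel and maintenance.
print(payback_years(40_000, 20_000, 10_000))  # -> 2.0
```

Under these assumed running costs the solar pump pays for its price premium in about two seasons, which is the kind of argument a policy intervention or financing scheme would need to make visible to farmers.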
Another example is Ukulima Tech, a company that provides vertical gardens to urban and peri-urban residents allowing them to grow vegetables on small portions of land and even balconies. The vertical gardens are very economical in the amount of water used and can even be irrigated remotely. The gardens can allow farmers to have a year-round supply of vegetables provided they have some water reserves.
Hydroponics is also another innovative solution that is being promoted by Mineral and Allied. The technology involves growing crops in water with adequate mineral nutrients in a controlled environment. In Kenya, shade net is being used to control temperatures in sections where hydroponic crops and fodder are grown. Even though the crops are grown on water, the system is ten times more water-efficient compared to growing crops in an open field using soil.
These are just a few of the technologies that have received support from the Kenya Climate Innovation Center. There are climate adaptation technologies suitable for almost any region in the country. The main challenge has been that Kenya lacks a mechanism to promote and increase awareness of the existence of such technologies.
Thankfully the Climate Change Act which was enacted in September 2016 has a provision for incentives for the promotion of climate change initiatives. Hopefully, there will be a mechanism where the Kenyan public can learn about existing solutions for climate change and a structure that facilitates people to own and use these technologies.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9716583490371704,
"language": "en",
"url": "https://www.public-international-law.net/the-cost-of-living-in-the-uk-soars-despite-economic-policies-to-reduce-inflation/",
"token_count": 366,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.045654296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:14fc6c40-a96d-4a2c-8419-4ac5002c0558>"
}
|
UK citizens are groaning under the high cost of living. This can be traced to the lockdowns imposed by the UK authorities to limit the spread of COVID-19.
UK’s cost of living experiences unexpected surge
Prices of goods and services have surged in Britain, even with the COVID-19 curbs that forced certain shops to shut down. Consumer Prices Index (CPI) inflation increased to 0.7%, from 0.4% in December.
According to the Office for National Statistics (ONS), this means that prices of non-essential goods and services were affected. Many citizens raced to beat the Christmas lockdowns, which pushed up the prices of most services.
Although a price rise was expected, it was higher than economic experts had forecast. Inflation is the rate at which prices of goods and services increase, driven by several factors, and it affects all sectors, including housing, shopping and transportation services.
It is a key economic concept that measures financial stability and consumers’ well-being. According to Jonathan Athow, clothing prices rose because of inflation in the UK in December, even with discounts.
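Year-on-year CPI inflation is computed from two index readings twelve months apart. The index values in this sketch are hypothetical, chosen only to reproduce a 0.7% rate like the one reported.

```python
def annual_inflation(cpi_now, cpi_year_ago):
    """Year-on-year inflation rate, in percent, from two price-index readings."""
    return (cpi_now / cpi_year_ago - 1) * 100

# Hypothetical readings: an index at 100.7 today against 100.0 a year earlier.
print(round(annual_inflation(100.7, 100.0), 1))  # -> 0.7
```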
“Despite these increases on non-essential goods, food prices decreased, especially vegetables and meat.”
Financial analysts noted that it is normal for the cost of flights and transportation to increase when few people are traveling; even so, the rise was higher than envisaged.
Transportation and electronics prices increase
Even with various transportation restrictions across most of the UK in December, fares still increased between October and December. It should be noted, though, that these prices could fall once the movement restrictions are lifted and citizens travel more.
Prices of home electronics such as games consoles and children’s games also contributed to the increase in December. The only commodities that became cheaper were food and alcohol, according to the ONS.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9550307989120483,
"language": "en",
"url": "https://www.softheon.com/blog/cbo-report-a-new-framework-on-single-payer-health-care/",
"token_count": 826,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.181640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a7f1886d-7940-4f9a-b122-611e12d6871b>"
}
|
Earlier this month, the Congressional Budget Office (CBO) released a report outlining options and technicalities lawmakers need to consider to establish a single-payer health care system in the United States. Although there is a lot to assess, the term “single-payer” generally refers to a system in which health care is paid for by a single public authority, as stated by Kaiser Health News (KHN). Once a pipe dream, this topic is now officially mainstream and is quickly generating questions.
Here’s what you need to know:
The Program will Cover Necessary Medical Services
Single-payer architects are potentially looking at existing standards from the essential health benefits that govern Obamacare health plans to determine what is covered. The report also takes a more generous view of long-term care, which isn’t considered a benefit by Medicare or most private insurance companies. At a glance, the report projects an increase in public spending if all citizens were offered and received long-term care services and support. Under our current system, Americans use Medicaid benefits for such services but spend their own money before they’ve been accepted into the Medicaid program.
According to KHN, the CBO report suggested some kind of “cost-effectiveness criterion” could determine what the government is willing to cover. A single-payer benefit system would also need to specify which new treatments and technologies would be covered. The CBO report explained, “An independent board could recommend whether or not new treatments and drugs should be covered after their clinical and cost-effectiveness had been demonstrated.”
Although no easy formula has been created, conversation around cost-sharing has been developing. In some single-payer structures, individuals must pay a copay, meet a deductible or pay a premium as part of their selected health plan. That could potentially decrease the need for new taxes. KHN analyzes: “The CBO report suggests that new taxes would likely play a role in financing a new single-payer plan. But what kind of taxes — a payroll tax, an income tax or a sales tax, for instance — has not yet been stipulated. And each would have different consequences.” Under the proposed single-payer system, enrollees may pay nothing or a small percentage of the cost when receiving care. Individuals enrolled in private insurance plans and Medicare would still share costs for most services.
Lowered Health Expenses and Increased Value
By abolishing private insurers, a single-payer system would cut hospital administrative overhead, leaving the government to pay a reduced rate in hospital costs. The single-payer bottom line is reflective of what the system pays hospitals, doctors, and drug companies for products and services. The CBO reports the probable negotiation in drug and administrative pricing, meaning direct negotiations between a single-payer system and manufactures could determine prescription drug prices. In addition, “Government spending on health care would increase substantially under a single-payer system.” The proposed system explains how beneficiaries would pay out-of-pocket for premiums, cost sharing evaluated on income-based subsides, and additional contributions from high-income beneficiaries. Lastly, the CBO report states, “A single-payer system could establish provider payment rates through negotiations. Organizations representing providers, such as the American Medical Association, could negotiate payment rates with the system, and those negotiations could occur within broad budgetary guidelines such as a national spending limit.”
Some members of Congress are proposing a new framework on single-payer health care in hopes of all citizens of the United States having health insurance. The CBO report discusses many features of their projected system while leaving major factors in question. Design considerations and choices are ultimately left to our policymakers as they analyze cost, payment, administration, provider rules, enrollment, cost-sharing, and role of current systems.
The views and opinions expressed by the authors on this blog website and those providing comments are theirs alone, and do not reflect the opinions of Softheon. Please direct any questions or comments to [email protected]
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9576020240783691,
"language": "en",
"url": "https://aerospaceamerica.aiaa.org/departments/satellites-driving-a-burgeoning-space-economy/",
"token_count": 1009,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.08154296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3049bec1-fd8b-4031-8384-fac8cd190604>"
}
|
Satellites: Driving A Burgeoning Space Economy
By Steve Isakowitz and Robbie Schingler|March 2020
One of the biggest opportunities for economic growth on the planet actually begins 250 miles above it, where a developing space economy is building at speed. It’s empowered by new business models, burgeoning partnerships, and technological advancements that are welcoming new entrants and fresh thinking to the space enterprise.
Space has long been home to thousands of satellites that provide crucial services for society. Most of this investment in space has been driven by national and military utility, such as communications; position, navigation and timing (PNT) systems; weather; and early warning systems. However, nearly all of these assets have dual use, and data and services have been made publicly or commercially available to help farmers maximize their crops, financial institutions process transactions, fishermen increase their haul, utility companies manage power grids, and more. The economic impact of these satellites cannot be overstated.
Since the 1980s, GPS satellites have helped generate nearly $1.4 trillion in economic benefits. With roughly 8,950 satellites placed in orbit and more than 16,000 small satellites expected to launch by 2030, the importance of satellites in the economy will only increase. The majority of these activities have been funded by or in close collaboration with governments. With a new decade upon us, we see the space economy, led by innovative companies, new technologies, and novel business models, becoming commercialized and governments transitioning into enterprise customers.
A lot of progress has been made by innovators in the space industry who are building businesses with commercial business models. Leveraging investments from major technology companies in cloud computing, computer vision, and machine learning, space-enabled businesses are using these commoditized and open source technologies to build cost-effective products that deliver business value quickly. Applying these technologies and processes to remote sensing further decreases the barrier to entry for a non-remote sensing expert to extract insights within geospatial data.
By reducing the cost to reach space by a factor of 10 and developing satellites at 1,000x lower mass per unit performance and cost than 10 years ago, new commercial space companies—such as Planet, Spire and HawkEye 360—have made data that was once only accessible by government entities available to the masses. And it is being utilized daily across industries to achieve great things that were never imagined.
Meanwhile, the U.S. government is focused on advancing the capabilities of the space sector and relies on organizations such as The Aerospace Corporation to solve the hardest problems in space for both industry and government. This includes working to accelerate the speed of innovation and product development through an agile aerospace enterprise aimed at rapidly replacing space assets at speeds previously unprecedented, as well as developing new technologies, informing space policy, and aiding new entries into space.
These initiatives are already working. There has been an increase in medium-lift and heavy-lift launch vehicles offering piggyback launch opportunities for small satellites. The Indian PSLV and Russian Soyuz are much too expensive for most small satellite companies to purchase the full capacity of the rocket, but there’s usually several hundred spare kilograms available on each flight—and Planet has already launched over 200 of its satellites as hitchhikers on bigger rockets. The launch side of the equation is also picking up with Rocket Lab launching six dedicated small satellites last year and SpaceX’s announcement of a smallsat rideshare program, offering launch capacity as low as $5,000 per kilogram, an approximately 75 percent reduction in price from most options.
The growing support of government entities for the commercial enterprise sector has been particularly notable. Some governments and agencies are becoming enterprise customers and buying commercial subscription products, thereby incentivizing industry to build and deliver upgradable products. That means rethinking the way spacecraft are designed, built, and operated.
Satellites of the past were large, costly, and took a long time to test and build; they were also often in space for so many years that their technology became outdated. Small satellites are much less expensive and business models can incorporate rapid iteration of hardware and software. There’s been continued support for companies that inspire evolution and growth following a classic market dynamics for disruptive innovation. Conferences such as the new ASCEND event, powered by AIAA, 16–18 November, as well as Satellite 2020, Space Symposium, and GEOINT 2020, are critical to the development of these ideas and advancements.
Many individuals are in the space community because of the effect it has on the future of humanity. They have a desire to understand the cosmos, become a multiplanetary species, and devise ways to live more sustainable lifestyles. It is exciting to be a part of the Space Renaissance and the 21st century’s rapidly-evolving aerospace ecosystem. ★
Steve Isakowitz is Chief Executive Officer, The Aerospace Corporation, and Robbie Schingler is Co-Founder & Chief Strategy Officer, Planet.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9588258862495422,
"language": "en",
"url": "https://aleveleconomicstuition.com.sg/what-you-need-are-techniques/",
"token_count": 384,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1103515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:0f9a767a-813d-41e0-93f0-667cd4a0fd49>"
}
|
The key to doing well in Economics is actually technique. It is an open secret, but students find it hard to develop the skills necessary for them to score. The techniques involved are numerous, and they come together for you to be fully prepared for any examination that comes your way. Here are the techniques you will need and will be equipped with at JCEconomics.com.
- Reading Skills
You need to read case studies and essay questions, quickly and accurately. After all, there is only that much time during an exam and there is no time to lose. This may sound easy but is actually more challenging than you think. JCEconomics.com will help you cut down on reading time but increase your accuracy. Know what to look out for and you will be done getting the information you need in less time, leaving you with more to craft a better answer.
- Writing Skills
While Economics is not linguistically driven, writing skills are still imperative for your answer to be fluent. You will be presenting an argument after all so you need your ideas to flow and your answer to be coherent. You need to learn how to structure your paragraphs so that you can present an impressive answer.
- Analysis Skills
Coming up with original, well-thought-out answers that are not regurgitated from notes is a great challenge. Students are impeded by the habit of memorizing answers and repeating them wholesale from the lecture notes, without thinking about the question. Analyzing the question and the issue is what you need to bring your answers to the next level in both your case studies and essays.
There is so much more to Economics than theory and concepts. What you need is to go beyond your lecture notes and theories, and explore real world situations. JCEconomics.com will help you develop all these skills so that you are all ready to tackle Economics like a Pro!
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9558506011962891,
"language": "en",
"url": "https://indiaclimatedialogue.net/2019/09/14/delhi-declaration-calls-for-land-based-solutions-to-fight-climate-change/",
"token_count": 1159,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.337890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:24360f56-b88a-44c8-a884-260e83b2a6af>"
}
|
Land-based solutions must segue into actions to stem biodiversity loss and restrain global warming, the United Nations desertification summit said in a declaration
The biennial summit organized by the United Nations Convention to Combat Desertification (UNCCD) and hosted by India said at the end of two weeks of deliberations that land restoration makes business sense if there are regulations and incentives to reward investments.
There is an urgent need to conserve and restore land and soil affected by desertification, land degradation, drought and floods, which will also bring long-term benefits for the health, well-being and socioeconomic development of society, particularly the livelihoods of the rural poor, said the New Delhi Declaration: Investing in Land and Unlocking Opportunities, announced on September 13.
Calling the Declaration a powerful document, UNCCD Executive Secretary Ibrahim Thiaw said, “There is a clear link between land restoration, climate and biodiversity. Investing in land will unlock a lot of opportunities.” Land restoration at scale is the cheapest solution to meet challenges posed by climate change and biodiversity loss, he said.
Restoring land is good business
Thiaw said that UNCCD has established a clear business case for land restoration. He called upon national governments to incentivize land restoration. “We have woken up to the fact that we will see more frequent and severe droughts, a phenomenon that will be exacerbated by climate change,” he added.
“The Delhi Declaration is an ambitious statement of global action and shows ways to achieve land degradation neutrality,” said Prakash Javadekar, India’s Environment Minister and President of the 14th Conference of Parties (as national governments are called by the UN). “For the next two years, India will have the presidentship of UNCCD. India is committed to our own actions in our country towards land restoration.”
More than 9,000 delegates and representatives from 196 countries and the European Union participated in the conference, which also grappled with the contentious issues of land tenure, migration due to land degradation and how to deal with droughts.
Countries will address insecurity of land tenure, including gender inequality in land tenure, promote land restoration to reduce land-related carbon emissions and mobilize innovative sources of finance from public and private sources to support the implementation of these decisions at country-level, the Delhi Declaration said.
Money remains a problem
The conference was bogged down for long periods over the issue of including land tenure, migration and financing mechanisms in the UNCCD agenda. Many developing countries, especially from Africa, wanted community ownership of land to be given international legal recognition, and for the UNCCD to start discussing this.
Some industrialised countries – led by the US and Australia – objected, saying that individual ownership of land was a core principle in their countries and not open to challenge from any international agreement at any stage. The Delhi Declaration finally came up with a compromise that did not please many African governments.
A bigger issue of contention was migration forced by land degradation. Countries in the Sahel region of Africa wanted drought and land degradation to be recognized as legitimate reasons for international migration, a stance opposed strongly by the European Union.
Indian negotiators spoke privately of such a move opening the doors to migration from Bangladesh, which is suffering serious land degradation due to sea level rise. But as hosts, India did not take a stance in public. The entire issue was finally dropped, much to the chagrin of countries such as Niger, Chad and Burkina Faso. See: India promotes cooperation, but key questions unaddressed
Money remains the biggest problem. For 25 years since the UNCCD was set up, its projects to control desertification and deal with drought have been chronically underfunded by the international community. With the IPCC now establishing the scientific relationship between land degradation and climate change, developing countries made a strong pitch at this conference to get money from the Green Climate Fund (GCF) to restore degraded land. Once again, this was strongly opposed by rich nations, who pointed out that the GCF was seriously underfunded itself.
That was why Thiaw made a strong pitch for private money, but many of the relatively smaller African nations are unsure whether they will get any of that, or any financing from the World Bank either for the purpose of land restoration. A delegate from Cameroon complained about the lengthy and expensive process of applying to the World Bank to finance a project.
Earlier in the week on Monday, India’s Prime Minister Narendra Modi had said while opening the high-level segment of the summit, “I call upon leadership of UNCCD to create a global water action agenda, which is central to land degradation neutrality strategy.” The Indian government has launched a programme to double the income of farmers by increasing crop yield through various measures, which includes land restoration and micro irrigation. Modi had also announced that India would raise its ambition of the total land to be restored from 21 million hectares to 26 million hectares by 2030.
The UNCCD has suggested a mechanism for reporting action to ensure it captures key issues such as gender inequality, drought response and the influence of consumption and production patterns and flows on land degradation. Through the Delhi Declaration, ministers expressed support for new initiatives or coalitions aiming to improve human health and well-being, the health of ecosystems, and to advance peace and security.
The Delhi Declaration urged the development of community-driven transformative projects and programmes that are gender-responsive at the local, national and regional levels to drive the implementation of UNCCD. “Land restoration will not succeed if we don’t put people first,” Thiaw said.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9465372562408447,
"language": "en",
"url": "https://robohub.org/contrasting-two-robotic-developments/",
"token_count": 243,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.04296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3b21abae-2f97-4aa4-9fbf-9f9ef28e7d0a>"
}
|
Contrasting two robotic developments
The first is an autonomous agricultural robot that you can actually buy, or will be able to soon. It runs on gas and will cost around $100,000 when it becomes available early next year. FHI claims the machine can grow fruit and vegetables independently, although this is difficult to imagine based on the one available photo.
The second is the combination of a robotic hand possessing touch sensitivity and quick, flexible movement with a fast vision system, allowing some rather amazing manipulations of objects (check out the video!).
Of the two, the latter provides me far more hope for the future of robotic land management. A pair of hands like that, mounted on comparably quick arms, themselves mounted on a mobile platform, could be expected to cover every square foot of a several acre plot, every day, performing mechanical operations like planting, weeding, pruning, and harvesting. This represents a significant head start on the necessary hardware.
It’s becoming clear that the hardware development will pretty much take care of itself, as basic abilities like this are developed and combined. The software may require more focused effort; probably will.
Reposted from Cultibotics.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9204872846603394,
"language": "en",
"url": "https://www.alliancemagazine.org/feature/the-clean-transport-revolution/",
"token_count": 905,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.08154296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2f873e3b-aa2e-472a-a2dd-564ebd029ddb>"
}
|
Functioning transportation systems are fundamental to modern, prosperous societies. They provide a means to get to work, connect us to friends and family, and catalyse economic activity. But transportation today comes with a cost. It produces approximately 18 per cent of human-related greenhouse gas (GHG) emissions and local air pollution that contributes to health problems for millions.
To ensure a safer, more prosperous future, transport emissions need to be reduced significantly by mid-century. Reaching this goal requires a comprehensive set of strategies. These include equipping cities with increased transit, biking and walking options and more efficient cars, trucks, planes and ships that use increasingly lower-carbon sources of energy.
‘Fortunately, an electric-drive vehicle (EDV) revolution is under way that has the potential to end over 100 years of dominance by oil-powered vehicles.’
Fortunately, an electric-drive vehicle (EDV) revolution is under way that has the potential to end over 100 years of dominance by oil-powered vehicles. Governments including California, China, India, the Netherlands and Norway are leading full-scale shifts to EDVs including plug-in hybrid, battery and fuel-cell vehicles. Consumer demand is on the rise, and manufacturers like GM, Nissan, Tesla and others are starting to bring affordable EDVs to market.
These developments are welcome and necessary. According to the International Energy Agency, EDVs should account for at least 75 per cent of car sales by 2050. Such a transition will also produce economic benefits by contributing trillions of dollars in fuel savings for consumers.
Increasing the adoption of EDVs consistent with international climate goals, however, requires overcoming four early-market challenges: cost, convenience, consumer awareness and lasting commitment by governments and industry. Fortunately, solutions to these early challenges exist in the form of improved incentives, infrastructure and information.
- Incentives: Well-designed regulatory instruments like fuel economy, GHG and zero-emission policies for car makers and fuel providers; and other incentives like low-emission zones for cities and parking privileges.
- Infrastructure: Support for planning and implementation of public fuelling stations and building codes that are EDV-friendly. Support for integration of vehicles with renewable energy including public-private partnerships and planning across governments, industry and utilities.
- Information: Credible information for consumers, businesses and policymakers about the benefits of EDVs and the most effective ways to shift the market.
Philanthropy and philanthropic institutions are uniquely positioned to help with the shift to EDVs. As a global organization, the ClimateWorks Foundation works with foundation partners and grantees in key markets. We develop and implement strategies, evaluate investment opportunities, support diverse coalitions, share policy lessons across regions, and help to mobilize and inform citizens and key stakeholders. The global nature of automobile and supplier markets makes a global strategy particularly powerful.
There are several recent examples of philanthropically supported activities on EDVs. The International Zero-Emission Vehicle (ZEV) Alliance is a collaboration of 14 members comprising countries, states and provinces sharing the goal of 100 per cent zero emissions on passenger vehicle sales by 2050. The China-US Zero-Emission Vehicle Policy Lab is another example established to conduct joint policy research, share best practice and explore policy collaboration and implementation across two of the largest EDV markets. The Platform for Electro-Mobility unites businesses and stakeholders from the road, rail and electricity supply sectors in Europe with civil society and cities to promote the benefits of sustainable electrification of transport. Finally, the Charge Ahead Coalition in California aims to put a million electric cars, trucks and buses on the road within 10 years.
‘Transitioning to a sustainable transportation system requires transforming one of the largest industrial sectors in the world. Philanthropy will need to be at the heart of these efforts to maximize the social benefits and likelihood of success.’
Looking ahead, transitioning to a sustainable transportation system requires transforming one of the largest industrial sectors in the world. Philanthropy will need to be at the heart of these efforts to maximize the social benefits and likelihood of success. Together, we can make significant investments to support transportation that is clean, affordable, accessible, and consistent with the world’s climate goals. We look forward to collaborating with other foundations and partners to accelerate this work and the world’s response to the climate crisis.
Anthony Eggert is programme director, ClimateWorks Foundation.
For more information
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9399144649505615,
"language": "en",
"url": "https://www.hedegard.nu/biofuel-from-algae/not-there-yet/",
"token_count": 205,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1279296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6f2c1eff-deab-499c-831f-b9ffec47938d>"
}
|
Not there yet
Today, costs are sky-high compared to fossil fuels (Rupprecht, 2009), and no technique other than open ponds seems to have a chance to compete (Sheehan et al., 1998). However, there are many external costs that are not included in the price of fossil fuels and that might change the balance. Accounting for them is called internalization of external costs, and it is important as support for both political and corporate decisions (Rafaj & Kypreos, 2003). One example of an external cost is the floods and droughts that will increase due to global warming: caused, but not priced, by fossil fuels.
Future research includes identifying the best algal strain, developing nutrient protocols for growth and to find cost efficient harvesting and extraction methods (Pipe, 2010). The fuel industry needs cheap, pure, large and scalable solutions that do not exist today (Børresen, 2010).
Written by Per Hedegård 15:th June 2011 03:23
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9397817254066467,
"language": "en",
"url": "https://www.lspatents.com/lose-rights-pitch-invention-investors-file-a-patent-application/",
"token_count": 1318,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.462890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:03437310-02f0-43ab-a030-7f1725b90c42>"
}
|
Did you know that the individual often credited with popularizing karaoke did not reap the financial rewards of his invention to the extent possible? It’s true—Japanese musician Daisuke Inoue invented karaoke in Kobe, Japan in the early 1970s, and as we all know, karaoke became, and still is, popular in the United States and globally. However, Inoue did not patent his invention, and sadly, he did not capitalize on this popular form of entertainment. This is just one example of a failure to capitalize because the invention wasn’t patent protected.
Why Seek Patent Protection?
The reasons for seeking patent protection include:
Public Disclosure of Your Invention May Place Patent Protection at Risk
After discovering an invention, your next step will often be to begin monetizing it, whether by raising money to support testing, commercializing the invention, or attempting to license or sell it. In any case, furthering the progress of your invention usually entails disclosing it to a third party. However, unless you obtain an enforceable confidentiality agreement before the disclosure, you risk losing the benefit of a patent on your invention.
In most of the world, the standard for obtaining a patent has generally been absolute novelty, requiring the filing of a patent application prior to disclosure of the invention. The system was different in the United States until passage, in 2011, of the Leahy–Smith America Invents Act (“AIA”) which provided that as of March 16, 2013, the United States adopted the first inventor to file (“FITF”) patent system. The AIA harmonized the United States patent system with those in the rest of the world.
Accordingly, in most cases, it is now the first person to file a patent application who receives the patent instead of the first person to invent. Obviously, disclosure prior to filing is dangerous because it jeopardizes the inventor’s priority in filing.
What Constitutes Public Disclosure?
The following is a list of non-exhaustive activities and types of information that can be considered public disclosures preventing you from obtaining a patent if they occur more than one year before you file a patent application:
The United States’ One-Year Grace Period
Any of the above or similar public disclosures start a one-year clock for filing a patent application in the United States. If your patent application for your invention is not filed within one year of its first public disclosure, you will have likely dedicated your invention to the public.
The AIA, however, provides you, as an inventor, a one-year grace period following the disclosure of your invention to someone else if the disclosure is made by you as the inventor, someone who obtained the disclosed information from you, or a joint inventor associated with you generally pursuant to a joint research agreement.
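As a back-of-the-envelope illustration of the one-year clock (not legal advice), the sketch below uses a hypothetical `us_filing_deadline` helper. Real deadline computation involves rules this sketch ignores, such as rollover to the next business day when the deadline falls on a weekend or federal holiday:

```python
from datetime import date

def us_filing_deadline(first_disclosure: date) -> date:
    """Simplified one-year US grace period: the last day to file is
    one year after the first public disclosure."""
    try:
        return first_disclosure.replace(year=first_disclosure.year + 1)
    except ValueError:
        # Disclosure on Feb 29 of a leap year: fall back to Feb 28.
        return first_disclosure.replace(year=first_disclosure.year + 1,
                                        month=2, day=28)

print(us_filing_deadline(date(2023, 3, 15)))  # 2024-03-15
```

The takeaway is simply that the clock starts at the first public disclosure, not at the date you decide to seek protection.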
When Patent Protection May Not Be Your Best Option
A patent may not be an option or your best option in some situations. For example, by statute, in order to obtain a patent, the United States Patent and Trademark Office (“USPTO”) must determine that your invention is new and not obvious in view of existing technology. If your invention can be designed around with ease, or there are equally viable alternatives available, it may not be worth allocating your resources to patent protect the invention.
In some instances, the technology may be obsolete within a short period of time such that the time to obtain a patent or the 20-year patent term may not be worth the time, effort, or expense of trying to obtain a patent.
Lastly, it may be more beneficial to maintain your invention as a trade secret. After all, a patent does teach the public how to create your idea in exchange for a limited monopoly allowing you to prohibit the public’s ability to create the idea for a limited duration of time. In contrast, maintaining your invention as a trade secret provides a potentially unlimited duration of protection if the technology is kept confidential using appropriate safeguards.
What You Should Do If You Decide to Apply For a Patent; Use of a Provisional Patent Application
Should you decide that patent protection is appropriate for your situation, a provisional patent application can provide you an opportunity to set a “priority date,” that is, the date used to establish the novelty and/or obviousness of your invention relative to that which was known in the prior art at the time of your invention. This allows you priority while buying one year to decide whether to proceed with patent protection.
In general, a provisional application preserves your filing date for one year. Your provisional application will not be examined for issuance as a patent. It can be very informal and it is not required to have a formal patent claim, or an oath or declaration. However, priority, and thus, patent protection, will be provided only for what you disclose in the provisional application. Therefore, it is advisable to file a thorough disclosure in the provisional application.
Your provisional application will generally expire 12 months after you file it, at which time you will lose its benefit. However, you may retain the benefit of your provisional application if you either:
If you do not either file the grantable petition and/or one of these three filings within the specified time periods, you will lose your priority date associated with your provisional application, and consequently, any patent rights to your invention as of that original filing date.
If you are interested in patent protection in the United States only, you may decide to file a nonprovisional utility application that claims priority pursuant to your provisional application. The successful prosecution of a nonprovisional utility application in the USPTO will result in a patent generally having a 20-year patent term.
Because of the 12-month grace period in the United States, and generally no grace period in the rest of the world, you may run the risk of losing the benefits of your invention as a result of a public disclosure. To protect your invention, you may opt to file a provisional patent application before launching a fundraising campaign or discussing your invention in general. This option has become particularly attractive under the FITF patent system in the United States. Filing a provisional patent application will allow you to claim priority for your invention while determining during the one-year period whether to proceed with trying to obtain patent protection and will also allow you to pitch your technology to third parties with a certain level of assurance.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9408419728279114,
"language": "en",
"url": "https://dwaterson.com/2012/12/08/how-secure-is-nfc/",
"token_count": 970,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ae81db05-e4e7-425e-8ad4-a470e47fb72f>"
}
|
The roll-out of Near Field Communication (NFC) on smartphones will revolutionize the payments of small amounts, which account for the majority of retail purchases. As most consumers carry their smartphone on them at all times, NFC is a convenient, simple and quick payment method for transactions of a few dollars. However, the ease with which payments can be made by swiping the smartphone on the reader in a retail store leads many consumers to question the level of security involved. Is NFC secure?
NFC – A technology with multiple uses
NFC is a technology with multiple uses, such as:
– Payments for low cost items in a retail store
– Sharing information with friends or colleagues, such as business card data, by touching smartphones
– Proof of payment for a subway or concert, in lieu of a printed ticket
– An access device for example, getting into the office, clocking in and out of work
– Unlocking your car
– A replacement for barcodes when shopping
How does the technology work?
NFC is a contactless technology used to transmit information. It is a form of Radio Frequency Identification (RFID), where a chip in the NFC-enabled smartphone generates an electromagnetic field. This field can then be used to communicate with a reader, or a tag on a poster or shelf, or to another NFC-enabled smartphone. By embedding an NFC chip inside a smartphone, users can store their credit card information on a virtual wallet and make payments for transactions by swiping their phone over the NFC reader in the retail store.
The radio communication generated by NFC can be received over a short range of only a few centimetres. The smartphone requires an NFC chip and antenna. It is an extension of the technology that has already been in use for a number of years such as for public transport payments via smartcard.
NFC devices may be active or passive. Active devices, for example an NFC-enabled smartphone, can send and read data. Passive devices, for example an NFC tag, contains information that other NFC devices can read, but the passive device does not read any information itself.
What are the security threats and how does NFC safeguard against these?
Intrinsically, NFC appears to many to be insecure. Simply swiping one’s smartphone to pay for a coffee or lunch, seems too easy and open to unauthorised payments. Here are some threats that spring to mind:
1. The threat of having your smartphone stolen, and then used to purchase goods
Owners of NFC-enabled smartphones should always lock their phone using the device passcode. But what if your smartphone is stolen while it is unlocked – could it then be used to make unauthorised payments? The NFC software application on the smartphone requires its own PIN in order to activate a payment. The only danger, then, is a thief who watched over your shoulder when you previously entered the NFC PIN and subsequently steals your smartphone while it is unlocked. This is the same threat level as using an ATM with a cash card.
2. The threat of a criminal placing an NFC receptor in close proximity to your smartphone in order to steal your funds. For example, a criminal placing a receptor near your phone while it is in your pocket and you are in a crowded elevator or subway.
Once again, this method would not succeed in stealing your money, because the NFC application on your smartphone must be activated by you entering your PIN before it will transfer any funds.
3. The threat of intercepting the NFC signal by eavesdropping while you are undertaking a transaction and then altering the signal so that the funds are transferred elsewhere.
The threat of eavesdropping on an NFC signal is almost negligible. NFC signals are extremely direction-sensitive: turning the smartphone even at a slight angle means the signal cannot be read by the receptor. The transmitting and receiving devices need to be closely and accurately aligned before the transmission completes, making eavesdropping very unlikely.
4. Malware on the smartphone.
The threat of malware intercepting a transmission and then modifying or cancelling (denial of service) that transmission. The extent of this threat depends upon the nature of the software application powering the NFC feature. There are no known proof of concept malware examples of this threat currently, time will tell whether any emerge. As an added precautionary measure, sensitive information that is transmitted via NFC is encrypted, so the data would be of little use if malware was able to accomplish this attack.
If you password-protect your smartphone, which will safeguard it in the event of physical loss or theft, and keep your NFC PIN secure, then NFC-enabled smartphones will remain a convenient and secure method for small transactions, purchasing tickets, and transferring information.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9732765555381775,
"language": "en",
"url": "https://justiciadetodos.org/what-does-apr-mean-on-credit-cards/",
"token_count": 580,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.10302734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:dfd13e57-6dfd-4f92-9289-965213ac050a>"
}
|
What does APR mean on credit cards – When deciding between credit cards, the APR can help you compare how expensive carrying a balance will be on each one. APR stands for annual percentage rate: the price you pay for borrowing money, stated as a yearly interest rate. It is worth understanding both how it is applied and how it is calculated.

APR represents the yearly cost of funds, but it can be applied to loans made for much shorter periods of time. On a credit card, the interest rate and the APR for purchases are essentially the same thing: the APR is simply the rate you pay when you carry a balance rather than paying it off in full each month, and it leaves out other charges such as a card's annual fee. On most cards, you can avoid paying interest on purchases altogether by paying your balance in full each month by the due date.

A card's purchase APR is the rate of interest the credit card company charges on purchases if you carry a balance on the card. If you know how to navigate an introductory purchase APR offer, you can save money on interest and get extra time to pay off expensive charges during the 0% intro APR period.

A fixed APR means that you pay the same interest rate for the entire term of the loan. With a variable-rate loan or credit card, your interest rate can go up or down depending on the prime rate or another index chosen by your lender. Variable-rate financial products can be attractive because they often come with low introductory APRs, but the rate can change over time.

You probably understand that a lower APR is better. As you shop around for financing, it is important to understand how to calculate APRs and compare them between lenders and card issuers.
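As a rough illustration of how an APR translates into an interest charge when you carry a balance, here is a minimal sketch. Issuers' exact methods vary; many use an average daily balance with daily compounding, so treat these figures as approximations rather than a definitive calculation:

```python
def monthly_interest(balance, apr):
    """Approximate one month's interest using the monthly periodic rate (APR / 12)."""
    return balance * (apr / 12)

def daily_interest(balance, apr, days=30):
    """Approximate interest over `days` using a daily periodic rate (APR / 365)."""
    return balance * (apr / 365) * days

balance = 1000.00  # carried balance in dollars (illustrative)
apr = 0.20         # 20% APR (illustrative)

print(round(monthly_interest(balance, apr), 2))  # 16.67
print(round(daily_interest(balance, apr), 2))    # 16.44
```

Either way, carrying a $1,000 balance at a 20% APR costs roughly $16–17 in a single month, which is why paying in full by the due date matters.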
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.945742666721344,
"language": "en",
"url": "https://newsabode.com/covid-19-who-chief-outlines-five-vital-changes-to-address-inequities/",
"token_count": 991,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.06640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:56471c85-44c3-43dc-b998-67e1bbaf591e>"
}
|
COVID-19: WHO chief outlines five ‘vital changes’ to address inequities
Investing in equitable production of, and access to, COVID-19 vaccines, tests and treatments is among five "vital changes" the world needs to make this year to address the inequalities the pandemic has exacerbated, the UN's top health official said, marking World Health Day on Wednesday.
“While we have all undoubtedly been impacted by the pandemic, the poorest and most marginalized have been hit hardest – both in terms of lives and livelihoods lost,” said Tedros Adhanom Ghebreyesus, Director-General of the World Health Organization, speaking in Geneva on Tuesday.
At the beginning of the year, Tedros called for countries to start vaccinating all health workers within the first 100 days of 2021. Some 190 nations have met the deadline, while the global vaccine equity initiative, COVAX, has delivered 36 million doses worldwide.
Tedros said scaling up production and equitable distribution remains the major barrier to ending the acute stage of the pandemic. “It is a travesty that in some countries health workers and those at-risk groups remain completely unvaccinated”, he stated.
WHO will continue to call on governments to share vaccine doses and to support the ACT Accelerator for the equitable distribution of vaccines, rapid tests, and therapeutics.
Invest in primary health care
With the pandemic exposing the fragility of health systems, Tedros stressed investment in primary health care must also be stepped up. At least half of the world’s population still do not have access to essential health services, while 100 million are pushed into poverty each year due to medical expenses.
“As countries move forward post-COVID-19, it will be vital to avoid cuts in public spending on health and other social sectors. Such cuts are likely to increase hardship among already disadvantaged groups,” he said.
Instead, governments should target spending an additional one percent of GDP on primary health care, while also working to address the shortfall of 18 million health workers needed globally to achieve universal health coverage by 2030.
Social protection, safe neighborhoods
Tedros also encouraged national authorities to prioritize health and social protection and to build safe, healthy, and inclusive neighborhoods.
“Access to healthy housing, in safe neighborhoods, is key to achieving health for all”, he said. “But too often, the lack of basic social services for some communities traps them in a spiral of sickness and insecurity. That must change.”
Countries must also intensify efforts to reach rural communities with health and other basic services. Tedros noted that "80 percent of the world's population living in extreme poverty are in rural areas where 7 out of 10 people lack access to basic sanitation and water services."
For his final point, the WHO chief emphasized the need to enhance data and health information systems, which are critical to finding and addressing inequalities.
“Health inequality monitoring has to be an integral part of all national health information systems – at present just half the world’s countries have any capacity to do this”, he said.
Change the rules
The huge inequalities in health care also figured heavily in the statement from the Executive Director of UNAIDS, Winnie Byanyima, for World Health Day, which further revealed that 10,000 people die every day because they cannot access services.
She warned that the gaps will continue to widen as health systems increasingly become profit-led, but added that the pandemic could lead to greater commitment towards ensuring all people have access to quality healthcare.
“Now, in the midst of the COVID-19 crisis, leaders across the world have an opportunity to build the health systems that were always needed, and which cannot be delayed any longer,” Ms Byanyima said.
“We cannot tinker around the edges—we need radical, transformative shifts. The COVID-19 response gives us an opportunity to change the rules and guarantee equality.”
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9428002238273621,
"language": "en",
"url": "https://openspace.org.in/node/724",
"token_count": 158,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.353515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:42933390-b98e-42fb-a737-e174f43dc402>"
}
|
- In 2004-05, a total of 836 million people (77%) lived on below Rs 20 a day.
- Poverty increased by a 100 million
- The new rich have grown by 93 million.
- The middle class and the rich grew from 162 million to 253 million
- The middle class grew from 15.5% to 19.3%
- The extreme poor have also benefited (274 to 237 million) – 43 million of them to be precise. Their per capita consumption has gone up from Rs 9 to Rs 12.
Source: Arjun Sengupta, Chairman, National Commission for Enterprises in the Unorganised Sector, report on the Conditions of Work and Promotion of Livelihood in the Unorganised Sector, based on government data for 1993-94 and 2004-05.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9670199155807495,
"language": "en",
"url": "https://www.entreprenerd.net/pros-and-cons-of-student-loans/",
"token_count": 635,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.27734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3babfc50-8820-4d8e-a2bb-528e6a1bb80b>"
}
|
While education remains a basic need for all, access to it remains a challenge. For years, student loans have bridged the gap, allowing students from lower economic backgrounds to access education and other facilities while on campus. Student loans create equality in education by enabling the less privileged to attend quality schools alongside the rich. A student loan works on the presumption that you will come out of college with acquired knowledge as human capital, which you will then use to repay the principal plus interest. However, it is essential for students to understand the responsibility that comes with student loans and what is expected of them, as suggested by Shalom Lamm. In this article, we discuss the pros and cons associated with student loans to help students make informed decisions.
Pros Associated with a Student Loan:
It Facilitates College Fees
The value of education has increased over the years, and education is progressively becoming more expensive, making it inaccessible to much of the population. This situation is made worse by the ever-increasing cost of living, giving student loans a crucial role in the modern education system. Most students therefore have to take a loan to fund their studies.
Facilitates your Dream of Being in Your Dream School
Imagine getting an admission letter from your dream private university. You will want to take the chance despite the cost involved. Private universities are expensive, and for many families the fees are hard to pay. It is at this point that student loans come in handy to help fulfill that dream.
Student Loans Facilitate Other Things Besides Tuition and Room Fees
Many people have the perception that a student loan covers only tuition and room fees. This is not the case: student loans can also be used for other needs, such as buying textbooks, software used in school, or even a laptop. Students are therefore advised to use their loans wisely, buying things that will be of value to them.
It Can be Used to Determine Your Credit Score
For most students, a student loan is the only loan they have on record. This means that how you use the money, and how diligently you repay it, can be used to determine your credit score.
Cons Associated with Student Loans
Student loans are expensive: you repay not only the amount you borrowed but also interest on top of it, as suggested by Shalom Lamm. Repaying a student loan may also mean suspending some life goals; added to the hard cost of living, repayments minimize savings, so some goals have to wait because they cannot be financed at the moment. And if you default, your credit score is affected negatively.
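To see how the interest adds up, here is a rough sketch using the standard amortization formula. The figures (a $30,000 loan at 6% repaid over 10 years) are illustrative assumptions, not actual lender terms:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortization: payment = P * r / (1 - (1 + r)**-n),
    where r is the monthly rate and n the number of payments."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

principal = 30_000   # amount borrowed (assumed)
rate = 0.06          # 6% annual interest (assumed)
years = 10

pmt = monthly_payment(principal, rate, years)
print(round(pmt, 2))                          # -> 333.06 per month
print(round(pmt * years * 12 - principal, 2)) # total interest over the loan
```

Under these assumptions, the borrower pays back roughly $10,000 in interest on top of the $30,000 borrowed, which is the sense in which the loan is "expensive".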
When your credit score is poorly rated because of a default, you will have difficulty renting apartments or obtaining credit facilities, because your record raises doubts about your trustworthiness.
Finally, with student loans, life starts with a debt that you will take a long time to repay.
I hope the article was informative and helps you answer all the previously hard questions regarding student loans.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9641608595848083,
"language": "en",
"url": "https://www.fincap.org.uk/en/lifestages/retirement-planning",
"token_count": 182,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.09423828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:26e91ffa-ae21-4f7d-a6d8-e360321eba09>"
}
|
According to the Department for Work and Pensions, 12 million of us are not saving enough for retirement. This is an area that has undergone significant change, with the introduction of workplace pensions automatic enrolment and new freedoms around how people can use their pension savings. While automatic enrolment will see more people saving into pensions, some people will need to save more to have an adequate income in retirement. The ability to access pension money from age 55, and have the freedom to make their own income choices, may also have an impact on later life.
The Strategy reflects the unique challenges people face when saving into a pension, making decisions about retirement income, and managing retirement savings throughout life. It works with employers and leads the co-ordination of collaborative efforts to provide better access to pension information and to improve the consumer retirement journey.
Different life stages have an effect on people's ability to manage their money.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9517267346382141,
"language": "en",
"url": "https://www.genpaysdebitche.net/what-is-gridcoin-crypto/",
"token_count": 768,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1328125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:be5a2e0b-8507-401d-9329-3af7fe5a0392>"
}
|
What Is Gridcoin Crypto? – What is cryptocurrency? Basically, cryptocurrency is digital cash that can be used in place of standard currency. The name combines the Greek kryptós, meaning hidden, with currency. Cryptocurrency is essentially as old as blockchains; its distinguishing feature is that there is no central authority or ledger-keeper in place. In essence, cryptocurrency is an open-source protocol based on peer-to-peer transaction technology that runs on a distributed computer network.
One particular way in which the Ethereum Project is trying to solve the problem of smart contracts is through the Foundation. The Ethereum Foundation was created with the goal of developing software solutions around smart contract functionality, and it has released its open source libraries under an open license.
What does this mean for the larger community thinking about participating in the advancement and execution of smart contracts on the Ethereum platform? For starters, the significant distinction between the Bitcoin Project and the Ethereum Project is that the former does not have a governing board and for that reason is open to contributors from all walks of life. However, the Ethereum Project enjoys a far more regulated environment. Therefore, anybody wishing to contribute to the task needs to abide by a code of conduct.
As for the projects underlying the Ethereum Platform, both aim to give users a new way to participate in decentralized exchange. The major difference between the two is that the Bitcoin protocol does not use the Proof of Consensus (POC) process that the Ethereum Project makes use of.
On the other hand, the Ethereum Project has taken an aggressive approach to scale the network while also taking on scalability concerns. In contrast to the Satoshi Roundtable, which focused on increasing the block size, the Ethereum Project will be able to carry out enhancements to the UTX procedure that increase transaction speed and reduction costs.
The decentralized element of the Linux Foundation and the Bitcoin Unlimited Association represent a conventional design of governance that puts an emphasis on strong neighborhood participation and the promo of agreement. This design of governance has been adopted by several dispersed application groups as a way of handling their jobs.
The significant difference between the two platforms comes from the fact that the Bitcoin neighborhood is mainly self-sufficient, while the Ethereum Project expects the involvement of miners to support its development. By contrast, the Ethereum network is open to factors who will contribute code to the Ethereum software application stack, forming what is known as “code forks “.
Just like any other open source technology, much controversy surrounds the relationship between the Linux Foundation and the Ethereum Project. Although the two have adopted different perspectives on how best to use the decentralized aspect of the technology, they have nonetheless worked hard to build a positive working relationship. The developers of the Linux and Android mobile platforms have openly supported the work of the Ethereum Foundation, contributing code to protect the performance of its users. Similarly, the Facebook team is supporting the work of the Ethereum Project by providing its own structure and creating applications that integrate with it. Both the Linux Foundation and Facebook view the Ethereum project as a way to further their own interests by providing a cost-effective, scalable and reliable platform for developers and users alike.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9405125975608826,
"language": "en",
"url": "https://www.indiafilings.com/learn/competition-law-in-india/",
"token_count": 1938,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.40234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f03ece87-477c-45b3-933f-fca1e779a70b>"
}
|
Competition Law in India
India’s foray into free-market liberalization and its transition from a “command and control” based regime to that of a free economy paved the way for the annulling of the Monopolies and Restrictive Trade Practices Act, 1969 (MRTP Act), which restricted the growth of monopolies in the market. India now boasts of a contemporary competition law which is on par with established competition principles of the world. This transformation was initiated with the launch of the Competition Act in the year 2002. The Act is now known as the Competition (Amendment) Act, 2007, thanks to the amendments enacted during that year. This article provides an overview of the competition law in India.
Why Encourage Competition?
The Indian economy is consistently on the rise, albeit with a few setbacks which are a part and parcel of every economy. This economic growth is triggered by competition, as the vigor to better the competitor plays a catalytic role in unlocking the potential of growth in vital areas of the economy. A competitive yet healthy environment facilitates fair competition in the market, thereby not only propelling the national economy but the global financial system as well.
Objectives of the Act
The Competition Act is established in view of the following objectives:
- To prevent practices that are detrimental to competition.
- To promote and sustain competition in the markets.
- To safeguard the interests of the consumers.
- To ensure freedom of trade carried out by other participants.
The Establishment of a Commission and Tribunal
The Competition Commission of India (CCI) was established so as to prohibit anti-competitive agreements and abuse of dominant positions by enterprises. Moreover, the establishment aims to regulate combinations such as mergers, amalgamations or acquisitions; courtesy of a process that includes inquiry and investigation. The commission is constituted of a Chairman and other members, whose total strength could be a minimum of two and a maximum of six. Such members are appointed by the Central Government.
The amendment of the Competition Act in 2007 led to the creation of the Competition Appellate Tribunal (COMPAT). The entity was established to adjudicate appeals against the orders of the CCI and to determine the compensation claims arising out of those orders. The Tribunal no longer exists, following the changes brought into Section 401 of the Companies Act, 2013. Its powers are now vested with the National Company Law Appellate Tribunal (NCLAT), which comprises a chairperson and three judicial members.
Responsibilities of the Commission
The Commission is empowered to:
- Eliminate practices which are detrimental to competition, promote and sustain competition, safeguard the interest of the consumers and ensure freedom of trade by other participants.
- Inquire into matters of concern.
- Issuance of interim orders in case of anti-competitive agreements and abuse of dominant position, which would temporarily restrict any party from pursuing such an Act.
- Competition Advocacy i.e.; to provide a clearer depiction of the provision, the Central or State Government may make a reference to the commission while formulating any policy on competition or other relevant affairs. The reference may state the opinion on the possible effect of such policy on Competition, though the opinion of the Commission isn’t binding on the Central Government.
The Amendments in Brief
The Competition Act, which crept into the framework of the Indian constitution in 2003, was amended in 2007 on the backdrop of economic developments and liberalization; prompting the Indian fraternity to allow both international and domestic competition into its market. The amendment resulted in the following:
- The CCI was designated as a regulator for the prevention and regulation of anti-competitive practices in the country as per the discretions of the Act.
- The CCI must be notified of any merger or combination through a notice issued within 30 days. Penal consequences are prescribed for failure to issue the notice.
- A Competition Appellate Tribunal was established, but the entity ceased to exist in due course of time.
Note: The Act was amended again in 2009 vide the Competition (Amendment) Act, 2009. The amendment resulted in the transfer of cases from the Monopolies and Restrictive Trade Practices Commission to the Competition Appellate Tribunal and National Consumer Protection. In the absence of the Tribunal, the affairs are now co-managed by NCLAT, in association with National Consumer Protection.
Elements of Competition Law
Competition law consists of the following major elements:
- Anti-competitive Agreements
- Abuse of Dominance
- Merger, amalgamations and acquisitions control
- Competition Advocacy
Anti-competitive agreements are contractual obligations that restrict competition. The provisions pertaining to the same are provided in Section 3 of the Competition Act, 2002. The Act prohibits any agreement connected with production, supply, distribution, storage, and acquisition or control of goods or services as it may cause an appreciable adverse effect on the competitive affairs of India.
As per Section 3(2) of the Competition Act, 2002, any anti-competitive agreement within the meaning of Section 3(1) is void. The entire agreement is deemed void on the existence of anti-competitive clauses which have an appreciable adverse effect on competition.
Here’s a list of stipulations stated under this provision:
- No enterprise; association of enterprises; person or association of persons are allowed to enter into any agreement associated with the production, supply, distribution, storage, acquisition or control of goods/provision/services as it may potentially cause an appreciable adverse effect of competition within India. Any agreement that contravenes these provisions shall be declared as void.
- Any agreement between enterprises or associations of enterprises; or persons or association of persons; or between any person and enterprise.
- Any agreements amongst enterprises or persons at various stages or levels of production chain in different markets connected with production, supply, distribution, storage, sale or price of, or trade in goods or provision of services.
Abuse of Dominant Position
Dominant position, as specified in the Act, is a position of strength enjoyed by an enterprise in the relevant Indian market. It facilitates:
- The independent conduct of operations of competitive forces in the relevant market.
- The influencing of competitors, consumers or relevant market in its favor.
No enterprises or groups are permitted to abuse their dominant position. Section 4(2) of the Act specifies the following Acts as an abuse of a dominant position:
- Direct or indirect imposition of unfair or discriminatory conditions or price in the sale and procurement of goods or services.
- Curtailment of the production of goods or services, or of technical or scientific development relating to the goods or services.
- Involvement in a practice that results in the denial of market access.
- Making the conclusion of contracts subject to acceptance by other parties of supplementary obligations which have no connection with the subject of such contracts.
- Utilization of the dominant position in one relevant market to either enter into or protect another relevant market.
The powers of determining the eligibility of any enterprise or group to enjoy a dominant position is vested with the Competition Commission of India (CCI). It may be noted that the mere existence of dominance must not be categorized as a dominant position unless the dominance is abused.
Merger, Amalgamation and Acquisition Control
Section 6 of the Competition Act, 2002, prohibits a person or enterprise from entering into a combination that could cause an appreciable adverse effect on competition within the relevant Indian market. A person or enterprise that proposes to enter into a combination must give notice to the Competition Commission of India disclosing the details of the proposed combination. The notice must be submitted with the prescribed fee within 30 days of the approval of the proposal relating to the merger or amalgamation, or of the execution of any agreement or other document required for acquisition purposes.
Again, the CCI is the body that is empowered to determine whether the disclosure made under Section 6(2) of the Act is appropriate, or whether the combination may potentially have an appreciable adverse effect on the competitive affairs of India.
Competition advocacy forms an integral part of the Competition Law. The provision facilitates the Central Government/State Government to avail the opinion of the CCI on the potential implications of the policy on competition or other affairs. For this purpose, the concerned governments may make a reference to the CCI, upon the receipt of which the latter will issue its opinion within 60 days. The government may then formulate a policy which it considers to be appropriate.
The Act also entitles the CCI to adopt suitable measures for the promotion of competition advocacy. Moreover, it is aimed at creating awareness and rendering training of competitive issues.
Competition Law and Competition Policy
Competition law is a subset of competition policy. The latter consists of competition laws that prohibit anti-competitive conduct by business and special regulatory laws which are meant for analysing scenarios of market failure.
The Factor of Confidentiality
The CCI should maintain its confidentiality over the information received by it, considering that it is commercially sensitive and its disclosure may hamper the efficient performance of the business. As stated in Section 57 of the Act, no information pertaining to any enterprise can be disclosed without obtaining a written consent from the enterprise, except on certain specified occasions.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9501706957817078,
"language": "en",
"url": "https://www.rand.org/blog/2020/05/weekly-recap-may-15.html",
"token_count": 704,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.3046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f63d95b4-ec0a-43c4-9109-994b13bf7193>"
}
|
This week we discuss how to make a COVID-19 vaccine accessible and affordable; what more can be done to address a rise in domestic violence; the pitfalls of poor data analysis; the pandemic's historic economic effects; planning hospital needs for ventilators and respiratory therapists; and what the future of warfare might look like.
The race to develop a vaccine to combat COVID-19 is on. Optimistic projections peg an approved vaccine in a matter of months, but most experts don't expect one to be available until the middle of 2021. However long it takes, there's little time to lose in devising a blueprint to ensure that the vaccine is accessible and affordable for all, say RAND experts. Important challenges to address include the complex issues of financing, intellectual property rights, and global production.
One of the most worrying and consistent trends during the pandemic is an increase in domestic violence. Stay-at-home orders force victims to remain under the same roof as their abusers and can also make it harder to get help. Governments across the globe are taking different approaches to address this problem. But according to RAND experts, more needs to be done. Although securing adequate resources for support services is vital, it's important to acknowledge that family members and friends can also help—by reaching out to make sure their loved ones are safe.
The rapid spread of COVID-19 has forced world leaders to make difficult public health decisions based on incomplete information. At the same time, an overabundance of new data sources and the free sharing of information has made it easier to draw spurious conclusions. RAND researchers have highlighted two recent examples to showcase the pitfalls of incomplete analysis and insufficient data. These illustrate why analysts, journalists, and policymakers must apply the best methods possible and be clear about the limitations of findings.
Numbers released last week indicate that the U.S. unemployment rate soared in April to its highest level since the Great Depression. But even if this hadn't happened, the economic effects of the pandemic would still be historic. This is because the unemployment rate doesn't tell the whole story. According to RAND experts, to get a complete view of the seismic downturn Americans are living through today, it's important to consider other factors. For example, discouraged workers—those who are not currently looking for work—are left out of unemployment rate calculations.
During the COVID-19 crisis, many hospitals have run short on ventilators, as well as the respiratory therapists who operate them. RAND experts developed a model that can help administrators prepare for and respond to these shortages. The model can be used to assess ventilator needs, allocate patients or resources efficiently across hospitals, and drive protocol decisions. It could also help states develop guidelines for ventilator management during pandemics.
Where will the next war occur? Who will fight in it, and why? How will it be fought? A new series of RAND reports seeks to answer these questions by examining the many factors that shape conflict, including trends in geopolitics, the global economy, and even climate change. The authors find that the United States will face a grand strategic choice: become more selective about committing its forces, or maintain or even double down on its commitments, knowing that doing so will come with significantly greater cost—in treasure and, perhaps, in blood.
Listen to the Recap
Get Weekly Updates from RAND
If you enjoyed this weekly recap, consider subscribing to Policy Currents, our newsletter and podcast.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9473056793212891,
"language": "en",
"url": "https://www.technologyreview.com/2020/06/16/1002974/what-1918-can-teach-us/",
"token_count": 212,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.294921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e3e640e9-de33-45f0-b2c4-8b2117cbef75>"
}
|
In the face of a debate about when the US might “reopen” commerce to limit economic fallout from the covid-19 pandemic, a study coauthored by MIT Sloan economist Emil Verner shows that restricting economic activity to protect public health actually generates a stronger economic rebound.
Using data from the flu pandemic that swept the US in 1918–19, the working paper finds cities that did more to limit social and civic interactions had more economic growth later. Indeed, cities that implemented social distancing and other interventions just 10 days earlier than their counterparts saw a 5% relative increase in manufacturing employment after the pandemic ended, through 1923. Similarly, an extra 50 days of social distancing was worth a 6.5% increase in manufacturing employment.
Verner says the implications are clear: “It casts doubt on the idea there is a trade-off between addressing the impact of the virus, on the one hand, and economic activity, on the other hand, because the pandemic itself is so destructive for the economy.”
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9427016973495483,
"language": "en",
"url": "https://courses.lumenlearning.com/suny-internationalbusiness/chapter/overview/",
"token_count": 144,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.061767578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5ef36524-c6a7-4311-9730-94b957e68eb5>"
}
|
Both the IMF and the World Bank act as global outreach partners to assist both developed and developing countries in areas that range from Economic Management to Rule of Law to Human Development. Although controversial at times, these two organizations and their over 180 members have touched almost every country on the globe, either directly or indirectly. In this module we will examine the difference between the two organizations and look at some of the projects they have undertaken.
After you complete the required assignments you will be able to:
- Illustrate the history of the WTO and its contribution to international business
- Identify ways that the World Bank and IMF provide support to global business ventures
- Identify the impact global organizations have on business decisions to trade internationally
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9378005266189575,
"language": "en",
"url": "https://howtodiscuss.com/t/routing-number/10655",
"token_count": 174,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0181884765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:805db9f9-a117-4340-82ef-a51c46db2968>"
}
|
Definition of Routing number:
A series of numbers, associated with a check, savings account or other bank account, that the financial institution has linked to that account. This number is used to determine the destination of a money transfer. Anyone wishing to set up a direct deposit with a company will usually be required to provide an account number and a routing number.
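U.S. (ABA) routing numbers are nine digits and carry a built-in checksum using 3-7-1 weights, so a candidate number can be sanity-checked before use. A minimal sketch (the sample numbers below are illustrative test values):

```python
def is_valid_routing_number(rn: str) -> bool:
    """Validate a 9-digit ABA routing number using the 3-7-1 checksum.

    The weighted sum 3*(d1+d4+d7) + 7*(d2+d5+d8) + (d3+d6+d9)
    must be divisible by 10 for a well-formed routing number.
    """
    if len(rn) != 9 or not rn.isdigit():
        return False
    d = [int(c) for c in rn]
    total = 3 * (d[0] + d[3] + d[6]) + 7 * (d[1] + d[4] + d[7]) + (d[2] + d[5] + d[8])
    return total % 10 == 0

print(is_valid_routing_number("021000021"))  # True  (checksum 30, divisible by 10)
print(is_valid_routing_number("123456789"))  # False (checksum 159)
```

Note that the checksum only catches malformed numbers; it cannot tell you whether a valid-looking number belongs to any particular bank.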
How to use Routing number in a sentence?
- Ken wants to pay taxes online for an immediate refund, but after forgetting his routing number, he needs to submit taxes and documents.
- By obtaining a bank routing number, thieves can access the manager's bank account and divert cash from the account as soon as the management company deposits money into it.
- Customers need to know the classification code for their bank account to save transactions in the correct account.
Meaning of Routing number & Routing number Definition
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9327558279037476,
"language": "en",
"url": "https://jbdcolley.com/company-valuation-value-company/",
"token_count": 1623,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1142578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:7b0ea512-9d9d-4a6b-8a19-0d7985be9e6b>"
}
|
Company Valuation need not be difficult. There are essentially three main ways to value a company:
- Balance sheet methods which include book value, adjusted book value and liquidation value
- Profit and Loss methods including profit multiples, price earnings ratio, sales multiples, EBITDA multiples
- Discounted Cash Flow methods, including cash flow to equity and free cash flow
There are two other methods, which we will not be discussing in this lecture. They are:
- Value Creation such as Economic Value Added (EVA)
- Options methods including Black Scholes
Lets consider why we would want to value a company?
It is important to make the distinction between the perspective of the buyer and the seller. The value will also differ for different buyers.
For a buyer, the valuation process will establish the maximum price he will be prepared to pay for the business.
For the seller, the valuation will show him the minimum price he will be prepared to receive for the company.
Do not confuse the value of a company with the price a buyer may be prepared to pay for it. The two things are very different.
The Purposes of a Valuation
A valuation of a company will be useful in a range of circumstances. These include:
- In an M&A transaction helping the buyer and seller separately to come to a view and ultimately an agreement on the price.
- When valuing Listed companies to establish whether they are either over valued or under valued compared to their market value.
- When preparing a company for an initial public offering (IPO) on a stock market, which will be used to help the company correctly price its shares
- For evaluating the financial performance of senior management under an incentivisation programme.
- To help identify strategic drivers in a business
- To assist management teams and their advisers in the strategic planning process.
Balance Sheet Methods of Valuation
This is the first of the main valuation methods which we highlighted above. The methodology is based on valuing the value of the assets of the business.
Book Value considers the net worth attributable to the holders of equity and is the net difference, hopefully positive, between the assets and liabilities of the business.
Adjusted book value attempts to take account of some of the potential pitfalls which may occur as a result of accounting conventions. These include:
- Adjusting or updating the value of land and property
- Adjusting for the impact of bad debts
- Removing the overstatement of value caused by obsolete stock
A different perspective may also be gained by considering the Liquidation value of a business. This assumes that the assets are sold, potentially at short notice and at conservative values, with the proceeds used to pay back all the liabilities of the business. This is a way of estimating the minimum value of the company.
In Summary, book value has little actual correlation with market value and, in my view, is not a very robust way to value a business.
Profit and Loss Account Methods
These methods focus on the sales and profits of the business to establish a value for the company.
One of the most common methods used in the public markets is the Price Earnings Ratio, which expresses the company's share price as a multiple of its earnings per share. This shows how highly the market values the company on a per share basis and provides a very useful, if simple, comparative ratio when looking at one or more companies.
You can of course reverse this to arrive at a value by multiplying the earnings of the company by the ratio.
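Both directions of the ratio can be sketched in a few lines (the figures below are illustrative assumptions, not market data):

```python
def pe_ratio(market_cap: float, net_income: float) -> float:
    """Price/earnings: how many times earnings the market pays for the company."""
    return market_cap / net_income

def implied_value(net_income: float, peer_pe: float) -> float:
    """Reverse the ratio: value a company from its earnings and a peer multiple."""
    return net_income * peer_pe

# A company earning 10m per year, where comparable listed peers trade at 15x earnings:
print(implied_value(10_000_000, 15.0))  # 150000000.0
print(pe_ratio(150_000_000, 10_000_000))  # 15.0
```

The choice of peer multiple does all the work here, which is why comparable-company selection matters more than the arithmetic.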
Shareholders may receive a dividend from the company, and this cash flow and its future projections can be used to derive a value for the business. The dividend is treated as a cash perpetuity and discounted back to the present, using the same discounting method that we will meet when considering discounted cash flow methods of valuation.
The dilemma for a business is that paying dividends distributes cash to shareholders that might otherwise be used to grow the business. This method would therefore assign a higher value to high-dividend-paying companies when, in reality, lower-dividend businesses may grow faster and ultimately be worth more.
For years Microsoft did not pay a dividend for exactly this reason.
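Under its assumptions, the dividend-as-perpetuity approach described above reduces to a one-line formula (the Gordon growth model). A minimal sketch, with illustrative inputs:

```python
def dividend_discount_value(next_dividend: float, discount_rate: float, growth: float) -> float:
    """Gordon growth model: value a dividend stream as a growing perpetuity.

    Valid only when the discount rate exceeds the growth rate; otherwise
    the perpetuity does not converge.
    """
    if discount_rate <= growth:
        raise ValueError("discount rate must exceed the growth rate")
    return next_dividend / (discount_rate - growth)

# Next year's dividend of 2.00 per share, 8% required return, 3% dividend growth:
print(dividend_discount_value(2.0, 0.08, 0.03))  # ≈ 40.0 per share
```

The sensitivity is worth noting: nudging growth from 3% to 4% raises the value from 40 to 50, which illustrates the chapter's warning about this method favouring high-dividend assumptions.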
Sales multiples are a somewhat crude method of company valuation and are common practice “rule of thumb” ways to value a business within a certain industry.
While this can be used informally when discussing comparative valuations for two businesses in the same sector, this method provides little more than an indicator of value.
Other sales multiples can be derived from different profit multiples.
The two most common are EBIT – Earnings before Interest and Tax
EBITDA – Earnings before Interest, Tax, Depreciation and Amortisation.
The latter is particularly useful in M&A and Private Equity transactions because it values the business independently of its financing structure by removing the costs of Interest, Tax and historical goodwill and acquisition depreciation, the latter two being purely book entries with no impact on cash flow.
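In practice an EBITDA multiple gives an enterprise value, from which net debt is deducted to reach the value attributable to shareholders. A sketch with illustrative figures (the 8x multiple and net-debt amount are assumptions, not market data):

```python
def enterprise_value(ebitda: float, multiple: float) -> float:
    """Value the operations independently of financing: EV = EBITDA x multiple."""
    return ebitda * multiple

def equity_value(ev: float, net_debt: float) -> float:
    """Bridge from enterprise value to the value attributable to shareholders."""
    return ev - net_debt

ev = enterprise_value(5_000_000, 8.0)  # peers assumed to trade at 8x EBITDA
print(equity_value(ev, 12_000_000))    # 28000000.0
```

This EV-to-equity bridge is why two companies with identical EBITDA can have very different share prices: the debt sits between the multiple and the shareholders.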
Technology Company Valuations can be compromised when they are valued very highly compared to conventional approaches reflecting anticipated stellar growth
In these cases, it is necessary to resort to the rather simplistic approach of pure sales multiples on the basis that there are seldom any earnings to value!
Discounted Cash Flow is one of the most robust valuation methods when its component parts are calculated correctly.
The underlying basis is to discount the free cash flow forecast back to the present to arrive at a value of the business today.
The methodology is very sensitive to some key assumptions
- The forecast earnings growth of the business into the future
- The discount rate used to bring this forecasted cash flow back to the present
- The number of years in the forecast, typically 5 to 10.
- The assumption of the terminal value of the business in its final year.
The Free Cash Flow that is measured is the Unlevered Cash Flow, which is to say that the costs of debt financing are removed from the forecast, enabling the business to be valued irrespective of its financing structure.
There are several steps to creating a cash flow forecast
1. Create an integrated profit and loss, balance sheet and cash flow model of the business
2. The model should ideally calculate the cash flows for at least 10 years or the modeller runs the risk that the terminal value will be a disproportionately large part of the valuation
3. Calculate the terminal value in year 10.
4. Using an appropriate discount rate, discount the cash flow back to the present time.
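The four steps above can be sketched in a simplified, single-stage form (five forecast years and a Gordon-style terminal value; all inputs are illustrative assumptions):

```python
def dcf_value(free_cash_flows: list, discount_rate: float, terminal_growth: float) -> float:
    """Present value of a forecast of unlevered free cash flows plus a terminal value.

    The terminal value capitalises the final-year cash flow as a growing
    perpetuity, then is discounted back alongside the explicit forecast.
    """
    pv_forecast = sum(
        cf / (1 + discount_rate) ** t
        for t, cf in enumerate(free_cash_flows, start=1)
    )
    last = free_cash_flows[-1]
    terminal = last * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv_terminal = terminal / (1 + discount_rate) ** len(free_cash_flows)
    return pv_forecast + pv_terminal

# Five years of forecast free cash flow (in millions), 10% discount rate, 2% perpetual growth:
print(round(dcf_value([10, 11, 12, 13, 14], 0.10, 0.02), 1))  # ≈ 155.6
```

Note how the terminal value contributes roughly 110 of the 155 total here even with a five-year forecast, which is exactly the disproportionality risk flagged in step 2 above.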
The Discount Factor used is the Weighted Average Cost of Capital of the business.
The theory behind this is the Capital Asset Pricing Model if you want to investigate this in more detail right now. For the moment, it is enough for us to consider that the method calculates a discount rate for the equity and debt separately and then combines them in a weighted ratio depending on the mix of debt and equity financing the company.
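A minimal sketch of that weighted calculation (the rates, beta and capital mix below are illustrative; real inputs must be estimated from market data):

```python
def cost_of_equity(risk_free: float, beta: float, market_premium: float) -> float:
    """CAPM: required equity return = risk-free rate + beta x market risk premium."""
    return risk_free + beta * market_premium

def wacc(equity: float, debt: float, cost_eq: float, cost_debt: float, tax_rate: float) -> float:
    """Weighted average cost of capital; debt is weighted after tax relief on interest."""
    total = equity + debt
    return (equity / total) * cost_eq + (debt / total) * cost_debt * (1 - tax_rate)

ke = cost_of_equity(0.03, 1.2, 0.05)           # 3% risk-free, beta 1.2, 5% premium -> 9%
print(round(wacc(60, 40, ke, 0.06, 0.25), 4))  # 60/40 equity-debt mix -> 0.072
```

Because the end valuation is so sensitive to this rate, small disagreements over beta or the market premium can move a DCF valuation by double-digit percentages.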
When preparing a Discounted Cash Flow valuation, the following factors must be given careful consideration:
The future cash flows will be heavily dependent on the assumed returns on future investments and the assumed growth rate for the company’s sales.
The Return on Equity, represented by the Equity portion of the WACC, comprises:
- A risk free rate
- A market risk premium scaled by beta
- A further discount fact for company specific operating risk and
- A discount factor for the company’s financial risks.
The calculation of this discount rate can be highly contentious and the end valuation is very sensitive to the assumptions made in its calculation.
The Break Up Value considers the value of the company as a sum of the parts of it’s different businesses where their values are independently assessed. This assumes that the separate businesses would be sold on a going concern basis.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9015746116638184,
"language": "en",
"url": "https://rhfp.xn--38-6kcyiygbhb9b0d.xn--p1ai/cryptocurrencies-blockchain-and-ico.php",
"token_count": 3079,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.119140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:4d8bfb34-4dac-457d-b8e4-6655913e4ef9>"
}
|
Cryptocurrencies Blockchain And Ico
Blockchain, Cryptocurrencies, ICO – Learn the basics The use of gold as money began thousands of years ago, as gold was the most resistant to aging and the elements. During the early and high Middle Ages, the Byzantine gold Solidus was the de facto standard throughout Europe and the Mediterranean.
· ICO, Blockchain and Cryptocurrencies: Impact on Different Spheres. · A new trend in cryptocurrencies is the "initial coin offering," or ICO. ICOs raised $ billion in the first nine months of the year, according to one industry estimate.
These are not bitcoins, but essentially a new digital currency used to fund a specific product. ICOs work like this: A company raises capital by selling virtual coins or tokens. ICOs have provided blockchain entrepreneurs with a means to raise funds, not by traditional routes, but directly online; instead of selling shares or securities, ICO issuers offer digital “coins” or “tokens”.
An investor can typically only buy ICO tokens with established cryptocurrencies (usually Bitcoin or Ethereum). There has been a hesitant take-up of cryptocurrencies and ICOs by family offices. Note: this section is a rudimentary explanation of the basics of blockchain technology and cryptocurrencies.
However, it should give you a basic understanding of how it works. Back when Bitcoin blasted into the mass media with its rapid rise (and eventual downfall), the general public was exposed to the idea of the blockchain.
Richly illustrated with original lecture slides taught by the authors, Inclusive FinTech: Blockchain, Cryptocurrency and ICO hopes to dispel the many misconceptions about blockchain and cryptocurrencies (especially bitcoin and the Initial Crypto-Token Offering, or ICO), as well as the idea that businesses can be sustainable without a social dimension. · A cryptocurrency or ICO whitepaper is the foundational document for that project.
The whitepaper should lay out the background, goals, strategy, concerns, and more. An initial coin offering is similar in concept to an initial public offering (IPO): both are processes in which companies raise capital, while an ICO is an investment that gives the investor a digital token instead of shares.
Both are examples of a widespread application, which is why cryptocurrencies are frequently compared to email. An introduction to distributed ledger technology, blockchain and cryptocurrencies. Types of ICOs, tokens and the ICO process. Trading cryptocurrencies.
Latest Cryptocurrency, ICO & Blockchain News | CoinDelite
Regulation, strategy, the state of the technology, and real-world use cases and applications. For more information, please feel free to contact us. · Cryptocurrencies and ICO tokens are intangible property. Ownership of an intangible cryptocurrency coin or an ICO token is controlled with two keys: a public key and a private key. The public key generally identifies specific coins or tokens. Every coin or token address has a matching private key. The private key proves that you are the owner.
61 The private key proves that you are. Blockchain: the ICO fever must be reduced This form of funding, which allows small business to reach millions via blockchain and cryptocurrencies, is creating a dangerous bubble driven by speculation. · Clash of the Cryptocurrencies. Other coins and blockchain projects claim different unique selling points. Ripple, for example, is targeted at use in. · The SEC and CFTC on Blockchain, Cryptocurrencies and ICOs. He went on to warn the ICO market that “those who engage in semantic gymnastics or elaborate structuring exercises in an effort to avoid having a coin be a security are squarely within the crosshairs of.
· Involvement in cryptocurrencies, digital assets or blockchain would be considered a material change. FinCEN and the SEC Weigh In: In a speech at a blockchain conference on August 9, FinCEN director Kenneth A. Blanco was less than positive on the state of compliance of money transmitter businesses such as cryptocurrency exchanges.
Top Five Cryptocurrencies Experts Talk about Bitcoin, Blockchain and ICO’s Yes, cryptocurrencies, again.
Valuation of Cryptocurrencies and ICO Tokens for Tax ...
It’s the talk of the day, a live history and, stay in your seat, but perhaps an end of. · LAToken is a blockchain platform for trading asset tokens. It allows cryptoholders to diversify their portfolio by getting access to tokens linked to prices of real assets.
LAToken enables asset owners to unlock the value of assets by creating and selling their asset tokens.
Hidden ICO Gem of 2020 - Launching Soon - Price Prediction potential 30X -🚀🚀🚀
As a result, cryptocurrencies will be widely used in the real economy. Cryptocurrencies typically use decentralized control as opposed to centralized digital currency and central banking systems. When a cryptocurrency is minted or created prior to issuance or issued by a single issuer, it is generally considered centralized.
A blockchain is a continuously growing list of records. An initial coin offering (ICO) is a controversial means of raising funds. Blockchain, Cryptocurrency & ICO: It has been a decade since Bitcoin was introduced, and we are continuing to see the rapid increase of cryptocurrencies throughout Australia and around the world. Unfortunately, legislation and regulation surrounding blockchain, cryptocurrency and ICOs remain mostly undefined and unregulated due to the sudden rise of the technology. Bitcoin blockchain structure: A blockchain, originally "block chain", is a growing list of records, called blocks, that are linked using cryptography.
Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a Merkle tree). By design, a blockchain is resistant to modification of its data: once recorded, the data in any given block cannot be altered retroactively without altering all subsequent blocks.
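The structure just described — each block carrying a hash of the previous block, a timestamp, and transaction data — can be illustrated with a toy sketch. This is a minimal teaching example, not a real cryptocurrency: there are no Merkle trees, mining, or consensus, and all names and transactions are invented.

```python
import hashlib
import json
import time

def make_block(prev_hash, transactions, timestamp=None):
    """A toy block: hash of the previous block, a timestamp, and data."""
    block = {
        "prev_hash": prev_hash,
        "timestamp": timestamp if timestamp is not None else time.time(),
        "transactions": transactions,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def is_valid(chain):
    """Tampering with any block breaks every later prev_hash link."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"]:
            return False
    return True

# Build a tiny chain: a genesis block, then two more.
chain = [make_block("0" * 64, ["genesis"])]
chain.append(make_block(chain[-1]["hash"], ["alice -> bob: 5"]))
chain.append(make_block(chain[-1]["hash"], ["bob -> carol: 2"]))
print(is_valid(chain))  # True

# Tamper with block 1 and recompute its hash; block 2 still points at
# the old hash, so the chain no longer validates.
chain[1]["transactions"] = ["alice -> mallory: 5"]
payload = json.dumps({k: chain[1][k] for k in ("prev_hash", "timestamp", "transactions")},
                     sort_keys=True).encode()
chain[1]["hash"] = hashlib.sha256(payload).hexdigest()
print(is_valid(chain))  # False
```

The second print shows the "resistant to modification" property: changing one block invalidates every block after it.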
On CoinCodex, you can find crypto prices for over cryptocurrencies, and we are listing new cryptocurrencies every single day.
Tether (USDT) Doing a Great Job Hedging Several ...
What is an ICO? ICO stands for Initial Coin Offering and refers to a method of raising capital for cryptocurrency and blockchain-related projects. We simply aim to help reorganize publicly available information to better educate and inform the public on advances in blockchain technology, and we provide an educational forum to explore crypto projects working in the blockchain space. Coinist is a cryptocurrency and ICO data and news portal, discussion forum and content aggregator.
Hence, it is the task of this book to shed light on the introduction and trends in FinTech, blockchain and token economics. Richly illustrated with original lecture slides taught by the authors, Inclusive FinTech: Blockchain, Cryptocurrency and ICO hopes to dispel the many misconceptions about blockchain and cryptocurrencies (especially bitcoin). Blockchain and cryptocurrencies will change the world as we know it, but most people still fail to understand the concept.
We are here to help. In a series of articles, we try to cover Topics, that include: The history behind cryptocurrencies, blockchains, and Ethereum; The process behind digital, public ledgers; The blockchain mining process.
Blockchain Companies & Crypto Startups: Nowadays, it is obvious that crypto traders do not always have the time or knowledge to learn all the important aspects of investing in crypto projects, even when they are easy to understand. In a market as volatile as the crypto market, the psychological aspects behind ICOs, DAICOs, STOs, ETOs, or IEOs can play a greater role than anything else.
· Exponential Growth: What Research Into Blockchain and Cryptocurrencies Tells Us About Law Practice Disruption. More than $11 billion was raised through ICOs.
· Changing the world one hash at a time, rhfp.xn--38-6kcyiygbhb9b0d.xn--p1ai is a news site for the latest in blockchain, bitcoin, ethereum, market updates, innovations in tech, and ICO analyses Cryptocurrency Archives - Cryptos - the latest news in Crypto, Blockchain, and ICO.
Initial Coin Offering (ICO) and How it Works: An ICO is a means for a company to raise funds to promote or fund a project; all interested investors have the opportunity to buy into the offering and receive a new blockchain token issued by the company. We are an accomplished team of technocrats dealing with blockchain and cryptocurrencies, backed by sound, experienced legal professionals who are masters of the subject.
We have a network of international blockchain and ICO experts who can successfully help in bringing an ICO to market. We provide legal assistance to you and answer your queries. Cryptocurrency Investing Bible: The Ultimate Guide About Blockchain, Mining, Trading, ICO, Ethereum Platform, Exchanges, Top Cryptocurrencies for Investing and Perfect Strategies to Make Money – Kindle edition by Norman, Alan T.
How to Identify Cryptocurrency and ICO Scams
Download it once and read it on your Kindle device, PC, phone or tablet. Use features like bookmarks, note taking and highlighting while reading. · With the Bitcoin bubble testing astronomical prices every day, cryptocurrencies and the blockchain technology that drives them are currently taking their turn in this one-tech-fits-all role. A blockchain is a cryptographically protected dispersed ledger – it is what protects you or anyone else from creating a copy of that Bitcoin you just purchased.
Blockchain, Cryptocurrency and Initial Coin Offering (ICO) Fraud and the SEC Whistleblower Program. Money raised from initial coin offerings (ICOs) has exceeded $20 billion. While blockchain technology and cryptocurrencies can help prevent fraud, they can also be used to perpetrate it. As the SEC warned in a recent investor bulletin, "[f]raudsters often use innovations and new technologies to perpetrate fraudulent investment schemes."
· Travel Tech Firm Monaker Group Eyes Regulated Thai ICO Market.
We offer expert Blockchain solution to make your ICO launch successful.
What is an ICO?
ICO stands for Initial Coin Offering and it is one of the most efficient and reliable sources to generate funds and investments for SMEs and entrepreneurs. At Smart Crypto Solution, we support companies by providing customized ICO development and ICO launching consulting.
· Cryptocurrencies are essential for any business. Traditional finances have many loopholes that only a robust system for dealing with cryptocurrencies can fill.
Cryptocurrencies Blockchain And Ico: Family Offices: Cryptocurrencies, ICOs And Blockchain ...
The three following points further support the argument as to why businesses need crypto: Compared to routine and standard banking services, blockchain transactions are considered more. The core blockchain, cryptocurrency developers and the community related to the launch of the ICO will suggest an initial allocation of tokens which will be developed by using blockchain protocol. Smart contracts are developed and released by using blockchain technology to set aside for future blockchain protocol development/5(18).
We love blockchain and cryptocurrencies, but we also love digging deeper to find the real news. You won't find paid-for, sponsored posts here. Discover the latest Bitcoin news updates, ICO news, and upcoming cryptocurrency platforms. Stay updated with the most up-to-date events in blockchain technology and Bitcoin regulations.
Compare various cryptocurrency prices and learn about upcoming crypto startups. · Tether (USDT) Doing a Great Job Hedging Several Cryptocurrencies in the Blockchain Space. Tether claim that they are the prime example of how global markets can operate more efficiently by leveraging blockchain technology, and that they represent a payments rail that's actually built for the future of.
KodakCoin would underpin a rights-management platform for photographers based on blockchain. The idea had real merit, but eventually, interest in cryptocurrencies faded, ICOs became a target for US regulators, and the launch never took place. It is not uncommon for brands to hop on a bandwagon as soon as it starts trending.
· rhfp.xn--38-6kcyiygbhb9b0d.xn--p1ai is a brand new platform about blockchain and cryptocurrencies. Our goal is to become one of your main information providers about this environment. At rhfp.xn--38-6kcyiygbhb9b0d.xn--p1ai we focus on unique content in the form of in-depth articles and interviews about blockchain and cryptocurrencies.
- Coinisseur: cryptocurrencies & blockchain, In-depth ...
- Blockchain Technology, Bitcoins, Cryptocurrencies
- ICO List 2020: 5700 Blockchain Companies | ICOs, IEOs & STOs
- How Businesses Can Work with Cryptocurrencies Using a ...
- CryptiBIT – ICO Landing Page, ICO Consulting, Bitcoin ...
CryptiBIT – IEO, ICO Landing Page, ICO Consulting, Bitcoin, Blockchain and Cryptocurrency WordPress Theme. Cryptocurrency widgets, ICO elements and more. Resources: Bitcomio is a digital media platform that aims to cover high-quality free guides and tutorials on cryptocurrency and blockchain for beginners. Blockchain is the network upon which most of these cryptocurrencies operate.
The history of blockchain and bitcoin, in particular, does not have a definite story.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9300734996795654,
"language": "en",
"url": "https://smallbusiness.chron.com/audit-programs-cash-receipts-18355.html",
"token_count": 685,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.03564453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:98b76846-a1a9-436a-b82d-84cd321b432a>"
}
|
Audit Programs for Cash Receipts
Developing an audit program that includes thorough procedures to audit cash balances, or cash receipts, is important because cash errors can affect an organization's profits. The auditor must perform audit procedures on the company's internal controls to detect any shortcomings. He must also perform substantive tests of details, tests to detect lapping, and analytical procedures so that he can conclude that the cash balances are reasonably accurate.
The internal controls pertinent to cash receipts transactions safeguard the organization from theft. Internal control mechanisms the auditor should check for include documents that establish accountability for the reception of cash and completion of bank deposits, an accurate daily cash summary and deposit slip, requiring daily journal entries that post the amount received to customer accounts and appropriate segregation of duties. This is the first procedure in auditing cash receipts and cash balances.
Tests to Detect Lapping
Lapping is the deliberate misappropriation of cash receipts and one of the most common means by which an employee or manager can steal from an organization. Usually the auditor only performs these tests when he believes there is a lack of internal controls. The auditor should first ensure that there is an appropriate segregation of duties, so as to best prevent lapping from occurring. He should also confirm accounts receivable balances with customers, compare the details of cash receipts with journal entries and corresponding bank deposit slips and make a surprise count of the cash on hand.
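One of the comparison steps above — matching cash receipt records against bank deposit slips — can be sketched as a simple reconciliation. The dates and amounts below are invented for illustration:

```python
# Toy illustration of one lapping test: compare daily cash receipt totals
# against bank deposit slips and flag any dates that do not reconcile.

receipts = {"2021-04-01": 1250.00, "2021-04-02": 980.50, "2021-04-03": 1410.00}
deposits = {"2021-04-01": 1250.00, "2021-04-02": 900.50, "2021-04-03": 1410.00}

def unreconciled(receipts, deposits):
    """Return (date, received, deposited) for every date where the
    recorded receipts and the bank deposit differ."""
    dates = sorted(set(receipts) | set(deposits))
    return [(d, receipts.get(d, 0.0), deposits.get(d, 0.0))
            for d in dates
            if abs(receipts.get(d, 0.0) - deposits.get(d, 0.0)) > 0.005]

for date, received, deposited in unreconciled(receipts, deposits):
    print(f"{date}: received {received:.2f}, deposited {deposited:.2f}")
# -> 2021-04-02: received 980.50, deposited 900.50
```

A flagged date like the one above would prompt the auditor to trace the individual receipts for that day to the journal entries and the deposit slip.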
Analytical procedures help the auditor note any obvious discrepancies or errors before performing tests of details. However, these procedures do not provide any significant assurance for the auditing team or management. These types of procedures include comparing cash balances with forecasts and budgets. When cash balances greatly exceed or fall below expectations for the year, it should place the auditor on alert for items to look for during the tests of details. Other analytical procedures include reviewing company policies regarding minimum cash balances and the investment of surplus cash.
Tests of Details
There are two types of tests of details that the auditor performs when auditing cash balances -- tests of details of transactions and tests of details of balances. During the tests of details of transactions, the auditor traces bank transfers and performs cash cutoff tests. When approaching the balance sheet date, the auditor uses the cash cutoff tests to ensure that all of the appropriate transactions are included in the financial statements. Tests of details of balances include confirming bank deposits and loan amounts, obtaining bank cutoff statements to ensure all relevant cash is included in the balance, reconciling the bank account to the books, and confirming arrangements with all banks the organization uses, including those with zero balances.
An auditor may provide other types of services, other than reasonable assurance of the accuracy of a company's financial statements, to an organization. For cash receipts or cash balances, an auditor may help management forecast the cash flow and accurate projections for the subsequent year. He may also identify areas for improvement and make recommendations for short-term investment of excess cash.
- "Modern Auditing: Assurance Services"; W. Boyton, et al.; 2006
- MCCC.edu: Auditing; Arens, et al.; 2008
Christine Aldridge is a financial planner who has been writing articles related to personal finance since 2011. She has bachelor's degrees in political science from North Carolina State University and in accounting from University of Phoenix. Aldridge is completing her Certified Financial Planner designation via New York University.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9723208546638489,
"language": "en",
"url": "https://www.wessexscene.co.uk/opinion/2016/09/23/a-world-without-cash/",
"token_count": 1173,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1494140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8fb0c02f-884f-4a8e-850c-bd345efe8186>"
}
|
On the 9th of September, the Bank of England released its new Fiver made from a durable and flexible polymer. It also plans to release a new £1 coin in 2017. But with the many inconveniences of notes and coins, and card payments on the rise among the younger generation, how long can we really expect this cash to last?
In our society, money is largely what dictates the opportunities available to a person. Money in itself has no intrinsic value – it’s pretty useless, but acts as a token which can be exchanged for useful things. Such a token is essential because without it we would need to rely on trading one item for another. The problem with this is there is no guarantee people will want or need what you have to offer. Money solves this problem; it is universally seen as valuable, so everybody will accept it.
Over time, money has become considerably smaller, easy to carry around in one’s pocket but it still has flaws. In times of hyperinflation, cash becomes completely impractical and difficult to use. During tough economic times in Germany and Zimbabwe, wheelbarrows full of notes were often needed for trivial purchases. Because the value of currency plummeted so much in Zimbabwe, new notes had to be continually reprinted to reflect this – eventually reaching millions for a note less than a US dollar. But more commonly, cash is just susceptible to damage, loss and dealing with it can be annoying.
The new five pound note follows suit from Canada and other countries also issuing plastic money. However, contactless payments on debit cards now eliminate the need for physical money altogether, without the added debt incurred on credit cards. Ditching cash altogether seems like a natural next step – one that may happen in the not so distant future.
That said, cash has a few distinct advantages. Interestingly, people tend to be more frugal when they use cash, perhaps because when paying for something, you are physically giving it away. Whether spending recklessly is necessarily a disadvantage or not, it’s definitely a psychological curiosity that everyone should be aware of. Although cards are safer overall, many people like to know that they have money – the physical feel of it is something that can’t be matched. The culture of the society to which it belongs is also embedded into the currency – which often features historical figures and famous places. Coins and notes are quite beautiful to look at and with card taking over, this cultural side to money might be lost; currency could simply be a number and nothing more.
Cash – and change in particular – is something I personally find infuriating. Items are often priced at £X.99, so change endlessly builds up making it virtually impossible to spend it all. Lloyd’s TSB predicted that because of this, the average UK household has £14.15 in change just lying around. Change is traditionally dealt with by sorting and counting it manually into tiny plastic bags by denomination, then paying it into the bank. Paying in exact change can sometimes be satisfying, but requires a lot of fumbling around like a fool. This kind of patience is not really possessed by our generation – and with many advances in technology, it needn’t be. We live in a world where necessary tasks can be done almost instantly, and we’re not happy about wasting time waiting around or doing things manually. Without cash, redundant money would no longer be a problem that needed sorting, with instant payments saving time and money everywhere.
A UK survey shows that card payments are rising dramatically in popularity whilst cash payments are falling. If the trends continue, the number of card payments is set to exceed the number of cash payments in the next few years. In the US, 80% of consumers use their debit card for some everyday purchases, but that number rises to 100% for 18-24 year olds. These numbers show that times are changing.
For those readers into the Netflix series Narcos, you may know that having too much money can be a problem. The infamous Colombian cocaine smuggler Pablo Escobar made $60 million a day – he had so much cash, he had no idea what to do with it all. He gave wads of cash to the poor, perhaps out of compassion but maybe just to get rid of it. He eventually resorted to burying huge piles of cash at various locations around the country. Around 10% of the money he earned each year (amounting to $2.1 billion annually) was lost – eaten by rats, damaged by water or simply misplaced. In addition to this, his cartel paid around $2,500 a month on rubber bands just to keep notes together. This is perhaps the most extraordinary case of wasted, redundant cash ever that wouldn’t have happened if El Patron’s followers only took card payments.
Although criminals losing out on money might not seem like a bad thing, without cash these big-time drug bosses would probably find it harder to reach their fortunes in the first place. Illicit transactions nearly always take place in cash because its movement is much harder to track. In fact, the director of Europol (an intergovernmental EU police force), Rob Wainwright, said this presents “significant barriers to successful investigations and prosecution”.
There are certain off-hand payments that you might think are only possible by cash – paying buskers, tips and the homeless. However, even some of Stockholm’s homeless community now accept contactless payments for Sweden’s Big Issue equivalent.
All in all, the new Fiver is shinier, nicer and more reliable than the old one. But it is fair to question how long these new editions will actually be in circulation for. Cash is definitely on the way out.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9303828477859497,
"language": "en",
"url": "http://www.vastavamtv.com/2017/05/02/how-to-calculate-net-operating-income-noi-week3/",
"token_count": 536,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.013671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:0696a6f5-c70c-4b20-8408-46860dba4b3d>"
}
|
By.. “Venu Vankadhara, Real Estate Investor”
Net operating income (NOI) is simply the annual income generated by an income-producing residential or commercial property after deducting yearly expenses from yearly income. It plays a major role in deciding whether a property is a good investment. Hence, it is very important for a real estate investor to understand this thoroughly.
NOI = Yearly income – Yearly expenses
Yearly income – this includes rental income.
Yearly expenses – these include expenses for management, legal and accounting, insurance, janitorial, maintenance, supplies, taxes, utilities, a vacancy allowance (usually 5%), etc.
Subtract the yearly expenses from the yearly income to arrive at the net operating income.
For example, let's say a residential rental property generates a yearly income of $24,000 and yearly expenses are $4,000. NOI would be $24,000 − $4,000 = $20,000.
Estimating the Value of the investment property = NOI * 10
In the above example, you could buy an investment property for up to $20,000 × 10 = $200,000, which means you would be earning 10% a year on the property. This is a very good return, but in the current housing market, where home values are going up due to low inventory, you need to be ready to accept a 7 to 8% return a year. In this example, where NOI is $20,000, you could pay up to $260,000 for a house (roughly a 7.7% return).
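The arithmetic above can be expressed as a short sketch. The function names are my own; the figures match the example in the text, and a target return is simply the inverse of the "NOI × 10" multiplier:

```python
# Sketch of the NOI and valuation arithmetic from the example above.

def net_operating_income(yearly_income, yearly_expenses):
    """NOI = yearly income - yearly expenses."""
    return yearly_income - yearly_expenses

def max_price(noi, target_return):
    """Highest price you can pay while still earning the target yearly
    return; a 10% target reproduces the "NOI * 10" rule of thumb."""
    return noi / target_return

noi = net_operating_income(24_000, 4_000)
print(noi)                   # 20000
print(max_price(noi, 0.10))  # 200000.0
print(noi / 260_000)         # ~0.0769 -> paying $260,000 implies a ~7.7% return
```

The last line shows why a $260,000 price still falls inside the 7 to 8% range the text suggests for the current market.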
A real estate property that generates around an 8% return on investment is a good investment.
Useful Link – http://www.coachcarson.com/rental-property-cash-flow/
Next Week Topic: Property Management
“Venu Vankadhara, Real Estate Investor”
“If you have any questions please reach out to Venu at his email address- [email protected]”
Disclaimer – This article is for information purpose only. All information presented here is accurate to the best of author’s knowledge. Vastavam.net and its undersigned are not liable for any misrepresentation on this web site or in any additional information given verbally or in writing relating to Vastavam.net. and its’ investments. It is the readers’ responsibility to verify all information given. Readers should consult their own legal, real estate and tax advisors about the suitability of real estate investment for their particular needs and situations.
More Articles On Real Estate Investment:
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9695044755935669,
"language": "en",
"url": "https://ctacusa.com/does-everyone-need-to-go-to-college/",
"token_count": 1189,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0771484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1187b77b-9557-430d-b48b-c6b7ea8d67fa>"
}
|
Just google “does everyone need to go to college?” and you will have ample opportunity to view all the reasons why other pathways to good paying jobs are available. That’s what I did. In many cases, I did read some persuasive arguments about the rewards of pursuing career and technical education (CTE) skills leading to good paying jobs post high school. But is high school really enough?
Under the Obama-era education policy, educational standards focused on preparing students to be college and/or career ready. The goal was simple—all students need a legitimate pathway for the future after high school graduation. Additionally, in 2016 when the Obama administration announced a $90 million initiative to increase the number of apprenticeships in this country, it acknowledged that “The jobs available today, and the jobs of the future, are higher-skill jobs that require more education and advanced skills.” Completing high school, getting a good paying job and joining the middle class is not a sure thing.
Educators know this. More and more you see greater articulation among high schools, community colleges and four-year colleges relating to CTE, early college high schools, AP courses and dual enrollment programs. Good paying jobs require a higher level of knowledge and skills than they did at the beginning of the 21st century. Lifelong learning went from being a nice idea to an essential trait to ensure ongoing employability in an ever-evolving market place.
Thankfully, most Americans still strongly believe in post-secondary education. In a 2015 report from Gallup and the Lumina Foundation, some of their findings include:
- 95% said it is important for adults in this country to have a degree or professional certificate beyond high school.
- 93% said that it will be important in the future to have a degree or professional certificate beyond high school in order to get a good job.
- 78% agreed that a good job is essential to having a high quality of life.
The economics of more education are very real. The U.S. Bureau of Labor Statistics report that:
- For the first quarter of 2018, full-time workers age 25 and over without a high school diploma had median weekly earnings of $563, compared with $713 for high school graduates (no college), $808 for some college or an associate degree, and $1,286 for those holding a bachelor’s degree.
- For March 2018, the unemployment rate for individuals age 25 and over was 2.2% for those with at least a bachelor's degree, 3.6% for those with some college or an associate degree, 4.3% for high school graduates, and 5.5% for those with less than a high school diploma.
A 2016 College Board report points out an obvious but important fact:
- Bachelor’s degree recipients paid an estimated $6,900 (91%) more in taxes and took home $17,700 (61%) more in after-tax income than high school graduates. Associate degree students paid $2,500 more in taxes than high school graduates and took home $6,700 more in after-tax income.
While the economic benefits of more education are important, that is not what attending college is solely about. College is not meant to be simply a job training program. If college was only meant to be a job training program, we certainly could do it a lot cheaper. Personally, I completed two degrees in political science and never went into politics. But in my collegiate work, I learned how to analyze information and data, undertake research, problem solve, collaborate, innovate, think critically and acquired a healthy amount of skepticism and caution when confronted with the “next big thing.” I was more ready to enter the workforce than I was coming out of high school.
The College Board also collects data from national databases and surveys that identify the non-economic benefits of a college education. Here is a sample:
- College education is associated with healthier lifestyles, reducing health care costs.
- In 2014, 69% of 25- to 34-year-olds with at least a bachelor’s degree, 61% with an associate degree and 45% of high school graduates reported exercising vigorously at least once a week.
- Among adults age 25 and older, 16% of those with a high school diploma volunteered in 2015, compared with 39% of those with a bachelor’s degree and 27% with an associate degree.
- In the 2014 midterm election, the voting rate of 25- to 44-year-olds was 45% for those with at least a bachelor's degree, 32% for those with an associate degree or some college, and 20% for high school graduates.
- Children of parents with higher levels of educational attainment are more likely than others to engage in a variety of educational activities with their family members.
All good things.
Let’s get back to our initial question, “does everyone need to go to college?” Proponents of CTE are also seeing the value of post-secondary education. In a 2012 policy report on CTE, the authors report that:
“For many Americans, CTE starts at high school, although the share of high school students concentrating in vocational programming has declined for decades. This decline has been heavily influenced by the shift toward an economy in which post-secondary education and training has become the dominant pathway to jobs that pay middle-class wages.”
Of course, there are still issues that must be addressed to improve access to a college education; for example, cost, time needed to complete a degree and college completion—especially for students historically underrepresented in higher education. But given the benefits of a college education, these are issues that we should be vigorously pursuing for all students. In the long run, it benefits us all.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.951772928237915,
"language": "en",
"url": "https://explorer.aapg.org/story/articleid/357/coal-at-center-of-power-shift?utm_medium=website&utm_source=explorer_issue_page",
"token_count": 1513,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.478515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:22517cd8-fcf3-4edb-b48b-52a8a3f13d85>"
}
|
Coal occupies an unenviable position in the fossil fuel hierarchy.
With its use concentrated in large power stations in most countries, it is a prime candidate for carbon capture and storage, even though technologies for this are not yet commercial – they face enormous cost hurdles and use vast amounts of energy in such steps as concentrating oxygen prior to combustion and separating CO2, not to mention a host of geo-engineering and institutional issues associated with sequestration.
Technology development now requires large-scale demonstrations – a critical stage on the path to commercial development – and further innovations are sought.
This places carbon capture and storage on a timeline where the technology response in the best of worlds may lag society’s desire to curb emissions.
Opposition to coal (along with extraordinary escalation of capital costs) came to a peak in 2007, stalling or derailing some 50 gigawatts (GW) of once-proposed plants in the United States.
At present, about 50 plants totaling 30 GW are either under construction (50 percent of the total) or in early stages of development, and thus not assured of completion in the 2008-16 period. This compares to about 70 GW of gas-fired plants (80 percent combined cycles and 20 percent combustion turbines) and 40 GW of wind capacity.
Notably, the lead in new generation has now been taken up by natural gas and renewables. In fact, natural gas is experiencing a development boom, albeit smaller than the 2000-04 merchant plant boom (figure 1). A turn to gas at the expense of coal will intensify after enactment of legislation to curb greenhouse gases.
Analysts have reached remarkably different conclusions about how much natural gas can replace existing coal-fired generation (a range from 0 to 284 GW of natural gas capacity additions by 2030 in response to stringent, early cutbacks in CO2 emissions), but all conclude that obstacles to nuclear capacity, delays in mastering carbon capture and sequestration – or achievement of only moderate levels of renewables – would translate into greater natural gas use.
Early indications are that gas impacts would be far from uniform, with demand surging regionally, perhaps first in the Southeast.
So much for the long run. In the short run, natural gas-fired generation is viewed as the likely “default” choice, and this was before the financial crisis turned lenders against both capital cost and technology risk.
The implications for both coal and natural gas markets are considerable – arresting coal demand growth (unless associated with carbon capture and storage) and establishing new gas demands of about 0.5 trillion cubic feet per year (1.4 billion cubic feet per day) for every 10 GW of coal capacity replaced by gas generation. If there is a moose in the room, this is it.
Power sector demand growth will not materialize in time to prevent the looming oversupply from gas shales/Rockies production, but it certainly appears capable of stressing supplies (and widening the door to LNG imports) in the post-2015 period.
While coal’s future is uncertain and insecure, the exact opposite is true of its present role – both domestically (it provides 49 percent of U.S. power generation) and internationally (see our eye-opening comments on China below).
Volatility and globalization are the two watchwords that best describe the current market.
During 2008 U.S. coal prices were buffeted as never before by international forces. Between the summers of 2007 and 2008 prices at the three principal export hubs of Newcastle, Australia (principally to Asian markets), Richards Bay, South Africa (principally to Amsterdam-Rotterdam-Antwerp or ARA) and Colombia (to ARA and the United States) rose from about $60/metric ton to $160. This is astonishing.
The price surge was accompanied by the added burden of unprecedented hikes in dry bulk shipping costs (e.g., from a norm of $15-20/metric ton to $50 for Richards Bay to ARA). It had very little to do with oil’s coincident price escalation, and, among other things, it led to an expansion of the United States’ usually very modest role as a coal exporter (to ARA) and to a wave of escalation in U.S. coal prices.
Metallurgical coal prices experienced a similar but even more extreme rise (e.g., the annual settlement of Japanese high quality hard coking coal went from $100/metric ton in 2007 to $300 in April 2008).
The journey of U.S. spot coal prices is summarized in EIA’s price chart (figure 2). Northern and Central Appalachian prices went from $45/short ton to $140-150 between summers of 2007 and 2008.
Illinois Basin prices, not directly participating in the export market and slightly lagging Appalachian movements, climbed from $30/short ton to an equally astonishing $90.
The principal question in U.S. and international markets now is “how hard will these prices fall?”
Hard times to come are indicated in the stock values of coal producers, which have dropped sharply since July, preceding by several months the emergence of the global financial crisis.
No comments about coal, however cursory, would be complete without a few words about China, for two reasons:
- China is the world’s largest and fastest growing producer and consumer of coal, by a factor of 2.2 or more.
- While almost 100 percent self-sufficient, China’s small shift from being a net exporter to a net importer of coal during early 2008 was one of many factors behind the anomalous global price surge.
China’s industrialization has brought about nearly incomprehensible changes in its infrastructure. In 2006, 102 GW of new generating capacity was added in China, and the pace of development over the past three years has been estimated as equivalent to adding three to four 500-megawatt power plants per week.
About half of the coal produced in China is used to make electricity, and about 80 percent of the country’s electric generation is derived from coal.
Power sector growth has been the primary engine behind China’s growing coal consumption and production. Production doubled between 2001 and 2006. It is this phenomenon that is behind BP’s observation when releasing its 2008 Statistical Review that “coal was again the fastest growing fuel in 2007.”
And it also is behind growing recognition by world policymakers that development in China is the trump card in controlling CO2 emissions.
Discussions about coal as an energy resource often turn to its reserves, resources and global distribution. For those concerned with world energy developments, it makes sense to focus on countries that are most important in the world coal trade. This is done in the following table (figure 3), which ranks countries by their combined exports of thermal and metallurgical coal.
While Australia is at the top, Indonesia is the fastest growing exporter – and by 2006 it was the world leader among exporters of thermal coal, 50 percent greater than Australia.
Indonesia’s electricity needs also have been rapidly growing, which is leading to policies to assure sufficient supplies to serve its domestic markets.
Rather than attempt to answer the many questions a table such as this may raise, we leave it as a portrayal of some of the features of the global coal industry.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.969562828540802,
"language": "en",
"url": "https://the-spark.net/np978404.html",
"token_count": 633,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.396484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:10021c98-9b55-4f04-99f2-183696826c98>"
}
|
Jan 5, 2015
The news media have trumpeted the 80-cent-per-gallon drop in gasoline prices over the last six months as a real savings for consumers.
This drop in gasoline prices reflects a huge global oil glut. Oil production is up, especially right here in the U.S., where production has risen from five million barrels a day in 2008 to an average of about nine million, according to the U.S. Energy Information Administration. That four million barrel increase is more than either Iraq or Iran, the second and third largest OPEC producers after Saudi Arabia, produces each day. Today, the U.S. accounts for 10 per cent of the world’s oil production, more than any other country, even more than such big oil exporters as Saudi Arabia and Russia.
On the other hand, even in the U.S., with its supposedly “strong recovery,” gasoline consumption is still lower than it was in 2007. And no, the main reason for that drop is not more fuel-efficient cars and trucks. According to the U.S. government, the number of miles that people in the U.S. drive is still lower than it was in 2007, despite the fact that the population is increasing at a rate of about 1 percent a year. Working people just don’t have the money. A smaller percentage of the population has jobs, and those with jobs are earning less. The job situation is so bad that many more young adults can’t even afford a car.
With demand down and production way up, the U.S. economy has been importing much less oil from countries like Angola and Nigeria. So, those producers have been compelled to sell their oil to China and other Asian markets at steep discounts, compounding the impact of American production on world markets and driving down crude oil prices faster.
Today, the average price of crude oil is about half what it was just six months ago. This depressed price has deepened the crisis on economies that depend on oil exports, such as Russia, Iran and Venezuela – causing worsening unemployment and galloping inflation in those countries. But big cuts are also expected in parts of the U.S., such as Texas and North Dakota, that depend on oil production for both jobs and state and local government budgets.
The big drop in oil prices could also be amplified by the financial system. Back in the early 1980s, when an oil glut led to steep crude oil price drops, some banks went bankrupt, including the sixth largest, Continental Illinois, as vast loans to oil producers were not paid back. The same could happen again – but much worse – because of massive speculative holdings in the oil industry by big Wall Street financial companies, which have quietly and stealthily, through shell companies, gained ownership of a stunning amount of oil capacity.
Because all of this activity is carried out in secret, no one knows how big the financial losses will be. But one thing is clear: the fall in crude and gasoline prices does not mark in any way a step in economic recovery, but merely another stage in a very long economic crisis and depression.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9530833959579468,
"language": "en",
"url": "http://lubnaqassim.com/blog/how-does-corporate-governance-vary-around-the-world/",
"token_count": 1708,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.3359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:7d88e3c8-9bb6-4d61-9f66-efbd180a1b4d>"
}
|
Read Lubna’s blog insights where she takes a fresh look at governance to create stronger countries and business
How does corporate governance vary around the world?
Our language, beliefs, cultures, tastes, attitudes and politics have developed over centuries and are what make each of our nations unique. Unsurprisingly, our business practices have evolved in an equally diverse pattern and, despite the increasing globalisation of our biggest businesses, governance rules now also vary considerably between countries.
All global governance rules aim to protect shareholders and stakeholders while ensuring the prosperity of the business, but not all follow the same model, and the differences can lead to very different structures, spheres of influence and succession plans.
In this blog I look at the different governance structures around the world, consider the ongoing reforms and ask if one global governance standard would be a benefit or hindrance to international organisations?
What governance models are there around the globe?
Corporate governance is the system of rules, practices and processes by which a company is directed and controlled. Corporate governance essentially involves balancing the interests of a company’s many stakeholders, such as shareholders, management, customers, suppliers, financiers, government and the community.
While all systems have the same objective and have origins in similar legal frameworks, the framework, rights and make-up of governance structures can vary considerably.
For example, in the US, shareholders elect a board of directors, who in turn hire and fire the managers who actually run the company. In Germany, the board is not legally charged with representing the interests of shareholders, but is rather charged with representing the interests of stakeholders, including workers and creditors as well as the shareholders. It also usually has a member of the labour union on the board.
In the UK, the majority of public companies voluntarily abide by the Code of Best Practice on corporate governance. It recommends there should be at least three outside directors and the board chairman and the CEO should be different individuals.
Japan’s corporate boards are dominated by insiders – loyal managers who cap off their careers with a stint inside the boardroom – and they are primarily concerned with the welfare of the keiretsu (corporate group) to which the company belongs.
China has colossal corporate structures where businesses have parent, grandparent and even great-grandparent companies. Each level has a board and Communist Party officials usually have a seat. In India, the founding family members usually hold sway over the board.
Korean manufacturers’ strategy is to grow as rapidly as possible, and they do this by borrowing money from banks. As a result, the government holds sway over their corporate governance structures through the banks. This relationship gives the government influence over the companies, while the companies have a say in government issues and in Korean corporate governance.
Finally, the French corporate governance structure often attracts criticism for involving a complex network of public sector organisations, large businesses and banks. However, this ensures the French excel at collaborative projects between business and Government. For example, France leads the world in the production of nuclear reactors and high-speed trains.
An international corporate governance standard?
But, is this diverse array of corporate governance structures a good or bad thing for business?
The Wharton School’s Mauro Guillen has studied global differences and says in his paper ‘Corporate Governance and Globalization: Is There Convergence Across Countries?’: “There’s a very important connection between corporate governance and the competitive strategy of firms. It’s not as simple as saying, ‘Oh, we’re going to change corporate governance so that we all have the same rules.’ The system of corporate governance interacts with many other things in an economy, such as the way labor laws are regulated, tax laws and bankruptcy legislation. If you change one component without changing the others, you’re essentially causing trouble.”
Despite major scandals like Enron, the general assumption is that international investors want a global standard to protect their shareholdings and many organisations are moving towards the US shareholder-centred model.
Jay Lorsch, professor of human relations at the Harvard Business School, agrees that corporate boards are converging towards a common model and says that new regulations like the Sarbanes-Oxley Act in the US that affect companies seeking capital investment by trading shares are forcing a global convergence. He adds: “It’s particularly strong among the industrialized nations of Europe and the United States.”
Charles Elson, director of the John L Weinberg Center for Corporate Governance at the University of Delaware, refers to the International Corporate Governance Network (ICGN), whose members control $10 trillion in assets and says: “They are large pension funds in the world and they have a common interest in creating boards that are independent of management and that act as an appropriate monitor of investor interaction. That’s the model we’re moving to. No matter where you happen to be, that model produces the best potential returns.”
According to the ICGN Statement on Global Corporate Governance Principles regarding corporate boards: “Independent non-executives should comprise no fewer than three members and as much as a substantial majority. Audit, remuneration and nomination board committees should be composed wholly or predominantly of independent non-executives.”
Wharton management professor Michael Useem predicts boards around the world will move to a standard model within 15 years, driven by globalisation and the need to move huge sums of investment freely between countries.
He argues: “Put yourself in the shoes of Fidelity or Vanguard or other investors out there who are diversifying out of US stocks. You want to assure yourself that the companies you are going into are reasonably well governed — that they have acceptable accounting standards and are transparent.”
He says the central focus of corporate governance is the structure of the corporate board and that firms around the world are moving to create boards that are more independent from management, populated by non-executive members and organised around committees overseeing management, compensation and auditing. He adds: “All these factors point to good governance and thus the company becomes more attractive to investors and legitimate in the eyes of suppliers and customers. An investment manager anywhere in the world looking to put cash in the stock of a company in Lithuania or Italy will come at the company with an eye to whether it is following good practices.”
Diversity in corporate governance
Despite the globalisation of business and investor desire for solid governance, Guillen’s research contradicts the generally-held assumption that governance rules will converge. His study shows foreign investment has fallen in Anglo-Saxon nations, while investment in other nations’ models has risen the equivalent amount in the time period.
“Corporate managers should not assume the world, from the point of view of corporate governance, is becoming one big place,” says Guillen. “If your company is expanding throughout the world, you still need to take into account those differences. You can’t ignore them thinking that they will be going away. Such an approach is bound to fail.”
Despite the debate around the need for a global governance standard, one thing is clear. Governance is critical to not only the success of a company, but can also have far wider implications for the economy of a nation.
Research from the Organisation for Economic Co-operation and Development on Corporate Governance: Effects on Firm Performance and Economic Growth says: “There is no single model of good corporate governance, and both insider and outsider systems have their strengths, weaknesses, and different economic implications. However, corporate governance affects the development and functioning of capital markets and exerts a strong influence on resource allocation.
“In an era of increasing capital mobility and globalisation, it has also become an important framework condition affecting the industrial competitiveness and economies of member countries. On balance, therefore, the empirical evidence is supportive of the hypothesis that large shareholders are active monitors in companies, and that direct shareholder monitoring helps boost the overall profitability of firms.”
With significant differences in corporate governance structures between major economies like China and the US, I suspect the likelihood of an international standard is slim. However, I’d love to hear your thoughts on how corporate governance needs to change to match the needs of a globalised business community.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.949726939201355,
"language": "en",
"url": "https://dailytimes.com.pk/708726/pakistans-brighter-future-depends-upon-renewable-energy-generation-projects/",
"token_count": 1487,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0634765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:7ba97f5f-0b02-4bf0-86eb-a0c87835e6e4>"
}
|
There are two types of energy: renewable and non-renewable. Non-renewable energy comes from resources such as coal, gas and oil, which release their energy when the fossil fuels are burned. Renewable energy, often referred to as clean energy, comes from natural resources or processes that are constantly replenished; its main sources are solar energy, wind energy, hydropower, geothermal energy and biomass energy.
Renewable energy sources make up 26% of the world’s electricity today, but according to the International Energy Agency (IEA), their share is expected to reach more than 30% by 2024. Overall, renewable electricity generation is predicted to grow by 1200 GW worldwide by 2024, which is equivalent to the total electricity generation capacity of the USA. According to data compiled by the U.S. Energy Information Administration, seven countries are already at, or very near, 100 percent renewable power: Iceland (100 percent), Paraguay (100), Costa Rica (99), Norway (98.5), Austria (80), Brazil (75), and Denmark (69.4). The main renewables in these countries are hydropower, wind, geothermal and solar. Bloomberg New Energy Finance (BNEF) has projected that by 2040, Germany’s grid will see nearly 75 percent renewable penetration, Mexico will be over 80 percent, and Brazil and Italy will be over 95 percent. BNEF was not looking at what could theoretically happen by mid-century if countries pushed as hard as required by the Paris Climate Accord; it was just looking at business as usual over the next two decades.
After the emergence of Pakistan on the world’s map, hydropower generation projects were constructed in the 1960s and 1970s, and electricity was provided to the nation at cheap rates. These projects included the Warsak, Mangla and Tarbela hydropower projects. Because of the cheap electricity available in that era, Pakistan’s industry was on the rise.
Hydropower was then, and remains today, the cheapest form of power generation. In the 1980s and 1990s, our governments introduced a new power generation policy for the private sector, under which independent power producers (IPPs) were brought in for thermal power generation. Under the contracts signed with these IPPs, very expensive power generation projects were built, which caused the price of electricity to rise sky high. Today, more than 60% of the country’s total generation capacity is thermal: over 50% from IPPs and about 10% from the public sector. Only about 25% of total capacity is hydropower, and the remaining 10 to 15 percent comes from solar, wind, nuclear and other resources.
Developed and many developing countries around the world are now turning to renewable energy, with hydropower, solar power and wind power projects at the forefront. The benefits of renewable energy are enormous. This type of power generation does not harm the environment and is called green energy, while thermal power generation pollutes the environment heavily. In the past, solar and wind power projects used to cost a lot, and the unit price of the electricity they generated was very high; but over time, the cost of the machinery, construction, installation and operation of such projects has decreased significantly and will keep decreasing in the coming years.
Consider the Quaid-e-Azam Solar Park, built in 2014-15. It has a capacity of 100 MW but is generating only 18 to 25 megawatts of electricity, and that power is also quite expensive, at a rate of about Rs. 24 per unit. Similarly, electricity from past wind power projects is very expensive, costing around Rs. 20 to 22 per unit. With the passage of time, the cost of solar and wind power projects has come down significantly, and the electricity generated from them now costs around Rs. 6 to 7 per unit, as reflected in the new wind and solar energy contracts signed by the present government in 2020.
The Pakistan Meteorological Department (PMD) conducted a study in 2013 entitled “Wind Power Potential Survey of Coastal Areas of Pakistan”, funded by the Ministry of Science & Technology. The study enabled PMD to identify potential “wind corridors” where economically feasible wind farms could be established. The Gharo-Jhimpir wind corridor in Sindh was identified as the most lucrative site for wind power plants; the surveyed area of 9700 square kilometres holds a gross wind power potential of 43000 MW. Similarly, there is a big wind corridor in Baluchistan near the borders with Iran and Afghanistan that can be developed in the future, and another in Swat, KPK, that can be developed as well.
According to the Alternative Energy Development Board (AEDB), twenty-four (24) wind power projects with a cumulative capacity of 1235.20 MW have achieved commercial operation and are supplying electricity to the national grid, while twelve (12) wind power projects of 610 MW capacity have achieved financial closing and are under construction. Six (06) solar power projects of 430 MW capacity are operational, four (04) IPPs with a cumulative capacity of 41.80 MW are in the process of achieving financial closing, and twelve (12) more solar PV power projects of 419 MW cumulative capacity are at different stages of project development.
Still with over half of the rural population unable to access electricity, Pakistan is rightfully undertaking a major build-out of electricity generation capacity to meet demand growth into the future.
Further adoption of ever-cheaper and more accessible renewable energy can make a greater contribution towards meeting Pakistan’s growing electricity demand. Instead, Pakistan is currently on an energy pathway towards over-reliance on imported fossil fuels and out-dated coal technology. The Institute for Energy Economics and Financial Analysis (IEEFA) has modelled a high-level alternative future for Pakistan’s electricity system that addresses cost burdens and energy security. In IEEFA’s model, wind and solar supply 28% of Pakistan’s increasing electricity requirements by 2030. Generation costs would be reduced and energy security increased through a diversity of generation technologies, roughly split 30:30:30:10 between renewable energy (wind and solar), thermal power, hydropower and nuclear power.
Pakistan needs to focus on the projects that provide the cheapest electricity; it cannot afford expensive thermal power. All our industries need cheaper electricity so that industry can flourish, GDP growth can improve and Pakistan’s exports can increase. In the future, cheaper electricity can only come from renewable energy projects. Therefore, our present and future governments will have to focus on renewable energy projects, i.e. hydropower, solar power and wind power, so that Pakistan can get out of its troubles and, in the next 5 to 10 years, join the list of the fastest growing countries, Insha-Allah.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9511914849281311,
"language": "en",
"url": "https://eurboghana.eu/ebo/the-grand-duchy-of-luxembourg-capital-luxembourg/",
"token_count": 595,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.275390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5d5e51cc-d911-4362-90b9-36f209014aea>"
}
|
THE GRAND DUCHY OF LUXEMBOURG, CAPITAL LUXEMBOURG
Luxembourg is a landlocked country in Western Europe. It is bordered by Belgium to the west and north, Germany to the east, and France to the south. Luxembourg had a population of 524,853 in October 2012 and has an area of 2,586 square kilometers (998 sq. mi), making it one of the smallest sovereign nations in Europe. The city of Luxembourg, which is the capital and largest city, is the seat of several institutions and agencies of the EU.
As a representative democracy with a constitutional monarch, it is headed by a grand duke, Henri, Grand Duke of Luxembourg, and is the world’s only remaining grand duchy. Three languages are recognized as official in Luxembourg: French, German, and Luxembourgish. Luxembourg has an oceanic climate, marked by high precipitation, particularly in late summer. The summers are cool and the winters mild.
Luxembourg is a founding member of the European Union, NATO, OECD, the United Nations, and Benelux, reflecting its political consensus in favour of economic, political, and military integration.
On 18 October 2012, Luxembourg was elected to a temporary seat on the United Nations Security Council for the first time in its history. The country served on the Security Council from 1 January 2013 until 31 December 2014.
The economy of Luxembourg is largely dependent on the banking, steel, and industrial sectors. Luxembourgers enjoy the second highest per capita gross domestic product in the world (CIA 2007 est.), behind Qatar. Luxembourg is seen as a diversified industrialized nation, in contrast to Qatar, whose wealth rests mainly on its oil boom.
Luxembourg is a developed country, with an advanced economy and the world’s second highest GDP (PPP) per capita, according to the World Bank. In 2013 the GDP was $60.54 billion, of which services, including the financial sector, produced 86%. The financial sector comprised 36% of GDP, industry 13.3% and agriculture only 0.3%. The industrial sector, which was dominated by steel until the 1960s, has since diversified to include chemicals, rubber, and other products.
Tourism is an important component of the national economy, representing about 8.3% of GDP in 2009 and employing some 25,000 people or 11.7% of the working population. The Grand Duchy still welcomes over 900,000 visitors a year who spend an average of 2.5 nights in hotels, hostels or on camping sites.
Business travel is flourishing, representing 44% of overnight stays in the country and 60% in the capital, up 11% and 25% respectively between 2009 and 2010.
Luxembourg won an Oscar in 2014 in the Animated Short Films category with Mr. Hublot.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9199390411376953,
"language": "en",
"url": "https://percent.info/bps-to-percent/what-is-87-basis-points-in-percentage.html",
"token_count": 249,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.05908203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8b4cadb8-a1ad-490f-9af5-9b1464c596fb>"
}
|
Here we will explain what 87 basis points means and show you how to convert 87 basis points (bps) to percentage.
First, note that 87 basis points are also referred to as 87 bps, 87 bips, and even 87 beeps. Basis points are frequently used in the financial markets to communicate percentage change. For example, your interest rate may have decreased by 87 basis points or your stock price went up by 87 basis points.
87 basis points means 87 hundredths of a percent. In other words, 87 basis points is 87 percent of one percent. Therefore, to calculate 87 basis points in percentage, we calculate 87 percent of one percent. Below is the math and the answer to 87 basis points to percent:
(87 x 1)/100 = 0.87
87 basis points = 0.87%
Shortcut: As you can see from our calculation above, you can convert 87 basis points, or any other basis points, to percentage by dividing the basis points by 100.
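As a quick sketch, the shortcut above can be expressed as a small Python helper (the function names are my own, not from any standard library):

```python
def bps_to_percent(bps):
    """Convert basis points to a percentage value (1 bp = 0.01%)."""
    return bps / 100

def percent_to_bps(percent):
    """Convert a percentage value back to basis points."""
    return percent * 100

print(bps_to_percent(87))           # 0.87
print(round(percent_to_bps(0.87)))  # 87
```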
Basis Points to Percentage Calculator
Use this tool to convert another basis point value to percentage.
88 Basis Points in Percentage
Here is the next basis points value on our list that we have converted to percentage.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9522857069969177,
"language": "en",
"url": "https://thriveglobal.com/stories/putting-health-back-into-health-care/",
"token_count": 1479,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.3203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b1cc7c19-50e0-4b90-ad2e-aad43740b2da>"
}
|
As the cost of health care skyrockets, American governments, policy makers, and individuals increasingly wrestle with how to come up with the money to pay for it. Possible solutions, including Obamacare and draconian Trumpcare, focus mainly on responding to disease once it has struck rather than on preventing disease in the first place. Zooming out will help us see the big picture, identify the root causes of rising health care costs, and find solutions that can work over the long term.
How Are We Doing?
Globally, the World Health Organization concludes that chronic disease is now the chief cause of death, that 1 in 4 deaths is caused by living or working in an unhealthy environment, and that depression is the leading cause of disability.
Nearly half of US adults have at least one chronic disease. The National Cancer Institute predicts that 40 percent of Americans will get cancer, and the Centers for Disease Control and Prevention predicts that 1 in 3 could have diabetes by 2050 if current trends continue. Clearly, our health care system is not successfully addressing these diseases.
Health care expenditures account for more than one-sixth of the American economy (gross domestic product) and are projected to increase to one-fifth of the economy by 2025. Americans pay far more for health care but have the worst health, compared with other rich Western nations. Medical errors are the third leading cause of death in the US, and medical debt is the leading cause of personal bankruptcy filings.
Better Care for Our Cars Than Ourselves
Many of us take better care of our cars than our bodies. We give our cars proper fuel. We periodically change the oil, check fluids and tire pressure, replace filters, and do other maintenance to minimize the potential for dangerous, disruptive, and expensive problems.
But many of us show much less care for our own health. We may sit too much, move too little, sleep too briefly, stress ourselves out, or expose ourselves to contaminated air, water, and products. We may consume sugar, white flour, toxic pesticides, and junk the human body was never designed to eat, while missing out on essential nutrients. When problems come up, we go to the doctor, say “fix me,” and get prescriptions for “magic bullets,” aka pharmaceuticals. Yet all drugs have potential side effects, which can lead to taking even more drugs to address them, creating a downward health spiral.
Our collective institutions and policies put profits before health. Industry is allowed to pollute air and water and put new synthetic and often health-damaging chemicals into our world without adequate testing for safety. Taxpayer-funded food subsidies primarily promote junk food rather than organically grown food or fruits and vegetables. Revolving doors between industry and government allow industry to steer policy at regulatory agencies such as the US Department of Agriculture and Food and Drug Administration.
Government agencies end up protecting industry from citizens rather than protecting citizens from industry. The food industry is allowed to conceal what’s in our food and deceive people about unhealthy food. The drug industry can keep unfavorable clinical trial studies locked up in file cabinets and publish just the favorable ones. The drug industry can market drugs directly to consumers and influence medical school curriculums. And even though poor nutrition has been implicated as a key factor in many chronic diseases, most doctors receive scant training about it.
Damage Control Costs Too Much and Often Fails
Our health care system waits for trouble and then delivers expensive and painful damage control. It’s the equivalent of parking an ambulance at the bottom of a cliff instead of building a fence at the top. For example, health insurers often won’t pay $150 for a diabetic to see a podiatrist, who could help prevent foot ailments associated with the disease. But most will pay $30,000 or more for an amputation. Both costs could be avoided. Best-selling author Joel Fuhrman, MD, a self-described “nutritarian,” estimates that about 90 percent of what US doctors do wouldn’t be necessary if people took better care of themselves. Chronic diseases are among the most preventable, and costly, of all health problems.
Alternatives to Conventional Medicine
Many doctors and patients are disillusioned with the current system, which operates predominantly on a fee-for-service basis, rewarding doctors for doing as much as possible rather than for providing the best care possible. Questioning whether they are providing good care in conventional medical practices, doctors increasingly turn to “alternative” forms of medicine. More than one-third of US adults use some form of complementary or alternative medicine, a long list that includes naturopathy, homeopathy, chiropractic manipulation, and acupuncture. It’s hard to argue with success, and alternative approaches are making their way into mainstream medicine, as evidenced by the Cleveland Clinic Center for Functional Medicine, located in one of America’s leading hospitals.
Make Health Our Top Priority
Health should be the first priority of individuals and society alike because it is foundational to happiness. The desire for happiness ultimately motivates everything we do. While preoccupied with illness, we feel worse and enjoy life less, work and earn less, help others less and burden them more, spend time and money trying to get well, have lower energy, and think less clearly and optimistically.
If health were our top priority, we would take actions that would improve our lives and our world on many fronts, such as these:
- Plant gardens at schools and teach students about food, nutrition, cooking, and health.
- Verify chemicals are safe before putting them into commercial use.
- Stop burning fossil fuels, biofuels, and wastes, thereby preventing air pollution and helping the climate.
- Prevent water pollution.
- Promote adequate exercise, rest, relaxation, and stress management.
- Train doctors about nutrition, toxins, consequences of lifestyle choices, and alternative forms of medicinal care.
- Reduce industry influence on food and medicine, and staff regulatory agencies and governments with people who don’t have conflicts of interest.
- Practice and promote preventative medicine, and pay doctors to keep us healthy. One option is “capitation,” in which a doctor, medical group, hospital, or health system receives a flat fee every month for taking care of an individual enrolled in a managed health care plan.
If we implemented these actions, improvements would compound. We would spend a lot less on health care. We could use the saved money for productive pursuits such as education, research, the arts, and environmental restoration and conservation. We’d get more done and have more fun. And we’d have a stronger democracy that works for citizens, as the Founding Fathers intended.
It’s hard to imagine any good way to pay for the irresponsibility and senseless waste in our current health care system. Some health care repair schemes are, of course, better than others. The Republicans’ current plan to slash health care coverage, further enriching the wealthy, would take us backwards. But regardless of the plan, fixing health care for real will require putting health promotion at the top of our priority list.
Originally published at www.huffingtonpost.com
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8811759948730469,
"language": "en",
"url": "https://www.bd.undp.org/content/bangladesh/en/home/projects/development-of-sustainable-renewable-energy-power-generation.html",
"token_count": 477,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.12890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6445848c-ed91-4f5f-a277-d37bc2935a15>"
}
|
The objective of the Project is to reduce the annual growth rate of GHG emissions from the fossil fuel-based power generation by exploiting Bangladesh’s renewable energy resources for electricity generation. The basic approach of the Project will be to promote renewable energy in Bangladesh through the recently established Sustainable and Renewable Energy Development Authority (SREDA). For Bangladesh to achieve a greater share of renewable energy (RE) in its energy mix, the Project will support activities that will (i) transform SREDA into a strong RE project facilitation center to bring confidence to private RE investors and increase the number of approved RE projects; (ii) increase the capacities of appropriate government agencies to generate, process, obtain and disseminate reliable RE resource information for use by potential project developers and investors; (iii) increase the affordability of photo-voltaic solar lanterns (PVSLs) for low income households by supporting pilot PVSL diffusion activities; and (iv) increase the share of RE in Bangladesh’s power mix through facilitating the financing, implementation and operation of pilot (RE) energy projects using rice husk and solar panels. The lessons learned from the pilot plants will be utilized to scale-up the dissemination of PVSLs and investment in on-grid RE projects and RE technologies.
Development of Sustainable Renewable Energy Power Generation (SREPGen) Project has 4 major components:
Component 1: Policy support and capacity building to bring confidence to private RE investors; and to increase the number of approved RE projects
Component 2: Resource Assessment Support Program to increase capacities of relevant government agencies to generate, process, obtain and disseminate reliable RE resource information
Component 3: To support diffusion of affordable PV power and other RE technology solution for low-income households and associated livelihood enhancement
Component 4: Renewable energy investment scale-up to support for an increased share of Bangladesh’s power generation mix.
What we do
Reduction in the annual growth rate of GHG emissions from fossil fuel-fired power generation through the exploitation of Bangladesh’s renewable energy resources for power generation
Specific Objective 1:
Cumulative direct and indirect CO2 emission reductions by end of project (EOP) resulting from project RE technical assistance and investments, Mtons CO2
Specific Objective 2:
MW of RE power generation in Bangladesh, including on and off grid
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9549226760864258,
"language": "en",
"url": "https://www.investopedia.com/articles/investing/110613/market-value-versus-book-value.asp",
"token_count": 3183,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0196533203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6e140270-33fd-4e00-9ab2-94a495a95c5f>"
}
|
Book Value vs. Market Value: An Overview
Determining the book value of a company is more difficult than finding its market value, but it can also be far more rewarding. Many famous investors, including billionaire Warren Buffett, built their fortunes in part by buying stocks with market valuations below their book valuations. The market value depends on what people are willing to pay for a company's stock. The book value is similar to a firm's net asset value, which jumps around much less than stock prices. Learning how to use the book value formula gives investors a more stable path to achieving their financial goals.
- Book value is the net value of a firm's assets found on its balance sheet, and it is roughly equal to the total amount all shareholders would get if they liquidated the company.
- Market value is the company's worth based on the total value of its outstanding shares in the market, which is its market capitalization.
- Market value tends to be greater than a company's book value since market value captures profitability, intangibles, and future growth prospects.
- Book value per share is a way to measure the net asset value investors get when they buy a share.
- The price-to-book (P/B) ratio is a popular way to compare book and market values, and a lower ratio may indicate a better deal.
The book value literally means the value of a business according to its books or accounts, as reflected on its financial statements. Theoretically, it is what investors would get if they sold all the company's assets and paid all its debts and obligations. Therefore, book value is roughly equal to the amount stockholders would receive if they decided to liquidate the company.
Understanding Book Value
Book Value Formula
Mathematically, book value is the difference between a company's total assets and total liabilities.
Book value of a company = Total assets − Total liabilities
Suppose that XYZ Company has total assets of $100 million and total liabilities of $80 million. Then, the book valuation of the company is $20 million. If the company sold its assets and paid its liabilities, the net worth of the business would be $20 million.
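For illustration, the XYZ calculation above can be sketched in Python (a hypothetical helper, not from any accounting library):

```python
def book_value(total_assets, total_liabilities):
    """Book value = total assets minus total liabilities."""
    return total_assets - total_liabilities

# XYZ Company from the example above (figures in millions of dollars)
print(book_value(100, 80))  # 20
```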
Total assets cover all types of financial assets, including cash, short-term investments, and accounts receivable. Physical assets, such as inventory, property, plant, and equipment, are also part of total assets. Intangible assets, including brand names and intellectual property, can be part of total assets if they appear on financial statements. Total liabilities include items like debt obligations, accounts payable, and deferred taxes.
Book Value Examples
Deriving the book value of a company becomes easier when you know where to look. Companies report their total assets and total liabilities on their balance sheets on a quarterly and annual basis. Additionally, it is also available as shareholders' equity on the balance sheet.
Investors can find a company's financial information in quarterly and annual reports on its investor relations page. However, it is often easier to get the information by going to a ticker, such as AAPL, and scrolling down to the fundamental data section.
Consider technology giant Microsoft Corp.’s (MSFT) balance sheet for the fiscal year ending June 2020. It reported total assets of around $301 billion and total liabilities of about $183 billion. That leads to a book valuation of $118 billion ($301 billion - $183 billion). $118 billion is the same figure reported as total shareholders' equity.
Note that if the company has a minority interest component, the correct value is lower. Minority interest is the ownership of less than 50 percent of a subsidiary's equity by an investor or a company other than the parent company.
Mega retailer Walmart Inc. (WMT) provides an example of minority interest. It had total assets of about $236.50 billion and total liabilities of approximately $154.94 billion for the fiscal year ending January 2020. That gave Walmart a net worth of around $81.55 billion. Additionally, the company had accumulated minority interest of $6.88 billion. After subtracting that, the net book value or shareholders' equity was about $74.67 billion for Walmart during the given period.
Companies with lots of real estate, machinery, inventory, and equipment tend to have large book values. In contrast, gaming companies, consultancies, fashion designers, and trading firms may have very little. They mainly rely on human capital, which is a measure of the economic value of an employee's skill set.
Book Value Per Share (BVPS)
Book Value of Equity Per Share (BVPS)
When we divide book value by the number of outstanding shares, we get the book value per share (BVPS). It allows us to make per-share comparisons. Outstanding shares consist of all the company's stock currently held by all its shareholders. That includes share blocks held by institutional investors and restricted shares.
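As a rough sketch, BVPS is a single division; the figures below reuse this article's approximate Microsoft numbers, and the function name is my own:

```python
def book_value_per_share(book_value, shares_outstanding):
    """BVPS: total book value divided by shares outstanding."""
    return book_value / shares_outstanding

# Approximate Microsoft FY2020 figures from this article:
# ~$118 billion book value, ~7.57 billion shares outstanding
print(round(book_value_per_share(118e9, 7.57e9), 2))  # 15.59
```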
Limitations of Book Value
One of the major issues with book value is that companies report the figure quarterly or annually. It is only after the reporting that an investor would know how it has changed over the months.
Book valuation is an accounting concept, so it is subject to adjustments. Some of these adjustments, such as depreciation, may not be easy to understand and assess. If the company has been depreciating its assets, investors might need several years of financial statements to understand its impact. Additionally, depreciation-linked rules and accounting practices can create other issues. For instance, a company may have to report an overly high value for some of its equipment. That could happen if it always uses straight-line depreciation as a matter of policy.
Book value does not always include the full impact of claims on assets and the costs of selling them. Book valuation might be too high if the company is a bankruptcy candidate and has liens against its assets. What is more, assets will not fetch their full values if creditors sell them in a depressed market at fire-sale prices.
The increased importance of intangibles and difficulty assigning values for them raises questions about book value. As technology advances, factors like intellectual property play larger parts in determining profitability. Ultimately, accountants must come up with a way of consistently valuing intangibles to keep book value up to date.
The market value represents the value of a company according to the stock market. It is the price an asset would get in the marketplace. In the context of companies, market value is equal to market capitalization. It is a dollar amount computed based on the current market price of the company's shares.
Market Value Formula
Market value—also known as market cap—is calculated by multiplying a company's outstanding shares by its current market price.
Market cap of a company = Current market price (per share) × Total number of outstanding shares
If XYZ Company trades at $25 per share and has 1 million shares outstanding, its market value is $25 million. Financial analysts, reporters, and investors usually mean market value when they mention a company's value.
As the market price of shares changes throughout the day, the market cap of a company does so as well. On the other hand, the number of shares outstanding almost always remains the same. That number is constant unless a company pursues specific corporate actions. Therefore, market value changes nearly always occur because of per-share price changes.
Market Value Examples
Returning to the examples from before, Microsoft had 7.57 billion shares outstanding at the end of its fiscal year on June 30, 2020. On that day, the company's stock closed at $203.51 per share. The resulting market cap was about $1,540.6 billion (7.57 billion * $203.51). This market value is over 13 times the value of the company on the books.
Similarly, Walmart had 2.87 billion shares outstanding. Its closing price was $114.49 per share at the end of Walmart's fiscal year on January 31, 2020. Therefore, the firm's market value was roughly $328.59 billion (2.87 billion * $114.49). That is more than four times Walmart's book valuation of $74.67 billion that we calculated earlier.
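Both market-cap calculations above follow the same formula; a minimal sketch (figures taken from this article, helper name my own):

```python
def market_cap(price_per_share, shares_outstanding):
    """Market value: share price times shares outstanding."""
    return price_per_share * shares_outstanding

msft = market_cap(203.51, 7.57e9)  # Microsoft, fiscal year-end June 2020
wmt = market_cap(114.49, 2.87e9)   # Walmart, fiscal year-end January 2020
print(round(msft / 1e9, 1))  # 1540.6 (billions of dollars)
print(round(wmt / 1e9, 2))   # 328.59
```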
It is quite common to see the book value and market value differ significantly. The difference is due to several factors, including the company's operating model, its sector of the market, and the company's specific attributes. The nature of a company's assets and liabilities also factor into valuations.
Market Value Limitations
While market cap represents the market perception of a company's valuation, it may not necessarily represent the real picture. It is common to see even large-cap stocks moving 3 to 5 percent up or down during a day's session. Stocks often become overbought or oversold on a short-term basis, according to technical analysis.
Long-term investors also need to be wary of the occasional manias and panics that impact market values. Market values shot high above book valuations and common sense during the 1920s and the dotcom bubble. Market values for many companies actually fell below their book valuations following the stock market crash of 1929 and during the inflation of the 1970s. Relying solely on market value may not be the best method to assess a stock’s potential.
The examples given above should make it clear that book and market values are very different. Many investors and traders use both book and market values to make decisions. There are three different scenarios possible when comparing the book valuation to the market value of a company.
Book Value Greater Than Market Value
It is unusual for a company to trade at a market value that is lower than its book valuation. When that happens, it usually indicates that the market has momentarily lost confidence in the company. It may be due to business problems, loss of critical lawsuits, or other random events. In other words, the market doesn't believe that the company is worth the value on its books. Mismanagement or economic conditions might put the firm's future profits and cash flows in question. Many banks, such as Bank of America (BAC) and Citigroup (C), had book values greater than their market values during the coronavirus crisis.
Value investors actively seek out companies with their market values below their book valuations. They see it as a sign of undervaluation and hope market perceptions turn out to be incorrect. In this scenario, the market is giving investors an opportunity to buy a company for less than its stated net worth. However, there is no guarantee that the price will rise in the future.
Market Value Greater Than Book Value
The market value of a company will usually exceed its book valuation. The stock market assigns a higher value to most companies because they have more earnings power than their assets. It indicates that investors believe the company has excellent future prospects for growth, expansion, and increased profits. They may also think the company's value is higher than what the current book valuation calculation shows.
Profitable companies typically have market values greater than book values. Most of the companies in the top indexes meet this standard, as seen from the examples of Microsoft and Walmart mentioned above. Growth investors may find such companies promising. However, it may also indicate overvalued or overbought stocks trading at high prices.
Book Value Equals Market Value
Sometimes, book valuation and market value are nearly equal to each other. In those cases, the market sees no reason to value a company differently from its assets.
The price-to-book (P/B) ratio is a popular way to compare market value and book value. It is equal to the price per share divided by the book value per share.
For example, a company has a P/B of one when the book valuation and market valuation are equal. The next day, the market price drops, so the P/B ratio becomes less than one. That means the market valuation is less than the book valuation, so the market might undervalue the stock. The following day, the market price zooms higher and creates a P/B ratio greater than one. That tells us the market valuation now exceeds book valuation, indicating potential overvaluation. However, the P/B ratio is only one of several ways investors use book value.
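The P/B scenarios described above reduce to a single division; here is a minimal sketch with hypothetical numbers:

```python
def price_to_book(market_price_per_share, book_value_per_share):
    """P/B ratio: market price per share over book value per share."""
    return market_price_per_share / book_value_per_share

# A stock trading at $25 with $20 of book value per share
ratio = price_to_book(25, 20)
print(ratio)  # 1.25 -> market valuation exceeds book valuation
```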
Most publicly listed companies fulfill their capital needs through a combination of debt and equity. Companies get debt by taking loans from banks and other financial institutions or by floating interest-paying corporate bonds. They typically raise equity capital by listing the shares on the stock exchange through an initial public offering (IPO). Sometimes, companies get equity capital through other measures, such as follow-on issues, rights issues, and additional share sales.
Debt capital requires payment of interest, as well as eventual repayment of loans and bonds. However, equity capital creates no such obligation for the company. Equity investors aim for dividend income or capital gains driven by increases in stock prices.
Creditors who provide the necessary capital to the business are more interested in the company's asset value. After all, they are mostly concerned about repayment. Therefore, creditors use book value to determine how much capital to lend to the company since assets make good collateral. The book valuation can also help to determine a company's ability to pay back a loan over a given time.
On the other hand, investors and traders are more interested in buying or selling a stock at a fair price. When used together, market value and book value can help investors determine whether a stock is fairly valued, overvalued, or undervalued.
Book Value FAQs
How do you calculate book value?
The book value of a company is equal to its total assets minus its total liabilities. The total assets and total liabilities are on the company's balance sheet in annual and quarterly reports.
What is book value per share?
Book value per share is a way to measure the net asset value that investors get when they buy a share of stock. Investors can calculate book value per share by dividing the company's book value by its number of shares outstanding.
Is a higher book value better?
All other things being equal, a higher book value is better, but it is essential to consider several other factors. People who have already invested in a successful company can realistically expect its book valuation to increase during most years. However, larger companies within a particular industry will generally have higher book values, just as they have higher market values. Furthermore, some businesses are more profitable than others. Such firms can afford to pay a higher dividend yield. That may justify buying a higher-priced stock with less book value per share.
What is price per book value?
The price per book value is a way of measuring the value offered by a firm's shares. It is possible to get the price per book value by dividing the market price of a company's shares by its book value per share. A lower price per book value provides a higher margin of safety. It implies that investors can recover more money if the company goes out of business. The price-to-book ratio is another name for the price per book value.
The Bottom Line
Both book and market values offer meaningful insights into a company's valuation. Comparing the two can help investors determine if a stock is overvalued or undervalued given its assets, liabilities, and ability to generate income. Like all financial measurements, the real benefits come from recognizing the advantages and limitations of book and market values. The investor must determine when to use the book value, market value, or another tool to analyze a company.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9464318752288818,
"language": "en",
"url": "https://www.solarthermalworld.org/news/india-rajasthan-subsidises-electricity-bill-solar-water-heater-users",
"token_count": 679,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.11279296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:46fb28f9-b45b-404c-8f15-e480f202ddaf>"
}
|
The Indian state of Rajasthan has strongly supported the market for solar thermal technology. Since 2004, the state in the northwest of India has had a comprehensive mandatory law for solar water heaters: Solar energy use has been an essential requirement in setting up new hospitals, sports complexes, swimming pools, hostels, barracks, hotels, industrial buildings in which hot water is needed to process the goods, as well as public buildings and residential buildings with a plot size of 500 m2 and above. In 2011, the state government has also granted an indirect subsidy to residential users of Solar Water Heaters (SWH). Having come into force three months ago, the scheme allows every SWH user to receive a rebate on its electricity bill of INR 0.25 per kWh of electricity consumed, capped at a maximum of INR 300 per month – independent of the age of the system.
Photo courtesy: Jaideep Malaviya
For example, this means that an SWH user who consumes 325 kWh of grid-supplied electricity per month receives a reduction on its electricity bill of INR 81.25 (325 kWh × INR 0.25/kWh). The rebate is granted over a period of 5 years. The new regulation is part of the “Tariff for Supply of Electricity – 2011” (No. 5 page 8) by Jaipur Power Distribution Company, which was published on 7 October 2011. Prior to this date, the electricity company from the capital of Rajasthan paid an incentive of INR 0.05 per kWh of consumed electricity to users of solar water heaters.
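The rebate rule above (INR 0.25 per kWh, capped at INR 300 per month) can be sketched as follows; the parameter defaults reflect the article's figures, and a real implementation should confirm them against the tariff order:

```python
def swh_rebate(kwh_consumed, rate=0.25, cap=300):
    """Monthly electricity-bill rebate (in INR) for solar water heater users.

    rate and cap default to the article's figures: INR 0.25/kWh,
    capped at INR 300 per month.
    """
    return min(kwh_consumed * rate, cap)

print(swh_rebate(325))   # 81.25, matching the example above
print(swh_rebate(2000))  # 300, the monthly cap applies
```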
The new solar thermal incentive was an idea by the state government and could first be found in the Rajasthan Solar Energy Policy 2011 statement published in April 2011. In September, the Rajasthan Electricity Regulatory Commission included the new regulation in its updated tariff order.
According to the Rajasthan Renewable Energy Corporation Ltd (RRECL) – the nodal agency promoting renewable energy - the subsidy has been extended to the other two power distribution companies of the state: Ajmer Power Distribution Corporation and Jodhpur Power Distribution Corporation. The rebate is now available throughout the entire state and the incentive is automatically included in the monthly electricity bills created by the respective distribution companies. The state government later compensates the utilities for granting the reduction.
To apply for the rebate, consumers must fill in a form that can be downloaded on the website of the Rajasthan Electricity Regulatory Commission. This form has to be submitted to the executive engineer’s office at the respective electricity supplier, and it has to list all available details, such as the code number of the electric metre, date of purchase of the SWH, name of the SWH manufacturer, the type of SWH and its tank capacity.
Rajasthan’s electricity bill rebate is paid in addition to the investment subsidy from the federal financial scheme. RRECL says that the state has already received applications totalling 2,000 m2 of collector area since the electricity tariff rebate was announced in October and the industry is in optimistic mood. The state of Rajasthan enjoys one of the highest irradiations in India.
This text was written by Jaideep Malaviya, an expert in solar thermal based in India ([email protected])
What is lapping?
Lapping is a fraudulent accounting technique in which an employee alters the financial records to hide cash stolen from the company. Basically, the employee takes subsequent cash receipts and applies them to an earlier customer's accounts receivable to cover the theft. The employee must keep up this practice, otherwise the fraud will be discovered.
For example, the fraudster employee takes the payment from Customer A. To cover the invoice for Customer A, they then take the payment from Customer B. For Customer B’s invoice, they use the payment from Customer C. Once the fraudster starts down this path, it never stops!
How can the audit team or other employees detect lapping?
Lapping can be easily detected by understanding how cash receipts have been applied to accounts receivable. If cash received from a customer is being applied to a different customer account, then lapping may be in play (or it could be an error!).
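The mismatch between who paid and whose account was credited is exactly what a reviewer looks for. A toy sketch (illustrative only, not real audit software; the customer records are invented):

```python
# Toy illustration: flag cash receipts that were applied to a different
# customer's account than the one that actually paid.
receipts = [
    {"paid_by": "Customer A", "applied_to": "Customer B"},  # suspicious
    {"paid_by": "Customer B", "applied_to": "Customer B"},  # normal
    {"paid_by": "Customer C", "applied_to": "Customer A"},  # suspicious
]

def flag_possible_lapping(receipts):
    """Return receipts where the payer and the credited account differ.
    A mismatch may indicate lapping -- or simply a posting error."""
    return [r for r in receipts if r["paid_by"] != r["applied_to"]]

for r in flag_possible_lapping(receipts):
    print(f"Review: payment from {r['paid_by']} applied to {r['applied_to']}")
```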
How can lapping be prevented?
The best way to prevent lapping is to have proper segregation of duties in the cash or treasury department. This involves separating the role of receiving the cash and applying the cash in the accounting records to specific customer accounts. Lapping could still occur, but both employees would have to be in on it!
You might also be interested in...
What is check kiting?
Check kiting is a form of fraud that involves floating checks from one bank account to another. Generally, the objective of check kiting is for the client to attempt to make use of a fund or bank account that might not actually exist.
The cost of labor is broken into direct and indirect (overhead) costs. Direct labor costs are those expenses that are directly related to product production. Direct costs include the wages of employees who directly make the product. Indirect labor costs are those expenses related to supporting product production. Indirect costs would include the wages of office workers, security personnel, or employees who maintain factory equipment.
The work they provide isn’t directly related to producing a product. Many employees receive fringe benefits: employers pay for payroll taxes, pension costs, and paid vacations. These fringe benefit costs can significantly increase the direct labor hourly wage rate. Some companies fold fringe benefits into the direct labor rate, while other companies include fringe benefit costs in overhead if they can be traced to the product only with great difficulty and effort. Fixed expenses are those that will remain the same despite any change in the sales amount, production or some other activity.
Variable expenses are those that change with each unit of production; they are directly proportional to the level of production. When production of goods increases, variable costs also increase, and vice versa. Examples of variable expenses include production wages, raw materials, sales commissions, and shipping costs. Since fixed costs are more challenging to bring down (for example, reducing rent may entail the company moving to a cheaper location), most businesses seek to reduce their variable costs.
A cleaning business uses detergents, sponges and cloths to provide services, so the products consumed in a month contribute to selling expenses. COGS may include raw materials, direct labor, packaging and shipping. If you require help determining your small business’s payroll expenses and cost of labor, contact The Payroll Department, located in Brownsburg, Indiana.
You must report sales commissions as part of the operating expenses on your income statement. Based on accrual accounting, you must report all commissions in the period in which the related sales occur, even though you might pay some commissions to your employees in a later period.
What is sales commission in accounting?
A commission is a fee that a business pays to a salesperson in exchange for his or her services in either facilitating, supervising, or completing a sale. You can classify the commission expense as part of the cost of goods sold, since it directly relates to the sale of goods or services.
You would normally report selling expenses in the income statement within the operating expenses section, which is located below the cost of goods sold. Let us assume the company spends Rs. 1,000 per month to manufacture 100 packets of chips, a cost of Rs. 10 per packet. Of that Rs. 1,000, Rs. 500 goes to administration, insurance and marketing expenses, which are largely fixed. If production doubles to 200 packets, the total cost rises only from Rs. 1,000 to Rs. 1,500, because the Rs. 500 of fixed expenses does not change while the variable cost (Rs. 5 per packet) does. Each packet now costs Rs. 7.50 to produce, so the profit per packet increases.
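The packet example can be sketched in a few lines, assuming (as the figures imply) Rs. 500 of fixed cost and Rs. 5 of variable cost per packet:

```python
# Sketch of the chips example: fixed and variable cost split implied by
# the text (Rs. 500 fixed; (1000 - 500) / 100 = Rs. 5 variable per packet).
FIXED_COST = 500            # administration, insurance, marketing (Rs)
VARIABLE_COST_PER_UNIT = 5  # Rs per packet

def cost_per_unit(units: int) -> float:
    """Average cost per packet at a given production volume."""
    total = FIXED_COST + VARIABLE_COST_PER_UNIT * units
    return total / units

print(cost_per_unit(100))  # 10.0 -> total cost Rs 1000
print(cost_per_unit(200))  # 7.5  -> total cost Rs 1500
```

Spreading the same fixed cost over more units is what drives the per-packet cost down.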
Definition of Commissions
So if the company has to hold off on booking the revenue, then they also need to hold off on booking the expenses. Determine from your company’s accounting records the total amount of sales commissions expense your small business incurred during an accounting period, regardless of when you will pay your employees. For example, assume your small business incurred $100,000 in sales commissions expense during the year.
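A minimal sketch of the accrual treatment described above, using the $100,000 figure from the text (the amount actually paid during the year is a hypothetical number added for illustration):

```python
# Accrual-basis sketch: commissions are expensed in the period of the
# related sale, whether or not they have been paid yet.
commissions_incurred = 100_000  # expense for the year (income statement)
commissions_paid = 85_000       # hypothetical: cash actually paid to employees

# The unpaid portion is reported as a liability on the balance sheet.
commissions_payable = commissions_incurred - commissions_paid
print(commissions_payable)  # 15000
```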
A variable cost is a cost that changes in relation to variations in an activity. In a business, the “activity” is frequently production volume, with sales volume being another likely triggering event.
How do commission expenses get classified?
They are expenses that will have to be paid by the company even though there are any changes in business activities. They remain constant for a specific level of production over a certain period of time. However, it may change if the production level increases beyond a limit.
They are expenses that must be paid even when business activity changes, and they remain constant for a given level of production over a certain period of time, although they may step up if production grows beyond a limit. A company whose costs are mostly variable earns a steadier profit per unit than one carrying a large amount of fixed expenses: a firm with high fixed expenses will see its profit fall sharply when sales decline, which adds a level of risk to the company's stock. Equally, fixed costs allow a company to enjoy outsized profit growth as income increases, since they are incurred at a constant level. A company that seeks to increase its profit by decreasing variable costs may need to cut down on fluctuating costs for raw materials, direct labor, and advertising. However, the cost cuts should not affect product or service quality, as this would have an adverse effect on sales.
For example, a company that manufactures bolts spends more on raw materials and labor when producing 10,000 units compared to producing 5,000. However, salespeople work 40 hour weeks, so their salaries are paid regardless of sales level for a period.
The total variable cost is simply the quantity of output multiplied by the variable cost per unit of output. The total expenses incurred by any business consist of fixed costs and variable costs. Fixed costs are expenses that remain the same regardless of production output. Whether a firm makes sales or not, it must pay its fixed costs, as these costs are independent of output.
Some materials used in making a product have a minimal cost, such as screws, nails, and glue, or do not become part of the final product, such as lubricants for machines and tape used when painting. Such materials are called indirect materials and are accounted for as manufacturing overhead. Manufacturing overhead costs include indirect materials, indirect labor, and all other manufacturing costs. Depreciation on factory equipment, factory rent, factory insurance, factory property taxes, and factory utilities are all examples of manufacturing overhead costs.
The portion of the sales commissions expense that you have yet to pay your employees is money you owe, which you must report as a liability on your balance sheet. Some fixed expenses, like advertising and promotional expense, are incurred at the discretion of the company's management. It is understood that all such committed fixed expenses will still be incurred even if sales fall to zero.
Managing Commissions Under the New Revenue Recognition Standard
A variable cost is a corporate expense that changes in proportion to production output. Variable costs increase or decrease depending on a company’s production volume; they rise as production increases and fall as production decreases.
Cost Capitalization of Commissions Under ASC 606
Thus, the materials used as components in a product are considered variable costs, because they vary directly with the number of units manufactured. Selling expenses are sometimes conflated with cost of goods sold, but the two are distinct: COGS covers the costs and purchases needed to create products or deliver services for which consumers pay your small business money. The difference between sales revenue and COGS determines gross profit, from which overhead and operating expenses are deducted to calculate net profit. Most businesses figure out selling expenses monthly, but it can also be done weekly or quarterly.
What is a commission expense?
Sales commissions are considered to be operating expenses and are presented on the income statement as SG&A expenses. Sales commissions are not part of the cost of a product. Therefore, sales commissions are not assigned to the cost of goods held in inventory or to the cost of goods sold.
The proportions of costs incurred can vary dramatically by business, depending upon the sales model used. For example, a customized product will require considerable in-person staff time to obtain sales leads and develop quotes, and so will require a large compensation and travel cost. Alternatively, if most sales are handed off to outside salespeople, commissions may be the largest component of selling expense.
By reducing its variable costs, a business increases its gross profit margin or contribution margin. Conversely, when fewer products are produced, the variable costs associated with production will consequently decrease. Examples of variable costs are sales commissions, direct labor costs, cost of raw materials used in production, and utility costs.
Examples of variable costs include the costs of raw materials and packaging. Another reason is your cost of labor (plus your material and overhead costs) needs to be factored into your product prices. If you don’t include the total costs incurred by your company in your sales price, the amount of profit you make will be lower than you expect. Also, if customer demand for your products declines, or a competitor forces you to cut your prices, you will have to reduce your cost of labor if you want to stay profitable. A sales commission is money your small business pays an employee when she sells your products or services to customers.
Together, the direct materials, direct labor, and manufacturing overhead are referred to as manufacturing costs. The costs of selling the product are operating expenses (period cost) and not part of manufacturing overhead costs because they are not incurred to make a product. The cost of labor is the total amount of all salaries, wages, and other forms of income paid to employees. It also includes the total amounts of all employee benefits and federal, state, and local payroll taxes that your business has paid (not the portion your employees paid). Accounting for sales commissions requires companies to book the commission expenses when the company books the revenue from the deal the rep closed.
What Are Crypto Keys In Bo3 – Simply put, cryptocurrency is digital cash that can be used in place of traditional currency. Unlike traditional currency systems, there is no central authority or single ledger in place. In essence, cryptocurrency is an open-source protocol based on peer-to-peer transaction technologies that can be carried out on a distributed computer network.
One specific way in which the Ethereum Project is attempting to solve the issue of clever agreements is through the Foundation. The Ethereum Foundation was established with the objective of establishing software services around wise agreement performance. The Foundation has launched its open source libraries under an open license.
What does this mean for the larger neighborhood thinking about taking part in the advancement and application of smart agreements on the Ethereum platform? For starters, the major difference between the Bitcoin Project and the Ethereum Project is that the former does not have a governing board and therefore is open to contributors from all walks of life. Nevertheless, the Ethereum Project enjoys a far more regulated environment. For that reason, anyone wishing to add to the job must stick to a standard procedure.
As for the tasks underlying the Ethereum Platform, they are both making every effort to offer users with a new way to get involved in the decentralized exchange. The significant differences in between the 2 are that the Bitcoin protocol does not utilize the Proof Of Consensus (POC) process that the Ethereum Project uses.
On the one hand, the Bitcoin community has had some struggles in its efforts to scale its network. On the other hand, the Ethereum Project has taken an aggressive approach to scaling the network while also taking on scalability issues. As a result, the two projects offer different ways forward. In contrast to the Satoshi Roundtable, which concentrated on increasing the block size, the Ethereum Project will be able to implement improvements to the UTX protocol that increase transaction speed and reduce fees. In contrast to the Bitcoin Project's plan to increase the total supply, the Ethereum team will be working on reducing the rate of blocks mined per minute.
The significant distinction between the 2 platforms comes from the operational system that the 2 groups employ. The decentralized aspect of the Linux Foundation and the Bitcoin Unlimited Association represent a conventional design of governance that places an emphasis on strong community involvement and the promotion of agreement. By contrast, the ethereal foundation is dedicated to building a system that is flexible enough to accommodate changes and include brand-new features as the needs of the users and the industry change. This model of governance has actually been embraced by numerous distributed application groups as a way of managing their tasks.
The major distinction in between the two platforms comes from the reality that the Bitcoin community is largely self-sufficient, while the Ethereum Project expects the participation of miners to support its development. By contrast, the Ethereum network is open to factors who will contribute code to the Ethereum software stack, forming what is known as “code forks “. This feature increases the level of involvement desired by the neighborhood. This design also differs from the Byzantine Fault design that was embraced by the Byzantine algorithm when it was used in forex trading.
As with any other open source technology, much debate surrounds the relationship between the Linux Foundation and the Ethereum Project. The Facebook team is supporting the work of the Ethereum Project by providing their own structure and creating applications that integrate with it.
Recently during our article where we reviewed how Binance DEX will work, we briefly touched on different consensus algorithms and discussed what could be the most likely set up for Binance Chain. For cryptocurrencies, the consensus algorithms are an integral part of every blockchain network, it is the way that they ensure every transaction made is verified and secured. So you could definitely argue it is the most important part of a blockchain network, as it is responsible for maintaining the integrity and security of these distributed systems.
The first cryptocurrency consensus algorithm, and one you are probably already familiar with, was Proof of Work (PoW). It was designed by Satoshi Nakamoto to be implemented in Bitcoin, and if I have already lost you: don't worry! Just start here: What is Bitcoin and Blockchain? Today, we are going to learn about the Delegated Proof of Stake (DPoS) consensus algorithm, which you find in TRON or EOS.
To truly understand how Delegated Proof of Stake works, we first need to have a look at the basics of its predecessors: the Proof of Work and Proof of Stake algorithms.
Proof Of Work (PoW)
Proof-Of-Work or PoW is the first and original consensus algorithm used in a blockchain network. Through a process referred to as mining, the algorithm is used to securely confirm transactions and add new blocks to the chain.
A network of mining nodes is responsible for maintaining a Proof of Work system. This is normally done with specialised hardware, such as ASICs, competing to solve complex cryptographic (mathematical) problems. A miner can only add a new block to the blockchain if the solution for that block is found; essentially, only by completing a proof of work can a miner do so. In return, the miner is rewarded with newly created coins and all the transaction fees for that block. Mining remains costly overall: it consumes a lot of energy across many failed attempts before a solution is found, and ASIC hardware is becoming increasingly expensive.
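The puzzle-solving loop can be illustrated with a toy proof of work. This is a sketch of the general idea, not Bitcoin's actual protocol (real PoW compares a double-SHA-256 hash against a numeric target, among other differences):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 hash of (data + nonce) starts with
    `difficulty` zero hex digits -- a toy proof of work."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Many failed attempts happen before one nonce succeeds; raising
# `difficulty` by one multiplies the expected work by sixteen.
print(mine("block #1", difficulty=3))
```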
PoW blockchains are the standard in crypto for a fault-tolerance solution as they are considered the most secure and reliable. That being said they do see worries over the amount of effort(electricity) required to maintain the system and also regularly see debates on how they can continue to scale. Bitcoin, for example, has a very limited amount of transactions per second (though this limit does help it stay secure).
Proof of Stake (PoS)
The Proof of Stake consensus algorithm was designed to solve some of the emerging problems that have been noted on PoW-based blockchains. Specifically, it aims to solve the problem of cost that is associated with PoW mining (power consumption and hardware). You will not find any mining in a PoS system as the validation of new blocks happens in a deterministic way. New blocks are validated depending on the number of coins being staked, and the more staking coins a person holds, the more chance they have of being picked as the block validator (also referred to as minter or forger).
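The stake-weighted selection can be sketched as follows; the names and stake amounts are made up for illustration, and real PoS chains combine this weighting with additional randomness sources:

```python
import random

# Toy proof-of-stake selection: the chance of being picked to validate
# the next block is proportional to the coins each participant staked.
stakes = {"alice": 50, "bob": 30, "carol": 20}

def pick_validator(stakes: dict) -> str:
    """Pick one staker at random, weighted by stake size."""
    holders = list(stakes)
    weights = [stakes[h] for h in holders]
    return random.choices(holders, weights=weights, k=1)[0]

# Over many rounds, alice (50% of the stake) validates about half the blocks.
counts = {h: 0 for h in stakes}
for _ in range(10_000):
    counts[pick_validator(stakes)] += 1
print(counts)
```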
The key difference here is that while a PoW-based blockchain will rely on an external investment such as paying for power consumption or better hardware, a Proof of Stake blockchain is secured by the cryptocurrency itself (which would be an internal investment).
Another benefit of a PoS system is that it makes attacking a blockchain way more costly, as any successful attack would need the ownership of at least 51% of the total existing coins. Huge financial losses would be incurred for any failed attack. Yet despite the positive arguments for a PoS-based chain, we are yet to really see them being tested on a big scale.
Delegated Proof of Stake (DPoS)
The Delegated Proof of Stake (DPoS) consensus algorithm is seen in many cryptocurrency projects today such as Bitshares, Steem, Ark, TRON, EOS or Lisk and was developed by Daniel Larimer in 2014.
A DPoS-based blockchain makes use of a voting system where stakeholders decide to outsource their work to a third-party. Coin holders are able to vote and decide which third parties secure the network on their behalf, generally, these third parties are referred to as delegates or witnesses. It is these delegates that are responsible for securely confirming transactions and validating new blocks. The voting system can vary from project to project, but generally, we see that the voting power is proportional to the amount of coins a user holds and that delegates have to submit a proposal when asking for votes. Usually, delegates receive rewards for securing the network which they also share with their respective electors.
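A minimal sketch of the election mechanics described above, with hypothetical voters and delegates (real DPoS chains elect larger producer sets, e.g. EOS elects 21; here we elect two):

```python
# Toy DPoS election: stakeholders vote for delegates, votes are weighted
# by the voter's stake, and the top N delegates produce blocks in
# round-robin order. All names and numbers are illustrative.
votes = [
    ("alice", 50, "node-1"),  # (voter, stake, delegate voted for)
    ("bob",   30, "node-2"),
    ("carol", 20, "node-1"),
    ("dave",  10, "node-3"),
]

def elect_delegates(votes, n=2):
    """Tally stake-weighted votes and return the top-n delegates."""
    tally = {}
    for _voter, stake, delegate in votes:
        tally[delegate] = tally.get(delegate, 0) + stake
    return sorted(tally, key=tally.get, reverse=True)[:n]

producers = elect_delegates(votes, n=2)
print(producers)  # node-1 (70 stake-votes) and node-2 (30) win the slots

# Block production then rotates among the elected delegates:
for height in range(4):
    print(height, producers[height % len(producers)])
```

A delegate that misbehaves simply loses votes at the next election and drops out of the producer set.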
In this way, the DPoS algorithm produces a voting system which is heavily tied to the delegates' reputation. If an elected node does not work as expected or is simply not up to the task, it will quickly be expelled and another delegate will be chosen. We also note that, so far, DPoS blockchains appear to be much more scalable and capable of processing a much higher number of transactions per second (TPS) than PoW- or PoS-based chains.
DPoS vs PoS
While both algorithms are very similar due to the stakeholding aspect, DPoS provides a democratic voting system where block producers are elected. Delegates are motivated to stay honest and provide an efficient system, or they will simply be voted out. Furthermore, DPoS blockchains tend to be faster than PoS simply because their block producers have purpose-built infrastructure for the task at hand (think enterprise-grade systems with datacenter connections), whereas PoS-based chains rely on Bob and Alice staking their coins from home or setting up a VPS.
Not to say the latter cannot work, and you can definitely argue it would be more decentralized. However, you can understand why venture capital groups are not going to make that gamble, which could be another reason why more development groups choose to build DPoS-based chains: it is potentially easier to attract investment.
DPoS vs PoW
Where we could say that PoS was created simply to solve the emerging problems of PoW, DPoS also tries to streamline the block production process. It is for this reason that DPoS systems are able to process large numbers of blockchain transactions in a short amount of time. PoW is still considered the most secure consensus algorithm, which is why the largest volume of money transmission still goes through PoW-based blockchains. PoS is faster than PoW and potentially has more use cases. But DPoS is not really used in the same way as PoW or PoS.
DPoS only uses staking to decide or elect the block producers, its actual block production is predetermined and no one is really competing against each other for a reward. Every delegate or witness will get a turn at block production, which is why there have been debates that DPoS should actually be considered a Proof of Authority system.
DPoS is substantially different from any Proof of Work based blockchain and even a PoS-based one. It only incorporates stakeholding for voting as a mechanism to keep the elected block producers (delegates) honest and efficient. Yet the actual block production is very different from any PoS system which is why in most cases it provides a higher performance in terms of transactions per second.
1. Fault-tolerance solution: The ability to continue operating properly in the event of failure of some (one or more) components. [↩]
2. Money transmission: the act of moving money between bank accounts or from one person to another, or between organizations. [↩]
Blog by Neil Jeffery, published on Huffington Post
A few months ago, I took part in a Guardian Global Development Professionals Network debate on the effectiveness of Public-Private Partnerships (PPPs) in international development. The other panellists in general took a questioning view of PPPs, understandably cautious after high-profile cases between the private sector and governments have not delivered what was promised, or did not offer value for money.
As I listened to my fellow panellists, I realised that although we all had different views of the strengths and weaknesses of PPPs, we all agreed about what PPPs are trying to achieve, or at the very least, the gaps that they are trying to bridge.
At current funding rates, the Sustainable Development Goals will not be realised. The World Bank estimates that countries need to invest $114 billion per year (more than triple their current spending) on infrastructure to reach the drinking water and sanitation targets – and this doesn’t include the cost of operating and maintaining infrastructure after it has been built.
This projected shortfall means the opportunity to reach millions of people who need improved water and sanitation will be missed – and by a long way. Right now, one in four urban residents in sub-Saharan Africa currently lack access to safe sanitation services – and as cities grow, this number is likely to get worse.
To combat this challenge, African and Asian governments cannot be solely dependent on international funders – they need to find their own ways to generate resources for the provision of basic services.
The private sector can provide some of these resources. But is it possible to harness the power of the market to reach the poorest with adequate services at an affordable cost?
This report is based on findings from three new WSUP reports:
Public-Private Partnerships explained: urban sanitation delivery in Bangladesh
Public-Private Partnerships explained: urban sanitation delivery in Kenya
Public-Private Partnerships explained: urban sanitation delivery in Zambia
It seems that more agencies and municipalities are implementing cashless toll collection systems. Some use electronic readers, such as E-ZPass, to automatically charge drivers for crossing bridges and driving on roads. Others use systems that snap photos of license plates. The images are used to track travel, and drivers are billed based on these records.
Nearly half of the nation’s 336 tolled highways, bridges, and tunnels use cashless tolling exclusively, according to the International Bridge, Tunnel, and Turnpike Association, a respected industry group. The trend toward collecting tolls electronically is expected to increase significantly in the years ahead.
Here are negative and positive impacts this system could have on your projects. If you’re thinking about implementing an electronic toll payment system on your bridges or roads, this article will help you decide if that’s a smart move.
The transition to automated toll collection has presented drivers with significant challenges.
The biggest issue: Even though it’s easier than ever for drivers to purchase and refill automated toll passes, many forget to do it or choose not to. Because there are fewer (or no) toll lanes on bridges and roads that accept money, they’re forced to wait in long traffic lines to pay using cash, or they get tickets in the mail and face large penalties for not paying their tolls.
In addition to collecting money, counting it, and making change, toll collectors provide other services that drivers miss when they are no longer available, including:
- Warnings. Toll collectors tell drivers when there are accidents ahead or other issues that could cause traffic slowdowns on bridges and roadways.
- Directions. Of course, GPS is ubiquitous these days, but many drivers, especially those who are older, prefer to get directions from real human beings who know the local areas well. People also like to get a second opinion that their GPS systems are accurate at recommending the best, most efficient routes.
- Monitoring. Toll collectors look out for inebriated people and others who shouldn’t be behind the wheel, and report them to police.
- Advice. There’s plenty of information available online about hotels, restaurants, and gas stations. However, many travelers appreciate personal advice when selecting lodging and dining options — or finding mechanics who offer the automotive services they need.
- Tips. Toll personnel often provide advice on how to get from one place to another faster and more efficiently.
- Friendliness. Many locals enjoy a cheerful “good morning” at the beginning of the day or a “good night” at the end from a familiar face in a toll booth.
- Emergency help. Toll collectors have been known to use their own money when drivers don’t have enough cash to cover tolls. Sometimes it’s a loan. Other times, it’s a gift.
Other concerns for drivers include:
- Surprises. When people hand over cash, they know how much they’re spending on tolls in real time. With cashless systems, they may not be aware of how much they’re spending to cross bridges and use highways until they get their statements.
- Privacy issues: Many individuals and advocacy organizations are concerned that automated toll collection provides a way to track people as they drive from place to place.
- Billing errors. Drivers have little recourse when they’re charged in error or are forced to pay unexpected penalties that can far exceed the actual tolls.
Many of these issues can be mitigated with signage, good communications, well-trained customer service reps, and additional traveler advisory kiosks at rest stations.
For toll collectors
Toll collectors who are forced to leave their dependable jobs often face reduced circumstances. Even with help from their labor unions, it’s difficult for many displaced workers to find jobs that pay as much or provide comparable benefits. Allowing plenty of time to transition toll workers to new jobs will help prevent the negative impacts of a forced job change.
There are many benefits to cashless tolling for drivers, including:
- Convenience. Most people prefer the ease of loading a pass or paying a monthly bill online compared to having to find cash while driving and waiting in long lines to pay tolls.
- Less wait time. Cashless tolling eliminates the human factor and keeps traffic moving through toll plazas.
- Enhanced safety. Drivers in cashless toll plazas are less likely to unexpectedly change lanes to find shorter toll lines or slow down or brake as they approach the toll booths. This reduces the possibility of accidents.
- Improved traffic. Electronic toll collection allows officials to raise and lower tolls to control traffic. Higher tolls discourage drivers from entering congested areas, while lower ones encourage them to use less busy, albeit often less convenient, bridges and roads.
Make sure you communicate these benefits to motorists when you implement an electronic system.
For toll collectors
- Enhanced workplace safety. Toll collectors are frequently victims of robberies, harassment, and unwanted advances. Moving them into other jobs helps prevent crimes against them.
- Better use of skills. People who shift from toll collecting jobs to other positions in their agencies usually end up doing more valuable and engaging work. This includes ensuring safety and security, helping out in emergencies, and managing traffic.
- Greater job satisfaction. Toll collecting jobs are mentally exhausting because of boredom and the stress of dealing with difficult drivers. Workers who move to other types of positions generally experience less stress and enjoy their jobs more.
Highlight these benefits to help reduce anxieties about making a job change.
For the environment
Electronic toll collection has been shown to reduce pollution. Vehicles idling in toll lines release contaminants that foul the air and contribute to global warming. The issue is eliminated when traffic can pass through toll plazas without stopping. Promote this benefit when transitioning to a new cashless system. It will resonate with drivers who care about the planet.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9705122709274292,
"language": "en",
"url": "https://investmentvaluefinders.com/money-changing-medicine-health-care/",
"token_count": 714,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:56d992de-a779-442f-914a-56f62eb9f30b>"
}
|
Medicine and health care have become an interesting binomial in which to invest, especially if you approach laboratories for rare diseases.
Rare diseases affect more than 230 million people in the world, yet the great majority of large pharmaceutical companies pay them little attention. Even so, a few laboratories work on research and development for this kind of illness.
Just as a backdrop, I can tell you that global sales of orphan drugs (they are called so because they are intended for people with rare conditions) are expected to jump as more companies enter this field, increasing 11 percent each year to reach 176 billion dollars in 2020, according to the research group EvaluatePharma.
Many investors have taken note of this. For example, according to the orphan diseases index compiled by JMP Securities, a San Francisco investment bank, the value of US biotechnology companies focused on rare diseases increased by 56% in the past year alone.
That performance was even better than the broader biotechnology index on the Nasdaq, which rose 37 percent over the same period, and well ahead of the S&P 500 index, which gained 5 percent.
Investing in medicine and health care
Venture capitalists investing in early stage companies have also increased their stakes in the US. In 2012-14, these early-stage companies raised more than $500 million a year, compared to 2008, when they raised less than $200 million.
Some companies like Revival Therapeutics have begun to obtain patents from the US FDA for the treatment of diseases such as cystinuria, and they keep innovating, researching, improving and developing new drugs in this industry. This makes them a good option to invest in, as they trade on the OTC (Over The Counter) market, where you can set aside some dollars and earn good investment returns in the medium term.
Orphan drugs account for about one in three drug approvals in the United States, but no one in the industry expects the proliferation of drugs to end in the near future. With 7,000 rare diseases, there is a lot of potential, because only 10% have effective treatments.
Many of the rare or uncommon diseases are potentially deadly or chronically debilitating, affecting a small number of people compared to the general population, and for which therapeutic resources are almost non-existent. Because of their rarity, they require combined efforts.
In medicine, a disease is considered rare if it affects fewer than one in two thousand people. These were also called orphan diseases because no biomedical specialty was responsible for the clinical conditions they present, and also because they were "orphans" of effective treatments.
For those affected and their families, these diseases mean great suffering and effort in day-to-day life. Obtaining a diagnosis, if one exists, often takes several years. Many doctors are hardly aware of these diseases, and there are few specialists or centers familiar with the corresponding treatments.
Rare diseases cannot be prevented. The number depends on the degree of specialization used to classify the different disorders. Until now, in the field of medicine, disease is defined as an alteration of health status, which presents with a single pattern of symptoms and a single treatment.
Hence the opportunity to invest in this binomial of medicine and health, where companies actively seek licensing, acquisition and partnership opportunities from industry and academia, both to generate profits for their investors and to support people whom not every pharmaceutical company takes into account. Are you ready to invest? You will not regret it.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9596423506736755,
"language": "en",
"url": "https://www.ethereumnews.news/learn/",
"token_count": 513,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1201171875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:427d92a4-ead9-4e86-8906-275144f76632>"
}
|
How does Ethereum mining work?
Ethereum uses a Proof-of-work consensus mechanism for which mining is essential. Ethereum miners use their time and all of their processing power to solve the cryptographically difficult puzzles. If miners are successful, they can add the blocks to the Ethereum blockchain and can earn the reward in return.
Different Types of Mining:
CPU mining – This is the most basic form of mining: anybody, anywhere in the world, can use their computer to mine. However, CPU mining is no longer viable for Ethereum.
GPU mining – A GPU (graphics processing unit) is part of a computer's video rendering system, where it helps render 3D visuals and graphic effects. A GPU is a far more powerful mining system than a CPU, and some coins like Monero are mined via GPU. However, as network difficulty increased, even GPU mining became less viable.
FPGA mining – An FPGA (field-programmable gate array) is a device containing an array of configurable logic gates that can be programmed to compute almost any function over an input data stream. This makes it well suited to mining: it can be configured to compute hashes repeatedly until an output satisfies the difficulty target.
How will the Ethereum scale?
For the past few months, we have all been hearing that the Ethereum platform is going to fail due to its inability to scale. The main problem with Ethereum scalability is that the network protocol requires every node to process every transaction. Miners race to find a nonce that meets the target difficulty, and each node must verify that the miner's work is valid while keeping an accurate copy of the network state. This entire process limits the transaction throughput of the network.
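The nonce race described above can be sketched in a few lines. This is a toy illustration only: it uses SHA-256 for simplicity, whereas Ethereum's proof of work actually used the Ethash algorithm, and real difficulty targets are vastly harder than the one shown here.

```python
import hashlib
from typing import Optional

def mine(block_data: bytes, difficulty_bits: int, max_nonce: int = 2**32) -> Optional[int]:
    """Race to find a nonce whose hash falls below the target."""
    target = 2 ** (256 - difficulty_bits)  # more difficulty bits -> smaller target -> harder puzzle
    for nonce in range(max_nonce):
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # valid proof of work found
    return None  # gave up without finding a valid nonce

def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Any node can check a claimed nonce with a single hash."""
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)
```

Note the asymmetry that makes the scheme work: finding the nonce takes many hash attempts, but verifying it takes exactly one.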
Sharding is a proposed solution to Ethereum's scalability problem. Sharding means partitioning a huge database into smaller, faster pieces known as shards. Each shard is going to have its own transaction chain. Ethereum accounts will be assigned to a shard, and transactions with other accounts can happen on that shard. The idea is to then facilitate cross-shard communication. While sharding offers scalability benefits by splitting the load of network transactions, it also poses some new security problems. With Proof of Work, an attacker needs 51% of the hash rate to launch an attack on the whole network; once the network is split into many shards, a much smaller share of the hash rate is enough to attack a single shard successfully. While developers have proposed some solutions to this problem, they still need to be tested.
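The account-to-shard assignment can be illustrated with simple hash partitioning. This is only a sketch of the general idea, not the actual Ethereum 2.0 specification; the shard count and hashing scheme below are arbitrary choices for illustration.

```python
import hashlib

NUM_SHARDS = 64  # arbitrary shard count for illustration

def shard_of(account: str) -> int:
    """Deterministically assign an account address to a shard by hashing it."""
    digest = hashlib.sha256(account.encode()).digest()
    return int.from_bytes(digest, "big") % NUM_SHARDS

def needs_cross_shard_message(sender: str, receiver: str) -> bool:
    """Transactions between accounts on different shards require cross-shard communication."""
    return shard_of(sender) != shard_of(receiver)
```

Because the assignment is a pure function of the address, every node agrees on which shard owns which account without any coordination.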
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9669414162635803,
"language": "en",
"url": "https://cdrta.com.au/2021/03/08/how-owners-of-bitcoin-and-other-cryptocurrencies-will-be-taxed/",
"token_count": 1884,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.38671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:240f6da5-4d0c-4030-908d-ce9673d7d00d>"
}
|
The price of Bitcoin and many other cryptocurrencies has surged in recent years. One bitcoin currently equals just under $60,000 Australian dollars. As a result of this financial phenomenon, many investors around the world have managed to make large sums of money in a short period of time. At the same time, many inexperienced investors have jumped on the bitcoin bandwagon and lost large amounts of money as well.
An important thing for those who are already involved in or considering getting into bitcoin and cryptocurrencies are the tax implications for investing and trading these digital products. As cryptocurrencies become more and more entrenched in the mainstream, one of the biggest challenges in accounting for them has been that some investors may not be aware that it is a requirement for them to disclose crypto assets to their accountants.
What is cryptocurrency and how to get it?
Cryptocurrencies also known as digital currencies are created and held electronically. Unlike traditional money, cryptocurrencies aren’t printed and no one controls them. Cryptocurrencies are produced by people and in more recent times by businesses who are running computers all over the world, by using software that attempts to solve mathematical problems.
There are estimated to be over 4,000 cryptocurrencies in the world. Bitcoin is the most well-known type of cryptocurrency. Other well-known kinds include; Ethereum, Ripple XRP, Litecoin and NEO.
There are three ways to obtain cryptocurrencies. You can get them by mining them, buying them or providing goods and services in order to earn them.
Mining is the process by which cryptocurrency is created. A computer will crunch through a number of difficult mathematical problems and by succeeding with the right answers the person who uses the computer is rewarded with a unit of the currency.
A person can also create an 'online wallet' and acquire the digital currency by visiting a 'cryptocurrency exchange' that puts sellers in touch with people who are interested in buying. Buyers pay for the cryptocurrency they purchase by transferring money via their online banking account.
As cryptocurrency has become a more widely accepted virtual currency among businesses and individuals around the world, the third and final way to obtain it, providing goods and services, has become increasingly common. Most commonly this happens in restaurants and cafes that accept bitcoin as a form of payment. It can also be done by completing online surveys or undertaking some form of affiliate marketing.
What is ATO doing to tax cryptocurrency?
The ATO has estimated that somewhere between 50,000 and one million Australians have currently invested in crypto related assets. Many of these individuals have failed, or will fail, to properly report the profits they have made for tax purposes.
In response to this, the ATO is gathering bulk records from Australian cryptocurrency designated service providers (DSPs) as part of a data matching program which aims to ensure that people trading in cryptocurrency are paying the correct amount of tax.
The data that has been provided to the ATO consists of cryptocurrency sales and purchase information. The data will identify Australian taxpayers who have failed to disclose their income details correctly.
A number of Australian taxpayers may find themselves being contacted by the ATO as a result of the data matching exercise. Those who are contacted will be given an opportunity to amend their tax returns to include any information highlighted by the ATO.
Individuals will have a timeframe of 28 days to clarify any information that has been obtained via the data provider. Thousands of letters have already been issued to taxpayers across the country and more will continue to be sent.
Recipients of the letters who fail to modify their tax return to include the missing transactions will be met with more forceful compliance action which could include a full tax audit.
How the ATO taxes cryptocurrencies
Generally speaking, there is no income tax or GST implications if you are not in business or carrying on an enterprise and you simply pay for goods or services using cryptocurrency. An example of this would be purchasing personal goods or services on the internet with bitcoin.
Cryptocurrency is regarded as a capital gains tax (CGT) asset, so CGT potentially applies when an Australian resident disposes of a unit of the currency. Despite this, transactions are exempt from CGT if the cryptocurrency is used to pay for goods or services for personal use, for example online hotel bookings, or at a café or restaurant which accepts bitcoin.
Transactions are also exempted from CGT if the cost of the cryptocurrency used to pay for the transaction is under $10,000 (this is the exemption for personal use assets).
If the cost of the cryptocurrency used in a transaction surpasses $10,000, the personal use exemption will be unavailable and Capital Gains Tax will apply. The capital gain is calculated as the increase in value of the cryptocurrency between the time it was acquired and the time it was disposed of.
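The calculation described above can be sketched as follows. This is a simplification for illustration only, not tax advice: it ignores details such as the CGT discount for assets held over twelve months, and the threshold logic is an assumption based on the rules as summarized in this article.

```python
def capital_gain_aud(cost_base: float, proceeds: float, personal_use: bool) -> float:
    """Gain on disposal = proceeds minus cost base, unless the
    personal use asset exemption applies (cost under $10,000)."""
    PERSONAL_USE_LIMIT = 10_000
    if personal_use and cost_base < PERSONAL_USE_LIMIT:
        return 0.0  # exempt from CGT as a personal use asset
    return proceeds - cost_base
```

For example, cryptocurrency bought for $5,000 and spent on personal goods produces no taxable gain, while the same coins acquired as an investment produce a $3,000 gain if disposed of for $8,000.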
Australian Businesses using bitcoin to buy and sell goods and services
If you are a business owner who has received cryptocurrency for your goods or services, you will need to record the value of the cryptocurrency units in Australian dollars as a requirement when reporting your ordinary income for tax purposes.
When your business purchases items (including trading stock) with cryptocurrency you are entitled to a tax deduction based on the arm’s length value of any item you acquire.
The disposal of cryptocurrency may also have capital gains tax consequences if you are running a business. However, it is important to know that any capital gain is reduced by the amount included in assessable income as ordinary income (so you aren't taxed twice on the same amount).
Mining cryptocurrency as a business
If your business is mining cryptocurrency, any income resulting from the transfer of the mined digital currency to someone else is included in assessable income. Any expenses which occur as a result of the mining activity are allowed as a deduction.
Any losses resulting from the mining of cryptocurrency might also be subject to the non-commercial loss provisions, this means they won’t automatically be available to offset against other income (there are tests you will have to meet first).
The non-commercial loss provisions exist to prevent individuals or businesses from using up losses from activities which they never realistically had a chance of making a profit or which don’t arise from genuine business activities. For example; people who are trading as a hobby or are engaging in speculation which is basically a form of gambling.
If you are carrying on a business of mining with the intention to sell the currency, it is a form of trading stock. You will therefore need to bring into account any currency on hand at the end of each income year.
Taxpayers conducting a cryptocurrency exchange (including ATMs)
If you are running a business that aims to buy and sell cryptocurrency as an exchange service, the proceeds consequent from the sale of the currency are included in assessable income.
Any expenses accumulated as a result of providing the exchange service, including the acquisition of the cryptocurrency for sale, are tax deductible.
Disposing of cryptocurrency acquired for investment
When acquiring cryptocurrency as an investment, CGT will apply, however when the cost of the cryptocurrency does not exceed $10,000 the personal use asset exemption may apply if you can prove that the cryptocurrency was to fund personal consumption.
The ATO pays extra attention to taxpayers who try to rely on the personal use asset exemption to avoid CGT; prepare to be asked to provide evidence that you either did – or intended to – use your cryptocurrency to fund personal spending on services and goods.
When the price of the cryptocurrency surpasses $10,000, the personal use exemption will not be available and CGT will apply. The capital gain is calculated as the increase in value of the cryptocurrency between the time it was acquired and the time it was disposed of.
If the transactions result in profit-making undertaking or plan then the profits on disposal of the cryptocurrency will be assessable income since you will be viewed as a trader instead of an investor.
Trading cryptocurrency for business or profit
The rules around trading cryptocurrency for business or profit in comparison to buying and selling cryptocurrency as an investment are pretty much the same as those applying to share traders versus investors. There are a number of things to take into consideration but, generally speaking, if you are holding the cryptocurrency with a goal of long term gain, you are most likely an investor. If you are buying and selling cryptocurrency over the short term with a view to making profits, you are likely to be perceived as a trader.
Every Australian dealing with cryptocurrency needs to maintain the following records for tax purposes: the date of every transaction, the amount in Australian dollars at the time when the transaction was made, why the transaction was made and the details of the other party involved in the transaction.
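The four record-keeping requirements listed above map naturally onto a simple data structure. The field names and example values below are illustrative assumptions, not an ATO-prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CryptoTransaction:
    """One record per transaction, covering the four details the ATO expects."""
    when: date           # date of the transaction
    value_aud: float     # value in Australian dollars at the time of the transaction
    purpose: str         # why the transaction was made
    counterparty: str    # details of the other party involved

# Example ledger entry (values are hypothetical):
ledger = [CryptoTransaction(date(2021, 3, 1), 1_500.0, "purchase of BTC", "exchange account ref 123")]
```

Keeping records in a structured form like this, rather than scattered across exchange statements, makes it far easier to respond if the ATO's data matching program flags a discrepancy.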
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9327220320701599,
"language": "en",
"url": "https://circularsupplychains.com/2019/10/01/is-the-circular-economy-a-myth/",
"token_count": 3132,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0302734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d52685c8-f362-4b73-85fe-8e6a9b555699>"
}
|
The circular economy is a generic term for an industrial economy that is, by design or intention, restorative and in which material flows are of two types, biological nutrients, designed to re-enter the biosphere safely, and technical nutrients, which are designed to circulate at high quality without entering the biosphere.
The term encompasses more than the production and consumption of goods and services. It includes a shift from fossil fuels to renewable energy, the role of diversity as a characteristic of resilient and productive systems and of the role of money and finance as part of the wider debate. To move forward a common language and metrics need to be created around how to manage stocks of products in a circular economy. If you can’t measure it, you can’t manage it!
Pursuing the circular economy opportunity in an ambitious way would represent a big shift in economic thinking and priorities. In a circular economy a more balanced set of metrics needs to be developed to measure the success of an economy, metrics more aligned with consumer utility and public expectations(1). For example, sharing and digitization have major potential to increase consumer utility but are not well captured in GDP. Currently, we have no established metrics for the utilization of key infrastructure and products, for their longevity, or for success in preserving material and ecosystem value. Articles, policy seminars, statements, and targets for these topics are rare, compared with the pervasive focus on improving flows, as measured by GDP. Today’s linear economic model measures success almost exclusively in terms of a flow metric (GDP), and economic policies are designed to maximize flow. Most companies have linear business models. The linear “Make, Use, Dispose” thinking that feeds the GDP metric is well known and misses the impact this model has on our world as a whole. A new system and way of thinking needs to be implemented.
Systems thinking, as it applies to the circular economy, is the ability to understand how things influence one another within a whole. Elements are considered as ‘fitting in’ their infrastructure, environment and social context. Whilst a machine is also a system, systems thinking usually refers to nonlinear systems: systems where through feedback and imprecise starting conditions the outcome is not necessarily proportional to the input and where evolution of the system is possible: the system can display emergent properties.
Linear “Make, Use, Dispose” industrial processes and the lifestyles that feed on them deplete finite reserves to create products that end up in landfills or in incinerators. Circular “Make, Use, Re-use” processes attempt to maximize resource utilization thereby re-balancing the attention paid to stocks and flows. Effective flows achieve efficiency by optimizing the system, not the part. This is a key tenet to the idea of a circular economy.
Efficiency can lower the amount of energy and material used per dollar of GDP, but fails to decouple the consumption and degradation of resources from economic growth. The linear ‘Make, Use, Dispose’ economic model relies on large quantities of easily accessible resources and energy. Much of our existing efforts to decouple the global economy from resource constraints focus on driving ‘linear’ efficiencies—i.e., a reduction of resources and fossil energy consumed per unit of manufacturing output. The risk to supply security and safety associated with long, elaborately optimized global supply chains appears to be increasing. Against this backdrop, business leaders are in search of a ‘better hedge’ and many are moving towards an industrial model that decouples revenues from material input. Proponents of the circular economy stress that focusing on efficiency alone will not alter the finite nature of resource stocks, and—at best— simply delays the inevitable. A change of the entire operating system is necessary.
This represents a considerable challenge to say the least so, where would we start? The current linear supply chain has been developed since the dawn of civilization. It moves usually homogeneous materials from a single source towards an ever-widening audience of consumers but the idea of a circular economy proposes to include a system to do just the opposite; move heterogeneous materials from a wide audience back towards…..what? The original source? A re-manufacturer? A host of liquidation or recycling options? Landfill? You quickly begin to recognize the enormity and complexity of the challenge. It’s not simply a matter of adding a reverse chain but adding a reverse chain that caters to a very diverse audience of, not end users (this breaks the concept of circularity) but re-users and their re-uses can be greater than the uses of the original end users of the supply chain.
As with every major transformation, it is vital to take a systematic approach, unraveling the issues at the point of greatest leverage. We can identify several basic commonalities that address circularity between the supply and reverse movement of materials so, in an effort to bring this challenge back into perspective, let’s start with those. Network design, materials purity and demand-side business model innovation each involves important commonalities between supply and reverse chains as it applies to circularity but, it is vital to see all three as a whole, as they are so intertwined.
The Ellen MacArthur Foundation, in their series of reports titled, “Towards the Circular Economy”, suggest that materials purity – reorganizing and streamlining pure materials flows – is the most viable starting concept with the potential for carrying circularity to a tipping point(2). I don’t argue that conclusion considering a global perspective and projects already being undertaken but, network design and demand-side business model innovation need to be addressed simultaneously. Having pure materials alone fails if no efficient system of getting them to the recycling facility exist and there is no demand for the recycled pure material.
This article will start from a network design perspective. Whether it’s supply or reverse, the movement of each involves chains and represent the key units of action. The reverse chain is an entirely different animal than the supply chain and you miss that point at your peril. Circularity is not a concept we can identify with current chains and, if we hope to do so, there are some fundamental challenges that we need to address before we can even consider circularity.
We need to separate the long-term efficiency that we seek from circularity away from the short-term margins that are identified with instant gratification and, so far, we seem to have this entire concept backwards. Why do we tax the labour component of supply instead of the material component when labor, material and overhead are the only three components of production? To encourage business to create pure material products and use recycled materials, we should be taxing material consumption in some measure, not labour.
How can we address “improvements” with a circular model while we’re still struggling with the ridiculously dismal inefficiencies of the linear supply model? With the technological advances we have available to us today we should develop a focus towards automating the supply chain from end to end. With automated unmanned vehicles, radio frequency identification, satellite communications, software, sensors, analytics and decision modelling we should be able to further the available efficiencies of the current linear supply chain with an eye towards it supporting circularity.
Are we developing a circular model to “improve” the supply model or are we developing a completely new model that separates supply from return. We need to answer this question because, as I mentioned earlier and, as I’ll embellish upon in future posts, the reverse chain is an entirely different animal than the supply chain.
Seventeen years ago I started a company that handled reverse logistics for major retailers in western Canada. Wal-Mart was relatively new to Canada and my focus was liquidations which Wal-Mart needed. I quickly learned that efficient reverse logistics is a much more complex study than supply chain logistics. As an accountant and business owner I was able to experiment with different strategies and, in dealing with distressed inventory I’ve experienced the intricacies of what we’ll be up against in trying to develop a circular economy.
Just consider warehouse or carrier damages that occur in the average distribution center each and every day for a standard department store. Depending on the extent and value of the damage there can be a lengthy insurance claim leaving this inventory in limbo until it’s settled. So, who covers the cost of this inventory as it sits in limbo? Where do you store it? How do you value it post settlement? How or where do you sell it for the greatest economic recovery? How do you account for it and how does that affect performance metrics? That’s just one small aspect of the reverse chain for one industry. When you realize there can be multiple chains for countless different industries you begin to grasp just how much is encompassed by the term “circular economy”.
At this point you begin to ask, “Are we developing metrics for the supply chain or the reverse?” because the reverse is quickly becoming as big and economically important as the supply! Hopefully, at this point you also begin to realize what we’re up against in terms of the complexity involved in trying to address circularity because it’s not just economics that needs to be considered nor is it single systems.
Third party expertise is coming to the forefront of the reverse chain because it has to. Recognizing this begins to support the title of this paper, “Is the Circular Economy a Myth?”
I can categorically state that the circular economy is definitely NOT a myth. The intention of this paper is to recognize not just the immediate importance of such a quest but perhaps more to caution against acting without deep consideration for what we’re up against. Certainly, we can begin efforts to address the more obvious problems of our current systems but let’s not kid ourselves that we actually “know” what we’re doing yet.
If it has taken from the dawn of civilization to get this far and, overall, the global transport efficacy has recently been estimated to be lower than 10%(3), then who do we think we’re kidding? If 450 million wooden pallets are produced each year in the U.S. alone(4) to replenish the worn out pallets that are discarded, and GHG emissions continue to increase unabated, then how well are we progressing with our sustainability efforts? And that’s just in the supply chain! What we’ve created to date is so woefully inefficient and unsustainable that trying to identify a singular starting point is like playing the carnival game “whack-a-mole” by yourself for eternity; you’ll simply never win!
As a planetary population we all have to be playing together and all from a “systems thinking” perspective, unraveling the issues at the point of greatest leverage for each system in unison so the circular economy that we’re trying to achieve is allowed to display those emergent properties because it is from those emergent properties that we learn how to continue a successful forward progression.
I would suggest that a focal starting point extends farther beyond network design, materials purity or demand-side business model innovation. To repeat, today, we have no established metrics for the utilization of key infrastructure and products, for their longevity, or for success in preserving material and ecosystem value. If you can’t measure it, you can’t manage it!
Six major oil companies have written an open letter to governments and the United Nations saying that they can take faster climate action, if governments provide even stronger carbon pricing and eventually link it all up into a global system that puts a proper price on the environmental and economic costs of greenhouse gas emissions.
BG Group, BP, Eni, Royal Dutch Shell, Statoil and Total sent the letter to France’s Foreign Minister Laurent Fabius and Christiana Figueres, Executive Secretary of the UN Framework Convention on Climate Change (UNFCCC) The letter said:
“Our companies are already taking a number of actions to help limit emissions … For us to do more, we need governments across the world to provide us with clear, stable, long-term, ambitious policy frameworks. We believe that a price on carbon should be a key element of these frameworks.” (5)
How do we even start without first developing metrics to measure…..anything?
Secondly, even if we accept the Ellen MacArthur Foundation’s suggestion that materials purity – reorganizing and streamlining pure materials flows – is the most viable starting concept with the potential for carrying circularity to a tipping point, this won’t be achieved without being able to uniquely identify not only the material itself but whether that material is in the supply or reverse chain. The concept of circularity begins where the linear concept ends so we need to be able to identify which products are still in the supply chain and which products have come back into the reverse chain and this is a major crux of the entire problem.
The missing link between the supply and reverse chains is the end user. Privacy laws prevent tracking material past the point of sale so the vast majority of the materials we need to track are not in the supply or reverse chains. It is the end user that decides the fate of these materials and it is the end user that we need to focus on.
An emphasis on legislation and consumer education would provide a model of circularity similar to Japan’s; Japan is one of the leading nations in the field of circular economics. The idea of a circular economy is embedded in its education and culture, and the rest of us could learn from its example. As an island nation, Japan has always lived with natural resource scarcity due to geological and geographical limits. This increased the pressure for Japan to develop a circular economy, but virgin resource scarcity of certain materials is quickly becoming a global problem. The longer we wait the worse it will get!
Data synchronization on material identification, like that provided by GS1, needs to be the other focus of our efforts. We need to be able to uniquely identify individual products, allowing identification of materials in the supply and reverse chains to streamline the material flows. Coupling this sort of unique identification capability with a product-leasing strategy, as opposed to a product-sale strategy, would bypass the privacy laws and allow tracking throughout the product’s life cycle.
I realize that this post leaves many unanswered questions, but the intention is to open the door for comment and debate on the idea of a circular economy. Future posts will offer options on advancing the circular economy.
Take part in the poll to vote on what you think is the best starting point to advance the circular economy.
- Walter Stahel, “How to Measure it”, The Performance Economy second edition – Palgrave MacMillan, page 84
- Ellen MacArthur Foundation, “Towards the Circular Economy”, Vol. 3, page 62
- Ballot E. and F. Fontane, “Rendement et efficience du transport: un nouvel indicateur de performance,” Revue Francaise de Gestion Industrielle, vol. 27, 41-55, 2008
- National Wooden Pallet and Container Association.
4 technologies that transformed government
Being in charge of a government agency’s IT programs has never been an easy job. People have watched IT transform the economy, the culture and their personal lives in the past two decades, so they naturally expect a similar swift pace of technology-fueled reinvention from government — all without any missteps or wasting of taxpayer dollars, of course.
Unfortunately, despite having invented some world-changing technologies, such as the Internet and the Global Positioning System, the government is often viewed as a technology laggard that is encumbered by outdated attitudes and procurement processes.
Comparisons to the private sector are inevitable but mostly unfair. Government IT leaders deal with unique challenges and responsibilities when it comes to buying and deploying IT, including organizations that drive their own parochial IT agendas, project funding that is dependent on annual re-approval, and myriad regulations that dictate how agencies plan, develop and manage their IT systems.
In the stories that follow, we take a closer look at some world-changing technology developments of the past 25 years, including the Internet, a game changer if there ever was one, and GPS, which has revolutionized the way we interact with our world and underscored the power of place.
Now the demand for mobile technology and apps, with all the inherent security challenges, is driving a new revolution in employee productivity and public interaction, whether agencies are ready or not.
Fortunately, the government has kept pace with many of those changes by gradually moving away from custom-built systems to commercial off-the-shelf technology. That shift has been accompanied by changes in policy that further streamlined the procurement process and gave agencies access to commodity products wrapped in solutions tailored to their specific needs.
All those achievements face challenges, but that is only natural as technologies continually evolve to meet the government’s and the public’s ever-changing needs and expectations.
NEXT: Government at your fingertips
How the birth of the Internet enabled e-government
The Internet has changed the way the whole world does business, so it is no wonder that it has transformed — and is still transforming — the way the government delivers services to the public, buys products and shares information.
The Defense Department developed the Internet’s predecessor, the Advanced Research Projects Agency Network, in the 1960s and 1970s as a way for its university partners and research labs to communicate. By 1996, many civilian agencies were flocking to the Internet, notably the General Services Administration, which became one of the first to give Internet access to all its employees.
1996 was the year that the Clinger-Cohen Act effectively ended GSA’s reign as a mandated supplier to the government, so the agency was looking for ways to improve its operations and offer better services to its agency customers. Acting GSA Commissioner David Barram, a 24-year veteran of Silicon Valley technology companies, said at the time that the Internet would be a key to GSA’s future competitiveness.
“Some people did still wonder what anyone in GSA would need it for,” said Bob Woods, president of Topside Consulting Group and former commissioner of GSA’s Federal Technology Service. “But the Internet’s communications potential quickly became apparent.”
GSA’s leaders were hardly alone in their assessment. The National Institutes of Health set up a virtual store in 1996 that allowed its employees to shop for computer products over the Internet. Many agencies had been using electronic data interchange for years to conduct business, but those systems relied on esoteric back-office software and proprietary networks controlled by procurement specialists. By comparison, the emerging World Wide Web and the online storefronts it enabled were democratizing e-commerce.
The Internet’s increasing popularity also gave agencies a new way to interact with and serve the public. As soon as they began going online, agencies established websites to provide basic information and government data to the public and later added a variety of electronic services.
The e-government investments have paid off, especially in recent times when overall trust in government has taken a hit. For example, taxpayers who file their returns electronically give the Internal Revenue Service a fairly high score on the American Customer Satisfaction Index: 78 out of 100 versus 57 for those who file on paper.
“ACSI results confirm that the promotion of e-government initiatives is not only a worthwhile pursuit but is one that will likely continue to alter the landscape of government,” said Claes Fornell, ACSI’s founder.
However, security and privacy concerns continue to be major hurdles for the government’s expanded use of the Internet, particularly in the era of cloud computing.
“I think the government has done a fairly good job in enhancing things to do with [agency use of the] Internet from a bureaucratic perspective,” said Rick Nelson, director of the Homeland Security and Counterterrorism Program at the Center for Strategic and International Studies. “But there will be this constant tension going forward about adopting technologies because of security concerns.”
NEXT: The power of place
How a technology developed for the Cold War permeated government operations
The Global Positioning System is pervasive in today’s government operations, whether it’s supporting surveillance of the country’s borders, disaster response or critical functions on the battlefield. And it plays a role in a variety of products and services the public has enthusiastically adopted.
That widespread use of the government-developed, satellite-based navigation system is a far cry from its origins as a highly secret, specialized and expensive asset, conceived during the Cold War as a means to improve the accuracy of the country’s nuclear defenses and other military capabilities.
Over time, the government opened the system to civilian and public use, and GPS — and the parallel developments of publicly available, high-resolution satellite imagery and geographic information systems to manage all that data — has fundamentally changed people’s ability to understand and interact with the world around them.
“We’ve had this explosion of 'the power of place,’ provided by the ubiquity of GPS and the availability of precise geospatial information,” said Keith Masback, president of the U.S. Geospatial Intelligence Foundation. “All members of the federal government are interacting with their IT systems and their data differently because they’re geo-enabled and enabled with precise location information.”
That capacity is being applied to a multitude of government functions, many of them vital to both routine and emergency operations. For example, GPS and satellite imagery were central to the success of the decennial census in 2010 and are used every day in law enforcement, air traffic control, agriculture and emergency response.
The technology has also proved invaluable in disaster response. “The Haiti earthquake and hurricanes Katrina and Rita…really brought everything to bear,” Masback said. “We had crowdsourcing of critical information that was enabled by GPS. Those were major turning points.”
GPS is also playing a key role in the comprehensive overhaul of records at Arlington National Cemetery. Two years after allegations of gross mismanagement surfaced, officials are using GPS-based tools to digitize and organize operations — and enhance the visitor’s experience.
“Arlington is now able to visualize operations across 624 acres, in real time, to understand what’s occurring at the cemetery,” said Maj. Nicholas Miller, the cemetery’s CIO. “We’ve transformed into a GIS-managed operation.”
But there are challenges for the future of GPS and satellite imagery. The heavy dependence on the systems heightens existing and emerging vulnerabilities and complexities. Current satellite constellations are aging, and a number of policy-related issues threaten progress. Furthermore, geospatial systems in development in other countries could introduce interoperability challenges and fragment the supporting civilian industry.
“We have to understand we’ve become reliant on GPS, and we’ve set the global standard: We built it, we launched it, we maintain it,” Masback said. He noted that Europe and China are developing their own satellite navigation systems and added, “but other countries are cognizant of the vulnerability that comes with having the keys to the kingdom. These are going to be different approaches to GPS.… How is that going to impact us?”
NEXT: Better acquisition with commodity IT
How the shift from custom-built to ready-made commercial products has streamlined federal acquisition
The shift away from proprietary and custom-built systems to off-the-shelf hardware and software has had a significant impact not only on what the government buys but also how it buys.
Although government has not achieved the level of gains that private industry has, commodity computing in government has had similar benefits: lower hardware costs, greater efficiency and rapid innovation in applications.
The road to the broad commoditization of computing in government began with the development of the Unix operating system, said Tim Hoechst, chief technology officer at Agilex. He believes that Unix’s ability to run on a number of platforms led to companies competing to build cheaper systems that could host Unix.
The next phase came on the desktop as first DOS and then Microsoft Windows became the standard for most computing purposes, and machines based on the Wintel architecture became pervasive. Procurement reform followed closely on the heels of commodity PCs, and agencywide contracts made acquisition even simpler.
With the passage of the Federal Acquisition Streamlining Act of 1994 and then the Clinger-Cohen Act of 1996, the government truly became a buyer of commercial IT, said Larry Allen, president of Allen Federal Business Partners. Those laws were an acknowledgment that the government was no longer the major market driver in the development of IT systems, he added. The innovations of the commercial market had outstripped the government's ability to keep up, especially given its arcane laws.
The reforms enabled agencies to buy the same technologies as their commercial counterparts. Costs dropped and competition accelerated as companies sought to establish themselves in a newly defined market. The government was freer to buy and companies were freer to offer commodity-like solutions.
Once the rules came down and a firm preference for commercial products was established, buyers and sellers alike flocked to the GSA Schedules program to take advantage of its commercial offerings, reduced procurement lead times and streamlined competitions, Allen said.
Commoditization further entrenched itself when government invested heavily in client/server computing, Hoechst said. That’s when people realized they could virtualize their back-end systems with lots of cheap, commodity processors.
“In the late 1990s and early 2000s, they found they could use racks full of low-cost blades they could buy from a range of suppliers as long as they could run the operating systems and applications they wanted,” he said. “Linux was a big driver for this.”
When GSA added IT services to its Schedules program, the growth accelerated, Allen said. It allowed federal buyers to obtain commodity products wrapped in tailored solutions. The service offerings helped differentiate many suppliers from one another, and the ease of use offered by the commercial nature of the solutions made the Schedules very popular with buyers.
Where once the market was defined by government specifications and obsolete rules, it was now driven by commercial market trends and fewer rules, which allowed for more competitors and faster acquisitions.
NEXT: The perils, and promise, of mobility
How mobile technology is upending the workplace and remapping the security landscape
Mobile technology isn’t exactly new. Portable computers and basic cell phones became popular in the 1990s, and though the bulky early devices were not always the most convenient to cart around or put in a pocket, they marked the breakout of computing and communications from the confines of the office.
Now the popular BlackBerry and Palm devices of the early days have been supplanted by Apple iPhones and Android smart phones — and tablet PCs threaten to overtake laptops — thanks to a wave of innovation in the mobile technology market and broadband networking in the past several years.
However, almost from the moment the first federal manager accessed his or her e-mail on a BlackBerry or took a laptop PC home or on the road, agencies have struggled to keep up with security challenges and users’ growing expectations for the technology. Meanwhile, citizens increasingly expect to access government information and services via portable devices.
The huge surge in popularity is pressuring agencies to find secure ways to incorporate mobile devices into the enterprise — from “bring your own device” policies to Federal CIO Steven VanRoekel’s new digital strategy, touted as a much-needed blueprint for securing and managing mobile devices governmentwide.
“Mobility is not growing from our needs as an enterprise,” said Simon Szykman, CIO at the Commerce Department, at a recent mobility event. “It is being thrust upon us.”
Those security concerns came to a head in 2006, when a laptop computer and external hard drive containing personal information on 26 million veterans and active-duty military personnel were stolen from the home of a Veterans Affairs Department data analyst. It was the largest information security breach in the government’s history, and VA later agreed to pay $20 million to settle a class-action lawsuit brought on behalf of the people whose personal information had potentially been compromised.
That theft has had “a lasting impact that has been substantial and sustained within the VA,” CIO Roger Baker said.
It was also a wakeup call for the rest of the government. Because the portability that makes the new devices so popular is also what makes them so vulnerable, new approaches focus on securing data, with one promising solution being to use mobile devices as secured thin clients that access applications in the cloud or on agency servers rather than on the local hard drive.
“With the emergence of the federal digital strategy, we have identified the problems, and the next step is working together to resolve the problems,” said Tom Suder, president of Mobilegov and co-chairman of the Advanced Mobility Working Group at the American Council for Technology/Industry Advisory Council. “I’m very optimistic. There is a good group of government people leading the effort, and they are being very proactive.”
Ryan Fugger’s Ripple
Before XRP, the shitcoin, bought it, “Ripple” was used by Ryan Fugger as the name for his project to create a peer-to-peer network of trust channels for money transfer. The basic idea is that Alice trusts Bob personally, Bob trusts Carol personally and Carol trusts David personally, therefore it is possible for Alice to send a payment to David by creating debt across A–B, B–C and C–D. Later either payments in the opposite direction (not necessarily from David to Alice, as the network can have trust relationships to multiple other peers in a complex graph) would maybe clear that debt (or not), but ultimately Bob would expect Alice to pay him in kind to settle the debt, Carol would expect Bob to pay her in kind and David would expect Carol to pay him in kind.
The system above works quite well inside a centralized trusted platform like Fugger’s own Ripplepay website (even though it was supposed to be just a proof-of-concept, it ended up being actually used to facilitate payments across small communities), but that cannot scale, as participants would all rely on it and ultimately have to blindly trust that platform.1
If a truly peer-to-peer system could be designed, it would have a chance of scaling across the entire society and the ability to enable truly open payments over the internet, an unreachable goal unless you use either a credit card provider, which is bureaucratic, unsafe, expensive, taxable, not private at all and cumbersome – or Bitcoin, which is awesome and excels in all aspects except scalability for day-to-day transactions.
The protocol can take many forms, but essentially it goes like this:
- A finds a route (A–B–C–D) between her and D somehow;
- A “prepares” a payment to B, tells B to do the same with C and so on (to prepare means to give B a conditional IOU that will be valid as long as the full payment completes);
- When the chain of prepared messages reaches D, D somehow “commits” the payment.
- After the commit, A now really does owe B and so on, and D really knows it has been effectively paid by A (in the form of debt from C) so it can ship goods to A.
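The four steps above can be sketched in a few lines of code. Everything here (the function names, the IOU dictionaries, the route list) is my own illustration of the idea, not Fugger’s actual protocol or data model:

```python
# Illustrative sketch of the prepare/commit flow along a route A-B-C-D.
# All names and structures are hypothetical, not from Ripplepay.

def prepare_payment(route, amount):
    """Walk the route, handing each hop a conditional IOU that only
    becomes real debt if the whole payment later commits."""
    pending = []
    for debtor, creditor in zip(route, route[1:]):
        pending.append({"from": debtor, "to": creditor,
                        "amount": amount, "state": "conditional"})
    return pending

def commit_payment(pending):
    # Step 3: once the receiver commits, every conditional IOU
    # along the route becomes an unconditional debt at once.
    for iou in pending:
        iou["state"] = "committed"

route = ["Alice", "Bob", "Carol", "David"]
ious = prepare_payment(route, 10)
commit_payment(ious)
assert all(iou["state"] == "committed" for iou in ious)
```

The hard part, of course, is that `commit_payment` here is a single function call; in a real peer-to-peer deployment there is no single place where it can run.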
Step 3 is the point at which the problem of the decentralized commit arises.
Fugger and the original Ripple community failed to solve the problem of the decentralized commit, which is required for such a system to be deployed. Not to blame them, as they’ve recognized the problem (unlike other people that had the same idea later2) and documented many sub-optimal solutions3.
No one thinks about it in these terms, but the Bitcoin Lightning Network is itself a Ripple-like system with a working decentralized commit.
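Lightning makes the commit atomic by locking every conditional payment along the route to the hash of a secret chosen by the receiver: revealing the secret commits every hop at once. A minimal sketch of that hash-lock idea, my own simplification that ignores timeouts and on-chain enforcement:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

secret = b"david's preimage"   # generated by the receiver, D
lock = h(secret)               # shared with every hop when preparing

# Each hop's conditional IOU is redeemable only against a preimage of `lock`.
def redeem(iou_lock: bytes, preimage: bytes) -> bool:
    return h(preimage) == iou_lock

assert redeem(lock, secret)        # revealing the secret commits the chain
assert not redeem(lock, b"wrong")  # without the preimage, no debt exists
```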
You may ask why it is bad to trust a central point if all this is already based on trust relationships between peers. If the platform goes malicious, peers can jump out and resolve things on their own! But that’s not so simple: it’s not obvious when the platform will be malicious or not, it’s not clear what to do if the platform deletes data or changes history. Ultimately it cannot scale because even if it was very trustworthy you wouldn’t want the entire global economy resting upon Ryan Fugger’s webserver, nor does he want that.↩︎
The old Ripple wiki lists the “registry commit method” (which requires trust in a third-party), the “bare commit method” (which is not an atomic commit) and the “blockchain commit method” (which publishes transactions to the Bitcoin blockchain and so does not scale).↩︎
Tax Increment Financing: A Critically Important Tool in Economic Development
Tax Increment Financing, or TIF, is one of the most utilized and controversial economic development tools across the United States. In its most common form, TIF is a tool that allows governmental entities to borrow against an area’s future tax revenues to make present investments that are intended to spur economic growth.
The tax revenue captured in a TIF is often from a specific geographic area that is defined upon the creation of a TIF district. Upon defining a TIF district, a baseline condition is generally established relative to the tax base within the TIF district. New tax revenues created in the future above this base tax level can then be “captured” within the TIF district and generally used to service debt issued for improvements or to directly fund investments in the geographic area.
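The capture mechanism described above is simple arithmetic. A sketch with made-up figures (none of these numbers come from the essay):

```python
# Hypothetical TIF district figures -- illustrative only.
baseline_assessed_value = 10_000_000   # tax base when the district is created
current_assessed_value = 14_000_000    # tax base after new development
tax_rate_pct = 2                       # combined ad valorem rate, percent

# Growth above the baseline is the "increment" captured by the district;
# revenue on the baseline keeps flowing to the ordinary taxing districts.
increment = current_assessed_value - baseline_assessed_value
captured_revenue = increment * tax_rate_pct // 100
base_revenue = baseline_assessed_value * tax_rate_pct // 100

assert captured_revenue == 80_000   # available to service TIF debt
assert base_revenue == 200_000      # unaffected by the district
```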
While most TIF districts capture tax revenues generated from ad valorem taxes such as real and personal property tax, other taxes, including income tax, payroll withholding taxes and sales tax can also be subject to capture within a TIF district. Since taxes captured within a TIF district do not flow to the respective taxing districts, it is incumbent upon governmental leaders to ensure that investments made with TIF funds have a positive overall impact to the communities they represent. Generally TIF districts are created in anticipation of planned investment and, as such, there is often a “but for” determination that the TIF is a critical component to encourage the investment and that absent the TIF the increase in tax revenue captured would not occur. Or to put it another way, the use of a TIF in these cases does not have a negative impact on government finances because the revenue would not have been generated in the first place without the contribution made from the creation of the TIF district. Despite this analysis, communities should also consider the impact a potential development may have on the need for public services and make sure that the TIF is structured in a way to allow those important functions to be properly funded.
The primary controversy surrounding the use of TIF is often related to the types of projects funded with TIF funds, whether the use of TIF is really required to support a particular project and what is determined to be “blighted” property, or property that would not otherwise be subject to normal development. While all of these are legitimate concerns that should be considered, there is no question that TIF is a critically important economic development tool that when used for the right purposes can significantly increase a community’s ability to compete for economic development opportunities, whether those are commercial, industrial or residential.
In particular, TIF is often the only economic development tool available to offset up front project costs such as the construction of public infrastructure required to attract a new prospect or make available land ready for development. As we represent clients across the United States, we have noted both an increased use of TIF for economic development projects and also more creativity in the application of TIF funds.
In particular, we have noted thoughtful negotiation between governmental entities and commercial prospects on the appropriate allocation of future TIF revenues between funding project costs and funding governmental services. In addition, instead of using projected TIF revenues solely to support a bond issuance, which often requires significant transaction costs, interest expense and the incurrence of governmental debt, we have seen many communities take a creative approach of directly refunding future TIF revenues, or entering into arrangements for the company or a developer to provide the up-front capital, to be reimbursed from TIF funds after project completion. These structures can significantly reduce transaction costs and result in a more equitable distribution of risk in the event the level of future TIF revenues fails to meet the initial project projections.
In summary, while TIF financing is often grouped into a single category with a certain set of preconceptions regarding its use, the reality is that there is a great deal of variety of tax increment structures across different states and communities. In addition, there are various approaches to TIF that should be considered to accomplish the shared goals of companies and communities in pursuing successful economic development projects.
Meet the author
A review of studies conducted by McKinsey and the World Bank examines the economic impact of the pandemic on African economies and possible scenarios for the development of the economic crisis.
The factors that are set to have the biggest impact on the African economy are disruptions in global supply chains, a decline in demand for a wide range of African exports, a delay or significant reduction in foreign direct investment, and a collapse in oil and other commodity prices.
McKinsey analysts modeled four scenarios of how the prevalence of COVID-19 will impact African economic growth. Even in the most optimistic scenario, GDP growth in Africa will drop to 0.4% in 2020, and this scenario looks less likely every day. In all other scenarios, analysts predict that in 2020 Africa will experience an economic downturn, with GDP growth declining by 5-8 percentage points. Experts from the World Bank share the pessimism of their colleagues and predict that the continent will experience its first recession in 25 years.
The studies also outline proposed anti-crisis measures to which African countries are advised to give priority attention.
Essay by Wandile Sihlobo and Tinashe Kapuya
There are preliminary indications that Southern Africa could face yet another year of poor rains, which will inevitably lead to lower agricultural output. A recent report from the Group on Earth Observations Global Agricultural Monitoring Initiative (GEOGLAM) indicates a high probability of below-normal rainfall in Southern Africa between December 2019 and February 2020.
The potential for a poor output in agriculture across the region brings into question the need for forward planning, which is key to mitigate the effects of food insecurity.
To this end, there is first a need to improve the timeliness and quality of agricultural conditions data across the Southern Africa region, especially for maize, which is a key staple crop. Unfortunately, this remains a challenge for most African countries, with the exception of Zambia and South Africa, which frequently release data on agricultural conditions and expected crop harvests.
What we currently know is that the maize planting season began across the region in mid-October, and in some countries in November 2019, and the process has thus far disappointed because of dryness in various countries, notably from Namibia to Mozambique and southwards from Zambia through South Africa. These countries have already experienced a double-digit decline in maize production in the 2018/19 production season, leaving Zimbabwe and Mozambique as net importers of maize and other agricultural products. As a result, the forecasts of another drought have raised fears that there might not be a recovery in general agricultural production that many had hoped for going into the 2019/20 season.
But the dearth of timely data also increases prospects of a slow response from policymakers, the private sector and various non-governmental institutions which operate within the food industry in Southern Africa.
Although we are yet to know how crop conditions will materialize over the coming months, as well as the potential size of import needs thereof, early and timely confirmations of a poor harvest would be critical for planning and implementation of mitigation interventions. It is best to be warned about the impacts of below-normal rainfall rather than acting when it is too late.
This then raises two important questions for consideration;
- Where is Southern Africa going to find the white maize supplies?
- Does the Southern Africa region have the infrastructure to move potentially required maize imports efficiently?
First, Africa’s maize exporters in normal rainfall seasons include South Africa, Zambia and Tanzania. But in the 2019/20 production season, these countries could experience similar weather conditions to the rest of the Southern Africa region. This means their ability to export maize is limited. The focus should, therefore, be to import from countries outside the African continent. To this end, the only country that can potentially supply white maize is Mexico, but it is unclear if there are sufficient supplies there to serve the whole Southern Africa region (depending on maize needs, which will be clear in early 2020). Therefore, depending on how crop conditions look by the end of January 2020, the Southern Africa region might have to encourage US farmers to increase their typically small plantings of white maize and produce for the Southern African market. The same message could be passed to Mexican farmers. It might be too late to encourage South American farmers, as they typically plant around the same time as Southern Africa.
Second, infrastructure might not be the main challenge, especially if maize and other agricultural import needs are detected ahead of time to allow the flow of imports to be planned. South Africa and other coastal countries such as Mozambique have in the past handled large volumes of agricultural imports, so there is little concern on this point if needs are determined in time.
We will have a better sense of crop conditions and the potential size of maize imports in the coming months. In the meantime, policymakers can consider the following options as ways of easing food insecurity concerns within Southern Africa:
- Encourage white maize production under irrigation where possible in various countries
- Temporarily ease restrictions on imports of genetically modified (GM) white maize (this applies to the whole of Southern Africa, with the exception of South Africa, which has no restrictions).
- Capacitate local government institutions to improve the quality of information about agricultural conditions and flow of grain to market players.
- Collaborate with the private sector to ensure that the required maize is sourced from various parts of the world timeously. This collaboration should include easing import regulations in countries where they exist.
Sihlobo is chief economist of the Agricultural Business Chamber of South Africa (Agbiz). Kapuya leads value chain research at the Bureau for Research and Food Policy (BFAP).
Follow me on Twitter (@WandileSihlobo). E-mail: [email protected]
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9738220572471619,
"language": "en",
"url": "https://www.greencarreports.com/news/1056872_want-to-beat-high-gas-prices-public-transit-winner",
"token_count": 349,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.146484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:011880d4-4d15-4f4e-809c-64260c9b5b66>"
}
|
There’s no doubt consumers will be faced with higher gas prices going into the future, and alternatives, like diesel powered cars and hybrids, all have their respective shortcomings.
We recently looked at why mainstream automakers refuse to launch more diesel models in the U.S. and, just yesterday, we saw the results of a study that claimed consumers would often be worse off buying a hybrid vehicle if their sole goal was to save cash.
However, there’s one distinct alternative that’s often overlooked. It may not be as glamorous or time-efficient as driving your own car, but public transit has the potential to save you hundreds, even thousands, of dollars over the space of a single year.
A new study conducted by the American Public Transportation Association (APTA) has determined that individuals would save, on average, $825 per month by switching to a public transit system.
Note that the data used was based on a two-car household paying for a monthly unreserved parking space, with fuel prices set at $3.47 per gallon. For the resultant savings of $825 per month, the household would need to give up its second car and that parking space.
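As a rough sketch of how such a figure comes together (all inputs below are illustrative assumptions, not APTA's actual model), the monthly saving is simply the cost of keeping the second car on the road, minus the price of a transit pass:

```python
# Rough monthly cost of owning and operating a second car, versus transit.
# All figures are illustrative assumptions, not APTA's actual model inputs.

def monthly_car_cost(miles_per_month, mpg, gas_price, parking, insurance, maintenance):
    """Monthly cost of keeping a second car on the road."""
    fuel = miles_per_month / mpg * gas_price
    return fuel + parking + insurance + maintenance

def transit_savings(car_cost, transit_pass):
    """Saving from giving up the car and buying a monthly transit pass."""
    return car_cost - transit_pass

car = monthly_car_cost(miles_per_month=1000, mpg=25,
                       gas_price=3.47,   # per-gallon price used in the APTA study
                       parking=400, insurance=100, maintenance=80)
print(round(transit_savings(car, transit_pass=100), 2))  # roughly 618.8 with these inputs
```

Actual savings depend heavily on local parking and insurance costs, which is why the city-by-city figures in the study vary so widely.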
The biggest saving from ditching the car and switching to the public transit system would occur for residents of New York City, who could save on average $1,198 per month, followed by the residents of Boston and San Francisco, who would save $1,099 and $1,088 per month, respectively.
Of course, in reality, not everybody in the States has adequate access to a public transit system. Those drivers have no choices at all; for them, the car is here to stay.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9669387340545654,
"language": "en",
"url": "https://www.marketcalls.in/trading-lessons/factors-affecting-share-prices.html",
"token_count": 657,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.09033203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:aac94fd1-d36a-478f-ae59-5811d99bb57f>"
}
|
Like any other commodity, share prices in the stock market depend on many factors, so it is hard to single out just one or two that drive a stock's price. Still, there are some factors that directly influence share prices.
Demand and Supply – This fundamental rule of economics holds good for the equity market as well. The price is directly affected by the trend of stock market trading. When more people are buying a certain stock, its price increases, and when more people are selling the stock, its price falls. It is difficult to predict the trend of the market; your stock broker can give you a fair idea of the ongoing trend, but be careful before you blindly follow the advice.
News – News is undoubtedly a huge factor when it comes to stock prices. Positive news about a company can increase buying interest in the market, while a negative press release can ruin the prospects of a stock. Having said that, you must always remember that, despite amazingly good news, a stock can sometimes show little movement. It is the overall performance of the company that matters more than news. It is always wise to adopt a wait-and-watch approach in a volatile market or when there is a mixed reaction to a particular stock.
Market Cap – If you are trying to guess the worth of a company from the price of its stock, you are making a huge mistake. It is the market capitalization of the company, rather than the stock price, that matters when determining the worth of the company. Multiply the stock price by the total number of outstanding shares in the market to get the market cap of a company, and that is the worth of the company.
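That multiplication is trivial to sketch; the figures here are invented purely for illustration:

```python
def market_cap(share_price, shares_outstanding):
    """Worth of the company = price per share x total shares outstanding."""
    return share_price * shares_outstanding

# A low-priced stock can belong to a far bigger company than a high-priced one.
small = market_cap(share_price=500.0, shares_outstanding=1_000_000)      # 500 million
large = market_cap(share_price=20.0,  shares_outstanding=1_000_000_000)  # 20 billion
print(small < large)  # True: the 20-unit stock is the much bigger company
```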
Earnings Per Share – Earnings per share (EPS) is the profit that the company made per share in the last quarter. Every public company must publish a quarterly report that states its earnings per share. This is perhaps the most important factor for judging the health of any company, and it influences buying interest in the market, resulting in an increase in the price of that particular stock. So, if you want to make a profitable investment, keep watch on the quarterly reports that companies publish and scrutinize the numbers before buying a particular stock.
Price/Earnings Ratio – The price/earnings ratio, or P/E ratio, gives you a fair idea of how a company's share price compares to its earnings. If the share price is low relative to the company's earnings, the stock is undervalued and has the potential to rise in the near future. On the other hand, if the price is far higher than the actual earnings justify, the stock is said to be overvalued and the price can fall at any point.
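A minimal sketch tying EPS and P/E together (all numbers are hypothetical):

```python
def earnings_per_share(net_profit, shares_outstanding):
    """Quarterly profit attributable to each share."""
    return net_profit / shares_outstanding

def pe_ratio(share_price, eps):
    """How many units of price the market pays for one unit of earnings."""
    return share_price / eps

eps = earnings_per_share(net_profit=50_000_000, shares_outstanding=10_000_000)
pe = pe_ratio(share_price=60.0, eps=eps)
# A P/E far below comparable companies may suggest the stock is undervalued;
# a P/E far above them may suggest it is overvalued.
print(eps, pe)  # 5.0 12.0
```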
Before we conclude this discussion on share prices, let me remind you that there are many other reasons behind the fall or rise of a share price. In particular, stock-specific factors also play their part. So it is always important that you do your research well and trade on the basis of your own research and the information you get from your broker.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9390173554420471,
"language": "en",
"url": "http://www.ipsnews.net/2010/12/china-researchers-race-toward-renewable-energy/",
"token_count": 1173,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.3046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ba55369b-3f29-4fc1-b7cd-5c8a2792b24e>"
}
|
BEIJING, Dec 27 2010 (IPS) - Researchers in China, the world’s leading provider of wind turbines and solar panels, are working toward making renewable energy cheaper, more efficient and a bigger part of the country’s power grid.
Zhao Xingzhong, professor at Wuhan University’s School of Physics and Technology, is researching dye-sensitised solar cells, a low-cost, high-efficiency alternative to more prevalent solid-state semiconductor solar cell technology.
The practical implications are apparent, Zhao says.
“The production process of dye-sensitised solar cells doesn’t produce carbon dioxide, which means it won’t induce environmental pollution,” Zhao tells IPS. “And dye-sensitised solar cells only cost one-fifth of traditional semiconductor solar cells made from crystalline silicon.”
Although Zhao’s team’s research is unique at home and abroad, he says support from the Chinese government is far from enough. He notes that Japan and South Korea have jointly invested about 1.6 billion U.S. dollars on research on third-generation solar technology since 2000. In China, however, Zhao says there have been just five native projects in the solar field in the last decade, with spending of around 4.5 million dollars per project.
In recent years, China has become the global leader in renewable energy technology manufacturing, surpassing the United States in terms of both the number of wind turbines and solar panels it makes. The accounting firm Ernst & Young in September named China the best place to invest in renewable energy.
Chinese companies, led by the Jiangsu-based Suntech, have one-quarter of the world’s solar panel production capacity and are rapidly gaining market share by driving down prices using low-cost, large-scale factories. China’s 2009 stimulus package included subsidies for large solar installation projects.
In terms of wind power, home-grown companies have rapidly gained market share in recent years after the government raised local partnership requirements for foreign companies to 70 percent from 40 percent (the government has since removed local partnership requirements) and introduced major new subsidies and other incentives for Chinese wind power companies.
By 2009, there were 67 Chinese turbine providers and foreign companies’ market share fell to 37 percent from 70 percent just over five years ago.
But most of the parts produced by Chinese companies are based on technology developed from abroad, with scant focus on homegrown innovation in the renewable energy field.
Wang Mengjie, deputy director of the China Renewable Energy Society and former vice chairman of the Chinese Academy of Agricultural Engineering, works in the biomass industry. He says bioenergy can be used to improve living standards in rural areas, and he is currently involved in projects aimed at providing farmers with equipment that can turn organic waste into clean biogas and fertiliser.
According to the Ministry of Agriculture, the number of biogas pools in China’s rural areas reached over 35 million as of the end of 2009, producing 12.4 billion cubic metres each year. The government has increased financing of biogas pools in recent years, to 5 billion RMB (754.5 million dollars) in 2009 from an average of 2.5 billion RMB (377.2 million dollars) in 2006 and 2007.
Despite the investment, Wang says China still faces technological hurdles in the biomass industry.
“In terms of biodiesel technology, Western countries like the United States and Germany lead the world, while China is still at its infancy stage,” Wang says. “China has no definite regulations or policies on biomass energy right now. Under the present circumstances, there’s no possibility for relevant enterprises to develop further.”
Critics say China’s interest in renewable energy is essentially a business opportunity – most of what it produces is sold abroad – and that it is less interested in applying the more expensive technology at home.
China has not yet caught up to the United States in terms of renewable energy production. The country is the biggest consumer of coal in the world and is expected to burn 4.5 billion tonnes of standard coal by 2020, according to figures from the National Energy Administration.
While coal will still make up two-thirds of China’s energy capacity in 2020, the government has promised to invest billions of dollars into the development of wind, solar and nuclear power. The country’s top legislature, the National People’s Congress, now requires power grid companies buy 100 percent of the electricity produced from renewable energy generators.
Official statistics released last April said that low-carbon energy sources would account for more than a quarter of China’s electricity supply by the end of 2010, according to the state-run Xinhua news agency. The figures revealed that hydro, nuclear and wind power were expected to provide 250 gigawatts of capacity by the end of 2010, while coal will account for 700 gigawatts.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9441438317298889,
"language": "en",
"url": "https://www.barnesandnoble.com/w/the-zero-marginal-cost-society-jeremy-rifkin/1117319405?ean=9781137278463",
"token_count": 4389,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.248046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ce79b10c-58b1-4b49-99b1-7007d3e1bc3a>"
}
|
In The Zero Marginal Cost Society, New York Times bestselling author Jeremy Rifkin describes how the emerging Internet of Things is speeding us to an era of nearly free goods and services, precipitating the meteoric rise of a global Collaborative Commons and the eclipse of capitalism.
Rifkin uncovers a paradox at the heart of capitalism that has propelled it to greatness but is now taking it to its death: the inherent entrepreneurial dynamism of competitive markets that drives productivity up and marginal costs down, enabling businesses to reduce the price of their goods and services in order to win over consumers and market share. (Marginal cost is the cost of producing additional units of a good or service, if fixed costs are not counted.) While economists have always welcomed a reduction in marginal cost, they never anticipated the possibility of a technological revolution that might bring marginal costs to near zero, making goods and services priceless, nearly free, and abundant, and no longer subject to market forces.
Now, a formidable new technology infrastructure, the Internet of Things (IoT), is emerging with the potential of pushing large segments of economic life to near zero marginal cost in the years ahead. Rifkin describes how the Communication Internet is converging with a nascent Energy Internet and Logistics Internet to create a new technology platform that connects everything and everyone. Billions of sensors are being attached to natural resources, production lines, the electricity grid, logistics networks, recycling flows, and implanted in homes, offices, stores, vehicles, and even human beings, feeding Big Data into an IoT global neural network. Prosumers can connect to the network and use Big Data, analytics, and algorithms to accelerate efficiency, dramatically increase productivity, and lower the marginal cost of producing and sharing a wide range of products and services to near zero, just like they now do with information goods.
The plummeting of marginal costs is spawning a hybrid economy, part capitalist market and part Collaborative Commons, with far-reaching implications for society, according to Rifkin. Hundreds of millions of people are already transferring parts of their economic lives to the global Collaborative Commons. Prosumers are plugging into the fledgling IoT and making and sharing their own information, entertainment, green energy, and 3D-printed products at near zero marginal cost. They are also sharing cars, homes, clothes and other items via social media sites, rentals, redistribution clubs, and cooperatives at low or near zero marginal cost. Students are enrolling in free massive open online courses (MOOCs) that operate at near zero marginal cost. Social entrepreneurs are even bypassing the banking establishment and using crowdfunding to finance startup businesses as well as creating alternative currencies in the fledgling sharing economy. In this new world, social capital is as important as financial capital, access trumps ownership, sustainability supersedes consumerism, cooperation ousts competition, and "exchange value" in the capitalist marketplace is increasingly replaced by "sharable value" on the Collaborative Commons.
Rifkin concludes that capitalism will remain with us, albeit in an increasingly streamlined role, primarily as an aggregator of network services and solutions, allowing it to flourish as a powerful niche player in the coming era. We are, however, says Rifkin, entering a world beyond markets where we are learning how to live together in an increasingly interdependent global Collaborative Commons.
Publisher: St. Martin's Publishing Group
Product dimensions: 6.00(w) x 9.40(h) x 1.30(d)
About the Author
Jeremy Rifkin, one of the most popular social thinkers of our time, is a bestselling author whose 20 books have been translated into 35 languages. Mr. Rifkin is an advisor to the European Union and to heads of state around the world and a lecturer at the Wharton School's Executive Education Program at the University of Pennsylvania.
Read an Excerpt
The Zero Marginal Cost Society
The Internet of Things, The Collaborative Commons, And The Eclipse of Capitalism
By Jeremy Rifkin
Palgrave Macmillan. Copyright © 2015 Jeremy Rifkin
All rights reserved.
THE GREAT PARADIGM SHIFT FROM MARKET CAPITALISM TO THE COLLABORATIVE COMMONS
The capitalist era is passing ... not quickly, but inevitably. A new economic paradigm — the Collaborative Commons — is rising in its wake that will transform our way of life. We are already witnessing the emergence of a hybrid economy, part capitalist market and part Collaborative Commons. The two economic systems often work in tandem and sometimes compete. They are finding synergies along each other's perimeters, where they can add value to one another, while benefiting themselves. At other times, they are deeply adversarial, each attempting to absorb or replace the other.
The struggle between these two competing economic paradigms is going to be protracted and hard fought. But, even at this very early stage, what is becoming increasingly clear is that the capitalist system that provided both a compelling narrative of human nature and the overarching organizational framework for the day-to-day commercial, social, and political life of society — spanning more than ten generations — has peaked and begun its slow decline. While I suspect that capitalism will remain part of the social schema for at least the next half century or so, I doubt that it will be the dominant economic paradigm by the second half of the twenty-first century. Although the indicators of the great transformation to a new economic system are still soft and largely anecdotal, the Collaborative Commons is ascendant and, by 2050, it will likely settle in as the primary arbiter of economic life in most of the world. An increasingly streamlined and savvy capitalist system will continue to soldier on at the edges of the new economy, finding sufficient vulnerabilities to exploit, primarily as an aggregator of network services and solutions, allowing it to flourish as a powerful niche player in the new economic era, but it will no longer reign.
I understand that this seems utterly incredible to most people, so conditioned have we become to the belief that capitalism is as indispensable to our well-being as the air we breathe. But despite the best efforts of philosophers and economists over the centuries to attribute their operating assumptions to the same laws that govern nature, economic paradigms are just human constructs, not natural phenomena.
As economic paradigms go, capitalism has had a good run. Although its timeline has been relatively short compared to other economic paradigms in history, it's fair to say that its impact on the human journey, both positive and negative, has been more dramatic and far-reaching than perhaps any other economic era in history, save for the shift from foraging/hunting to an agricultural way of life.
Ironically, capitalism's decline is not coming at the hands of hostile forces. There are no hordes at the front gates ready to tear down the walls of the capitalist edifice. Quite the contrary. What's undermining the capitalist system is the dramatic success of the very operating assumptions that govern it. At the heart of capitalism there lies a contradiction in the driving mechanism that has propelled it ever upward to commanding heights, but now is speeding it to its death.
THE ECLIPSE OF CAPITALISM
Capitalism's raison d'être is to bring every aspect of human life into the economic arena, where it is transformed into a commodity to be exchanged as property in the marketplace. Very little of the human endeavor has been spared this transformation. The food we eat, the water we drink, the artifacts we make and use, the social relationships we engage in, the ideas we bring forth, the time we expend, and even the DNA that determines so much of who we are have all been thrown into the capitalist cauldron, where they are reorganized, assigned a price, and delivered to the market. Through most of history, markets were occasional meeting places where goods were exchanged. Today, virtually every aspect of our daily lives is connected in some way to commercial exchanges. The market defines us.
But here lies the contradiction. Capitalism's operating logic is designed to fail by succeeding. Let me explain.
In his magnum opus, The Wealth of Nations, Adam Smith, the father of modern capitalism, posits that the market operates in much the same way as the laws governing gravity, as discovered by Isaac Newton. Just as in nature, where for every action there is an equal and opposite reaction, so too do supply and demand balance each other in the self-regulating marketplace. If consumer demand for goods and services goes up, sellers will raise their prices accordingly. If the sellers' prices become too high, demand will drop, forcing sellers to lower the prices.
The French Enlightenment philosopher Jean-Baptiste Say, another early architect of classical economic theory, added a second assumption, again borrowing a metaphor from Newtonian physics. Say reasoned that economic activity was self-perpetuating, and that as in Newton's first law, once economic forces are set in motion, they remain in motion unless acted upon by outside forces. He argued that "a product is no sooner created, than it, from that instant, affords a market for other products to the full extent of its own value. ... The creation of one product immediately opens a vent for other products." A later generation of neoclassical economists refined Say's Law by asserting that new technologies increase productivity, allowing the seller to produce more goods at a cheaper cost per unit. The increased supply of cheaper goods then creates its own demand and, in the process, forces competitors to invent their own technologies to increase productivity in order to sell their goods even more cheaply and win back or draw in new customers (or both). The entire process operates like a perpetual-motion machine. Cheaper prices, resulting from new technology and increased productivity, mean more money left over for consumers to spend elsewhere, which spurs a fresh round of competition among sellers.
There is a caveat, however. These operating principles assume a competitive market. If one or a few sellers are able to outgrow and eliminate their competition and establish a monopoly or oligopoly in the market — especially if their goods and services are essential — they can keep prices artificially high, knowing that buyers will have little alternative. In this situation, the monopolist has scant need or inclination to bring on new labor-saving technologies to advance productivity, reduce prices, and remain competitive. We've seen this happen repeatedly throughout history, if only for short periods of time.
In the long run, however, new players invariably come along and introduce breakthroughs in technology that increase productivity and lower prices for similar or alternative goods and services, and break the monopolistic hold on the market.
Yet suppose we carry these guiding assumptions of capitalist economic theory to their logical conclusion. Imagine a scenario in which the operating logic of the capitalist system succeeds beyond anyone's wildest expectations and the competitive process leads to "extreme productivity" and what economists call the "optimum general welfare"—an endgame in which intense competition forces the introduction of ever-leaner technology, boosting productivity to the optimum point in which each additional unit introduced for sale approaches "near zero" marginal cost. In other words, the cost of actually producing each additional unit — if fixed costs are not counted — becomes essentially zero, making the product nearly free. If that were to happen, profit, the lifeblood of capitalism, would dry up.
In a market-exchange economy, profit is made at the margins. For example, as an author, I sell my intellectual work product to a publisher in return for an advance and future royalties on my book. The book then goes through several hands on the way to the end buyer, including an outside copyeditor, compositor, printer, as well as wholesalers, distributors, and retailers. Each party in this process is marking up the transaction costs to include a profit margin large enough to justify their participation.
But what if the marginal cost of producing and distributing a book plummeted to near zero? In fact, it's already happening. A growing number of authors are writing books and making them available at a very small price, or even for free, on the Internet — bypassing publishers, editors, printers, wholesalers, distributors, and retailers. The cost of marketing and distributing each copy is nearly free. The only cost is the amount of time consumed by creating the product and the cost of computing and connecting online. An ebook can be produced and distributed at near zero marginal cost.
The near zero marginal cost phenomenon has already wreaked havoc on the publishing, communications, and entertainment industries as more and more information is being made available nearly free to billions of people. Today, more than one-third of the human race is producing its own information on relatively cheap cellphones and computers and sharing it via video, audio, and text at near zero marginal cost in a collaborative networked world. And now the zero marginal cost revolution is beginning to affect other commercial sectors, including renewable energy, 3D printing in manufacturing, and online higher education. There are already millions of "prosumers"— consumers who have become their own producers — generating their own green electricity at near zero marginal cost around the world. It's estimated that around 100,000 hobbyists are manufacturing their own goods using 3D printing at nearly zero marginal cost. Meanwhile, six million students are currently enrolled in free Massive Open Online Courses (MOOCs) that operate at near zero marginal cost and are taught by some of the most distinguished professors in the world, and receiving college credits. In all three instances, while the up-front costs are still relatively high, these sectors are riding exponential growth curves, not unlike the exponential curve that reduced the marginal cost of computing to near zero over the past several decades. Within the next two to three decades, prosumers in vast continental and global networks will be producing and sharing green energy as well as physical goods and services, and learning in online virtual classrooms at near zero marginal cost, bringing the economy into an era of nearly free goods and services.
Many of the leading players in the near zero marginal cost revolution argue that while nearly free goods and services will become far more prevalent, they will also open up new possibilities for creating other goods and services at sufficient profit margins to maintain growth and even allow the capitalistic system to flourish. Chris Anderson, the former editor of Wired magazine, reminds us that giveaway products have long been used to draw potential customers into purchasing other goods, citing the example of Gillette, the first mass producer of disposable razors. Gillette gave away the razors to hook consumers into buying the blades that fit the devices.
Similarly, today's performing artists often allow their music to be shared freely online by millions of people with the hope of developing loyal fans who will pay to attend their live concerts. The New York Times and The Economist provide some free online articles to millions of people in anticipation that a percentage of the readers will choose to pay for more detailed reporting by subscribing. "Free," in this sense, is a marketing device to build a customer base for paid purchases.
These aspirations are shortsighted, and perhaps even naïve. As more and more of the goods and services that make up the economic life of society edge toward near zero marginal cost and become almost free, the capitalist market will continue to shrink into more narrow niches where profit-making enterprises survive only at the edges of the economy, relying on a diminishing consumer base for very specialized products and services.
The reluctance to come to grips with near zero marginal cost is understandable. Many, though not all, of the old guard in the commercial arena can't imagine how economic life would proceed in a world where most goods and services are nearly free, profit is defunct, property is meaningless, and the market is superfluous. What then?
Some are just beginning to ask that question. They might find some solace in the fact that several of the great architects of modern economic thinking glimpsed the problem long ago. John Maynard Keynes, Robert Heilbroner, and Wassily Leontief, to name a few, pondered the critical contradiction that drove capitalism forward. They wondered whether, in the distant future, new technologies might so boost productivity and lower prices as to create the coming state of affairs.
Oskar Lange, a University of Chicago professor of the early twentieth century, captured a sense of the conundrum underlying a mature capitalism in which the search for new technological innovations to advance productivity and cheapen prices put the system at war with itself. Writing in 1936, in the throes of the Great Depression, he asked whether the institution of private ownership of the means of production would continue indefinitely to foster economic progress, or whether at a certain stage of technological development the very success of the system would become a shackle to its further advance.
Lange noted that when an entrepreneur introduces technological innovations that allow him to lower the price of goods and services, he gains a temporary advantage over competitors strapped with antiquated means of production, resulting in the devaluation of the older investments they are locked into. This forces them to respond by introducing their own technological innovations, again increasing productivity and cheapening prices and so on.
But in mature industries where a handful of enterprises have succeeded in capturing much of the market and forced a monopoly or oligopoly, they would have every interest in blocking further economic progress in order to protect the value of the capital already invested in outmoded technology. Lange observes that "when the maintenance of the value of the capital already invested becomes the chief concern of the entrepreneurs, further economic progress has to stop, or, at least, to slow down considerably. ... This result will be even more accentuated when a part of the industries enjoy a monopoly position."
Powerful industry leaders often strive to restrict entry of new enterprises and innovations. But slowing down or stopping new, more productive technologies to protect prior capital investments creates a positive-feedback loop by preventing capital from investing in profitable new opportunities. If capital can't migrate to new profitable investments, the economy goes into a protracted stall.
Lange described the struggle that pits capitalist against capitalist in stark terms. He writes:
The stability of the capitalist system is shaken by the alternation of attempts to stop economic progress in order to protect old investments and tremendous collapses when those attempts fail.
Attempts to block economic progress invariably fail because new entrepreneurs are continually roaming the edges of the system in search of innovations that increase productivity and reduce costs, allowing them to win over consumers with cheaper prices than those of their competitors. The race Lange outlines is relentless over the long run, with productivity continually pushing costs and prices down, forcing profit margins to shrink.
While most economists today would look at an era of nearly free goods and services with a sense of foreboding, a few earlier economists expressed a guarded enthusiasm over the prospect. Keynes, the venerable twentieth-century economist whose economic theories still hold considerable weight, penned a small essay in 1930 entitled "Economic Possibilities for Our Grandchildren," which appeared as millions of Americans were beginning to sense that the sudden economic downturn of 1929 was in fact the beginning of a long plunge to the bottom.
Keynes observed that new technologies were advancing productivity and reducing the cost of goods and services at an unprecedented rate. They were also dramatically reducing the amount of human labor needed to produce goods and services. Keynes even introduced a new term, which he told his readers, you "will hear a great deal in the years to come — namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour." Keynes hastened to add that technological unemployment, while vexing in the short run, is a great boon in the long run because it means "that mankind is solving its economic problem."
Excerpted from The Zero Marginal Cost Society by Jeremy Rifkin. Copyright © 2015 Jeremy Rifkin. Excerpted by permission of Palgrave Macmillan.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.
Table of Contents
Chapter 1: The Great Paradigm Shift from Capitalism to Collaboratism
Part 1: The Untold History of Capitalism
Chapter 2: The European Enclosures and the Birth of the Market Economy
Chapter 3: The Courtship of Capitalism and Economies of Scale
Chapter 4: Human Nature Through a Capitalist Lens
Part 2: The Near Zero Marginal Cost Society
Chapter 5: The Exponential Race to a Free Economy
Chapter 6: 3D Printing: From Mass Production to Production by the Masses
Chapter 7: MOOCs and a Zero Marginal Cost Education
Chapter 8: The Birth of the Prosumer
Part 3: The Rise of the Collaborative Commons
Chapter 9: The Comedy of the Commons
Chapter 10: The Collaboratists Prepare for Battle
Chapter 11: The Struggle to Define and Control the Intelligent Infrastructure
Part 4: Social Capital and the Sharing Economy
Chapter 12: The Transformation from Ownership to Access
Chapter 13: Crowdsourcing Social Capital, Democratizing Currency, and Humanizing Entrepreneurship
Part 5: The Economy of Abundance
Chapter 14: Sustaining Abundance
Chapter 15: The Three Wild Cards of the Apocalypse
Chapter 16: A Biosphere Lifestyle
Afterword: A Personal Note
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9735795259475708,
"language": "en",
"url": "https://www.feministcurrent.com/2016/04/13/a-basic-income-guarantee-is-feasible-and-feminist/",
"token_count": 2637,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.48046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:48d86007-6605-4fd5-85d2-41e244fc9cb5>"
}
|
The idea of guaranteeing a basic income for everyone, whether or not they are employed, isn’t a new one. Both Canada and the US experimented with the idea in the ’60s and ’70s, but it fell out of favour as politics shifted to the right. Today, as the economy shifts away from secure employment, we are experiencing a resurgence of interest in the idea. In a 2013 survey, 46 per cent of Canadians supported the idea; nonetheless, many people still see it as unrealistic, unaffordable, or out of reach.
In fact, a basic income guarantee would address many social problems we struggle with here in Canada — particularly with regard to women’s equality.
A basic income guarantee (BIG) — also known as guaranteed annual income, a Mincome, or universal basic income (UBI) — is money the government gives to everyone whether they need it or not.
There are a couple of approaches, one which involves providing supplemental pay only to low income people (this is called the negative income tax or NIT), and another that provides everyone the same amount upfront, then taxing it back from higher income earners (this is called the universal demogrant or UD). Many people prefer the NIT approach because it looks cheaper upfront, but the UD approach has the advantage of treating everyone equally. It may also be easier for people whose income fluctuates a lot.
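The arithmetic behind the two approaches is easy to sketch. In the toy example below, the $20,000 guarantee, the 50% NIT phase-out rate, and the 50% UD clawback rate are all illustrative assumptions, not figures from any actual proposal:

```python
def nit_transfer(income, guarantee=20_000, phase_out=0.5):
    """Negative income tax: only low earners receive a top-up."""
    return max(0.0, guarantee - phase_out * income)

def ud_net_transfer(income, guarantee=20_000, clawback=0.5):
    """Universal demogrant: everyone gets the full amount up front,
    then it is taxed back from higher earners."""
    taxed_back = min(guarantee, clawback * income)
    return guarantee - taxed_back

for income in (0, 10_000, 40_000, 80_000):
    print(income, nit_transfer(income), ud_net_transfer(income))
```

With these parameters the two schemes deliver identical net transfers; the practical difference is that the UD pays everyone first and recovers the money through the tax system, which can be simpler for people whose income fluctuates.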
The purpose of a BIG is to replace means-tested programs like welfare and old age security that are only available to people who have nothing left (means-testing looks at savings and property like housing and cars to determine eligibility for support), with a universal support that would prevent people from ending up in that position to begin with. In Canada it could be $20,000 per year, no questions asked, with possible additional amounts for seniors, disabled people, and Northern residents (i.e. people who get additional tax deductions in recognition of the extra costs of living in remote areas), though it would probably be less than that initially. While that might sound like a lot, if you look at what it replaces and the way it would impact society as a whole, it makes a lot of sense, economically and socially.
There have been numerous experiments that show simply giving money to those living in poverty and letting them figure out what to do with it is more successful than giving them targeted aid. In the ’60s and ’70s, the US ran experiments in five different regions to see if people would quit working (they didn’t), while Canada experimented with a Mincome in Dauphin, Manitoba — for the five years the program existed in Dauphin, poverty was completely eliminated. In general, people’s motivation to work was not impacted, but when people did stop working outside the home, it was mostly women who were staying home with young children or teenagers who stayed in school longer.
Mexico started giving impoverished people cash in the ’90s, to replace food subsidies, and found that people ate better and were healthier, and that children stayed in school longer. In 2008, Uganda’s Youth Opportunities Program offered cash to young rural applicants to learn a skilled trade. (Women in these programs fared particularly well because they were so much worse off to begin with.) In 2009, 13 long-term homeless London men (“rough sleepers”) were offered £3,000 each to help them move off the street, with logistical support from a project coordinator. Far more of them moved off the street than was originally expected. And in 2012, GiveDirectly, inspired by Mexico, gave impoverished Kenyan families $1,000 each to see what they would do. The Kenyans did things like upgrade their houses with better materials and start businesses. There have also been recent pilots in Namibia and India.
The biggest concern people seem to have about these programs is that recipients will just drink or otherwise waste the money instead of spending it wisely, the assumption being that poor people don’t know how to manage money. (Do the wealthy never “waste” money?) But it turns out that the reason people are poor is because they don’t have enough money, so when they get it, they spend it on things they need, like better food, improved housing, education, and starting businesses. They know how to manage money as well as anyone else does. They just don’t have enough to work with.
Another concern is that if people are given enough to live on, they will stop working (which would actually be bad for the economy, because then who would pay taxes to pay for all of this?). And it’s true — people do work slightly less. In the US studies, the overall decrease in working hours was 13 per cent for the entire family, but it was mothers and teenagers who disproportionately reduced work hours. In general, fears about people becoming lazy and quitting work are not borne out by the facts. Sure, a few people might slack off, but not enough to have a notable impact on the economy.
In any case, demonstrated benefits appear to outweigh any potential cons: improved physical and mental health (which leads to reduced health costs), lower rates of domestic violence, lower crime rates in general (which means lower policing costs), higher school graduation rates; and overall increased economic productivity.
Why give the poor more money?
1) Economic security is a human right.
Human beings — all human beings — have the right to be treated with dignity and respect, to be fully equal, just because they are human. The Universal Declaration of Human Rights (UN, 1948) includes economic security (articles 22, 25) as a basic human right.
2) It’s cheaper to administer than the welfare programs we operate currently.
Current benefit plans pay people to judge other people — sometimes repeatedly — in order to determine whether or not they “deserve” help. For example, in the UK, private contractors spent more money weeding out ineligible disability recipients (many of whom were actually eligible) than the money the government saved by taking them off the rolls.
3) A basic income program would work better than what we have now.
Welfare programs typically impose too many conditions and don’t provide enough money, due to irrational fears around giving money to people who don’t “deserve” it. Most welfare recipients in Canada, for example, are well below the poverty level even with maximum support. And in many jurisdictions in Canada and the US, single “employable” adults may not be eligible for any support at all, depending on their circumstances. There may be a waiting period before they receive benefits, even if they are down to their last dollar, and there is often a time limit after which they run out of benefits, regardless of need.
No one is actually entitled to a job
Today, our priority is training people to hold down jobs or to get better jobs. But the reality is that no one is legally entitled to a job. In Canada, our government doesn’t ensure there are enough jobs to go around, or that the jobs available are suitable for the people who are looking for work. The government does encourage the business sector to create jobs, and we have the right to be considered for those jobs, when they exist, on the basis of qualifications and not demographics. However, that does not guarantee that there will be an appropriate job available for every individual. There are no secure fields either. Even engineering jobs can go offshore. (I had a chat with a German engineer online a while back and he mentioned that his employer gave him early retirement and shipped his job to India. Do they still have those “German engineering” ads?)
There are many ways a BIG could be introduced, but so far there is no consensus as to how to go about it or which level of government would pay for it. In Canada, Ontario is planning a pilot project, and Quebec is also talking about it, but it would be preferable to approach this at the federal level, supplemented by the provinces if needed.
In my fantasies, the Canadian government could take our Goods and Services Tax Credit infrastructure, and start by giving everyone $100 per month instead of the GSTC payments. There would be no eligibility requirements other than to sign up, and no clawbacks on welfare and other supports already in place. Raise the amount to $250 the second year and $500 the third, and you start to see stress levels and debt go down. Once you get to $1000 a month (provided everyone continues to be better off from year to year), you can start to partially claw back benefits such as welfare, disability, old age security, the non-refundable tax credit on your tax return (basic personal amount), minimum wage, subsidies for social housing and daycare, registered savings plans, etc. Eventually you reach the end goal and from then on the amount would simply increase every year, based on inflation. In reality, it’s unlikely to be quite that simple, but the point is that if BIG is brought in in stages, people have time to adjust, as does the job market.
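A rough back-of-envelope sketch of the gross cost of that phase-in, assuming a hypothetical 30 million eligible adults (my assumption, not a figure from the article), before any clawbacks or savings from the programs being replaced:

```python
# Hypothetical phase-in schedule: monthly amount per adult, by year
phases = [100, 250, 500, 1000]  # dollars per month
adults = 30_000_000             # assumed number of eligible Canadian adults

for year, monthly in enumerate(phases, start=1):
    annual_cost = monthly * 12 * adults
    print(f"Year {year}: ${monthly}/month -> ${annual_cost / 1e9:.0f}B gross per year")
```

Gross outlay overstates the net cost considerably, since under either scheme most of the money paid to higher earners is recovered through taxes.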
A number of jurisdictions already have partial BIG programs in place, including Macau (since 2008), Iran (since 2010), and Alaska (since 1982), which indicates that it’s possible to at least get things started.
In the meantime, for far too many people, poverty is an emergency right now. It’s possible the provinces might be willing to raise welfare rates or make eligibility easier in the short term if they know they’ll get to phase it out in a few years.
Also, since women tend to have less money than men, we need to make sure we claw back programs that benefit men/higher earners more (such as retirement savings plans) as well as plans that benefit women/lower earners more (such as minimum wage).
There are a number of social benefits to a basic income, including:
– The poorest will be better off — many people will be healthier and less stressed.
– Homeless youth will be better protected — if the minimum age to qualify for a basic income is 16, a significant portion of homeless youth will be able to afford food, shelter, and further education.
– It’s easy to get. There is no paperwork beyond filing a tax return, no means testing, no delays, no clawbacks if you earn other income, you don’t have to sell everything you own first before qualifying.
– It isn’t dehumanizing like welfare is now.
There may also be also some negatives:
– People will most likely pay off their debts, so banks will lose some income. Also, student loans may become unnecessary. This may reduce administrative jobs, which tend to be held by women.
– People whose jobs involve administering government handouts will need to look for other sources of income to supplement their new BIG. (This is one reason why it needs to be brought in in stages, so people have time to adjust.) (Most of these people are women.)
– There will be an adjustment period while everyone gets used to it.
– A few slackers might stop working.
A BIG would benefit women in particular
There are numerous situations where a BIG would help women in particular. For example:
– The number of women in poverty is increasing at a much faster rate than men, meaning that poor people are disproportionately female. A BIG could address the feminization of poverty.
– Family care (e.g. child care, elder care).
– Preventing girls/women from entering the sex industry, and helping them exit the trade more easily than if they were to make use of welfare/job training.
– Leaving abusive relationships (there may also be fewer abusive relationships to begin with).
– In developing countries, girls may be more likely to stay in school longer.
Having enough money to cover basic costs gives people real choices and protects their human rights. We talk a lot about choice now, but many “choices” we make are made under economic constraints. A BIG would not be enough to buy the company and fire the boss, or buy the building and fire the landlord, or hire that lawyer who totally blows your enemies out of the water (or whatever your personal power fantasy is), but it would make it easier for people to protect themselves from exploitation and abuse. It would also make it easier for people to find the time to work towards a better world.
Anemone Cerridwen has three science degrees and no job. She has been on welfare/disability for 18.5 years to date (15 continuous) and would not wish it on her worst enemy. She currently lives in Edmonton.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9339792132377625,
"language": "en",
"url": "https://www.iciss.ca/2019/09/is-this-an-intergenerational-transfer/",
"token_count": 269,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.40234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:cecb7255-f7e2-4b87-a328-26a577ebdf47>"
}
|
Is This an Intergenerational Transfer?
An intergenerational transfer is the passing down of assets, rights, and privileges from one generation to another.
Though it might sound appealing, given that the boomer generation before us has accumulated significant wealth, that transfer also leaves us with the many problems they created.
The transfer could include:
- Personal property
- Personal capital
- Social cohesion
But in reality, they also left us with innumerable problems that we must now solve ourselves.
- The most pressing being a dying planet.
- Exhausted natural resources
- Stomach turning level of debt
- Inflated housing market
- Low-quality jobs
- Congested and filthy cities.
- Unstable economies
Although one generation can’t solve every problem, the generation that preceded us has left us with complicated problems that already threaten the framework of our society and of the generations to come.
Conceivably the greatest failure of the boomer generation is its continued adherence to simple GDP growth and the belief that a larger economy will solve the problem.
Any solution now is already very late, but we must still continue to fight. We must keep pursuing a sustainable future and ensure the continued existence of our society.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9725856184959412,
"language": "en",
"url": "https://yourbusiness.azcentral.com/business-financial-issues-12618.html",
"token_count": 770,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.01519775390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:cfa1164d-0f2b-4696-9843-5a443a9d8845>"
}
|
Although businesses offer products and services for the benefit of their customers, their main underlying objective is to make money. Ultimately, without finding ways to do this consistently, a company will likely not stay in business long. Small-business financial problems are common, and without proper planning, these problems could be the downfall of your business. The types of financial problems may vary, but their effect remains the same.
The Importance of Profitability
Profit, or net income, is the amount of money a company has remaining after subtracting expenses, and it sits at the center of a major financial problem for small businesses. In some circumstances — such as newer companies focused on hypergrowth — not turning a profit is no big deal because revenue is high and much of it is being reinvested into growth. For most companies, however, the ability to make a profit is the deciding factor between staying open and shutting down.
A company should know their production costs, as well as overhead costs, in order to properly set prices to an amount that will ensure they can afford their expenses. Not being able to sustain operations at levels that ensure profits is the downfall of many small businesses.
Cash Flow Needs
Although they may seem similar in nature, cash flow is distinctly different from profit. While profit measures the money left after expenses, cash flow focuses on the amount of money coming into and out of a business. Money coming into a business is from products and services sold, while the money going out of a business can include bills, payroll and other debt obligations. Just because a company makes a profit does not mean that they are cash-flow positive.
For example, imagine that a company does $25,000 in sales and has $20,000 in expenses one month. There is no guarantee that the company received all $25,000 from those sales, as some may have been invoiced, to be paid at a later date. If $10,000 of those sales are invoiced and not paid by the time the business has to cover its $20,000 in expenses, it could be in financial trouble.
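The example can be restated in a few lines of code, which makes clear that profit and cash flow are computed from different inputs:

```python
sales = 25_000            # revenue booked this month
expenses = 20_000         # bills due this month
invoiced_unpaid = 10_000  # portion of sales not yet collected

profit = sales - expenses                 # accrual view
cash_collected = sales - invoiced_unpaid  # money actually received
cash_flow = cash_collected - expenses     # cash view

print(profit)     # 5000  -> profitable on paper
print(cash_flow)  # -5000 -> short of cash to pay the bills
```

The company is $5,000 in profit yet $5,000 short of the cash it needs this month, which is exactly the gap that sinks otherwise healthy small businesses.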
The Role of Overhead Costs
Overhead costs are indirect, fixed costs, meaning a company will have to pay them regardless of how well (or not) business is going. These are costs that do not generate any revenue for the business but are mandatory to keep the business operating. Some examples of overhead costs include rent, utilities and salaries. Regardless of whether a company makes $50,000 per month or $5,000,000 per month, those expenses will be the same each month.
For any company, knowing the overhead costs is critical because that lets them know the minimum cost of running the business and how much money they need to bring in to keep operating. As time progresses, it should be a company's goal to reduce overhead costs where possible. Mismanaging or being oblivious to overhead costs is one of the quickest ways for a company to fail.
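To illustrate, a company can total its fixed costs and estimate the minimum monthly revenue needed to cover them. The line items and the 40 percent gross margin below are hypothetical:

```python
overhead = {
    "rent": 5_000,
    "utilities": 1_000,
    "salaries": 24_000,
}
gross_margin = 0.40  # assumed fraction of each sales dollar left after direct costs

total_overhead = sum(overhead.values())
break_even_revenue = total_overhead / gross_margin

print(total_overhead)             # 30000
print(round(break_even_revenue))  # 75000
```

Under these assumptions, the business needs roughly $75,000 in monthly sales just to cover its fixed costs; anything below that is a loss no matter how busy it looks.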
Do Not Forget the Payroll
For many small businesses, payroll accounts for the most significant amount of their monthly expenses. Payroll not only includes the monetary salary companies pay to employees, but it also includes costs associated with having employees, such as benefits and taxes. Because payroll accounts for so much of a company's expenses, efficiently managing payroll is integral to avoiding financial issues. Having different positions with overlapping job duties, for example, may unnecessarily increase the number of employees a company needs on the payroll.
Stefon Walters earned a bachelor's degree in Economics from the University of North Carolina at Chapel Hill. After college, he went on to work sales and finance roles for a Fortune 200 company before founding two tech companies. He is also the author of Finessin' Finances, a full-length book on personal finances.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9532555937767029,
"language": "en",
"url": "https://big-pond-rumours.com/ethereum-fbar/",
"token_count": 1115,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.095703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2f719682-ca2c-4d55-a462-aaa21feb68f5>"
}
|
What Is Ethereum (ETH)?
Ethereum is a decentralized open-source blockchain system that features its own cryptocurrency, Ether. ETH works as a platform for many other cryptocurrencies, as well as for the execution of decentralized smart contracts. Ethereum was first described in a 2013 whitepaper by Vitalik Buterin. Buterin, together with other co-founders, secured funding for the project in an online public crowd sale in the summer of 2014 and officially launched the blockchain on July 30, 2015.
Ethereum’s stated goal is to become a global platform for decentralized applications, allowing users from all over the world to write and run software that is resistant to censorship, downtime and fraud.
Who Are the Founders of Ethereum?
Ethereum has a total of eight co-founders, an unusually large number for a crypto project. They first met on June 7, 2014, in Zug, Switzerland.
Russian-Canadian Vitalik Buterin is perhaps the best known of the lot. He authored the original white paper that first described Ethereum in 2013 and still works on improving the platform to this day. Prior to ETH, Buterin co-founded and wrote for the Bitcoin Magazine news site.
British programmer Gavin Wood is arguably the second most important co-founder of ETH: he coded the first technical implementation of Ethereum in the C++ programming language, proposed Ethereum’s native programming language Solidity and was the first chief technology officer of the Ethereum Foundation. Before Ethereum, Wood was a research scientist at Microsoft. Afterward, he moved on to establish the Web3 Foundation.
Among the other co-founders of Ethereum are:
- Anthony Di Iorio, who financed the project during its early stage of development.
- Charles Hoskinson, who played the primary role in establishing the Swiss-based Ethereum Foundation and its legal framework.
- Mihai Alisie, who provided assistance in establishing the Ethereum Foundation.
- Joseph Lubin, a Canadian entrepreneur who, like Di Iorio, helped fund Ethereum during its early days, and who later founded ConsenSys, an incubator for startups based on ETH.
- Amir Chetrit, who helped co-found Ethereum but stepped away from it early in the development.
What Makes Ethereum Unique?
Ethereum pioneered the concept of a blockchain smart contract platform. Smart contracts are computer programs that automatically execute the actions necessary to fulfill an agreement between several parties on the internet. They were designed to reduce the need for trusted intermediaries between contracting parties, thus reducing transaction costs while also increasing transaction reliability.
Ethereum’s principal innovation was designing a platform that could execute smart contracts using the blockchain, which further reinforces the existing benefits of smart contract technology. Ethereum’s blockchain was designed, according to co-founder Gavin Wood, as a sort of “one computer for the whole planet,” theoretically able to make any program more robust, censorship-resistant and less prone to fraud by running it on a globally distributed network of public nodes.
In addition to smart contracts, Ethereum’s blockchain is able to host other cryptocurrencies, called “tokens,” through the use of its ERC-20 compatibility standard. This has been the most common use for the ETH platform so far: to date, more than 280,000 ERC-20-compliant tokens have been launched. Over 40 of these make the top 100 cryptocurrencies by market capitalization, for example, USDT, LINK and BNB.
How Is the Ethereum Network Secured?
As of August 2020, Ethereum is secured via the Ethash proof-of-work algorithm, belonging to the Keccak family of hash functions.
There are plans, however, to transition the network to a proof-of-stake algorithm as part of the major Ethereum 2.0 upgrade, which launched in late 2020.
After the Ethereum 2.0 Beacon Chain (Phase 0) went live at the beginning of December 2020, it became possible to begin staking on the Ethereum 2.0 network. An Ethereum stake is when you deposit ETH (acting as a validator) on Ethereum 2.0 by sending it to a deposit contract, essentially taking the role of a miner and thus securing the network. At the time of writing in mid-December 2020, the Ethereum staking reward, or the amount of money earned daily by Ethereum validators, is about 0.00403 ETH a day, or $2.36. This number will change as the network develops and the number of stakers (validators) increases.
Ethereum staking rewards are determined by a distribution curve (the participation and average percentage of stakers): some ETH 2.0 staking rewards start at 20% for early stakers but will be reduced to end up between 7% and 4.5% annually.
The minimum requirement for an Ethereum stake is 32 ETH. If you decide to stake in Ethereum 2.0, your Ethereum stake will be locked up on the network for months, if not years, into the future, until the Ethereum 2.0 upgrade is completed.
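As a sanity check, the article's own figures can be annualized with simple arithmetic (this ignores compounding and changing participation rates):

```python
daily_reward_eth = 0.00403  # quoted daily reward per validator
stake_eth = 32              # minimum validator stake

annual_yield_pct = daily_reward_eth * 365 / stake_eth * 100
print(f"{annual_yield_pct:.1f}%")  # ~4.6%, consistent with the quoted 4.5-7% range
```

The result lands near the bottom of the stated 4.5% to 7% band, which is what you would expect as more validators join and rewards move down the distribution curve.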
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9416806101799011,
"language": "en",
"url": "https://energyindemand.com/reference-material/",
"token_count": 4246,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.130859375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6c959b69-d5e2-4176-93fd-56f2bac90f10>"
}
|
To help readers, various updates on policies, programmes, definitions and so on related to energy efficiency and renewable energy will be added to this page.
1. European Energy Efficiency Policies and Programmes
The EU has taken a very comprehensive approach to developing a legal framework for energy efficiency. The central goals for energy policy (security of supply, competitiveness, and sustainability) are now laid down in the Lisbon Treaty. A common EU energy policy has evolved around the common objective to ensure the uninterrupted availability of energy products and services on the market, at a price affordable for all private and industrial consumers, and at the same time contributing to the EU’s wider social and climate goals.
The main categories of legal instruments used in the European Union are:
- regulations: these are binding in their entirety and directly applicable in all Member States;
- directives: these bind the Member States as to the results to be achieved and have to be transposed into the national legal framework;
- decisions: these are fully binding on those to whom they are addressed;
- recommendations and opinions: these are non-binding, declaratory instruments.
For energy efficiency, the main approach taken has been to use directives. Below are the main Directives being used for energy efficiency. This does not include regulations or decisions that are based on the basic directives. The list of Directives is large and comprehensive. It is also continuing to grow and evolve: in June 2011, a new Energy Efficiency Directive was proposed by the EC to help achieve the 2020 target and even more.
The European Union also promotes energy efficiency through the Intelligent Energy Europe Programme, the Covenant of Mayors, and Research and Development. Intelligent Energy funds non-technical projects for energy efficiency. The Covenant of Mayors brings together local and regional authorities who voluntarily commit to increase energy efficiency and use of renewable energy sources on their territories. Signatories aim to meet and exceed the European Union 20% CO2 reduction objective by 2020.
The EU also promotes energy efficiency through over-arching policy strategies, plans and communications. The most recent Plan was published in March 2011 (for more information go to the eceee website).
Top of page…
2. Main European Union Directives Related to Energy Efficiency
Energy Efficiency Directive
Directive 2012/27/EU of the European Parliament and of the Council on energy efficiency, amending Directives 2009/125/EC and 2010/30/EU and repealing Directives 2004/8/EC and 2006/32/EC
Energy Labelling of Domestic Appliances
Council Directive 92/75/EEC of 22 September 1992 on the indication by labelling and standard product information of the consumption of energy and other resources by household appliances and its amendments and implementing measures (“Energy Labelling Directive”) repealed by:
Directive 2010/30/EU of the European Parliament and of the Council of 19 May 2010 on the indication by labelling and standard product information of the consumption of energy and other resources by energy-related products (recast).
Ecodesign of Energy-Using Products
Directive 2005/32/EC of the European Parliament and of the Council of 6 July 2005, as amended by Directive 2008/28/EC of the European Parliament and of the Council of 11 March 2008, establishing a framework for the setting of ecodesign requirements for energy-using products and amending Council Directive 92/42/EEC and Directives 96/57/EC and 2000/55/EC of the European Parliament and of the Council (“Ecodesign Directive”), replaced by Directive 2009/125/EC of the European Parliament and of the Council of 21 October 2009 establishing a framework for the setting of ecodesign requirements for energy-related products (recast).
End-use Efficiency & Energy Services
Directive 2006/32 of the European Parliament and of the Council of 5 April 2006 on energy end-use efficiency and energy services and repealing Council Directive 93/76/EEC (“The Energy Services Directive”). This Directive has been repealed and replaced by the 2012 Energy Efficiency Directive.
Energy Efficiency in Buildings
Directive 2002/91 of the European Parliament and of the Council of 16 December 2002 on the energy performance of buildings and its amendments repealed by its recast directive:
Directive 2010/31 of the European Parliament and of the Council of 17 May 2010 on the energy performance of buildings and its amendments (the recast Directive entered into force in July 2010).
Cogeneration – Combined Heat and Power (CHP)
Directive 2004/8/EC of the European Parliament and of the Council of 11 February 2004 on the promotion of cogeneration based on a useful heat demand in the internal energy market and amending Directive 92/42/EEC of 21 May 1992 on efficiency requirements for new hot-water boilers fired with liquid or gaseous fuels.
This Directive has now been removed and replaced by the 2012 Energy Efficiency Directive.
Transport
Directive 2009/33/EC of the European Parliament and of the Council of 23 April 2009 on the promotion of clean and energy-efficient road transport vehicles
Directive 1999/94/EC of the European Parliament and of the Council of 13 December 1999 relating to the availability of consumer information on fuel economy and CO2 emissions in respect of the marketing of new passenger cars
3. European Union Directives Related to Renewable Energy
Directive 2009/28/EC of the European Parliament and of the Council of 23 April 2009 on the promotion of the use of energy from renewable sources and amending and subsequently repealing Directives 2001/77/EC and 2003/30/EC
Directive 2003/30/EC of the European Parliament and of the Council of 8 May 2003 on the promotion of the use of biofuels or other renewable fuels for transport
Directive 2001/77/EC of the European Parliament and of the Council of 27 September 2001 on the promotion of electricity produced from renewable energy sources in the internal electricity market.
4. Codecision (or ordinary legislative procedure)
One of the important changes introduced by the Lisbon Treaty (or the Treaty of the European Union (TEU) and the Treaty of the Functioning of the European Union (TFEU)) is the fact that co-decision becomes the “ordinary legislative procedure”, i.e. what used to be the exception in decision-making has become the norm for most policy areas.
As defined in Article 294 of the TFEU, the co-decision procedure is the legislative process which is central to the Community’s decision-making system. It is based on the principle of parity and means that neither institution (European Parliament or Council) may adopt legislation without the other’s assent.
For those following the legislative approval, the following terms are used by the institutions of the European Union.
“A” item / “B” item: The Council’s rules of procedure lay down that “the provisional agenda shall be divided into Part A and Part B. Items for which approval by the Council is possible without discussion shall be included in Part A, but this does not exclude the possibility of any member of the Council or of the Commission expressing an opinion at the time of the approval of these items and having statements included in the minutes”. An “A” item is therefore a dossier on which an agreement already exists, enabling it to be formally adopted without debate. The items in part “B” of the agenda are scheduled for debate. Similarly, the Coreper agenda is divided into a part “I” (items scheduled without debate) and a part “II” (items scheduled for debate). In addition, the deliberations and decisions of the Council itself under the co-decision procedure are public.
Absolute majority (in the European Parliament): Majority of the members who comprise Parliament. In its present configuration (with 736 MEPs), the threshold for an absolute majority is 369 votes (Note: In the elections in June 2009 which took place on the basis of the Nice Treaty, the number of MEPs was reduced to 736. With the entry into force of the Lisbon Treaty on 1/12/2009, the number will be increased to 754 once the new arrangements have been completed and reduced to 751 for the elections in 2014. Consequently, the numbers necessary to reach an absolute majority will thus change to 378 and 376 respectively). Under the co-decision procedure, an absolute majority is necessary in plenary session when voting on a second reading in order to reject the Council position at first reading or to adopt amendments.
COREPER: Article 16 (7) TEU lays down that “a committee consisting of the Permanent Representatives of the Member States shall be responsible for preparing the work of the Council”.
Coreper plays a pivotal role in the Community decision-making system, where it is a forum for both dialogue (between the permanent representatives and between each of them and their capital) and political control (orientation and supervision of the work of the groups of experts). It meets each week and is in fact divided into two parts:
- Coreper I, comprising the Deputy Permanent Representatives, prepares the ground for the following Council configurations:
- Employment, Social Policy, Health and Consumer Affairs;
- Competitiveness (internal market, industry, research and tourism);
- Transport, Telecommunications and Energy;
- Agriculture and Fisheries;
- Education, Youth and Culture (including audiovisual);
- Coreper II, comprising the Permanent Representatives, prepares for the other configurations:
- General Affairs Council;
- External Relations Council (including European security and defence policy and development cooperation);
- Economic and Financial Affairs (including the budget);
- Justice and Home Affairs (including civil protection).
Coreper monitors and coordinates the work of some 250 committees and working parties consisting of officials from the Member States who prepare the dossiers at technical level.
With regard to the co-decision procedure, Coreper, and particularly its President, is Parliament’s main counterpart.
General approach (in the Council of Ministers): This is an informal agreement within the Council, sometimes by qualified majority, before Parliament has given its opinion on first reading. Such an agreement speeds up work, or even facilitates an agreement on first reading. On the other hand, the Commission gives no definitive undertaking to the Council owing to the absence of an opinion from Parliament. Once the Council has received Parliament’s opinion, the Council prepares a political agreement.
Inter-institutional relations group (GRI) (French acronym): A body within the Commission with the task of coordinating political, legislative and administrative relations with the other institutions and in particular with the European Parliament and the Council. The GRI brings together members from all the Commissioner’s cabinets tasked with monitoring inter-institutional affairs. The GRI meets, in principle, once a week. It handles, more specifically, dossiers dealt with by the Council and the European Parliament which are sensitive from an institutional point of view, some of which come under the co-decision procedure.
Ordinary legislative procedure: formal Treaty term in the Lisbon Treaty to refer to co-decision as set out in article 294 TFEU.
Political agreement (in the context of preparing the Council position at first reading) agreement expressed in principle by the Council, following a vote where appropriate. This agreement contains the guidelines for the future common position and the details are finalised, particularly in terms of the recitals, by the working party, verified by lawyer-linguists, then formally adopted as a common position by the Council at a subsequent session, mostly without a debate. On average, the political agreement comes 3 to 6 months prior to formal adoption of the common position.
Qualified majority (in the Council of Ministers): Since 1 January 2007, each Member State is attributed a weighted number of votes, and the threshold for a qualified majority is set at 255 votes out of 345 (73.91%). The decision also requires a favourable vote from the majority of Member States (i.e. at least 14 Member States). In addition, a Member State may request verification that the qualified majority includes at least 62% of the Union’s total population. Should this not be the case, the decision will not be adopted.
In successive waves of institutional reform, qualified majority voting has replaced unanimity, which is less effective for developing an operational Community policy (risk of veto). With the entry into force of the Lisbon Treaty, the above-mentioned regime will continue until 31 October 2014 (see Article 16 (4) TEU,Article 238 TFEU and Protocol No. 36 on Transitional Provisions).
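The triple test described above (a weighted-vote threshold of 255 out of 345, a favourable vote from a majority of Member States, and, where verification is requested, at least 62% of the Union's population) can be sketched as a simple check. Only the thresholds come from the text; the per-state vote weights and population shares passed in below are hypothetical inputs.

```python
# Sketch of the pre-2014 qualified-majority test described in the text.
# Thresholds are taken from the passage above; the supporter data fed in
# is hypothetical.

TOTAL_MEMBER_STATES = 27
VOTE_THRESHOLD = 255          # out of 345 weighted votes
POPULATION_THRESHOLD = 0.62   # share of total EU population

def qualified_majority(supporters, verify_population=False):
    """supporters: one (weighted_votes, population_share) tuple per
    Member State voting in favour."""
    votes = sum(v for v, _ in supporters)
    if votes < VOTE_THRESHOLD:
        return False
    # a favourable vote from the majority of Member States (at least 14 of 27)
    if len(supporters) <= TOTAL_MEMBER_STATES // 2:
        return False
    # optional demographic verification requested by a Member State
    if verify_population and sum(p for _, p in supporters) < POPULATION_THRESHOLD:
        return False
    return True
```

For example, 14 states carrying 266 weighted votes and 70% of the population pass all three tests, while the same vote total spread over only 13 states fails the member-state condition.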
Rapporteur: The MEP responsible for preparing a report.
Report (Parliament): Under the co-decision procedure, a Parliamentary report prepares Parliament’s position. Drawn up by an MEP chosen from within the competent Parliamentary committee (the “rapporteur”), it basically contains suggested amendments and a statement of reasons explaining the proposed amendments.
Shadow rapporteurs: MEPs who monitor a dossier for political groups other than that of the rapporteur.
Simple majority (in the European Parliament): Majority of the members taking part in the vote. Under the co-decision procedure, a simple majority is required when voting in Parliamentary committee, in plenary on a first reading and, on a second reading, to approve the Council position at first reading and in order to draw up the act in accordance with the joint draft prepared by the Conciliation Committee.
Statement of reasons: text accompanying an act or preparatory act to explain the reasoning behind it. Such texts consist of Commission proposals, opinions of the European Parliament and common positions of the Council.
TEU: Treaty on the European Union (part of the Lisbon Treaty that entered into force on 1 December 2009)
TFEU: Treaty on the Functioning of the European Union (part of the Lisbon Treaty that entered into force on 1 December 2009)
Trilogue / Trialogue (FR): informal tripartite meetings attended by representatives of the European Parliament, the Council and the Commission. Owing to the ad-hoc nature of such contacts, no “standard” format of representation has been laid down. The level and range of attendance, the content and the purpose of trilogues may vary from very technical discussions (involving staff level of the three administrations) to very political discussions (involving Ministers and Commissioners). They may address issues of planning and timetable or go into detail on any particular substantial issue.
However, as a general rule, they involve the rapporteur (accompanied where necessary by shadow rapporteurs from other political groups), the chairperson of COREPER I or the relevant Council working party assisted by the General Secretariat of the Council and representatives of the Commission (usually the expert in charge of the dossier and his or her direct superior assisted by the Commission’s Secretariat-General and Legal Service).
The purpose of these contacts is to get agreement on a package of amendments acceptable to the Council and the European Parliament. The Commission’s endorsement is particularly important, in view of the fact that, if it opposes an amendment which the European Parliament wants to adopt, the Council will have to act unanimously to accept that amendment. Any agreement in trilogues is informal and “ad referendum” and will have to be approved by the formal procedures applicable within each of the three institutions.
Unanimity (Council): Unanimity denotes the obligation to reach a consensus among all the Member States meeting within the Council so that a proposal can be adopted. According to Article 238 (4) TFEU, abstention “shall not prevent the adoption by the Council of acts which require unanimity”. Since the Single European Act of 1987, the scope for unanimity has been increasingly limited. Under the ordinary legislative procedure , unanimity is only required in cases where the Commission cannot accept the amendments introduced into its proposal. Otherwise, the Lisbon Treaty makes provision for unanimity mostly in case of application of the “special legislative procedure”.
The procedure flow chart
The diagram below shows the complex nature of the EU ordinary legislative procedure, revised since the adoption of the Lisbon Treaty. The co-decision procedure is firmly established as the ordinary legislative process central to the Community’s decision-making system. It is based on the principle of parity, meaning that neither the European Parliament nor the Council may adopt legislation without the other’s assent. A written description of the co-decision procedure is available at the Commission’s website.
5. What is a Carbon Price and Why do we Need One?
This is good reference material from the Guardian on a carbon price.
A carbon price is a cost applied to carbon pollution to encourage polluters to reduce the amount of greenhouse gas they emit into the atmosphere. Economists widely agree that introducing a carbon price is the single most effective way for countries to reduce their emissions.
Climate change is considered a market failure by economists, because it imposes huge costs and risks on future generations who will suffer the consequences of climate change, without these costs and risks normally being reflected in market prices. To overcome this market failure, they argue, we need to internalise the costs of future environmental damage by putting a price on the thing that causes it – namely carbon emissions.
A carbon price not only has the effect of encouraging lower-carbon behaviour (e.g. using a bike rather than driving a car), but also raises money that can be used in part to finance a clean-up of “dirty” activities (e.g. investment in research into fuel cells to help cars pollute less). With a carbon price in place, the costs of stopping climate change are distributed across generations rather than being borne overwhelmingly by future generations.
There are two main ways to establish a carbon price. First, a government can levy a carbon tax on the distribution, sale or use of fossil fuels, based on their carbon content. This has the effect of increasing the cost of those fuels and the goods or services created with them, encouraging business and people to switch to greener production and consumption. Typically the government will decide how to use the revenue, though in one version, the so-called fee-and-dividend model – the tax revenues are distributed in their entirety directly back to the population.
The second approach is a quota system called cap-and-trade. In this model, the total allowable emissions in a country or region are set in advance (“capped”). Permits to pollute are created for the allowable emissions budget and either allocated or auctioned to companies. The companies can trade permits between one another, introducing a market for pollution that should ensure that the carbon savings are made as cheaply as possible.
To serve its purpose, the carbon price set by a tax or cap-and-trade scheme must be sufficiently high to encourage polluters to change behaviour and reduce pollution in accordance with national targets. For example, the UK has a target to reduce carbon emissions by 80% by 2050, compared with 1990 levels, with various intermediate targets along the way. The government’s independent advisers, the Committee on Climate Change, estimates that a carbon price of £30 per tonne of carbon dioxide in 2020 and £70 in 2030 would be required to meet these goals.
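As a rough illustration of how such a price feeds through to everyday fuel costs, the sketch below applies the Committee on Climate Change figures quoted above to a litre of petrol. The emission factor of roughly 2.3 kg of CO2 per litre is an outside assumption, used only to make the arithmetic concrete.

```python
# Back-of-the-envelope carbon-tax arithmetic. The carbon prices (GBP 30
# per tonne in 2020, GBP 70 in 2030) are the Committee on Climate Change
# estimates quoted in the text; the petrol emission factor is an assumed
# illustrative figure, not a quoted one.

CO2_KG_PER_LITRE_PETROL = 2.3  # assumed emission factor

def surcharge_per_litre(carbon_price_per_tonne):
    """Extra cost per litre of petrol implied by a carbon price (per tonne of CO2)."""
    tonnes_per_litre = CO2_KG_PER_LITRE_PETROL / 1000.0
    return carbon_price_per_tonne * tonnes_per_litre

for year, price in [(2020, 30), (2030, 70)]:
    print(f"{year}: about GBP {surcharge_per_litre(price):.2f} per litre at GBP {price}/tonne")
```

Under this assumption, a GBP 30 per tonne price implies a surcharge of roughly 7p per litre of petrol.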
Currently, many large UK companies pay a price for the carbon they emit through the EU’s emissions trading scheme. However, the price of carbon through the scheme is considered by many economists to be too low to help the UK to meet its targets, so the Treasury plans to make all companies covered by the scheme pay a minimum of £16 per tonne of carbon emitted from April 2013.
Ideally, there should be a uniform carbon price across the world, reflecting the fact that a tonne of carbon dioxide does the same amount of damage over time wherever it is emitted. Uniform pricing would also remove the risk that polluting businesses flee to so-called “pollution havens”‘ – countries where a lack of environmental regulation enables them to continue to pollute unrestrained. At the moment, carbon pricing is far from uniform but a growing number of countries and regions have, or plan to have, carbon pricing schemes in place, whether through cap-and-trade or carbon taxes. These include the European Union, Australia, South Korea, South Africa, parts of China and California.
• This article was written by Alex Bowen of the Grantham Research Institute on Climate Change and the Environment at LSE in collaboration with the Guardian
I just finished reading “The Long Process of Development” by Jerry Hough and Robin Grier. The quick response is that you should read this book. If that’s enough, then go get it. All the rest of this post is just some of my reactions to the book.
The basic idea of HG is to trace out how long it took England and Spain (and by extension, their colonies Mexico and the U.S.) to evolve the elements of “good institutions” that we think promote economic growth. Clearly the process went faster in some of these places than others, but the point is that it took centuries regardless of who we are talking about.
HG look at the development of an effective state in England through history. For them, England gets a minimally effective state with Henry VII in 1485. His victory in the War of the Roses (and in particular his ruthless elimination of others with claims to the throne) gave him a government that had at least some control over the entire area of England and Wales. So is that when England has good institutions? No, not really. From that point, it is another two hundred and four years until the Glorious Revolution and what we might call the beginnings of constitutional monarchy. All good? Not quite. It is another one hundred and forty three years before the Reform Act of 1832 generates the barest seeds of what might be called inclusive institutions. Even if you think that England in 1832 had “good institutions” for economic development, that was three hundred and forty-seven years after England got a functioning central government. If we lower our sights and say that the Glorious Revolution had given England the “good institutions” necessary for economic development, then that was still two hundred years after England got a functioning central government.
The second major example used by HG is Spain. By 1504, Isabella had acquired a kingdom that essentially looks like modern Spain in geographic reach. She was the monarch of Castile, the Moors had been forced out of Granada, and she had brought Aragon into the kingdom by marrying Ferdinand. HG then document that despite this geographic reach, the government of Spain was not an effective central government in the way that Henry VII or VIII had over England. Even Philip II’s reign in the late 1500’s did not consolidate government in a way that seems consistent with his numerous foreign military activities. HG argue that Spain was about 200 years behind England, and only reached an effective central government around 1700. It would be arguably another 280 years after that before Spain got what we would call “good institutions”.
Regardless of the exact historical case study, HG’s point is that developing modern institutions that support sustained economic growth takes centuries, even in one case – England – where all the breaks kept going their way.
What is the point of this regarding development and growth? HG suggest that a large number of developing countries have a central government with the capabilities roughly equal to those of Henry VII. Many of them began as separately defined states only in the 1960’s, and in the subsequent fifty years have perhaps gained the ability to extend their powers of taxation and coercion to all corners of their geographic area. In places like Afghanistan, they cannot even do that.
Asking, expecting, or advising these countries to adopt “good institutions” is to ask them to skip between two and five centuries of institutional evolution in one leap. Developing countries evolving their own stable institutional structures that support economic growth is going to be long, ugly, and likely violent – just like it was in every single currently rich country. HG’s work says that institutions are not just another technology. While you can play catch-up relatively easily with technology (e.g. adopting mobile phones without landline networks), you cannot do the same with institutions.
Further, institutional development is always going to involve some coercion. Some group is going to have to be dragged kicking and screaming into the new institutional arrangement. HG clearly reject the idea that new social contracts will spontaneously get re-negotiated as circumstances change, as in the old North and Weingast interpretation of the Glorious Revolution in England. In contrast they accept the more Mancur Olson-ian view, that social contracts are whatever the dude with the gun says they are. The only way to accelerate the development process is to accelerate the concentration of coercive power with one group/party/coalition. From that perspective, the problem with the U.S. attempts at state building in Afghanistan and Iraq was not that they intervened, but that this intervention was half-assed and ended before the job was done. If you are going to intervene, pick a winner and then make sure they win. Trying to equalize power across different factions is precisely the wrong thing you should do to encourage institutional development. That is me spinning the argument out to a logical extreme, but it makes the point.
A last mild critique of HG is that it has a fault similar to most other work on institutions. It does not define what “good institutions” are. We know that England and the U.S. have them now, and that Spain seems to have them at least since after Franco. We know that England had “good” institutions in or around the 1800’s, and Spain apparently didn’t. And we know that England and Spain had “bad” institutions before the 1500’s. So it must be that institutional evolution takes somewhere between three and five centuries? But what precisely is it that England and Spain have today that they didn’t in 1500? What is a good institution?
HG are more clear than many on this point. They consciously limit themselves to examining whether a central government has effective control of taxation and violence within its borders. But of course, what does effective control mean? What does taxation mean – what’s the difference between a tribute, a donation, expropriation, and a tax? Does control of violence simply mean that all the people coercing others wear the same uniform?
This critique doesn’t eliminate the value of reading the book. The general point about the long time lags in the evolution of institutions (good or bad) is excellent. It is hard to fight time compression when reading history, and HG make clear that the institutions literature needs to get far more serious about that fight.
The negative effects of tobacco usage on health have been well studied, and the Center has published several briefs on the topic as it relates to Mississippi. Both of our projects in this area focused not only on the disease impact of smoking but also on the economic impact of treating smoking-related illness.
Mississippi Medicaid Costs Attributable to Tobacco
Evidence of the increased risk for specific diseases associated with tobacco use is well documented; the higher risk translates into greater health care costs for treating these diseases, much of which is paid by public programs such as Medicare and Medicaid. The Center commissioned researchers with The Hilltop Institute at the University of Maryland to review Mississippi Medicaid claims data and quantify the financial impact of tobacco use on Mississippi’s Medicaid program.
Why this is important:
- When all categories of expenditures were totaled, the estimated direct and indirect cost of tobacco-related illness to Mississippi Medicaid was $388 million in 2016 and $396 million in 2017.
Secondhand Smoke: Impact on Health and Economy
Reports have indicated that there is no safe level of exposure to tobacco smoke. Secondhand smoke can have 80-90% of the impact of chronic smoking. Given the number of smokers in Mississippi, exposure to secondhand smoke is an area of public health interest for Mississippi.
Why this is important:
- The most recent evidence identifies more than 7,000 chemicals in secondhand smoke, 69 of which have been identified as carcinogens, or cancer-causing compounds.
- In 2009, approximately 76,719 Mississippi children (10.4%) and 144,009 Mississippi adults (6.6%) had asthma. Between 2003-2007, asthma emergency room visits in Mississippi increased by 23%, with approximately 4,000 asthma hospitalizations in 2008.
- Mississippi is one of only seven states without any kind of statewide law restricting smoking in private indoor workplaces, restaurants, or bars.
- In Mississippi, 47 municipalities have passed ordinances ensuring these public places are smoke-free, and 12 municipalities have partial smoke-free ordinances in place.
- Over the three years following implementation of a smoke-free ordinance, residents of Starkville experienced a 22.7% reduction in heart attack admissions, compared with a 14.8% reduction among non-residents treated at the same hospital, which resulted in estimated hospital cost savings of $288,270 over a five-year period.
- Analysis of tax revenues showed that no Mississippi community experienced a decline in collected tourism tax after enacting a smoke-free policy, indicating that smoke-free ordinances at the municipal level did not have a negative impact on restaurants and/or bars.
PUBLICATIONS
Copies of the issue briefs, chartbooks, and reports can be downloaded here. Printed copies of all documents are available by contacting the Center for Mississippi Health Policy at 601-709-2133 or by e-mail at [email protected].
- Mississippi Medicaid Costs Attributable to Tobacco (2018)
- Secondhand Smoke: Impact on Health and Economy (2011)
The European Union
Evaluating how recent and forthcoming EU policy developments affect the levels of ecosystem services (ES) and natural capital (NC) in Europe. Many of Europe's natural habitats and species are in decline. While the EU has a number of policies in place to safeguard habitats and species, losses are ongoing for many habitats, species and associated ecosystem services.
- Research, particularly on ecosystem services mapping and on No Net Loss/offset policy assessments, is informing policy at the EU scale (for example, as input to the EU No Net Loss Initiative).
- Demonstrate the potential effectiveness of policy measures to avoid, minimize and offset impacts on (semi)natural habitats across Europe and underlying factors of success or failure, related to land-policy interactions.
- Test priority areas identification methods for ecosystem services, considering ecosystem services demand and flow to inform decision making.
Land use modelling can be applied at regional to global scales (see CLUEScanner, CLUMONDO), with context-specific input data.
The ecosystem services indicators can be applied at regional to EU scales, with context-specific input data (see respective publications).
Prioritization methods can be applied at regional to global scales (see Zonation), but need context-specific data about biodiversity and/or ecosystem services and costs of conservation actions.
- Business-as-Usual scenarios of land use change in Europe have widespread negative effects on ES/NC supply (Tucker et al. 2014; Schulp et al. 2016).
- Policy measures to avoid, minimize and offset impacts on (semi)natural habitats are projected to be effective in reducing impacts across the EU, although fully meeting no net loss is very challenging. The effectiveness of policy measures changes across the EU, and multiple mechanisms are responsible for this. See Schulp et al. (2016).
- When applied appropriately, biodiversity offsets are one solution to widespread, poorly-compensated biodiversity loss (Quetier et al. 2015).
- Accounting for the demand of ecosystem services is essential in the identification of priority areas for ecosystem services (Verhagen et al. 2017).
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9469031095504761,
"language": "en",
"url": "https://www.cbsit.co.uk/2018/10/05/machine-learning-lawyers/",
"token_count": 647,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.3671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:412a0fde-9ca5-438b-8f98-6bf1054e5165>"
}
|
Machine learning is the practice of getting computers to think and act like humans using artificial intelligence (AI). Rather than computers using rules-based programming, machine learning uses algorithms to parse data and use it to make accurate predictions.
The aim of machine learning is to build computer systems that automatically improve our experience as humans, rather than doing our jobs for us as most people think. This new technology has something to offer every profession – so why should the legal sector be any different?
Machine learning for lawyers and the legal sector
Machine learning for lawyers offers many benefits for them and their chambers, and it could help small legal firms compete. For instance, human-made data sets can be applied to machine learning to help lawyers unearth documents and opinions that are relevant to their cases. AI applications can also minimise the number of errors in the research process and point attorneys towards essential documents.
Machine learning software can be used to do accounts, recruit staff, draw up contracts and provide simple legal counsel to first time clients. Instead of spending time and money on research and paperwork, machine learning could free lawyers up to engage in more client-facing tasks. The result of AI in chambers would be better efficiency, lower outsourcing costs and instant access to data and insights.
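The document-retrieval idea mentioned above (surfacing the documents most relevant to a case) can be illustrated with a toy ranking function. Production legal-research tools rely on far richer models and training data; this bag-of-words cosine-similarity sketch over a hypothetical mini-corpus only demonstrates the ranking principle.

```python
# Toy relevance ranking: score a small corpus of hypothetical legal
# documents against a query with bag-of-words cosine similarity.
import math
from collections import Counter

def vectorize(text):
    """Turn a text into a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank(query, documents):
    """Return documents sorted from most to least similar to the query."""
    q = vectorize(query)
    scored = [(cosine(q, vectorize(d)), d) for d in documents]
    return [d for s, d in sorted(scored, reverse=True)]

docs = [
    "employment contract termination notice period",
    "equity raise shareholder agreement startup",
    "lease agreement commercial property rent",
]
print(rank("startup equity agreement", docs)[0])
```

Running the sketch ranks the shareholder-agreement document first for the query "startup equity agreement", because it shares the most query terms.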
Machine learning for lawyers – The benefits
- Less time spent on routine paperwork
- Money freed up from fewer outsourced tasks
- Greater efficiency
- More accurate data – less human error
- Legal services become price-predictable and more accessible for clients
- Less stress for lawyers
- Ability to handle more data
- Higher revenue – A recent report found that businesses that invest in AI and machine learning could see an estimated 38% revenue boost by 2035.
Will machine learning replace human lawyers?
Machine learning isn’t as simple as letting computers do all your paperwork, nor will robots take over the jobs of human lawyers. An integrated approach to machine learning is needed to maximise efficiency and minimise risk in the legal sector. This means humans and machines working together.
“AI will never fully replace people, particularly highly skilled people. But it can be used to automate routine tasks. Technology firms like Atrium are using AI-based software to complement and enhance a service that’s already being provided by humans, which can be easily duplicated by a machine.” – Gene Marks, The Guardian.
Legal machine learning in action
A great example of legal AI in action is the service provided by Atrium. Atrium is a corporate law firm that leverages AI technology to do much of the legal work required by startups. Their services help new business owners hire employees, raise equity and write up legal contracts. Atrium’s software uses machine learning to understand legal documents and automate routine processes.
For lawyers, the most successful uses of machine learning combine technology with human expertise. Supervised machine learning could help lawyers become more efficient and improve the accuracy of their research, as well as giving chambers competing in the legal sector an edge over their rivals. For more information about what AI and machine learning could bring to your chambers, contact City Business Solutions on 020 3355 7334.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8788242936134338,
"language": "en",
"url": "https://www.ccrpcvt.org/initial/e/",
"token_count": 1231,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.06640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ceb09e7c-b926-4f90-97de-5a2eefd654ee>"
}
|
- 85th Percentile Speed
(or: Eighty-Fifth Percentile)
The speed at or below which 85% of all vehicles are travelling.
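As an illustration (not part of the source glossary), the 85th percentile speed can be computed from a spot-speed study. The sketch below uses linear interpolation between ranked observations, one of several accepted methods:

```python
def percentile_speed(speeds, pct=85):
    """Speed at or below which `pct` percent of observed vehicles travel."""
    data = sorted(speeds)
    if not data:
        raise ValueError("no observations")
    # Fractional rank with linear interpolation between neighbouring values
    k = (len(data) - 1) * pct / 100
    lo = int(k)
    hi = min(lo + 1, len(data) - 1)
    return data[lo] + (data[hi] - data[lo]) * (k - lo)

observed_mph = [28, 30, 31, 32, 33, 34, 35, 35, 36, 38, 40]
print(percentile_speed(observed_mph))  # -> 37.0
```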
- Earmark
A congressional budgetary mechanism built into the appropriation bill, often used to undertake specific projects. Earmarks are generally designated as a dollar amount.
- Easement
A less-than-fee property right that can be positive or negative. A positive easement authorizes a second party to use the property in a specific, limited way (such as a right-of-way that authorizes the second party to cross the property). A negative easement prohibits a property owner from using the property fully (such as a scenic easement that prevents an owner from building a structure on the property that would block the public’s view of a distant mountain). An appurtenant easement benefits a neighboring property; an easement that is not appurtenant is in gross.
- Eastern Border Transportation CoalitionEBTC
Organization providing a cross-border issue forum for each U.S. state, Canadian province, and border service agency.
- Economic Development
Policies, actions, and/or projects intended to improve the qualitative characteristics or to expand the quantitative size of the economy.
- Economic Development AdministrationEDA
The federal office responsible for the provision of federal economic development assistance to economically depressed areas, especially to areas of high unemployment.
- Egress
A way of exiting or travelling away from a location. Egress generally describes vehicle or pedestrian movements from the perspective of driveways and walkways which provide “egress from a property”. See also “Access” or “Ingress”.
- Electronic Toll & Traffic ManagementETTM
ETTM systems equip vehicles with electronic tags (or transponders) that communicate with roadside sensors to provide automatic vehicle identification that allows for toll collection at the toll booth, and general vehicle monitoring and data gathering beyond the toll plaza. These systems have the potential to reduce congestion, improve safety, energy efficiency, and air quality, and to enhance economic productivity at a cost significantly less than additional road construction.
24 VSA 4303 (16): A component of a Comprehensive Plan.
- Eminent Domain
The power of a government (or a person delegated such authority by a government) to require an owner to sell private property to the entity exercising the power if the entity pays the owner Just Compensation.
- Emissions Budget
An aspect of the State Implementation Plan (SIP) that identifies allowable emissions levels, mandated by the National Ambient Air Quality Standards (NAAQS) for certain pollutants emitted from mobile, stationary, and area sources. The emissions levels are used for meeting emission reduction milestones, attainment, or maintenance demonstrations.
- Emissions Inventory
An emissions inventory is a database that lists (by source of emission) the amount of air pollutants discharged into the atmosphere of a community or region during a given period of time.
When a land use is located too close to another land use, resulting in one or more Adverse Impacts.
- Endangered Species
10 VSA 5401 (6): A species listed on the state endangered species list (see 10 VSA 5402) or determined to be an endangered species under the federal Endangered Species Act. The term generally refers to species whose continued existence as a viable component of the state’s wild fauna or flora is in jeopardy.
- Enterprise Planning Area
A location designated by this Regional Plan that is recommended to be a center for employment.
- Environmental AssessmentEA
The purpose of an EA is to determine whether there is sufficient evidence that a proposed project requires a more comprehensive Environmental Impact Statement (EIS). Often an EA is a sufficient environmental document in itself when the impacts of a project are minor or can be mitigated.
- Environmental Court
The court authorized to hear appeals of local land use decisions, ANR regulatory decisions, and District Environmental Commission decisions of Act 250 permit applications. See 4 VSA 1001 to 1004.
- Environmental Impact StatementEIS
Document that studies all likely impacts resulting from major federally-assisted programs. Impacts include those on the natural environment, economy, society, and the built (existing) environment of historical and aesthetic significance.
- Environmental JusticeEJ
The fair treatment of people of all races, cultures, and income with respect to the development, implementation, and enforcement of environmental laws, regulations, programs and policies.
- Environmental Protection AgencyEPA
The federal regulatory agency responsible for administering and enforcing environmental laws, including the Clean Air Act.
- Equivalent Single Axle LoadESAL
Equivalent 18-kip Single Axle Load. A basic premise of truck weight enforcement is that limiting axle loads reduces the rate of pavement deterioration. ESAL measures truck traffic loading expressed as the number of equivalent 18,000 lb (80 kN) single axle loads.
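As a hedged illustration (not from the source glossary), per-pass ESALs are often approximated with the "fourth-power" rule: pavement damage grows roughly with the fourth power of axle load relative to the 18,000 lb standard. Actual AASHTO load-equivalency factors also depend on pavement structure and terminal serviceability, so this sketch is a simplification:

```python
STANDARD_AXLE_LB = 18_000  # the 18-kip reference single axle

def esal_per_pass(axle_load_lb):
    """Approximate ESALs for one single-axle pass (fourth-power rule)."""
    return (axle_load_lb / STANDARD_AXLE_LB) ** 4

# A modestly overloaded 20,000 lb axle does ~1.52x the standard damage,
# while a light 4,000 lb passenger-car axle does a tiny fraction.
print(round(esal_per_pass(20_000), 2), round(esal_per_pass(4_000), 5))
```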
- Essential Air ServicesEAS
A federal subsidy program for scheduled air services to rural communities.
The dedication of property, payment of money in lieu of dedication, or other contribution that a government requires a developer to make as a condition for some government action (such as approval of a development permit).
- Exclusionary Zoning
A legal doctrine that prohibits government from using Zoning to exclude specific types of people (such as racial minorities, poor people, or handicapped people) or certain types of lawful Land Uses (such as churches, group homes, or mobile homes) that can take 3 forms: (1) explicit (expressly prohibiting a land use in a zoning ordinance), (2) implicit (failing to include a land use in a list of permitted land uses), and (3) effective (using unreasonable design standards to discourage development of a land use).
- Excursion Train
A rail enterprise catering to tourism or leisure markets in the form of seasonal, recreational, historical, or tourist service destinations.
A controlled access, divided arterial highway for through traffic where intersecting roads are bypassed via Grade Separation.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9501715302467346,
"language": "en",
"url": "https://www.coherentmarketinsights.com/ongoing-insight/a2-milk-market-2691",
"token_count": 1064,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.1923828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6214ae26-d160-48c3-ae62-62850c59bd14>"
}
|
There are two types of beta-casein protein present in milk: A1 and A2. A2 milk is cow’s milk that contains the A2 beta-casein protein and does not contain the A1 beta-casein protein. Guernsey, Jersey, Holstein, Charolais, and Brown Swiss are breeds that naturally produce A2 milk. Owing to the many health benefits offered by consumption of A2 milk, such as boosting immunity, promoting mental growth, and increasing metabolic rate, demand for A2 milk among customers is growing across the globe.
A2 milk can be used as a milk alternative for infants under one year of age. Mother’s milk is essential for infants in order to ensure health and growth, and A2 milk is a good alternative to goat’s and mother’s milk. Various manufacturers use A2 milk in the production of infant food, and new infant food products based on A2 milk are launched by several manufacturers every year, which is anticipated to boost the global A2 milk market during the forecast period.
According to the Department for Environment Food & Rural Affairs, U.K. dairies processed 1,167 million liters of milk in July 2018, rising to 1,249 million liters by March 2019. This significant growth in milk consumption is expected to boost demand for A2 milk during the forecast period.
However, the high price of A2 milk and its limited availability in the market are likely to have an adverse effect on market growth in the near future. At the same time, the presence of only a limited number of players offers significant growth opportunities for A2 milk producers to expand distribution across the globe.
Asia Pacific is predicted to record the fastest growth in the global A2 milk market, in terms of revenue, during the forecast period. Growing awareness among consumers of the health benefits offered by A2 milk products is projected to generate higher demand for A2 milk over the coming years. A2 milk finds application in products such as milk powder, butter, yogurt, ghee, and cheese, where it is used as a natural constituent; these extensive applications are likely to support growth of the A2 milk market. Therefore, the significant increase in demand for dairy products is expected to drive growth of the global A2 milk market in the region during the forecast period.
According to the India Brand Equity Foundation (IBEF), India’s dairy industry generated revenue of US$ 77.5 billion in 2016 and is expected to reach US$ 135 billion by 2020 on the back of increasing consumption; India is also expected to be the largest dairy producer by 2020.
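For context, the growth rate implied by those IBEF figures can be checked with a compound annual growth rate (CAGR) calculation. The roughly 15% figure below is our arithmetic on the cited numbers, not a rate stated by IBEF:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over `years` years."""
    return (end_value / start_value) ** (1 / years) - 1

# US$ 77.5 bn (2016) growing to US$ 135 bn (2020) over 4 years
rate = cagr(77.5, 135.0, 4)
print(f"{rate:.1%}")  # roughly 15% implied annual growth
```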
Major players operating in the global A2 milk market include Jersey Dairy, The a2 Milk Company Limited, Dairy Farmers, Pura, Fonterra, MLK A2 Cow Milk, Amul, and others.
On the basis of nature, the global A2 milk market is segmented into:
On the basis of product form, the global A2 milk market is segmented into:
On the basis of packaging, the global A2 milk market is segmented into:
- Glass Bottles
- Carton Packaging
- Plastic Bottles & Pouches
On the basis of application, the global A2 milk market is segmented into:
- Infant Formula
- Dairy Products
- Bakery & Confectionary
- Milk & Milk-based Beverages
On the basis of distribution channel, the global A2 milk market is segmented into:
- Supermarkets & Hypermarkets
- Grocery Stores
- Online/Non-Store Retailing
On the basis of region, the global A2 milk market is segmented into:
- North America
- Latin America
- Rest of Europe
- Asia Pacific
- New Zealand
- Middle East and Africa
A2 Milk Market Key Developments:
- In 2018, Nestle S.A. launched its new organic and natural product, ‘illuma Atwo’, an infant formula formulated with A2 milk. This is expected to expand the company’s product portfolio and strengthen its market position over the forecast period.
- In February 2019, Fonterra Co-operative Group signed an agreement with farmers to supply milk to The a2 Milk Company Limited for the 2019-2020 season in New Zealand. The milk pool is expected to be based in the Waikato near Fonterra’s Hautapu site in New Zealand. This is expected to increase the supply of high quality milk to The a2 Milk Company.
- In November 2019, The a2 Milk Company Limited extended its supply agreement with Synlait Ltd., a New Zealand-based milk products company. The supply agreement covers platinum products and other nutritional products of The a2 Milk Company Ltd. This is expected to increase the volume of nutritional products produced by Synlait exclusively for The a2 Milk Company Ltd.
|