meta (dict) | text (string, lengths 224–571k)
---|---
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9494936466217041,
"language": "en",
"url": "https://www.agmrc.org/renewable-energy/renewable-energy-climate-change-report/renewable-energy-climate-change-report/november-2009-newsletter/perspectives-on-indirect-land-use",
"token_count": 3221,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ced4e36a-04a4-40c5-ac0b-cb1c1be64613>"
}
|
Perspectives on Indirect Land Use
AgMRC Renewable Energy Newsletter
Agricultural Marketing Resource Center
The concept of “indirect land use change” (iLUC) has received considerable attention in the renewable fuels industry. It was introduced by individuals and organizations attempting to reduce the destruction of the world’s rainforests (and other forested and grassland areas) that subsequently results in the release of large amounts of carbon dioxide (CO2) into the atmosphere, the primary greenhouse gas (GHG) associated with climate change.
In addition, rainforests provide many other benefits to the world’s society. The Amazon rainforest is said to be the “lungs” of the earth, breathing in carbon dioxide and releasing oxygen, which provides stability for the world’s climate. It also provides an incredible array of plant and animal species that are found nowhere else on earth. These species have the potential for medical and other technological breakthroughs. However, once rainforests are destroyed, the ecosystem is extremely difficult to recreate, resulting in a permanent loss of the benefits that rainforests provide. So, protecting the rainforests is critically important to the world community.
Controlling the destruction of the rainforests is difficult. Many of these forests are located in remote areas of the developing world. Examples are the Amazon basin of Brazil and the rainforests of Indonesia. Patrolling these vast areas with the limited resources of a developing country is difficult at best. So, in the absence of successful direct controls to protect the rainforests, concerned citizens are attempting to reduce the incentives for rainforest destruction.
Indirect land use change is a mechanism that attempts to control the incentives for rainforest destruction. The basic premise states that rainforests are cut down to provide land for raising crops that subsequently provide profits for the destroyer of the rainforest. If world grain prices are high, the incentive for converting rainforest to cropland is increased. Conversely, if grain prices are low, the incentive is reduced.
The iLUC proponents want to attach this premise to biofuels production. In essence, if an acre of cropland is used for biofuel production (e.g., corn ethanol) rather than for food production, the price of grain will increase because less grain is available for food, which subsequently increases the incentive to cut down an acre of rainforest to replace the acre shifted from food to biofuels. So, the GHG emissions from destroying an acre of rainforest should be attached to the acre of corn used for biofuels. This means the acre of biofuels will carry the GHG emissions from producing the biofuels (direct emissions) plus the GHG emissions from the acre of rainforest that is supposedly destroyed as a result (indirect emissions).
The U.S. mandates for the production of biofuels (corn ethanol, cellulosic ethanol, advanced biofuels, etc.) require that biofuels meet tests of reduced GHG emissions compared to gasoline. When iLUC emissions are added to the direct emissions from biofuel production, the GHG reduction benchmarks are difficult to achieve.
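To see why the benchmarks become hard to hit once iLUC is added, a back-of-envelope calculation helps. The sketch below uses entirely hypothetical numbers (the baseline, direct, and iLUC figures are illustrative assumptions, not regulatory values):

```python
# Illustrative only: all three constants are assumed, not official figures.
GASOLINE_GHG = 94.0    # gCO2e per MJ, assumed gasoline baseline
BIOFUEL_DIRECT = 60.0  # gCO2e per MJ from producing/using the biofuel, assumed
ILUC_PENALTY = 30.0    # gCO2e per MJ attributed to indirect land use, assumed

def ghg_reduction(biofuel_emissions, baseline=GASOLINE_GHG):
    """Percent GHG reduction of a biofuel relative to the gasoline baseline."""
    return 100.0 * (baseline - biofuel_emissions) / baseline

print(ghg_reduction(BIOFUEL_DIRECT))                 # ~36%: clears a 20% threshold
print(ghg_reduction(BIOFUEL_DIRECT + ILUC_PENALTY))  # ~4%: fails the same threshold
```

With these assumed numbers, the same fuel passes or fails a 20 percent reduction threshold depending solely on whether the indirect penalty is attached.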
To help us navigate through the logic, let’s examine the issue by asking the three following questions:
- Should iLUC be assigned to biofuels?
- If so, what else should iLUC be assigned to?
- Can we reliably assess the impact of biofuels on iLUC?
We will investigate each of these questions below.
Should indirect land use be assigned to biofuels?
The iLUC premise is based on the concept that the individuals who destroy the rainforest are not held responsible for their actions. So, the individuals who provide the economic incentive for the destruction of the rainforest should be held responsible. This would hold even if these individuals did not intend to provide an incentive.
The logic is similar to blaming a bank for its own robbery: the thieves should not be held responsible for the robbery; rather, the bank is responsible because, by holding money, it provided an economic incentive for the thieves to rob it.
Whenever we embark on the slippery slope of making one person responsible for another person’s actions, the results can lead to unintended consequences. As individuals we learn at an early age that life is a series of choices. When we make bad choices, we must take responsibility for these decisions. Blaming our bad decision on someone else is not productive. Likewise, when we are held responsible for the actions of others, we have limited ability to affect those actions, which leads to uncertainty and stress on our part.
Ultimately, to stop the destruction of rainforests and the subsequent increase in GHG emissions, the polluter must be held responsible for his or her actions. But can you hold a person in one country responsible for the actions of a person in another country?
Is there cause and effect?
Can we show a strong and direct relationship between increasing grain prices and deforestation? Deforestation is caused by many factors. The argument can be made that the world price of lumber provides an incentive for the destruction of rainforests. The increased cutting of trees and production of lumber flows through both legal and illegal channels. Also, the high poverty rates in many of these countries result in “squatters” who take up residence on land to practice subsistence farming and “slash and burn” agriculture where the forest is cut down to provide farmland. After the land is farmed for a few years and the productivity is gone, it is abandoned and the farmer moves on to cut down more rainforest. These actions are independent of biofuels development. They are the result of poverty and the inability of these people to access existing farmland. Because the world price of grain is only one of many factors that lead to deforestation, the size of its impact is uncertain.
What else causes indirect land use change?
Let’s assume for the sake of argument that world grain prices do significantly impact deforestation and grassland destruction and that biofuels should be held responsible for it. Are there other actions that lead to increases in world grain prices? If there are, should these factors be penalized in some fashion? Below are some factors that fit this category.
Research indicates that 25 to 30 percent of the food consumed in this country is disposed of at the point of consumption (see the AgMRC Renewable Energy article “Domestic Perspectives on Food versus Fuel”). This loss does not include the amount disposed of earlier in the food supply chain.
Food waste includes food purchased by consumers but not eaten, unsold food disposed of from food service facilities, and other means of disposal. If food waste were reduced to a more reasonable level (e.g., 10 percent), you could make the argument that grain prices would be lower. So, this large amount of food waste increases grain prices, which subsequently increases deforestation. Under the premise of iLUC, then, every time consumers don’t eat all of the food on their plate, they are responsible for deforestation. Also, every time a buffet prepares more food than its customers consume, it is responsible for deforestation. Taking it a step further, if you are obese or even just a little overweight, you have eaten more food than your body requires and you are responsible for deforestation.
Food versus feed
A common misconception is that using a bushel of corn to produce ethanol takes away a bushel of corn that people otherwise would have eaten. However, this is not correct. Only about ten percent of the corn grain produced in this country goes into products where the corn is consumed directly. The great majority of corn grain and virtually all of the corn for silage are used as animal feed. From this feed comes the beef, pork, chicken, milk and eggs that we consume. Half of the current corn crop is expected to be fed to livestock in this country. Another 15 percent is expected to be exported, of which most will be fed to livestock overseas. So, about two-thirds of our corn will be fed to livestock. About one-fourth of the corn is expected to be processed into ethanol.
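The allocation arithmetic above can be checked in a few lines. The shares are this article's approximate figures; the residual "other" category is inferred so the total reaches 100 percent:

```python
# Approximate shares of the U.S. corn crop, per the figures above.
shares = {
    "fed to livestock domestically": 0.50,
    "exported (mostly fed to livestock overseas)": 0.15,
    "processed into ethanol": 0.25,
    "direct human consumption and other uses": 0.10,  # inferred residual
}

livestock_total = (shares["fed to livestock domestically"]
                   + shares["exported (mostly fed to livestock overseas)"])
print(f"fed to livestock at home or abroad: {livestock_total:.0%}")  # 65%, about two-thirds
assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares account for the whole crop
```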
However, livestock are rather poor converters of grain into meat, milk and eggs. When we consume livestock products, we consume more pounds of grain than if we had eaten the grain directly. If we wanted to maximize the amount of food produced, we would consume the grain directly rather than processing it through livestock. So, are meat, cheese and egg eaters responsible for deforestation of the rainforests? How about milk drinkers? Consuming meat, milk and eggs requires more bushels of corn than if we eat the corn directly. Consuming livestock products drives up the prices of corn which leads to deforestation of the rainforest.
Horses and Pets
Although livestock are poor converters of feed, at least we eat the meat, milk and eggs that are produced. How about the meat we don’t eat? There are about nine million horses in the U.S. that consume feed grain and forage. The majority of these are for recreational purposes and many are little more than pets. These horses replace beef cattle that could be fed the grain and forage. Once again, we could make the case that horse owners are driving up the price of grains and forage and contributing to deforestation of the rainforests.
How about our pets? Little Fido needs to eat also. But, like our horses, we don’t eat Fido. So he diverts feed away from animals that provide milk, meat and eggs, and subsequently contributes to rainforest deforestation.
Farmland Conservation Programs
Over the years, USDA has introduced a number of farmland conservation programs. The most popular has been the Conservation Reserve Program (CRP) which has taken large acreages of erodible and environmentally sensitive land out of production.
These programs have been hailed as successes by environmentalists and the public in general. But upon closer inspection, they have an iLUC impact. As with corn ethanol, taking an acre of cropland out of food grain production and using it for something else (whether it is biofuels production or conservation makes no difference) falls into the iLUC trap. So, do cropland conservation programs contribute to the deforestation of rainforests? Should these programs be assessed a penalty for causing deforestation in Indonesia?
Essentially, iLUC leads to the conclusion that anything that raises grain prices is bad because it leads to the destruction of the rainforest. So, the acceptance of this approach could lead to keeping grain farmers and the rural communities in which they live in a state of reduced income.
These are just a sample of the linkages that can be made to iLUC. A thorough identification of all of the linkages that exist will create an enormously complicated web.
Can we reliably assess the impact on indirect land use?
The initial research on iLUC assumed that world agricultural production is a zero-sum game. In other words, world agricultural production is fixed, regardless of price, so that a new acre of land is required to replace the acre lost to biofuels production. However, even before the advent of biofuels production, grain yields have trended upward. An example is U.S. corn and soybean yields as shown in Figure 1.
This increase in world grain production must be compared to the expected increase in world demand for food to see if there is room for biofuels production. The two major factors driving world food demand are increases in world population and improved diets (more meat) as the world population moves from poverty to the middle class. If supply growth exceeds demand growth, there will be room for biofuels without an iLUC impact.
For example, the trend-line increases in corn yields have already increased the production of corn in the U.S. If the expansion of the ethanol industry had not occurred, excess stocks of corn and other grains would be piling up and prices would have sunk to low levels (AgMRC Renewable Energy article “Impact of Biofuels on Corn and Soybean Prices”). If iLUC stops the expansion of the U.S. ethanol industry, trend-line corn yield increases will cause supplies to outpace usage, resulting in falling prices, unless the corn export market takes up the slack. Surplus corn acres will cause excess production capacity in the entire grain production sector, resulting in falling prices for all grains.
What would happen if demand growth exceeds supply growth? In this situation, there will be a supply response to higher grain prices. Higher prices attract investment into the agriculture sector, which subsequently increases the sector's productivity. We can describe this response as “supply elasticity”. Figure 2 shows the yield response from an increase in price. If the supply response is elastic (a), an increase in price will lead to a large increase in yield. An inelastic supply response (b) will lead to a small increase in yield. Another supply response important to the analysis is the increase in land conversion from an increase in price. Preliminary research is showing huge variation in iLUC, ranging from a large impact to a minor one depending on the levels of these elasticities.
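One minimal way to express this response is a constant-elasticity form, in which the percentage change in output equals the elasticity times the percentage change in price. The elasticity values below are illustrative assumptions, not estimates from the literature:

```python
# Constant-elasticity supply response: %dQ = elasticity * %dP (assumed form).
def yield_response(pct_price_increase, elasticity):
    """Percent increase in output for a given percent price increase."""
    return elasticity * pct_price_increase

price_shock = 10.0  # a 10 percent increase in grain price
print(yield_response(price_shock, elasticity=1.5))  # elastic case (a): 15% more output
print(yield_response(price_shock, elasticity=0.3))  # inelastic case (b): 3% more output
```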
The amount of iLUC is also heavily impacted by the type of land brought into production. In general, deforestation leads to substantially more GHG emissions than grassland conversion.
Can we reliably assess the impact of biofuels on iLUC? Probably not. So, should we use imperfect research to establish the linkage, even though it is probably wrong? Or should we omit the linkage, which assumes the correlation is zero and is probably also wrong? More research is needed to improve the accuracy of this linkage. In the meantime, the biofuels industry is struggling under a cloud of uncertainty. This cloud drives away the capital investments for research and technology that are needed to improve biofuels efficiency and reduce its direct-emissions carbon footprint.
Deforestation and the destruction of grasslands must be stopped. Of special concern are the world’s pristine rainforests. In addition to the intrinsic value of these ecosystems, the uncontrolled release of GHG and the resulting impact on climate change mean that this destruction cannot be tolerated. This issue is not in question.
Many of these forested and grassland areas are located in developing countries that do not have the resources to control the destruction. So, individuals have focused on reducing the economic incentives for this destruction as a means of slowing it. This has led to the concept of iLUC being assigned to the biofuels industry. However, if the developed countries are committed to stopping the destruction, they must provide sufficient direct funding to the developing countries for regulating and policing these areas. The issue is too important to be handled in any way other than direct intervention.
In addition to regulating and policing these areas, limiting world greenhouse gas emissions can best be achieved with a world-wide cap-and-trade system where total emissions are limited and reduced over time. Under this system, world markets will make the adjustments needed to meet the reductions in the most cost effective manner. This will be more successful than trying to make one group responsible for the actions of another group.
Trying to regulate deforestation and grassland destruction through iLUC will, at best, achieve only partial results, while imposing a heavy burden on the biofuels industry’s ability to provide renewable, low-carbon transportation fuels. We must work for better solutions that will satisfy both sides of the issue.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9548868536949158,
"language": "en",
"url": "https://www.ctc-n.org/resources/green-jobs-towards-decent-work-sustainable-low-carbon-world",
"token_count": 530,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.09326171875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f9f87f52-cd78-4ca5-8261-3c53da22fffc>"
}
|
Amidst a visible period of transition, with trade unions, employers’ organisations, the private sector and the UN allying themselves to low-carbon and sustainable thinking, this paper reports on the emergence of a “green economy” and its impact on the world of work in the 21st century. It shows for the first time at a global level that green jobs are being generated in some sectors and economies. There is now a virtual avalanche of reports by international agencies, governments, business, environmental groups and consultancies on the technical and economic implications of climate change and the consequences of mitigation and adaptation strategies. Many proclaim a future of green jobs, but few present specifics. This is no accident, as there are still huge gaps in knowledge and available data, especially where they pertain to the developing world. From a broad conceptual perspective, employment will be affected in at least four ways as the economy is oriented toward greater sustainability:
first, in some cases, additional jobs will be created - for example, in the manufacturing of pollution-control devices
second, some employment will be substituted - shifting from fossil fuels to renewables, or from truck manufacturing to rail car manufacturing, or from land-filling and waste incineration to recycling
third, certain jobs may be eliminated without direct replacement - for example, when packaging materials are discouraged or banned and their production is discontinued
fourth, it would appear that many existing jobs (such as those of plumbers, electricians, metal workers, and construction workers) will simply be transformed and redefined as day-to-day skill sets, work methods and profiles are greened.
The paper concludes that although much of the present optimism in green jobs is justified, there are many remaining data gaps. Key recommendations around this are that:
governments must establish statistical reporting categories that recognise and help capture relevant employment in both newly emerging industries and green employment in established sectors
as the German government has done, governments should also commission in-depth modeling and econometric efforts to analyse not just direct green jobs but also those that are related in a more indirect manner
business associations and trade unions can play a useful part as well. Some have begun to do job surveys and profiles, but far more of these kinds of efforts are needed
attention needs to be given to disaggregating data on the basis of gender in order to ensure that there is equality of opportunity for women and men for green jobs
greater scrutiny of supply chains is required to better understand just how much many traditional businesses and occupations are positively affected and reinvigorated by the greening of the economy.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9406635165214539,
"language": "en",
"url": "https://www.myassignmentservices.com/resources/econ111-microeconomics-principles-individual-assignment-sample",
"token_count": 1447,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.08837890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:56c4cc12-daa9-4867-a2c7-a16c20956755>"
}
|
Farmers are better off if they sacrifice several hours of leisure, rather than better payment, to attain a larger crop (Albert, 2017). For the farmers, Pareto optimality lies in the number of hours invested, which helps them attain a larger harvest. Similarly, for the landowners, Pareto optimality lies in making the laborers work more hours rather than conceding a larger share of the crop: the trade-off is between the land given out and the share of crop earnings conceded, weighed against the number of hours the workers are allowed to work. This is based on Pareto optimality, which describes how resources need to be shared and allocated relative to one another. It helps determine how much of one good can be sacrificed to make conditions better for one party compared with another. Related is Pareto efficiency, which asks whether one party can be made better off without another being made worse off.
In the given analysis, we interpret Operation Barga as establishing a positive framework that makes the best use of the laborers, who can sacrifice their free hours and earn 75% of the total earnings from the share crops (Albert, 2017). It holds equally true for the landowners, who can allow the maximum number of workers to work on their land and let them reap benefits by investing their time in the fields.
Pareto optimality is interpreted as how best one can use limited resources and invest them within the given community so as to deliver maximum outcomes and productivity. At every point in time, it helps us understand how resources can be utilized optimally, in the best interest of all, and what happens if they are underutilized or overutilized. Pareto optimality mainly aims to identify the points at which one individual can be made better off without another being made worse off. At every point in time, there should be a balance of factors, so that resources are allocated in the best possible manner while delivering a better outcome (Antle, 2015).
Here, Mamta can work a maximum of 24 hours and produce a maximum of 4 tonnes of rice:

24 hours ÷ 4 tonnes = 6 hours per tonne of rice.
While working in the field, Mamta can produce 3 tonnes within the given 8 hours, which gives us:

8 hours ÷ 3 tonnes ≈ 2.67 hours per tonne.
Source (Albert, 2017)
As shown, Mamta has the choice to either use her time for leisure activity or use it for rice production to come out of the crisis situation.
Here, the situation is one of low resources, such as the lack of land and the ability to work; but as the landowners are sharing the land, it helps the poor farmers to work the maximum and attain 75% of the total earnings. As seen, the threshold point is where Mamta has invested the maximum hours and can produce the maximum output (Dahl, 2017).
As shown, Mamta would be on her indifference curve, set against the feasible frontier that maps hours invested into output produced.
As noticed, point C is the threshold, where the MRS is equal to the MRT.
Source (Dahl, 2017)
By forgoing free or leisure time, the hours of labor invested help produce the maximum (Pigou, 2017). Every additional hour invested in producing the crop raises rice output. The threshold is where Mamta would forgo further production; up to that point, she would like to invest the maximum number of hours and escape the scarcity condition. The 75% share is an added incentive to invest more hours of work and earn correspondingly more.
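As a sanity check on the tangency logic, the optimum can be found numerically. The production rate and the 75% share come from the figures above; the Cobb-Douglas utility function and its exponents are assumptions made purely for illustration:

```python
# Toy numerical check: hours worked where utility over (leisure, rice share) peaks.
RATE = 3.0 / 8.0   # tonnes of rice per hour worked, from Mamta's figures above
SHARE = 0.75       # sharecropper's share of the harvest under Operation Barga

def utility(leisure, rice, a=0.5, b=0.5):
    return (leisure ** a) * (rice ** b)   # assumed Cobb-Douglas preferences

best_hours = max(
    (i / 10 for i in range(1, 240)),                 # candidate hours 0.1 .. 23.9
    key=lambda h: utility(24 - h, SHARE * RATE * h),
)
print(f"optimal hours worked ≈ {best_hours:.1f}")    # 12.0 with equal exponents
```

With equal exponents, Mamta splits her 24 hours evenly between work and leisure; changing the assumed exponents shifts point C along the frontier, but at the optimum the MRS still equals the MRT.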
The economy of fairness holds that the rich should not grow richer while the poor grow poorer, and that demands should be met equally. It is crucial to understand that the workers can give their labor, while the landowners can give their land, or capital. With the help of labor and the use of capital, maximum production is possible. The economics of fairness helps us understand how taxation of the rich can restore balance (Munda, 2016). Here, a quota of 25% is imposed on the farmers' earnings, and the government has also introduced taxes to take away some funds and use them in a better part of the economy. These funds are also used to support poor people.
Albert, M., & Hahnel, R. (2017). Quiet revolution in welfare economics (Vol. 5008). Princeton University Press.
Antle, J. M. (2015). Pesticide policy, production risk, and producer welfare: an econometric approach to applied welfare economics. Routledge.
Dahl, R. A. (2017). Politics, economics, and welfare. Routledge.
Kneese, A. V., Ayres, R. U., & D'Angelo, R. C. (2015). Economics and the environment: a materials balance approach. Routledge.
Pigou, A. (2017). The economics of welfare. Routledge.
Turvey, R. (2017). Optimal Pricing and Investment in Electricity Supply: An Essay in Applied Welfare Economics. Routledge.
Munda, G. (2016). Beyond welfare economics: some methodological issues. Journal of Economic Methodology, 23(2), 185-202.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9600308537483215,
"language": "en",
"url": "http://europeanenergyinnovation.eu/Articles/Summer-2015/A-Long-Road-To-Securing-Energy",
"token_count": 1152,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1396484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:bbfa6da1-890c-4c39-9aa5-b9b7f7835b56>"
}
|
By MEP Nils Torvalds
"Energy is essential for Europe to function. But the days of cheap energy for Europe seem to be over. The challenges of climate change, increasing import dependence and higher energy prices are faces by all EU members. Moreover the interdependence of EU Member States in energy, as in many other areas, is increasing - a power failure in one country has immediate effects in the next. Europe needs to act now, together, to deliver sustainable, secure and competitive energy."
This is not a paragraph of the European Commission's proposal on an Energy Union, although one would be inclined to believe so. The text is from the Commission Communication on An Energy Policy For Europe, dating back to January 2007. Indeed, the similarities are striking.
The work on strengthening EU energy policies has been ongoing for a long time, and the objectives seem to have remained similar, even identical, for at least a decade. However, the world has changed in many ways during this time. Many of the challenges remain, or have transformed over time. At the same time, one could get the impression that nothing, or almost nothing, has been achieved over the years.
The Achilles heel of European energy policies is nothing new: today, the EU still imports over half of the energy it consumes. One third comes from one external supplier - Russia. The import cost amounts to more than 1 billion euros per day.
The crisis in Ukraine brought back energy security as one of the most urgent issues on EU's foreign policy agenda, and the concurring crises in the Middle East and Northern Africa add to the challenge of secure energy supplies. This all makes energy security a topic that the EU cannot - and should not - avoid in upcoming talks on the Energy Union.
One essential challenge is even to identify a precise definition of energy security, in order to pinpoint policies and measures aimed at increasing the security of energy supplies. Perhaps the lack of energy security is easier to define. The potential threats to energy supplies are very context-bound and change shapes and faces over time. It is a mixture of geopolitics and international (and internal) markets, but climate change and environmental policies are also part of the equation.
Being such a broad and cross-cutting issue, energy security will be challenging to coordinate in a decision-making structure as complex as that of the EU. It will demand streamlined and coherent work in many sectors and by many actors. These requirements will be very challenging - to say the least - for the Commission, for the Member States, for the External Action Service, and for the Parliament. Governance will be a key issue, but likely also heavily debated. Although several energy policies have been drawn up before, implementation has always been an uphill struggle. This will be even more critical this time, depending heavily on the attitudes of Member States.
Previously - especially when it comes to oil and gas supplies - Member States tended to pursue their own aims and interests. There has been no common orientation in terms of external energy policies, which has been reflected in EU foreign policy related to energy issues. It is only in recent years that this has begun to change, with a number of energy agreements and infrastructure projects, such as gas pipelines, with third countries. Yet progress has been far from easy, and far from solving all problems. The unrest in many supplying third countries makes diversifying energy supplies a questionable solution.
Increased interconnectivity within the EU could stabilize energy supply significantly, but requires enormous investments in the needed infrastructure. The drafters of the Energy Union proposal have had their eyes on the European Fund for Strategic Investments - the so-called "Juncker Fund" - to help pour money into cross-border cables and pipelines, but the issue will hardly be solved on any short-term basis. Interconnectivity is also desirable in order to reduce energy prices in Europe - which today are among the highest in the world - which in turn could foster investment and competitiveness, and ease the pressure on consumers' wallets.
The struggles with law proposals such as the Emissions Trading System and Market Stability Reserve, as well as indirect land-use change and biofuels, have revealed how fundamentally different the energy structures of the EU Member States are. The division is amplified when forming common policies and pushing for progress, making the least prepared states hit the brakes. This stalls technological and economic development in the EU, and also slows down a multifaceted approach to energy security.
Not only - but perhaps in particular - as the COP 21 in Paris approaches, Europe should not neglect climate goals and undertakings when seeking to increase energy security. Instead, new approaches could enable synergies between these two goals.
For this, the bioenergy sector may play a highly interesting role. Heavyweight actors, both within the EU, but also within UN, stress the potential of locally produced biofuels. Especially forest-based bioenergy could have significant climate benefits, when managed sustainably, and attract much needed investments and job opportunities especially to rural parts of the EU.
The counterpart of interconnectivity is technological progress. New ways of storing energy could make it more economical and efficient to build local high-tech solutions. The technological breakthroughs of recent years are providing us with more tools than preceding decision-makers have had. Now, we have the responsibility to make use of these tools - if the Member States can overcome their differences in favour of a common goal. In 1952 that was made possible through the European Coal and Steel Community. It is as needed today, but one could - on good grounds - doubt the EU's ability to make far-reaching and intelligent decisions. The EU won’t achieve energy security by circulating good intentions, but by concrete action and cooperation.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9460744261741638,
"language": "en",
"url": "https://btcnewsjournal.com/whats-bitcoin-and-the-way-is-bitcoin-mined/",
"token_count": 888,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.12890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d7735cd4-6201-4eea-9b43-5f68d3ac86c2>"
}
|
Bitcoin is the first cryptocurrency, created by an unknown person in 2008. Among all digital currencies, Bitcoin is still the most popular and valuable. Bitcoin’s value has grown rapidly in just a few years. Its value rose sharply when Elon Musk (the founder of aerospace company SpaceX) said in 2020 that “Bitcoin is a good thing”. Following his comment, the Bitcoin value was found to have increased from £3,600 to £27,000. Bitcoins are earned through the mining process. But wait, mining doesn’t mean you have to pick up your axes and start digging. Bitcoin mining is carried out by computers solving complex cryptographic puzzles.
If you want to learn more about Bitcoin mining and how it works, read this explainer to learn all about it.
What is Bitcoin?
Bitcoin is a virtual or digital currency that was created in 2008. Its creator is known only by the pseudonym Satoshi Nakamoto, and his or her real identity remains unknown. In Bitcoin, transactions are carried out without intermediaries or banks; users transfer the currency directly to each other. Bitcoins are also used to buy products or services. For example, you can buy Xbox games with Bitcoin, book a hotel on Expedia, and buy furniture from Overstock. Last year, PayPal also announced that it would enable its users to buy and sell bitcoins.
What is Bitcoin Mining?
The mining process is an integral function - you could say it is the backbone - of Bitcoin. Without the mining process, transactions cannot be secured and confirmed, and hackers could attack the network or cause it to malfunction.
Bitcoin mining is done by computers that solve complex mathematical problems. These problems are so complex that they cannot be solved manually or even by a normal computer; they are solved by powerful, specialized machines. Miners confirm the transactions and make sure the network is secure. In return, the miners are rewarded with new bitcoins.
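A minimal sketch of the kind of puzzle involved is below. Real Bitcoin hashes block headers against a compact difficulty target; here the target is simplified to "the double-SHA-256 digest starts with a given number of zero hex digits":

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Search for a nonce whose double-SHA-256 digest has `difficulty` leading zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        payload = f"{block_data}{nonce}".encode()
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example transactions")
print(nonce, digest)  # the winning nonce and its qualifying hash
```

Each extra zero digit multiplies the expected work by 16, which is why real mining demands specialized hardware rather than a loop like this one.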
What is the purpose of bitcoin mining?
Three main functions are performed by Bitcoin mining.
- Issuance of new bitcoin through mining
- Miners confirm the transactions through mining
- Network security.
Each of the functions is explained here:
1) Issuance of new bitcoins through mining:
Bitcoin currency is not issued like traditional currencies. Traditional currencies are issued by a central body, such as the state bank or the central bank of a country, whenever this is deemed necessary. However, this is not the case with bitcoins. Bitcoin currency is issued when miners solve a complex computational math problem, and miners are rewarded with new bitcoin roughly every ten minutes. The protocol's code sets the rate at which new bitcoin is released, so miners cannot tamper with the system.
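The schedule itself is simple enough to state in code: the block subsidy started at 50 BTC and halves every 210,000 blocks (roughly every four years at ten minutes per block). A minimal sketch:

```python
# Bitcoin block subsidy: 50 BTC at launch, halved every 210,000 blocks.
def block_subsidy(height: int) -> float:
    return 50.0 / (2 ** (height // 210_000))

for h in (0, 210_000, 420_000, 630_000):
    print(h, block_subsidy(h))   # 50.0, 25.0, 12.5, 6.25
```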
2) Miners confirm the transactions through mining:
Another important function of mining is for the miner to validate the transactions. The transaction is considered secured and completed when it is included on the Bitcoin blockchain. The blockchain is an online ledger that records all transactions that are carried out over the network. A group of secure and completed transactions is called a “block” and all of the blocks that are linked together are called a “blockchain”. When miners solve the computational math problem, they add a block to the chain.
Once the miner confirms the payment and adds the block to the blockchain, the payment cannot be reversed. Payment without confirmation can be canceled.
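A minimal sketch of the linking idea: each block commits to the hash of its predecessor, so altering a confirmed transaction would change every subsequent block's hash. This omits Merkle trees, difficulty, and consensus entirely:

```python
import hashlib, json, time

def make_block(transactions, prev_hash):
    """Build a block that commits to its predecessor via prev_hash."""
    block = {"time": time.time(), "txs": transactions, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block(["coinbase"], prev_hash="0" * 64)
block1 = make_block(["alice pays bob 1 BTC"], prev_hash=genesis["hash"])
print(block1["prev"] == genesis["hash"])  # True: the chain link
```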
3) Network security.
The Bitcoin network cannot function without miners, and the number of miners cannot fall below a certain level, since the network is impractical without them. Miners secure the network by solving the complex mathematical problems. Solving these problems, or puzzles, requires a great deal of computing power and electricity. The miners make it impossible to hack, attack, or even stop the network.
Once you have earned the bitcoin through mining, it will be moved to your wallet. Bitcoin cannot be earned without the Bitcoin wallet. The bitcoin wallet is a digital wallet and is used to store bitcoins. This wallet resides on the user’s computer or in the cloud. This wallet works like an online bank account. You can use it to buy and sell goods and send and receive bitcoins.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9391378164291382,
"language": "en",
"url": "https://childrenofthelandfill.com/plastic-offset-is-here-heres-how-to-do-it-right/",
"token_count": 1670,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1fef5d48-4f42-46cb-b7fe-88c93302193d>"
}
|
A new paradigm is entering the environmental zeitgeist, and that paradigm is plastic offset. So what is it really, and how could it stem the global tide of plastic pollution?
By Peter Wang Hjemdahl, Co-Founder at rePurpose
Put simply, for every dollar contributed by a polluter, a certain amount of plastic waste would be intercepted from the environment on your behalf as an individual or a company.
All across the developing world, waste management social enterprises have popped up to provide ethical & efficient solutions to our plastic epidemic, yet they are often underfunded and left unable to scale. Inspired by carbon credit, plastic offset is a transformative way of funding these innovations to accelerate our transition towards a circular economy.
Just like carbon, there are as many ways to do plastic offset wrong as ways to do it right. With the complex relationship between consumer responsibility and producer accountability, generating truly meaningful impact is challenging yet entirely possible. From the landfills and alleyways of Mumbai to corporate headquarters in New York, we spent years understanding both local needs and the global systems that govern our waste.
Here are 3 principles we have distilled on how to do plastic offset, right.
Principle 1: Hit the problem where it hurts
Anywhere in the developing world, if you pay attention to the kinds of plastic that are actually littering our streets, beaches, and landfills, you will notice a trend — it’s dominated by low-value plastic like to-go containers, candy wrappers, and plastic bags.
These materials are classified as low-value plastic because they are extremely difficult to recycle. Shanghai, Cairo, New Delhi, Nairobi, Jakarta — a vibrant informal recycling industry does exist in cities worldwide and employs tens of millions of workers who form the backbone of the local waste management infrastructure. Unlike the high-value materials (PET, HDPE) with an established recycling supply chain, low-value plastic is not even collected by these workers because it inherently lacks any financial value. As a result, these materials have become the most commonly found items degrading in and polluting our environment.
At rePurpose, we believe that the most genuine way to offset your plastic footprint is to deal with plastic that would otherwise never have been recycled. Offsetting through the recycling of high-value plastics like PET is not good practice because it offers marginal environmental additionality: by nature of their high value, these materials would likely have been picked up and recycled anyway by workers in the informal industry.
Instead, through your offset contribution, we put a price on low-value plastic and pay informal workers to intercept it before it reaches the oceans or landfills, adding a crucial income stream for these marginalized workers in addition to their work with materials like PET and HDPE.
After the plastic is collected, we either use it to make bricks and roads or co-process it through pyrolysis, a practice that uses low-value plastics as an energy source in industries that typically burn coal to meet their high energy demands (e.g. cement kilns). Through this process, low-value plastics kick out coal and are cleanly incinerated. We know this system is not perfect: however, given the extremely low value of these materials, the high costs to separate them, and the lack of technology to recover them, co-processing is the best solution for these plastics that would have otherwise been landfilled or flushed into oceans.
Principle 2: Work through local partners
A best practice from the established carbon-credit space is for offset providers to work with local organizations to implement the offset, as opposed to developing their own operations. This practice makes sense for plastic offset for three reasons:
- Local waste management initiatives have an established history and expertise in their work, and we should engage and empower existing efforts to deliver impact effectively.
- It boosts cost effectiveness as offset implementers and offset providers can focus on their own functions while working together. Collaborating as opposed to competing strengthens the impact and furthers our shared mission.
- Working through partners allows for a third-party like the offset provider to monitor & evaluate operations through rigorous standards of measurement, enabling transparent communication of impact to the individual or organization going PlasticNeutral.
At rePurpose, we work with three vetted waste management social enterprises in Mumbai, Bangalore, and Hyderabad with a proven track record of combating plastic pollution and transitioning urban India towards a circular economy. Any foreign intermediary may understand needs on the ground through extensive research, but having witnessed the complexities of waste management in Asia first-hand, we realized that our impact is best achieved with local partners who have been doing waste management for years, if not decades.
Principle 3: It’s about the people behind the plastic too
For more than 25 years, rich Western countries like the US, Canada, and the UK have shipped their plastic waste to poorer Asian countries who struggle to even handle their own waste, simply because the economics of recycling their own citizens’ garbage do not make sense at home. After China, which used to take in two-thirds of the world’s plastic waste, banned all new imports at the end of 2017, more waste has been diverted to countries like India, Thailand, and Malaysia with much more informal waste management systems.
So it’s no longer just about the plastic — it’s about the people and communities it’s impacting too. Over 50 million informal workers worldwide spend their entire lives dealing with the consequences of our mindless consumption, all without recognition as environmental heroes or access to basic healthcare and education, which traps them in a generational cycle of abject poverty. In India, a waste picker on average spends 12 hours a day scavenging for recyclable waste in dumpsters and landfills, earning less than $5 from an exploitative supply chain.
We believe that any plastic offset should actively engage with and explicitly empower informal waste workers.
At rePurpose, an offset supports dedicated impact programs co-created with each PlasticNeutral partner, helping them empower workers. By transitioning scavengers into the formal sector, we ensure dignified jobs (tackling social stigmatization), fair wages and benefits (to prevent volatility in incomes and provide a social safety net), and safe conditions (regulated facilities with safety equipment as opposed to manually scavenging in landfills). In addition, we help provide savings accounts, health insurance, and education subsidies for the workers’ children in order to break the cycle of poverty.
Today, the negative consequences of plastic pollution extend beyond the environment to the people who receive the bulk of the world’s garbage burden. Therefore, plastic offset programs must be socially impactful to effectively neutralize the perils of plastic pollution.
Plastic offset: funding our world’s transition towards a circular economy
Even though plastic offsetting is a fairly new concept, conscious consumers and forward-thinking businesses worldwide are already adopting the platform. However, we should carefully guide the use of offset to avoid greenwashing. For example, brands that use plastic offsetting as a PR bandaid to their own degradative practices should never be permissible.
However, we believe that plastic credit has the potential to solve the larger systemic issue at play, a linear economy that has created an unchecked system of production and consumption. We have started with recycling social enterprises where we can easily verify the amount of waste taken out of the environment & sell that credit to consumers and companies, but we also realize that recycling is only a band-aid solution.
We are now working towards using our proven methodology of plastic crediting to fund experimental innovations that replace and design out plastic altogether. Simply put, for every kg of plastic an innovation is able to tangibly replace out of a manufacturing supply chain, one credit can be generated and sold to the public to further fund scaling up the innovation.
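The crediting arithmetic is deliberately simple: one credit per kilogram verifiably replaced. A minimal sketch, where the price per credit is an assumed placeholder rather than rePurpose's actual rate:

```python
CREDIT_PRICE_USD = 0.50  # assumed placeholder price per kg-credit

def credits_generated(kg_replaced: float) -> float:
    return kg_replaced  # one credit per kilogram, per the scheme described

def funding_raised(kg_replaced: float) -> float:
    return credits_generated(kg_replaced) * CREDIT_PRICE_USD

print(funding_raised(10_000))  # 10 tonnes replaced -> $5,000 toward scaling
```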
Plastic offsetting can be a powerful tool to mobilize resources for solving the environmental and social crises caused by plastic pollution. Make sure to do it right, however, by ensuring your offset meets the principles listed above. Together, we can reduce waste, revive lives, and restore nature’s balance — join us and go #PlasticNeutral, today!
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9411048889160156,
"language": "en",
"url": "https://datavizblog.com/2017/11/05/infographic-machine-learning-methods/",
"token_count": 198,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.004241943359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e3088fb2-1051-4c6b-89de-06d303f110cc>"
}
|
Artificial intelligence (AI) and machine learning are hot topics in the IT industry these days. Approximately 54% of organizations are making substantial investments in AI, with company leaders having high hopes for how they can be used to improve and automate business processes. That number is expected to jump to 63% in three years, according to the 2017 Global Digital IQ Survey.
So how will AI solve business problems, like helping you figure out why you’re losing customers or assessing the risk of a credit applicant? It depends on a number of factors, especially the data you are working with and the type of training that will be required. Learn about the most common algorithms and their use cases in the infographic below.
Source: Morrison, Alan and Anand Rao, Machine learning methods (infographic), pwc, Next in Tech, April 17, 2017, http://usblogs.pwc.com/emerging-technology/machine-learning-methods-infographic/.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9420708417892456,
"language": "en",
"url": "https://superbessay.com/samples/marginal-product-and-marginal-cost/",
"token_count": 730,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.06787109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:779d8804-efa6-40f4-84d8-2dd569d8928b>"
}
|
Marginal product is an economic term that refers to additional output generated by utilizing an extra unit of input. It is the measure of change in quantity produced when the input of a factor of production such as labor is increased or decreased by one unit. Ferguson and Gould (2010) define marginal product as the extra output produced by an extra input.
Marginal Product and Marginal Cost
For example, at General Motors, marginal product is the additional vehicles produced when an extra mechanic is employed. An effective measure of marginal product of a factor of production is achieved when all other inputs are held constant. Marginal product is also referred to as marginal physical product because it measures physical units produced by a firm.
On the other hand, marginal cost refers to the change in total cost of producing an additional unit of output. It is the change in total cost of production caused by reducing or increasing units produced by one. Perloff (2011) simplifies marginal cost as the cost of producing an extra output. According to Perloff (2011), marginal cost includes all costs such as additional wages and raw materials incurred for producing an extra output. Marginal cost usually changes as the level of production changes. Marginal cost is also referred to as differential cost.
Importance of Marginal Product and Marginal Cost
Marginal product is used in short-run production analysis to determine the effects of additional input of factors of production on total quantity produced using the law of diminishing returns. Marginal product is also used by companies to determine supply curves and quantities for fixed input of factors of production. For example, if the marginal product of labor at Toyota is forty, then employing one extra worker would increase production by forty units. Marginal product is used in determining the optimal level of production. This helps companies in realizing the benefits of large scale production (economies of scale).
According to Nicholson (2011), the part of the marginal cost curve that falls above its point of intersection with the average variable cost curve forms the supply curve of a firm operating in a perfectly competitive market; hence it can be used to determine the optimal quantity to be supplied to the market. Nicholson (2011) also stresses that firms operating in perfectly competitive markets use marginal cost curves to determine their break-even points (BEP) and profitability. For example, if marginal cost is higher than the selling price, then the firm will incur losses and hence should not produce. On the other hand, if marginal cost is lower than the selling price, the firm will earn profits, so it is advisable to produce.
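A short worked example ties the two concepts together. The production schedule, wage, and price below are hypothetical; the decision rule is the one just described (produce while marginal cost is below the selling price):

```python
WAGE = 100.0    # hypothetical cost of one extra worker
PRICE = 9.0     # hypothetical market price per unit of output
total_output = [0, 20, 35, 45, 50]   # assumed total output with 0..4 workers

for n in range(1, len(total_output)):
    mp = total_output[n] - total_output[n - 1]   # marginal product of worker n
    mc = WAGE / mp                               # marginal cost of each extra unit
    decision = "produce" if mc < PRICE else "do not produce"
    print(f"worker {n}: MP={mp}, MC={mc:.2f} vs price {PRICE} -> {decision}")
```

Diminishing returns show up directly: each added worker contributes less output, so marginal cost rises until it crosses the selling price, which marks the profit-maximizing scale.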
Managers also use marginal cost during allocation of resources. In order to maximize output and profits, resources must be allocated where marginal revenue exceeds marginal cost. Marginal cost for public goods is also used in determining the impact of externalities of production (positive and negative) such as pollution of the environment. Consumers use marginal cost when making purchases by comparing the cost of acquiring the products to the benefits derived, a process called cost-benefit analysis (CBA).
Use and Misuse of Time-Series Analysis when making Management Decisions
Time-series analysis can be used by companies to predict and forecast future trends, such as demand for goods in the market. It is also used in making long-term investment decisions by measuring the performance of a company over a given period of time, for example, return on investment (ROI). Trend analysis can also be used in exploring new business opportunities and finding areas that need improvement or change in an organization. On the other hand, time-series analysis can be misused by management during the strategic planning process, especially if the information presented is deceptive and ambiguous.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9721892476081848,
"language": "en",
"url": "https://theblogbyjavier.com/2012/02/03/the-march-toward-fiat-money/",
"token_count": 338,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.3984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:31870c42-72dd-4ab3-87f9-8272080b86d8>"
}
|
Fiat money is “money that derives its value from government regulation or law. The term derives from the Latin fiat, meaning “let it be done” or “it shall be [money]”, as such money is established by government decree”.
The different types of money along history can be seen in this entry from the Wikipedia:
“Currently, most modern monetary systems are based on fiat money. However, for most of history, almost all money was commodity money, such as gold and silver coins. As economies developed, commodity money was eventually replaced by representative money, such as the gold standard, as traders found the physical transportation of gold and silver burdensome. Fiat currencies gradually took over in the last hundred years, especially since the breakup of the Bretton Woods system in the early 1970s.”
I found an interesting graphic in the book “This Time is Different” (C. Reinhart & K. Rogoff) where you can see how, over several centuries, governments debased, or decreased the silver content of, their currencies in order to get out from under heavy debts. The trend in the graphic seems to point at the “inevitability” of fiat money.
These debasements of course created inflation, which is nothing new, only the means have changed, as Carmen Reinhart and Kenneth Rogoff say in their book:
“[…] the shift from metallic to paper currency provides an important example of the fact that technological innovation does not necessarily create entirely new kinds of financial crises but can exacerbate their effects, much as technology has constantly made warfare more deadly over the course of history.”
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9478474259376526,
"language": "en",
"url": "https://www.cioapplications.com/news/smart-parking-s-quest-to-make-cities-smarter--nid-4382.html",
"token_count": 510,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.10205078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ae4a7430-a636-4f56-8592-cc95286e92e9>"
}
|
Cities continually need to reorganize their policies on urban mobility. In that regard, they also have to reflect on offering services that will influence citizens' behavior in terms of mobility.
FREMONT, CA: Today, more and more people express a growing need for mobility. Mobility is not only an issue of developing transportation infrastructure and services within the city’s domain but also a matter of coping with people’s preferences and choices, since private vehicles are still the preferred mode of transportation. This leads to an annual increase in the number of private vehicles, which in turn inevitably results in greater fuel consumption and carbon emissions.
As a result of citizens’ use of private motor vehicles, inner-city parking, especially on-street parking, has become a principal constituent of metropolitan mobility. Provision of pertinent information about both alternative means of transport and on-street parking permits drivers to make rational decisions about their utilization of motor vehicles.
Active surveillance of parking spaces and monitoring of parking fees is an effective way to influence people’s conduct and choice of means of transport, which contributes considerably to reducing carbon emissions and energy consumption.
Smart Parking Meters as the Smartest Approach:
From universities and military bases to large metropolises, embracing a smart approach in a digitally connected environment is a natural progression for communities. The notion of smart parking and parking meters has been revolutionized with the advent of technology. Electronic meters for motorists have been built that go far beyond the traditional concept of hourly paid parking tickets.
Smart parking, an IoT-based technology, enables smart devices to signal the availability of parking slots to a receiver. The collected data is then used to transmit parking-space information to a guidance system installed within smart vehicles for drivers.
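A minimal sketch of that flow, where sensor payloads report slot occupancy and a guidance service aggregates free slots per street (all field names and data are hypothetical):

```python
from collections import defaultdict

# Hypothetical occupancy reports as an IoT gateway might receive them.
sensor_reports = [
    {"slot_id": "elm-01", "street": "Elm St", "occupied": True},
    {"slot_id": "elm-02", "street": "Elm St", "occupied": False},
    {"slot_id": "oak-01", "street": "Oak Ave", "occupied": False},
]

def availability(reports):
    """Count free slots per street for the in-vehicle guidance system."""
    free = defaultdict(int)
    for report in reports:
        if not report["occupied"]:
            free[report["street"]] += 1
    return dict(free)

print(availability(sensor_reports))  # {'Elm St': 1, 'Oak Ave': 1}
```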
Monitoring of Parking Spaces:
The urban populace continues to grow by the year, and the mobility of citizens has become a matter of concern, with ecological stakes attached. Cities are searching for intelligent, smart solutions that amplify the comfort of motorists while eliminating the negative environmental impact.
With the use of on-street sensors, new technologies provide an accurate view of any on-street parking activity. This innovative solution caters to society's need to better manage traffic in inner cities.
The notion of online parking rental services near places of public interest is also a profitable business model. Many cities have begun to familiarize themselves with smart parking solutions according to the detailed requirements of infrastructure.
By David Zhang
Bitcoin is mined by powerful processors that compute tens of tera-hashes (10^12 hashes) per second. This process consumes a large quantity of energy, considering that most Bitcoin mining devices run 24/7. These devices use about 0.1 joules per giga-hash (10^9 hashes), and the number of such devices in use around the world is mind-boggling. Cambridge University recently created a methodology to estimate the amount of energy Bitcoin uses in a year, taking these factors and many more into account. It estimates that Bitcoin uses around 64 TWh (terawatt-hours). This is more than a small country such as Switzerland consumes, which may shock most people, but considering the scale Bitcoin has attained, it may even have been expected.
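A back-of-the-envelope version of this estimate can be reproduced from the two figures above; the network hashrate used here is an assumed illustrative value, not a number from the Cambridge model:

```python
# Rough annual energy estimate: power = hashrate * energy per hash.

JOULES_PER_GH = 0.1       # ~0.1 J per giga-hash, as cited above
NETWORK_GH_PER_S = 75e9   # assumed network hashrate: 75 EH/s = 75e9 GH/s

watts = NETWORK_GH_PER_S * JOULES_PER_GH      # 7.5e9 W = 7.5 GW, continuous
hours_per_year = 24 * 365
twh_per_year = watts * hours_per_year / 1e12  # Wh -> TWh

print(f"{twh_per_year:.0f} TWh/year")  # ~66 TWh, near the 64 TWh estimate
```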
Because so many factors are considered, the methodology also carries a huge margin of error, with an upper estimate of 150 TWh and a lower estimate of 22 TWh. According to the developers at Cambridge University, "Reliable estimates of Bitcoin's electricity usage are rare: in most cases, they only provide a one-time snapshot and the numbers often show substantial discrepancies from one model to another." Despite the uncertainty, the energy usage of Bitcoin is still extremely striking. There have also been major increases, sometimes a doubling in less than six months, which comes with a significant carbon footprint. Exact estimates still vary due to multiple factors, including the source of the energy.
This sort of energy usage applies not only to Bitcoin but to cryptocurrencies in general, showing how large a share of annual world energy usage they could eventually claim. One cryptocurrency, Ethereum, has chosen to switch the method by which it mines and records its "coins," which could decrease the energy consumed by the currency's usage. Right now, Bitcoin uses only about a quarter of one percent of the world's energy, but with rapid growth, it could soon pose a problem.
Developing world investments in renewables topped those of developed nations for the first time in 2015, according to the Global Trends in Renewable Energy Investment 2016 report.
Additionally, coal and gas-fired electricity generation last year drew less than half the record investment made in solar, wind and other renewables.
Helped by further falls in generating costs per megawatt-hour, particularly in solar photovoltaics, renewables excluding large hydro made up 54 percent of added gigawatt (GW) capacity of all technologies last year. It marks the first time new installed renewables have topped the capacity added from all conventional technologies.
The 134 gigawatts of renewable power added worldwide in 2015 compares to 106GW in 2014 and 87GW in 2013. Were it not for renewables excluding large hydro, annual global CO2 emissions would have been an estimated 1.5 gigatons higher in 2015.
UNEP Executive Director Achim Steiner said, “Access to clean, modern energy is of enormous value for all societies, but especially so in regions where reliable energy can offer profound improvements in quality of life, economic development and environmental sustainability. Continued and increased investment in renewables is not only good for people and planet, but will be a key element in achieving international targets on climate change and sustainable development.
“By adopting the Sustainable Development Goals last year, the world pledged to end poverty, promote sustainable development, and to ensure healthier lives and access to affordable, sustainable, clean energy for all. Continued and increased investment in renewables will be a significant part of delivering on that promise.”
Michael Liebreich, Chairman of the Advisory Board at BNEF, said: "Global investment in renewables capacity hit a new record in 2015, far outpacing that in fossil fuel generating capacity despite falling oil, gas and coal prices. It has broadened out to a wider and wider array of developing countries, helped by sharply reduced costs and by the benefits of local power production over reliance on imported commodities."
As in previous years, the report shows the 2015 renewable energy market was dominated by solar photovoltaics and wind, which together added 118GW in generating capacity, far above the previous record of 94GW set in 2014. Wind added 62GW and photovoltaics 56GW. More modest amounts were provided by biomass and waste-to-power, geothermal, solar thermal and small hydro.
Developing countries’ rise led by China and India
In 2015, for the first time, investments in renewable energy in developing and emerging economy nations ($156 billion, up 19 percent compared to 2014) surpassed those in developed countries ($130 billion, down eight percent from 2014).
Additional energy generating capacity, 2015:
Renewables (excluding large hydro): 134 GW
Large Hydro: 22 GW
Nuclear: 15 GW
Coal-fired: 42 GW
Gas-fired: 40 GW
Annual global investments in renewable energy:
$286 billion (2015)
$273 billion (2014)
$234 billion (2013)
$257 billion (2012)
$279 billion (2011)
$239 billion (2010)
$179 billion (2009)
$182 billion (2008)
$154 billion (2007)
$112 billion (2006)
$73 billion (2005)
$47 billion (2004)
12 year total:
$2.3 trillion (unadjusted for inflation)
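As a quick check, the 12-year total can be reproduced by summing the annual figures listed above (a trivial sketch in Python):

```python
# Annual global renewable energy investment, $bn (2004-2015, from the list above)
investment_bn = {
    2015: 286, 2014: 273, 2013: 234, 2012: 257, 2011: 279, 2010: 239,
    2009: 179, 2008: 182, 2007: 154, 2006: 112, 2005: 73, 2004: 47,
}

total = sum(investment_bn.values())
print(f"${total} billion = ${total / 1000:.1f} trillion")  # $2315 billion = $2.3 trillion
```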
Much of the record-breaking developing world investments took place in China (up 17 percent to $102.9 billion, or 36 percent of the world total).
Other developing countries showing increased investment included India (up 22 percent to $10.2 billion), South Africa (up 329 percent to $4.5 billion), Mexico (up 105 percent to $4 billion) and Chile (up 151 percent to $3.4 billion). Morocco, Turkey and Uruguay all joined the list of countries investing more than $1 billion.
Among developed countries, investment in Europe was down 21 percent, from $62 billion in 2014 to $48.8 billion in 2015, the continent’s lowest figure for nine years despite record investments in offshore wind projects.
The United States was up 19 percent to $44.1 billion, and in Japan investment was much the same as the previous year at $36.2 billion.
The shift in investment towards developing countries and away from developed economies may be attributed to several factors: China's dash for wind and solar, fast-rising electricity demand in emerging countries, the reduced cost of choosing renewables to meet that demand, sluggish economic growth in the developed world and cutbacks in subsidy support in Europe.
Still a long way to go
That the power generation capacity added by renewables exceeded new capacity added from conventional sources in 2015 shows that structural change is under way, states UNEP. Renewables, excluding large hydro, still represent a small minority of the world’s total installed power capacity (about one sixth, or 16.2 percent) but that figure continues to climb (up from 15.2 percent in 2014). Meanwhile actual electricity generated by those renewables was 10.3 percent of global generation in 2015 (up from 9.1 percent in 2014).
“Despite the ambitious signals from COP 21 in Paris and the growing capacity of new installed renewable energy, there is still a long way to go,” said Dr Udo Steffens, President of the Frankfurt School of Finance & Management. “Coal-fired power stations and other conventional power plants have long lifetimes. Without further policy interventions, climate altering emissions of carbon dioxide will increase for at least another decade.” The recent big fall in coal, oil and gas prices makes conventional electricity generation more attractive, Steffens added.
The report is available here.
Antigua and Barbuda: Inflation, consumer prices (annual %): 0.2 (retrieved April 17, 2021)
Statistics: Inflation, consumer prices (annual %)
- Date range: 1999 - 2019
- Previous value: 1.2 (2018)
Definition: Inflation, consumer prices (annual %)
Inflation as measured by the consumer price index reflects the annual percentage change in the cost to the average consumer of acquiring a basket of goods and services that may be fixed or changed at specified intervals, such as yearly. The Laspeyres formula is generally used.
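A minimal sketch of both calculations mentioned here, the Laspeyres price index and the annual inflation rate derived from it (in Python, with a made-up consumption basket for illustration):

```python
# Laspeyres index: cost of a fixed base-period basket at current prices,
# relative to its cost at base-period prices (x100).

base_quantities = {"bread": 52, "fuel": 120, "rent": 12}   # assumed basket
base_prices    = {"bread": 2.0, "fuel": 1.1, "rent": 800}  # assumed, year 0
prices_year1   = {"bread": 2.1, "fuel": 1.2, "rent": 816}  # assumed, year 1

def laspeyres(prices: dict, q0: dict, p0: dict) -> float:
    cost_now  = sum(prices[g] * q0[g] for g in q0)
    cost_base = sum(p0[g] * q0[g] for g in q0)
    return 100 * cost_now / cost_base

cpi_year1 = laspeyres(prices_year1, base_quantities, base_prices)
inflation = cpi_year1 - 100  # annual % change vs. the base year (= 100)
print(f"CPI: {cpi_year1:.1f}, inflation: {inflation:.1f}%")  # CPI: 102.1, inflation: 2.1%
```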
Chart - Antigua and Barbuda: Inflation, consumer prices (annual %) (1999 - 2019)
Development relevance: Inflation, consumer prices (annual %)
A general and continuing increase in an economy’s price level is called inflation. The increase in the average prices of goods and services in the economy should be distinguished from a change in the relative prices of individual goods and services. Generally accompanying an overall increase in the price level is a change in the structure of relative prices, but it is only the average increase, not the relative price changes, that constitutes inflation. A commonly used measure of inflation is the consumer price index, which measures the prices of a representative basket of goods and services purchased by a typical household. The consumer price index is usually calculated on the basis of periodic surveys of consumer prices. Other price indices are derived implicitly from indexes of current and constant price series.
Limitations and Exceptions: Inflation, consumer prices (annual %)
Consumer price indexes should be interpreted with caution. The definition of a household, the basket of goods, and the geographic (urban or rural) and income group coverage of consumer price surveys can vary widely by country. In addition, weights are derived from household expenditure surveys, which, for budgetary reasons, tend to be conducted infrequently in developing countries, impairing comparability over time. Although useful for measuring consumer price inflation within a country, consumer price indexes are of less value in comparing countries.
Statistical concept and methodology: Inflation, consumer prices (annual %)
Consumer price indexes are constructed explicitly, using surveys of the cost of a defined basket of consumer goods and services.
15 June, 2020 - 17 June, 2020.
The public sector budget is the vehicle by which government programs are planned and implemented for the benefit of the electorate, within the limits of government revenue. The competing needs of government are articulated and organized in this practical training, and participants are shown how government spending is planned and executed.
To provide all financial managers with the knowledge to understand budgeting processes and procedures in government.
- Overview of Budgeting, the Budgeting Cycle and Budgetary Control
- Steps in Developing an Operating Budget
- The Role of Budgets in:
  - Strategic Planning and Implementation
  - Performance Measurement
  - Employee Motivation and Coordination
- Content and Structure of Public Sector Budgets
- Computer-based Financial Planning Models
- Activity-based Budgeting
- Budgeting and Responsibility Accounting
- Human/Behavioral Aspects of Budgeting
- Preparation of Budgets:
  - The Cash Budget
  - The Budgeted Income Statement
  - The Budgeted Balance Sheet
There's no denying that climate change is the biggest issue this generation is likely to face, and how we handle it will have drastic implications for the future of planet Earth. To put it bluntly, regardless of what any climate change denier says, the human race simply must lower its use of fossil fuels and reduce the amount of carbon dioxide (CO2) in the Earth's atmosphere, or else face the consequences of melting polar ice caps, rising sea levels, and uninhabitable lands.
Yes, there are naysayers who claim that climate change will amount to nothing more than hotter summers, but not a lot of time passed between scientists predicting that the Earth could be in trouble and their pointing out that it is in serious need of a course correction now. Future generations will pay the price for this one's ineptitude if a viable climate solution is not found and implemented.
That being said, this is a task that will be shared by many, if not all, the peoples of the earth. Many hands make light work, and there are small efforts you can make that go beyond turning off televisions and lights when they're not in use: efforts just as easy and achievable, but much more effective at reducing your carbon footprint. Now, no one is suggesting that you rip out your electric service and survive on minimalist resources; in fact, quite the opposite. You may find that these small things actually benefit you a lot more than your current set-up does. So, here are five small things that you can do today to help slow down climate change.
1. Check out renewable energy.
The first tip will appear to be very obvious, and that’s because it is. As natural gas and other fossil fuels deplete, it stands to reason that electric bills are likely to increase as demand outweighs supply. But while we may be a ways off from that yet, there is still no harm in looking to your utility companies and seeing if you can get a better deal on the tariffs from your gas and electric suppliers by switching to renewable energy instead.
As renewable sources of energy are enhanced more and more, their popularity is likewise increasing. Many utility companies offer renewable "green" energy tariffs in addition to the standard ones. What's more, these tariffs don't cost more than what you would ordinarily be paying, and in most cases are actually cheaper, with the right deal.
But if you’re having trouble finding one, comparison websites like Northern Ireland-based Moneygains are on hand to help. Set up to help people find cheaper electricity suppliers in NI, this UK company is adapting and growing all of the time. It won’t be long before they stretch their reach outside of Northern Ireland, and when they do, you’d be all the better off using them, to find electric companies that specialize in green energy.
2. Beware of the dog… or the cat!
Pet owners beware! Your furry friend is actually contributing to changes in the climate. Clearly, it’s not their fault, and animal lovers everywhere would know that. But it isn’t like we can just ask our animals to stop polluting the atmosphere with their waste gases. Nor can we really change their diet too much, even if a vegan diet is proving to be an alternative. Or can we?
Normally, those who sell pet supplies direct their sales toward toys and treats, but some animal wellbeing companies, such as Pawtree, also offer pet care products and specialized pet food. It might be possible to change the diet of your dog or cat to be a little more environmentally friendly, but if you want to make a bigger impact on the environment (and have fun while you're at it), then here's an alternative idea: work with Pawtree! Pawtree encourages pet owners to participate as direct sales professionals. Joining Pawtree to sell pet products and treatments not only allows you to become part of a community, in the same way that Avon and Tupperware parties do, but can also help you with your endeavors to be more environmentally conscious. By acting as a direct sales representative on Pawtree's behalf, you can reduce emissions from long commutes to work and reduce your participation in, and the environmental impact of, large office buildings. Plus, there's always the benefit of having your furry friend make furry friends of their own!
3. Invest in house plants and indoor trees to balance the CO2 levels.
It is common knowledge that plants and trees take in carbon dioxide and expel oxygen. That’s actually where we get the phrase that says trees are the “lungs of the planet.” For a long time, trees were our strongest line of defense against the “greenhouse effect” (whereby CO2 in the atmosphere would insulate the world, and increase the temperatures), since the world’s population of plants and foliage would convert carbon dioxide into oxygen before it rose into the stratosphere.
But as some countries ignore the necessity of our forests and continue deforestation for economic gain, the world’s CO2 levels are rising, which is a huge factor as to why the climate is changing.
However, you can help put that right in one small way. Get on to a site like Lively Root, get some house tree plants, and pot them around your home. Houseplants and indoor trees come in so many varieties, from the easily-maintained Rubber Tree (Ficus elastica) plant, which only needs a spot in the shade from sunlight, and watered once a week, to the more intricate Madagascar Dragon Tree (Dracaena marginata), which needs a tiny little bit more direct sunlight but has different watering needs depending on the season. If you’re up for the challenge, extend your green thumb beyond indoor plants and plant a deciduous tree in your garden or yard. Regardless of whether you prefer indoor trees like Dracaenas, indoor plants like the Rubber Figs, or like their outdoor big cousins, having shrubs around the home is environmentally friendly, aesthetically pleasing, and doesn’t pull too much from the money tree either!
4. Simply put, don’t drive.
Yes, having a car is convenient, and no, we’re not suggesting you abandon it. But how many of your car journeys, from which the fumes all contribute to greenhouse gases, are really necessary? If you can, leave the car on the driveway, and walk to your destination instead. Use public transport a little more, or cycle if you have a bike. Not only is having less reliance on the car much better for your health, but it’s better for the environment too.
5. Reset your thinking.
There is no harm in being a little reluctant to embrace the idea that climate change is real, and that we are all going to have to make adjustments to our lifestyles. After all, it’s constantly in the media, and there are many ideals that you are told that we all have to do, but you don’t have any say on how big companies handle their carbon emissions, nor how much energy suppliers charge for renewable energy.
In 2016, the Climate Leadership Council, then led by entrepreneur Ted Halstead, recognized that there were "barriers" to finding a high-quality global solution to the environmental issues faced by the world. First, the psychological barrier: it goes against human nature to make sacrifices for a problem people do not see, or are not likely to see, in the near future. Second, the geopolitical barrier, under which countries have incentives to free-ride off others instead of strengthening their own carbon emission programs. Finally, the partisan barrier, whereby big political parties worldwide were not reducing carbon emissions at a speed that would make a difference to the problem.
The solution that Ted Halstead proposed is based on implementing a "carbon tax" in the United States that increases slowly year on year. Because the carbon tax in itself would be politically unpopular, the council further proposed that it replace carbon regulations and that carbon dividends be redistributed back to the American people. As the carbon tax incentivizes companies to use fewer fossil fuels, and therefore cut emissions, the environment benefits. As people could potentially receive up to $2,000 per year in carbon dividends, the psychological and partisan barriers are both removed. Furthermore, the geopolitical barrier is nullified by an element of competition with the neighboring countries with whom the U.S. trades.
As the public of bordering countries see how much they could stand to earn through carbon dividends of their own, they will pressure their own governments to implement the same carbon tax. In what Ted Halstead described as a "domino" effect, other countries would fall in line, and a worldwide solution to climate change would be reached. The Economists' Statement on Carbon Dividends was presented to the White House at the start of the Trump Administration and has been revised and worked on since. Economists and environmental leaders from all over the world believe that it is an incredible idea, and worth pursuing.
So why is this a "top pick" for the list presented here? Well, it's simple: the tip is simply to think about it. That's all. By thinking about the solution presented above, you will see that by reducing their carbon footprint through this carbon tax, American people stand to gain, not lose, a considerable sum of money. Approaching the reduction of one's environmental impact as something positive, even as a potentially lucrative investment, will make everyone more likely to take part.
Sadly, Ted Halstead passed away in September this year, but his legacy lives on in the work of the council members he brought together. It can live on with all of us too, but word of mouth is key. Tell your families, friends, neighbors, bosses, employees, in fact, tell everyone, that there is an idea here to combat the problem of climate change which will put money in their pocket, and lobby your leaders to get behind it.
How to invest in Blockchain technology?
What is blockchain?
In the current system, when you buy a good or service from another person, there is usually a middleman between you, such as a bank or credit card company, handling the cash settlement. Blockchain technology eliminates the middleman and enables you to complete the transaction directly with the seller.
Each exchange is cryptographically secured and recorded in a decentralized ledger, meaning it is available on a wide network of computers where everyone can see it. These computers must agree on an exchange before it is confirmed and recorded.
The best way to explain the concept of a blockchain system is by example, in this case Bitcoin, which is a cryptocurrency. Party A wants to send money to Party B. The request is represented in the network as a block and sent to everyone in the network to verify and approve. The transaction is then recorded in an electronic ledger that cannot be edited and is available to all, and Party B receives the money that Party A sent.
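A toy sketch of the chaining idea described above (in Python; real Bitcoin blocks contain far more structure, so this only illustrates why recorded entries are hard to edit):

```python
import hashlib

def block_hash(prev_hash: str, transaction: str) -> str:
    """Each block's hash commits to the previous block and its contents."""
    return hashlib.sha256((prev_hash + transaction).encode()).hexdigest()

# Build a tiny chain of transactions.
chain = []
prev = "0" * 64  # genesis placeholder
for tx in ["A pays B 1.0", "B pays C 0.4", "C pays A 0.1"]:
    prev = block_hash(prev, tx)
    chain.append((tx, prev))

# Tampering with an early transaction changes every later hash,
# so the network's copies of the ledger would no longer agree.
for tx, h in chain:
    print(h[:16], tx)
```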
This is just one example of what blockchain technology can do; it has very many uses. It can be used in any process in which value is exchanged, whether currencies, commodities or property. The technology can also reduce fraud, since all transactions are visible to everyone.
Currently, only about 0.025% of global GDP, roughly 20 billion dollars, is exchanged through blockchains, a share expected to reach 10% by 2025.
Some governments and private institutions have already begun applying this concept to their own operations, but on a private and limited scale, with only permitted members able to view and approve transactions. A disadvantage of this approach is that the organization controls 100% of the network, so any breach makes the data vulnerable to manipulation, unlike on the global public network, where breaching and modifying data is difficult to the point of impossibility.
One global example of this technology is land registration: in Sweden, experiments have shown that registering land and closing deals has become faster than before; Georgia runs a similar program, and India is using the technology to reduce fraud in land deals.
Another business model using this technology is smart contracts. It is now possible to create electronic contracts that are valid and enforced without any human intervention; one application is automated guarantees. The International Monetary Fund believes that blockchain will reduce ethical problems in contracts and improve how contracts work. Smart contracts are also used by musicians in particular, programmed to automatically pay the artist whenever someone plays their song.
Solidity, the contract language of the Ethereum blockchain, is used in particular to program these contracts so that they execute once the conditions specified in the contract are fulfilled; in our previous example, the singer is paid the moment the song is played.
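Real contracts of this kind would be written in Solidity, but the conditional-payment logic can be sketched in a few lines of Python (all names and amounts here are invented for illustration):

```python
class SongRoyaltyContract:
    """Toy model of a smart contract: pay the artist on every play."""

    def __init__(self, artist: str, fee_per_play: float):
        self.artist = artist
        self.fee_per_play = fee_per_play
        self.balances = {artist: 0.0}

    def play_song(self, listener: str, payment: float) -> bool:
        # The contract enforces its condition automatically:
        # no payment, no play, and the fee goes straight to the artist.
        if payment < self.fee_per_play:
            return False
        self.balances[self.artist] = self.balances.get(self.artist, 0.0) + payment
        return True

contract = SongRoyaltyContract(artist="singer", fee_per_play=0.01)
contract.play_song("listener", 0.01)
print(contract.balances)  # {'singer': 0.01}
```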
Microsoft is now trying to integrate this technology into its platform, MS Visual Studio. The Australian Financial Markets Authority has also purchased a stake in a blockchain company to develop a structure for trading and settlement, which opens a wider field for new products to be launched in the future.
In August 2016, the University of Munich conducted a study on how this technology could disrupt existing sectors. The researchers analyzed the financing of startups and found that $1.55 billion had been invested in finance, information, communications and general professional services.
In the end, we must understand one thing: this technology will change the way we do business. Everything in the future will be done in a completely different way using this technology, and we must fully realize that Bitcoin is only one of its many uses.
What sectors will be disrupted during the next ten years?
Banks and electronic payment services
Analysis and research
The Internet of things
The sharing economy - Uber and Careem
Distribution of government grants
How have governments used blockchain technology, and what are their latest experiences in this field?
Nine out of ten governments are expected to invest in blockchain technology during 2018, including investment in financial transactions and asset and contract management. One of the first countries to adopt blockchain was Georgia, which applied it to land-ownership registration: its chain is directly linked to the relevant bodies, enabling the government to verify land ownership and approve the necessary documents through the technology.
Another country is Estonia, which has implemented blockchain technology nationwide. It issues encrypted electronic identities and residencies that enable holders to use the state's public services, and individuals can verify what government-held information exists about them and who can access it. Estonia also puts all medical records on the blockchain to reduce the risk of criminals breaching or altering their contents.
Singapore has also implemented an interbank payment system using blockchain technology, run by the Monetary Authority of Singapore, which speeds up the exchange of funds between banks by issuing a special digital token at a much lower cost; the government is now considering linking this system to the international payment system.
The Dubai government, too, has formed a blockchain council to implement the technology in all aspects of life in Dubai. It has been applied to several programs, including property transfer, commercial records, medical records and the diamond trade, and the government expects to save 25 million hours of work by 2020.
How can blockchain fundamentally change the speed and scale of corporate and start-up financing? How are Blockchain-based projects funded?
First, to make the explanation easier, we must know two important terms. There are two types of cryptocurrencies in circulation:
Currency = a main currency with its own blockchain
Token = a currency issued on top of an existing platform
Major currencies are based on Blockchain platforms completely different from any other currencies like Bitcoin, Ethereum, Ripple
Each of these major currencies differs from the others as infrastructure, for example in how many operations can be executed within one second. Each platform allows developers to build applications on top of it. Because the number of available major currencies is limited, and so that the base networks are not strangled by the number of daily operations, developers issue new currencies (tokens) for their applications. These tokens mainly serve one function: a currency available for use only on its own platform.
For example, a developer has built a program that records land sales. To benefit from this service, you must pay in the currency that the developers of this program have issued.
Investing in blockchain
Now that we understand the difference between the two, how can this technology solve the financing problems of small and emerging projects, and how can we invest in it?
Normally, companies are funded in one of two ways: by borrowing or by selling a stake in the company. Blockchain has created another way to fund projects, through what is called an ICO, or initial coin offering.
ICOs are very similar to the IPOs of companies wishing to be listed on a stock exchange. In an ICO, developers issue a new currency under whatever name they wish and sell it to investors who see the application as an opportunity that can be realized.
For example, developers want to raise 50 million riyals by offering 50 million digital coins at one riyal each. Usually these amounts are raised faster than through the traditional route. An investor in these coins benefits from only one thing: the possibility that the coin's price rises, yielding capital gains. The price usually rises either through speculation or once the application is finished and demand for its services increases, since users must then buy the coin to use the program's services.
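A minimal sketch of the token-sale arithmetic in this example (Python; the numbers mirror the illustration above and are not from any real offering):

```python
# Toy ICO model: fixed supply sold at a fixed price, then repriced by demand.

TOKEN_SUPPLY = 50_000_000
SALE_PRICE_RIYAL = 1.0

raised = TOKEN_SUPPLY * SALE_PRICE_RIYAL
print(f"raised at sale: {raised:,.0f} riyals")  # 50,000,000 riyals

# If demand later pushes the market price to 1.8 riyals, an early
# investor's capital gain is simply the price difference:
market_price = 1.8  # assumed
tokens_held = 10_000
gain = tokens_held * (market_price - SALE_PRICE_RIYAL)
print(f"capital gain on {tokens_held:,} tokens: {gain:,.0f} riyals")  # 8,000 riyals
```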
This method of raising funds is still new, and many offerings can be exposed to fraud, but this innovative way of raising money will make it far easier for companies to finance their projects on a global scale rather than looking only at the local market.
The number of projects funded by issuing currencies on a blockchain rose to 234 in 2017, up 409%, and the total amount raised reached $3.7 billion in 2017, compared with $96 million in 2016, an increase of 3,700%.
In summary, the most important current ways to invest in blockchain are as follows:
Investing in cryptocurrencies.
Investing in startups that use blockchain technology, as mentioned above.
Investing in ETFs.
Investing time in learning to develop and update blockchain technology.
That concludes this overview; we hope we have answered the question: what are the ways to invest in blockchain technology?
These 5-minute podcasts entertainingly explore the topic of money. Learn about the very first coins in the ancient world, the first dollar bill or select the coin which appeals most to you.
A coin named "Waserthaler" and a mayor who was thought to have betrayed his hometown. Not true – read the truth.
The «Waserthaler» in Zurich 1660
Terror - the beginning of political murder.
This video shows why this great currency fell apart.
The Fall of the Roman denarius
The bloody end of the Staufer and the conquest of Sicily.
Charles of Anjou, 1268
... in the Holy Roman Empire, 1150
Brandenburg's crucial role
a podcast of the year in 1527
... when Rome was looted.
The search for Aesillas leads us to the means the Romans used to conquer Asia Minor.
Macedonia, 90 BC
Hammer and sickle in the Austrian coat of arms?
Schilling, First Republic of Austria
Get an answer in 90 seconds
How big is a Trillion?
Features of the Human Face on Coins
Innovations in Asia Minor, before 336 BC
Ideal and Reality
Wealth Inequality in America
a course on the symbolism of the sunflower
Fibonacci and the sunflower
1816: The year without a Summer
from his Stress Syndrome and dedicated coins to Asclepius of Pergamum
How Caracalla found Relief
A short statement
What does Sunflower want?
Where did the meat come from which was consumed in medieval Nuremberg?
The Nuremberg «Meat Bridge»
Bottom-Fishing: How to Avoid the Traps, a chart service by FinGraphs
Financial Graphs: Bottom-Fishing
Change of Climate as trigger
The French Revolution 1789
At the end of the semester students must be able to:
- Explicate the way product markets operate and determine demand and supply functions,
- Define and explain how consumers decide which goods to consume given their income and product prices,
- Explain and depict production and cost functions in the short run and the long run,
- Comprehend, explicate and compare the behavior of a firm operating under conditions of both perfect competition and monopoly.
Etenesh Abera and Bileh Jelan
Addis Abeba, December 10/2019 – According to the UN environment body, UNEP, globally “one million plastic drinking bottles are purchased every minute and Five trillion single-use plastic bags are used worldwide every year.” In total, half of all plastics produced annually are designed to be used once: the world produces 300 million tons of plastic waste annually, a weight equivalent to the weight of the entire human population.
In Ethiopia, a Country Cluster of EUROMAP-European Plastics and Rubber Machinery puts the per capita plastic consumption in 2018 at 2.8 kg, a 267% rise from a figure that put the consumption per capita at 0.6 kg in 2007. That places Ethiopia as the second largest importer of plastic raw material in central and eastern Africa and the fastest growing plastics industry in the continent.
While countries like Kenya and Rwanda, two of the other fastest growing economies in Sub-Saharan Africa, are seeing policy changes toward reduction of plastic consumption and an all-out ban on single-use plastic products, Ethiopia is seeing a rise in its plastic consumption and production. EUROMAP predicts that Ethiopia will produce 386,000 tons by 2022 and the per capita consumption will rise to reach 3.8 Kg.
Teshale Woreku, owner of a local butcher shop, corroborates EUROMAP's alarming prediction when he speaks of the rise in plastic bag consumption in his line of business. "We buy 50 plastic bags for 22 ETB (about $0.70) and 32 plastic bags for 16 ETB. During holidays, we consume up to 50 packs per day," he told Addis Standard. According to him, on ordinary non-fasting Saturdays and Sundays, the use of plastic bags rises to as much as 10 packs, each pack consisting of dozens of plastic bags. "As a shop owner I would be happy if customers could use alternative materials, for two reasons. First, the plastic bags used to package meat for customers are single-use; I believe this affects the environment. Second, it makes sense economically, since the cost of providing such materials contributes to increases in meat prices."
The story of the vegetable market is no different. Solomon, a local distributor of vegetables, who only wanted to be referred by his first name, told Addis Standard he did not know exactly how much plastic bags he uses per day. “I am not sure but we buy 2000-3000 ETB (around $70 – 100) worth of plastic bags each month, so we don’t know exactly how much plastic bags we are using daily,” he said.’’ A customer at Solomon’s shop who was packing her goods, says she would take more than 10 plastic bags every four days for vegetables alone and another 10 bags for other items. “I am always surprised at the amount of plastic bags I consume alone,” she said.
These figures add to Ethiopia's troubling rise in the use of plastic materials, especially considering the country's non-existent garbage disposal facilities and culture, and in particular the absence of proper sorting of household garbage. Most plastic materials end up in the streets, rivers and alleys of urban and rural Ethiopia: on sidewalks and farmland alike.
Lack of institutionalized research
There are several companies engaged in importing or producing plastic bags in some form or another. Aleta Land Group, "a local indigenous company that has been established predominantly to engage in coffee exporting activities," is one such company. Its marketing manager, Tigist Nemiru, told Addis Standard that "all raw materials we use in our production are environmentally friendly and we only produce poly bags". However, no information is available on whether the company has conducted research on either the environmental impact or the decomposition of its products.
Speaking about the absence of corporate practices in mitigating plastic pollution and managing waste, Dr. Ahmed Hassen, a researcher at Addis Abeba University, admits that despite evidence linking the lack of such data to the mounting problem, "we don't have any available research" at the moment.
Yalemsew Adela, a researcher, environmental technology expert and director of the environmental pollution management directorate at the Ethiopian Environment and Forest Research Institute, highlighted the need for such stories to receive more serious coverage and called the stage Ethiopia is in "a plastic pollution stage." "It is a nasty state; we consume huge amounts of plastics in various sectors and we can only imagine the amount of waste that is dumped on our eco-system," he told Addis Standard.
According to Yalemsew, the packaging sector, which uses 60% of plastics produced for bottling and bagging, is the largest consumer of plastic products. The government is aware of the problem and is trying to find solutions. "Our current research project is focused on ways we can contribute to making a green economy, be it by reducing the contamination load on the environment or by recycling." He further explained that institutional arrangements, policy and the legal framework were some of the avenues the government was looking into. But he admits, "we have all these three and still I don't understand where the problem lies. The problem is also associated with human behavior: we are designed to use what we need and get rid of what we don't," he said.
In the absence of viable policy
The absence of a clear strategy and policy from responsible authorities, such as city administration offices, to manage organic and non-organic waste separately takes a share of the blame too, according to Yalemsew. "Addis Abeba spends 400 million ETB annually to manage all waste," but effective management of such a resource is another matter.
One way of dealing with the problem is to start a public discussion on alternatives to single-use plastic packaging, Yalemsew said. "We could produce long-lasting packaging products; we could use bamboo and many other materials, for example. But as I mentioned, the problem is not a lack of alternatives so much as social behavior and the government's failure to balance economic growth with environmental protection." He cautiously noted, however, that before throwing all the blame on the government we should also consider factors such as the fact that "more than 37 NGOs operating in Ethiopia are still teaching that hand washing before meals is important. Officials usually represent such communities."
Gutema Moroda (Eng.), Deputy Manager at the Addis Abeba Environmental Protection Authority, also admits that the city does not have a strategy for separating organic and non-organic waste, although "we have a proclamation and regulations in place and no corporation is allowed to dispose of waste without the permission of the authorities," he told Addis Standard.
However, the city administration is designing a policy to have biodegradable and bio-plastic materials replace those currently used in plastic production, to have Addis Abeba ban the use of plastic bags, and to put into effect a law requiring the packaging sector to participate in recycling its own products. "I can't pinpoint the exact date, but Addis Abeba will ban the use of plastic bags in the near future," he said.
Meanwhile, although few and far between, there are exemplary efforts that concerned authorities can look to for possibilities. One exceptional example is Teki Paper Bags, a "social and environmental enterprise developed for and by the deaf community".
Teki Paper Bags' primary goal is to "create sustainable employment to empower deaf women while building a plastic bag free Ethiopia," according to company information. So far, Teki has created job opportunities for 27 employees, of whom 17 are deaf, and has replaced 805,000 plastic bags with paper bags. "At Teki, we aim to make a real and lasting change in the lives of deaf women by providing meaningful employment, a social life, plus the ability to find a family and take care of their children through entrepreneurship."
By all accounts, initiatives such as Teki Paper Bags deserve all the policy support from government authorities if Ethiopia is to unshackle itself and become free from the weight of the plastic pile threatening its environment and its future generations. AS
We've all heard experts, pundits and talk show hosts pontificating about gold and silver: examples of their so-called "intrinsic" value, and claims about how gold and silver retain their value over time. Setting aside any intrinsic value, this is accurate, except when these people attempt to make price comparisons of goods in gold or silver terms. We're again going to rely on Austrian economic analysis and determine something new about their claims. Hopefully, this analysis won't turn me into a sort of pariah in hard money circles.
Subjectivity and Prices
In a previous article, "The Batmobile, Value and Prices," we discovered the difference between value and prices: value is subjective to the individual, while prices are objective. For the sake of this article I won't go into lengthy detail on that subject. Suffice it to say that, as Carl Menger and the marginalists demonstrated, consumer subjectivity is the basis for all economic activity.
People who interact in an exchange economy do so to be left better off. It's not because of some instinctual drive to exchange, or "animal spirits," as John Maynard Keynes described it. In reality, it's because consumers make rational decisions about how to employ their scarce means to achieve their desired ends. Each individual has a personal value scale of desired ends and ranks them accordingly in his or her mind. These personal subjective valuations manifest themselves, through the process known as price discovery, in the objective prices of goods. When buyers and sellers enter into an exchange, prices for goods are bid up or down according to subjective valuations. Buyers attempt to be left better off by expending as little money as possible, exerting downward pressure on prices. On the other side of the exchange, sellers attempt to be left better off by receiving the highest price possible. The price is arrived at when goods meet the market clearing price, that is, when a specific amount of money is agreed upon to clear an inventory of a given good. This is just a short description of how an unhampered exchange economy functions.
Subjectivity and Price Terms
All of this talk of subjectivity and prices must be understood in the monetary terms of each exchange. When consumers talk of prices they are relating those prices in terms of the currency that is being exchanged. Moreover, like a language, price terms translate into a certain amount of either labor or capital that must be expended in order to meet the required price. As an example, in the US, consumers understand prices in dollar terms. A wage earner subconsciously understands the amount of labor that must be performed in order to afford the price of goods and services. They understand the amount of their scarce time they must give up in order to perform the labor required to receive the required money, in dollar terms, to meet that price. A retailer understands how much money in profits that must be met, a lawyer may understand the amount of legal services that would have to be provided and so on.
Another point, wage earners also understand the cost of living at any given time in the same currency terms. A US wage earner understands how many dollars per hour, week, month or year that must be earned in order to meet their subjective goals. Prices are set in dollar terms and consumers understand the cost of living and any further expenditures in those terms. When someone earns $10/hour he rationally wouldn’t expect to afford $3,000/month in rent on a home. To summarize, prices to consumers translate into a cost and benefit. Something must be given up in order to meet the agreed upon price and still be left better off.
Gold and Silver Prices
The US economy has been rolling along in fiat dollar terms ever since President Franklin Roosevelt decoupled gold from the dollar in the early 1930s and the last vestiges of silver coinage ended after 1964. The Bretton Woods Agreement of the 1940s only established a gold exchange standard to settle accounts between sovereign nations. Over time people have been making subjective valuations of goods and services in fiat dollar terms. Today, consumers do not compare the dollar price of a gallon of gas or a loaf of bread in ounces of silver. As previously demonstrated, they simply don’t calculate the cost of living in those terms anymore.
When we hear people talk of current dollar prices in gold or silver terms, they are simply demonstrating the loss of purchasing power the dollar has undergone over a certain length of time in relation to that commodity. No one can say with any certainty the exact price of a good in gold or silver terms. When someone says that a 1964 silver dime would buy $3.00 worth of gas today, they are making interpersonal comparisons of utility, assuming the subjective valuations of millions of people over decades of time. The only way that 1964 dime would purchase $3.00 worth of gas today would be if the owner of the dime sold it to a coin dealer; what he would receive would be the commodity price for that specific weight of silver in fiat dollar terms. No goods have intrinsic value, only subjective value. These kinds of price assumptions are only possible ceteris paribus, meaning all things being held constant: a theoretical construct that economist Alfred Marshall used to isolate an economic anomaly, never intended to be a reflection of reality.
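To see why the dime's "price" is really just a commodity quote, here is a small sketch of the melt-value arithmetic (Python; the silver spot price is an assumed illustrative figure, and an actual dealer's offer would differ):

```python
# Melt value of a 1964 US dime: 90% silver, 2.5 g total weight.

DIME_WEIGHT_G = 2.5
SILVER_FINENESS = 0.90
GRAMS_PER_TROY_OZ = 31.1035

silver_oz = DIME_WEIGHT_G * SILVER_FINENESS / GRAMS_PER_TROY_OZ  # ~0.0723 ozt

spot_price = 25.00  # assumed $/troy oz; this changes daily
melt_value = silver_oz * spot_price
print(f"{silver_oz:.4f} ozt -> ${melt_value:.2f}")  # 0.0723 ozt -> $1.81
# The dime "buys" more or less gas only as the silver spot price moves:
# a commodity price quoted in fiat dollars, not a money price.
```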
Gold and silver have been evaluated in terms of their respective commodity prices, not money prices, since the US went on a pure fiat standard. Prices in this inflationary fiat economy have been discovered in paper dollar terms ever since the government killed the gold dollar. That is the reality.
Not to be misunderstood, this article was to demonstrate that people cannot assume the price of current goods and services by using the commodity prices of gold or silver. For that matter we could easily assume prices in terms of wheat, corn or pork bellies. Gold and silver have lost their place in the minds of consumers as money long ago. As a mental experiment they are demonstrating the loss of purchasing power the fiat currency has had over time by comparing it to what it once was, real money. Of course the rational choice would be to return to a hard money standard. That is the assertion these advocates of gold and silver money are making. By holding all things constant, that is, excluding any subjective valuations of people in the economy over decades and decades, that 1964 dime would buy you a loaf of bread or $3.00 worth of gas. The fact of the matter is, we really don’t know what the price of gas, bread or anything else would be today had the dollar been on a hard money standard all along. We can’t assume the pretense of knowledge that is required to make any accurate price predictions. Personally, I would assume prices would actually be much lower as purchasing power would be much higher. In closing, let us agree that the fiat dollar isn’t worth a Continental and hard money is as good as gold. Let consumers determine what type of money they want to exchange in, not governments or elitist central bankers.
[Image credit: www.crisisboom.com]
“Bitcoin is a peer-to-peer version of electronic cash that allows payments to be sent directly from one party to another without going through a financial institution.” – Satoshi Nakamoto
Bitcoin’s history & ideology
In 2008, Bitcoin.org was registered and a whitepaper was published proposing a "Peer-to-Peer Electronic Cash System." Bitcoin includes all the ideas of its predecessors: secure digital signatures, proof-of-work, the absence of a third party, and hashing transactions together to form a chain.
The creator of Bitcoin remains a mystery; it is thought to be an anonymous person or group of people working under the pseudonym Satoshi Nakamoto.
The first BTC was mined on Jan 3, 2009, in what is called the 'Genesis block'. Bitcoin's journey then took many unexpected turns. To explore all the details and milestones, you can visit this timeline: http://historyofbitcoin.org/
As for the pre-Bitcoin years, it should be noted that before Bitcoin there were several unsuccessful attempts at creating e-currencies (e.g. b-money, Bit Gold).
In 1982, computer scientist David Chaum first proposed the concept of eCash: the idea of digital privacy and an automated, safe payment system without third parties.
Further, in 1998 we find two similar ideas from Wei Dai (b-money) and Nick Szabo (Bit Gold). Both wanted to create alternative currency ledgers secured by encryption. Their concepts were formulated but never developed. Curious fact: common Ethereum denominations (wei and szabo) were named after these two researchers.
Features and work process
Bitcoin was the first currency built on a blockchain, and it inherits all of its characteristics: decentralization, transparency, and the absence of a third party.
When sending BTC, one digitally signs a message that is broadcast to all the computers in the network, which store it on the ledger. Bitcoins are thus transferred from one virtual wallet to another (i.e. a small personal database stored on one's computer or device).
This process helps prevent transactions from being double-spent and people from copying bitcoins, since all users have access to the transaction history.
Bitcoins are earned by miners, whose goal is to find a "hash" (i.e. a line of letters and numbers that verifies the validity of information) and register all transactions in the system. This process was established to provide decentralization; in reality, however, powerful mining farms and pools have formed.
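A minimal proof-of-work sketch of what "finding a hash" means (Python; the difficulty here is tiny so the loop finishes instantly, whereas real Bitcoin difficulty requires enormous numbers of attempts):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce whose hash starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block with pending transactions")
print(nonce, digest)
# Any node can verify the work with a single hash, but finding the
# nonce takes many attempts; that asymmetry is what secures the chain.
```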
The maximum total number of bitcoins that can ever exist is capped at 21 million. Today, over 17 million are in circulation. It should also be taken into consideration that not all 17 million coins are actively available to trade, since some bitcoins have been lost (one recent guess puts the figure at about 3-4 million lost bitcoins).
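The 21 million cap is not arbitrary; it follows from Bitcoin's issuance schedule, in which the block reward starts at 50 BTC and halves every 210,000 blocks. A short sketch reproducing the cap approximately (Python):

```python
# Total supply = sum over halving eras of (blocks per era * reward per block).

BLOCKS_PER_ERA = 210_000
reward = 50.0
total = 0.0

while reward >= 1e-8:  # rewards below one satoshi round to zero
    total += BLOCKS_PER_ERA * reward
    reward /= 2

print(f"{total:,.0f} BTC")  # ~21,000,000 BTC
```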
Evolution of Bitcoin price
The first bitcoins were issued in January 2009, but the price was $0.00 and, initially, only fans of cryptography used the new cryptocurrency, negotiating with each other.
Bitcoin’s second year turned out to be more successful:
it was absolutely worthless at the beginning of the year; for example, in March 2010 an auction was held to sell 10,000 BTC with a starting bid of $50, and nobody bought it. In May, however, Bitcoin's first commercial transaction happened: two Papa John's pizzas were bought for 10,000 BTC (~$40,000,000 at the moment of writing), and its price increased to around $0.39. Moreover, in the same year the first cryptocurrency exchange started operating.
Since that moment, Bitcoin has seen many rallies and crashes.
As for the first big run-up, Bitcoin began to fluctuate significantly in October and November of 2013. In early October the price of BTC was around $100, then rose to $195 by the end of the month. By the end of November, Bitcoin's price had rocketed to over $1,120.
After around three years of relative calm, Bitcoin entered the mainstream in 2017. It was a year of steep, enormous change, from around $1,000 per coin to almost $20,000 per BTC.
On 5 August 2017, the price of Bitcoin passed $3,000 for the first time. On 17 December 2017, Bitcoin, having started the year at around $1,000, soared to $19,783.06. By the end of January 2018, however, the cryptocurrency had dropped from around $20,000 to $10,000.
Nowadays, the current price of Bitcoin is around $4 000.
Hurricanes Harvey, Irma and Maria, together with 76 wildfires in nine Western states, all intensified by human-induced climate change, will prove to be the most costly combination of weather events in U.S. history.
When the final accounting is completed, the economic losses from these three hurricanes and the wildfires, which occurred within a single month, could reach nearly $300 billion, equal to roughly 70 percent of the combined losses from the 92 weather events of at least $1 billion each over the last decade (Figure 1).
Added to the damages from extreme severe storms, hurricanes, floods, droughts and wildfires are the enormous health costs of burning fossil fuels, the main cause of climate change. The total now averages $240 billion a year (Table 1), excluding the catastrophic events of this August and September, climate-related economic losses in the agricultural sector, and losses associated with heat stress on humans.
These annual average economic losses and health costs equal about 40 percent of the current growth of the U.S. economy, according to a new report, The Economic Case for Climate Action in the United States, published online by the Universal Ecological Fund.
In the next decade, these economic losses and health costs are projected to reach at least $360 billion annually, equal to an estimated 55 percent of U.S. growth. These escalating costs are due to the continued use of fossil fuels, which keeps driving climate change.
"Burning fossil fuels comes at a giant price tag which the U.S. economy cannot afford and not sustain," says Sir Robert Watson, coauthor of the report and former Chair of the Intergovernmental Panel on Climate Change, the leading scientific body on climate change.
"The evidence is undeniable. These recent extreme weather events are a continuation of a three decades trend of increasing numbers, intensities and costs of severe storms, hurricanes, flooding, droughts and wildfires. Simply, the more fossil fuels we burn, the faster the climate continues to change and cost. Thus, transitioning to a low-carbon economy is essential for economic growth and is cheaper than the gigantic costs of inaction."
These massive costs fall mainly on individuals and families, not the Government or the private sector, says the new report.
"We can expect extreme weather events and economic losses and costs associated with them to continue increasing unless we make dramatic reductions in greenhouse gas emissions," says James McCarthy, Ph.D., coauthor of the report and Professor of Oceanography, Harvard University. "The Trump Administration is determined to maximize the use of America's fossil fuels -coal, oil and natural gas- as well as to cut energy industry regulations. This is taking us the opposite direction."
Alternatively, addressing this problem can ensure economic growth and create jobs.
"Clean and sustainable energy just requires smart decisions and smarter investments," says Liliana Hisas, Executive Director of Universal Ecological Fund (UEF) and coauthor of the report.
Soaring number of extreme weather events
"Weather events are the result of natural factors. However, human-induced climate change has altered their intensity and frequency substantially and measurably," explains Dr. Watson, Director of Strategic Development at the U.K's Tyndall Center for Climate Change Research.
For example, the number of extreme weather events causing at least $1 billion in economic losses and damages has increased almost 2.5-fold, totaling 92 in the last decade (2007-2016) compared with 38 in the 1990s, and more than 4-fold compared with 21 in the 1980s, according to the National Oceanic and Atmospheric Administration's National Centers for Environmental Information.
In the latest 10-year span, the economic losses from these extreme weather events (droughts, wildfires, severe storms, hurricanes and flooding) have almost doubled, from $211.3 billion in the 1990s to $418.4 billion in the last decade.
Over the same 10 years, weather events costing less than $1 billion each are estimated to have caused about $100 billion in damages, or $10 billion a year across roughly 80 events, compared with $50 billion in total, or $5 billion a year across 60 events, in the 1990s.
Hardest hit states by extreme weather events
Not all states are impacted in the same way by extreme weather events.
Texas has had 49 events with economic losses at or exceeding $1 billion since 2007. Hurricane Harvey is the most damaging of all these events (Map and Table 2).
The states impacted with economic losses of at least $1 billion from extreme weather events in the last decade are:
- Severe storm: Texas (32, more than a four-fold increase compared to the 1990s), Kansas (24, a six-fold increase compared to the 1990s), Oklahoma and Illinois (23 each, more than a four-fold and almost a six-fold increase respectively, compared to the 1990s), Missouri (21, more than a five-fold increase compared to the 1990s) and Tennessee (18, more than a four-fold increase compared to the 1990s).
- Hurricane: Alabama, Louisiana and Virginia (4 each, a two-fold increase compared to the 1980s); Pennsylvania, New York, Maryland and Connecticut (3 each, a 50 percent increase compared to the 1990s); North Carolina (3); and Mississippi and New Jersey (3 each, a three-fold increase compared to the 1990s).
- Flooding, as a result of severe storms and hurricanes: Louisiana and Missouri (4 each, a four-fold increase compared to the 1990s); Texas (3); and Arkansas, Illinois, Indiana, Kansas and Iowa (3 each, a three-fold increase compared to the 1990s).
- Drought: California (8, with no billion-dollar droughts in the 1990s or 1980s), Idaho (7, with no billion-dollar droughts in the 1990s), Oregon and New Mexico (6 each, a six-fold increase compared to the 1990s), Oklahoma (6, a two-fold increase compared to the 1990s), Kansas (6, a three-fold and two-fold increase respectively, compared to the 1990s and 1980s) and Texas (6, a three-fold increase compared to the 1990s).
- Wildfire: California (6, a two-fold increase compared to the 1990s), Arizona and Oregon (6 each, a six-fold increase compared to the 1990s), Idaho (6, with no billion-dollar wildfire events in the 1990s), Texas, Nevada, Washington and Colorado (5 each, a five-fold increase compared to the 1990s) and Montana (5, with no billion-dollar events in the 1990s).
The economic impact on a single state can be severe. In August 2016, for example, 30 inches of rain fell in a few days, flooding southern Louisiana. As a result, more than 50,000 homes, 100,000 vehicles and 20,000 businesses were damaged or destroyed. The economic losses from the Louisiana floods were $10 billion, and some 75 percent of those affected by the record rainfall were uninsured.
In these severe storms, hurricanes, flooding, droughts and wildfires, many individuals, families and businesses lost everything.
Impact on agriculture
Climate change is altering rain patterns. Because agricultural production in the U.S. depends mainly on rain, farmers are being hit especially hard. Since 2012, farmers across the Central and Western U.S. have suffered $56 billion in economic losses due to persistent drought.
The production of corn and soybeans, the largest crops in the U.S., could decrease by 20 to 30 percent within the next three decades if action to address climate change is not taken. Corn and soybean producers could thus lose $17 to $25 billion annually.
On top of weather events come the costs of unhealthy air caused by burning fossil fuels. More than 43 million Americans live in areas with unhealthy air pollution. The health costs of air pollution exposure caused by energy production in the U.S. were estimated at about $188 billion in 2011.
Emissions regulations on the energy sector reduced air pollution health damages by 35 percent, or almost $67 billion a year, from $255 billion in 2002.
"The costs of health damages due to air pollution exposure caused by energy production will increase without regulations to the energy industry," says Dr. Watson. "Individuals and families will have to pay these health costs, either directly or through increased insurance premiums."
Coastal cities such as Miami, Boston, New York, Seattle and San Diego are most at risk from sea level rise caused by climate change.
"The question is when and how much sea level will rise," says Dr. McCarthy. "Lives and almost $1 trillion worth of real estate in coastal areas are at stake.
"The greenhouse gases that have accumulated in the atmosphere increase global temperature and, in turn, warm the oceans. Hurricanes like Harvey, Irma and Maria gain strength and moisture traveling over warmer water, making them larger, stronger and more intense, especially endangering coastal cities."
Fossil fuels account for 80 percent of U.S. energy
Coal, oil and natural gas currently account for just over 80 percent of the primary energy generated and used in the U.S., a percentage that has decreased only slightly over the last two decades. As a result, 82 percent of America's greenhouse gas emissions consist of carbon dioxide (CO2) from fossil fuel burning alone. It is these CO2 emissions that are driving the observed climate changes.
Despite the escalating costs and economic losses affecting U.S. lives, health, homes, businesses and livelihoods, the U.S. continues to rely on fossil fuels to produce energy, including electricity, fuel and natural gas.
"Every time you turn on a light or start your car, you are contributing to climate change," says Ms. Hisas. "Everyone is part of the problem and everyone is part of the solution."
Economic growth and job creation with climate action
Securing sustained economic growth and job creation, a stated priority of the current Administration for the next four years, requires generating energy differently, according to the new report. It also requires more efficient use of energy in all sectors: residential, commercial, industrial and transportation.
Relying on fossil fuels for economic growth was how many economies grew in the 19th and 20th centuries. More than a century ago, the consequences of burning fossil fuels were neither known nor appreciable. Today they are both.
Climate action can provide economic growth and job creation in these ways:
Changing the energy equation
Carbon-free, sustainable energy can provide the additional energy needed in the U.S. Clean energy can also significantly increase energy employment, which now stands at 1.9 million workers, via:
Renewable energy. Ten percent of the energy used in the U.S. (or 15 percent of electricity generation) currently comes from renewables: solar, wind, bioenergy, hydropower and geothermal (Table 3).
Half of the electricity generated by renewables comes solely from solar and wind, or about 7 percent of the electricity used in the U.S. These technologies provide almost 500,000 jobs in manufacturing, construction, project development, operations and maintenance. Jobs in the solar industry grew 17 times faster than overall job creation in the U.S. economy. In 2016, the solar workforce increased by 25 percent, accounting for 374,000 jobs, or more than 40 percent of employment in U.S. electricity generation.
While a major transition to renewable energy is required, even doubling solar and wind generation capacity would create 500,000 new jobs. It would also provide sustainable clean electricity that requires only an initial installation investment while yielding significant long-term savings for users, thanks to low operating costs.
Most importantly, doubling solar and wind generation capacity would reduce the share of electricity generated from fossil fuels (natural gas and coal) by 23 percent, from the current 65 percent to 50 percent.
The expansion of these renewable technologies will, in turn, make their costs much more competitive and accessible.
Workers in natural gas and coal extraction in Illinois, Kentucky, Louisiana, Oklahoma, Pennsylvania, Texas, West Virginia and Wyoming can, with training and investment, greatly benefit from these new jobs in renewable energy.
Nuclear. Electricity produced with nuclear power accounts for 9 percent of America's energy (or 20 percent of electricity generation). Nuclear power provides carbon-free energy and is now safer.
Sixty nuclear power plants in the U.S. employ about 70,000 workers. Two new nuclear reactors are planned for Georgia and an additional four new nuclear power plants are to be built in Florida, North Carolina, Virginia and Texas. These new plants will provide at least 10,000 new jobs in the generation of electricity.
Using fossil fuels responsibly
Fossil fuel power plants can be consistent with job creation and a low-carbon economy.
Currently, fossil fuel power plants generate 65 percent of the electricity used in the U.S., contributing 39 percent of the U.S. CO2 emissions. Natural gas and coal are the main sources of electricity generation, accounting for 34 and 30 percent respectively.
The 220,000 workers employed by these fossil fuel power plants may feel threatened by the need to change how energy is generated to address climate change. However, carbon capture and storage (CCS) technologies, which bury CO2 deep underground, would allow fossil fuels to continue being burned while responsibly meeting America's energy needs.
Of the 16 large-scale CCS plants in operation in the world, eight are in the U.S. An additional CCS plant will be operational this year, placing the U.S. at the top of technological innovation in using fossil fuels responsibly.
Power generation with CCS is still in its infancy and requires more research and development before large-scale deployment, according to Dr. Watson. More pilot programs will need to be implemented, since more than 1,000 electric power plants burn fossil fuels in the U.S. (256 use coal and 816 use natural gas).
The research, construction and maintenance of CCS plants could double the current number of workers in energy construction, creating 250,000 additional jobs, while securing the jobs of those currently employed by fossil fuel power plants.
Fusion is an example of a new technology being tested to generate electricity. Fusion is the human replication of the mass-to-energy conversion of hydrogen in the core of the Sun, which gives the Earth light and warmth. This highly complex process has been duplicated at facilities in San Diego, Princeton, Russia, the U.K., Germany and South Korea.
Right now, the International Fusion Energy Organization (ITER), a collaboration of 35 countries, is constructing the largest controlled fusion device ever built in France to prove the viability of fusion.
Fusion is clean, abundant, safe and economic. ITER estimates that the first fusion plants will start coming on line in the mid-2040s.
Innovation and new technologies
New technologies to produce carbon-free energy will also have to be tested and deployed, such as locally produced advanced biofuels from forest and crop residues or municipal and construction waste, and biofuels derived from algae with subsequent sequestration of CO2.
Currently, about 300,000 workers are dedicated to research, architecture and engineering in support of energy generation technologies. An additional 50,000 jobs would accelerate the identification, testing and deployment of innovative technologies for generating sustainable clean energy.
Using energy more efficiently
Reducing fossil fuel use will be easier and faster in some sectors of the economy than others. Thus, promoting energy efficiency in those sectors is a key element of ensuring economic growth while taking climate action.
For example, a critical sector of the economy is transportation. Gasoline, diesel and jet fuel, all petroleum-based fuels, account for 92 percent of the energy used in the transportation sector; natural gas accounts for another 3 percent. The remaining 5 percent is biofuels (ethanol and biodiesel) blended into gasoline and diesel fuel.
These fuels are used in 263 million cars, trucks and motorcycles; 6,676 aircraft (passenger and cargo); 132,500 transit and commuter buses and rail cars; 397,500 freight trains and locomotives; 11.8 million recreational boats; and 465 vessels (tankers, passenger and cargo ships) that transport individuals, passengers and goods throughout the U.S.
Using transportation more efficiently and promoting vehicle performance improvements will ensure that travelling and trade meets needs and demands, while using less fuel.
"This is why it is essential that the Trump Administration support, or ideally, strengthen the 2025 fuel efficiency standards negotiated between the Obama Administration and the automobile industry," says Dr. McCarthy.
"Reducing the 95 percent fossil fuel use for transportation will require more research to develop alternative biofuels on a large-scale, without compromising food production," says Dr. Watson."It also requires electric cars powered by renewable energy."
Programs to provide consumers with financial incentives to purchase electric cars will make the transition faster and more accessible.
Other sectors that can greatly benefit from energy efficiency include:
- 136 million U.S. homes and buildings where 324 million people live.
- Offices, hospitals, schools, police stations, places of worship, warehouses, hotels, shopping malls and industries (manufacturing, agriculture, and construction) where 160 million people work in the U.S.
Generating strategic investments
Transitioning to a low-carbon economy and increasing the efficient use of energy in all sectors will require strategic investments. Much of the revenue for these investments could come from a carbon tax.
The aim of a carbon tax is to reduce emissions, promote a more efficient use of energy and encourage the transition away from fossil fuels.
The potential revenue from a tax on carbon emissions could reach $200 billion in the U.S. within the next decade, according to the Intergovernmental Panel on Climate Change.
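As a rough illustration of how such a revenue estimate can be constructed, the sketch below simply multiplies an assumed tax rate by an assumed base of taxable emissions. Both inputs are placeholder assumptions chosen to reproduce the order of magnitude quoted above; they are not figures from the IPCC or the report.

```python
# Illustrative back-of-the-envelope carbon tax revenue estimate.
# Both inputs are assumptions for illustration, not figures from the report.
tax_rate_usd_per_ton = 5.0   # assumed tax, USD per ton of CO2
covered_emissions_gt = 4.0   # assumed taxable U.S. CO2 emissions, gigatons/year

annual_revenue_usd = tax_rate_usd_per_ton * covered_emissions_gt * 1e9
decade_revenue_usd = annual_revenue_usd * 10

print(f"annual: ${annual_revenue_usd/1e9:.0f}B, decade: ${decade_revenue_usd/1e9:.0f}B")
# -> annual: $20B, decade: $200B, the same order as the estimate quoted above
```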
A carbon tax will increase the cost of gasoline for fuel users. However, a carbon tax will promote a much more efficient use of vehicles and stimulate the transition to electric cars.
"Protecting Americans from the escalating economic losses and costs due to the impacts of climate change can happen. It will require increased efficient use of energy in all sectors, the share of carbon-free electricity to doubled or tripled, and fossil fuel generation with CCS to expanded, along with installing a carbon tax," says Dr. Watson.
The annual financial report is a comprehensive document released by businesses, whether large or small, at the end of the year to show their activities throughout the previous year. It is important to both large corporations and small businesses.
Generally speaking, small businesses need to provide their financial report to their investors and board members. Corporations, on the other hand, must make their annual report available to shareholders and interested members of the public to communicate their activities and financial performance.
Why Do You Need It?
Preparing the annual financial report may seem stressful, but it is critical to your business operations and very important when seeking funding from lenders or investors to take your business to the next level. It also helps ensure your products and services are well priced.
What does it include?
Typical annual financial reports contain the following items:
- Letter to the shareholders
- Detailed financial statements with narrative text and graphics
- Auditor’s report
- Summary of all financial data provided
- Relevant information pertaining to the business
In addition to the above, three basic financial statements are essential to every small business and must be included in the annual financial report: the balance sheet, the profit and loss statement and the cash flow statement.
1. Balance sheet: The company balance sheet for the fiscal year is an important component of the annual financial report. Assets, which include cash in the bank and outstanding receivables, are listed on the left-hand side and fall into two types:
| Current assets | Fixed assets |
| --- | --- |
| Cash or other valuables that can be converted into cash within a year, e.g. prepaid expenses, inventory and accounts receivable. | Items the business does not intend to sell, without which day-to-day operations could not be carried out. |
Liabilities, on the other hand, are listed on the right-hand side; they include outstanding payables, mortgages and loans. This side also shows the company's net worth: the value that would remain if all assets were sold off and all outstanding debts paid.
2. Income statement: Also referred to as a profit and loss statement, this shows all the losses incurred and the overall profit the company realized during the previous year. Net profit is calculated by subtracting total operating expenses from gross profit.
3. Cash flow statement: This shows the total money coming into and going out of the business. Cash inflows include accounts receivable collections, loans, cash sales and other investments. Cash outflows include equipment purchases, inventory expenses and workers' salaries.
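A minimal sketch of the two calculations just described, net profit and net cash flow, using made-up sample figures (every name and number below is hypothetical):

```python
# Hypothetical year-end figures, in dollars.
gross_profit = 250_000
operating_expenses = 180_000

cash_inflows = {"cash_sales": 220_000, "receivable_collections": 60_000, "loans": 40_000}
cash_outflows = {"equipment": 35_000, "inventory": 90_000, "salaries": 120_000}

# Income statement: net profit = gross profit - total operating expenses.
net_profit = gross_profit - operating_expenses

# Cash flow statement: net cash flow = total inflows - total outflows.
net_cash_flow = sum(cash_inflows.values()) - sum(cash_outflows.values())

print(f"Net profit: ${net_profit:,}")        # -> Net profit: $70,000
print(f"Net cash flow: ${net_cash_flow:,}")  # -> Net cash flow: $75,000
```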
At a recent event in the European Parliament, the True Animal Protein Price Coalition (TAPP) proposed a meat tax in Europe. But what impact would this meat tax have on health and climate change? And, if a meat tax were applied globally, could it save lives?
During the event, the Dutch research group CE Delft presented a TAPP-commissioned proposal calling for the tax. Per The Brussels Times, the proposal suggests an initial tax of €0.52 per kilo on beef and veal and €0.41 per kilo on pork. It also calls for a €0.18 charge per kilo of chicken.
TAPP claims that taxation could effectively offset the environmental, welfare and health costs of animal farming. TAPP is made up of health, climate and animal welfare NGOs, including Compassion in World Farming EU, ProVeg and others. Under the plan, the price of meat would go up beginning in 2022, adding a projected €32.2 billion per year by 2030.
The specified amounts would, theoretically, gradually increase to €5.70 for beef and veal, €4.50 for pork, and €2.04 for chicken. Beef and veal prices have the highest increase due to their environmental impact. These charges would also help to correct the “artificially low” current price of meat.
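To illustrate how the proposed charges would feed through to retail prices, here is a small sketch: the per-kilo charges come from the figures above, while the baseline retail prices are placeholder assumptions, not data from the TAPP proposal.

```python
# Proposed escalated charges per kilo (from the figures above) and assumed
# baseline retail prices per kilo (placeholders for illustration only).
charges = {"beef_veal": 5.70, "pork": 4.50, "chicken": 2.04}    # EUR/kg, proposed
baseline = {"beef_veal": 12.00, "pork": 8.00, "chicken": 6.00}  # EUR/kg, assumed

for meat, charge in charges.items():
    new_price = baseline[meat] + charge
    pct = 100 * charge / baseline[meat]
    print(f"{meat}: {baseline[meat]:.2f} -> {new_price:.2f} EUR/kg (+{pct:.0f}%)")
```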
“The time has come for us to act decisively with policy on the environmental consequences of animal protein, the price of which has been kept artificially low for far too long,” said Philip Mansbridge, executive director of ProVeg, one of the coalition members. According to Mansbridge, the plan is “fair for farmers.”
According to a study published in the online journal PLoS One, optimal taxation would increase the price of processed meat by an average of 25 percent, varying from 1 percent in low-income countries to more than 100 percent in high-income countries. This type of targeted taxation could help alleviate the pressure felt by those who depend on cheap meat for survival.
Investing in Sustainability
Animal agriculture is a leading contributor to climate change. It damages ecosystems and causes pollution. It is also an inefficient way of producing protein, due to its excessive consumption of land, water, and crops.
Cows, in particular, are a leading contributor to climate change. According to the United Nations' Food and Agriculture Organization, cattle raised for beef and milk account for 65 percent of livestock sector emissions. The sustainability charge could lead to an estimated 4.2 Mt/a reduction in CO2-eq. emissions in 2030.
Revenue from higher meat prices could also help farmers invest in more sustainable practices. At the start of its proposal, CE Delft specifies that it sets out “a policy package to incentivize the farming sector to reduce its environmental footprint.”
It adds that the policy package should be equitable: the tax should not lead to a "disproportionately high financial burden for lower-income households." In fact, the revenue could be used to lower value-added taxes and fund consumer subsidies on fruit and vegetables, thereby increasing access to fresh produce.
If approved, the proposal would be added to the European Commission's Farm to Fork Strategy. Farm to Fork aims to create a more sustainable food system by implementing further restrictions on the use of pesticides, fertilizers and antibiotics. The strategy would move the EU toward a circular economy and reduce the carbon footprint of food processing.
By introducing an EU sustainability tax, revenues can also help consumers transition to healthier diets. Specifically those rich in plant-based foods. A growing body of evidence has linked red meat to chronic health issues, and several government bodies and NGOs recommend moving away from animal products in favor of plant-based alternatives.
Meat and Health Risks
The 2018 PLoS One study explored the impact a meat tax could have on healthcare. It predicted that global health costs directly related to red and processed meat consumption could reach $285 billion in 2020 and estimated that, without change, more than two million people would die from causes linked to meat consumption in that year.
Processed and red meats have been linked to a number of chronic health conditions and diseases. These include colorectal cancer, rectal cancer, and breast cancer, as well as liver disease, cardiovascular disease, and diabetes.
If a meat tax was introduced, both deaths and healthcare costs would be significantly reduced. The study indicated that the number of deaths linked to red and processed meat would drop overall by nine percent. In addition to reduced death-rates, a meat tax could also reduce overall health costs linked to meat consumption by 14 percent globally.
The World Cancer Research Fund supports a global meat tax. The fund's Louise Meincke said: "This research, looking at the potential effects of a meat tax, shows it could help reduce the level of meat consumption, similar to how a sugar-sweetened beverage tax works, as well as offset costs to the healthcare system and improve environmental sustainability."
Minh Nguyen, a registered dietitian with the Physicians Committee for Responsible Medicine, says that there is "no safe amount" of meat. Instead, Nguyen advocates a whole-food, plant-based diet. Nutrient-dense plants are linked with a lower risk of various health conditions. By improving citizens' diets through taxation and reducing healthcare costs, both patients and taxpayers would benefit.
Reducing Animal Suffering
According to CE Delft’s report, it would also lead to welfare increases of around €800 million. There will also be a decline in livestock disease, as well as reduced ammonia, NOx, and particulate emissions.
Overconsumption of animal products, in general, maximizes animal suffering. By reducing the overall consumption and production of animal products, a meat tax could minimize the cruelty of mass production. Intensive factory farms have received criticism from animal welfare groups, politicians, and other organizations for the way they treat animals.
Mass production, generally, means more confined spaces and shorter lives. It can also cause outbreaks of disease. The Canadian Food Inspection Agency recently closed three major slaughterhouses following an E. coli outbreak.
Bitcoin has been up and down over the last year, but it is a currency that is starting to enter the mainstream.
A surge towards the end of 2017 was followed by a crash soon after - but many predict that it has long-term potential.
As a technology though, cryptocurrencies are booming - thanks to their decentralised nature and encrypted security.
If you're still not sure about exactly what it is, we've taken a look at the 21st century currency below and explained everything about it.
What is Bitcoin?
Bitcoin was the first of what have become known as "cryptocurrencies".
These are forms of digital money that use encryption to secure transactions and control the creation of new units.
The plan was to create a form of currency not controlled by governments or businesses, one you could trade globally at no cost and without having to reveal your identity.
The popularity of Bitcoin has spawned many copycats - sometimes called "altcoins".
To make things more confusing, there are also "second generation" virtual currencies like Ethereum and Bitcoin Cash.
So they’re not like the coins in my purse or wallet?
No. They are essentially lines of numbered "code": instructions used in computer programming.
However, once purchased they can be exchanged for some goods and services, like normal money.
Where did Bitcoin come from?
Created by a mysterious developer who uses the pseudonym Satoshi Nakamoto, Bitcoin exploded onto the financial scene in 2013, following enormous increases in its value.
In the original Bitcoin white paper, Nakamoto describes his creation as a "peer-to-peer version of electronic cash", allowing "online payments to be sent directly from one party to another without going through a financial institution".
How does Bitcoin work?
Nakamoto wrote that such a currency uses "cryptographic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party".
This sort of stateless, bank-free currency uses a distributed, cryptographically secure "blockchain" to record payment transactions.
Recording of payments onto the blockchain is powered by users, who offer their computer power.
They are rewarded with newly created Bitcoins, and this activity is referred to as mining.
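To illustrate what a "cryptographically secure blockchain" means in practice, here is a minimal hash-chain sketch in Python. It is a simplification, not the real Bitcoin data structure (which adds Merkle trees, proof-of-work and a peer-to-peer network); all names and transactions below are hypothetical.

```python
import hashlib, json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "prev": "0" * 64, "txs": []}]  # genesis block

def add_block(txs: list[str]) -> None:
    # Each block commits to its predecessor's hash, so altering any past
    # transaction changes every later hash and is immediately detectable.
    chain.append({"index": len(chain), "prev": block_hash(chain[-1]), "txs": txs})

add_block(["alice->bob:0.5"])
add_block(["bob->carol:0.2"])
print(all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain))))  # True
```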
What determines their value?
Like many things, it comes down to supply and demand.
New Bitcoins are released at a rate of about 25 new coins every 10 minutes.
But the flow will dry up as they have been designed to ensure that no more than 21 million will ever exist. Today, around 16 million are in use.
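The 21 million cap follows from the issuance schedule: the per-block reward started at 50 BTC and halves every 210,000 blocks (roughly every four years). A quick sketch of that geometric series; note the real protocol truncates rewards to whole satoshis, so the exact cap is marginally lower than this floating-point approximation:

```python
# Total supply = sum over halving eras of (blocks per era) * (reward per block).
reward, blocks_per_era, total = 50.0, 210_000, 0.0
while reward >= 1e-8:          # 1e-8 BTC (one satoshi) is the smallest unit
    total += blocks_per_era * reward
    reward /= 2                # the reward halves each era
print(f"{total:,.4f}")         # -> 20,999,999.9976, i.e. just under 21 million
```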
How to get Bitcoin
Bitcoins can be obtained in a number of different ways. It's possible to accept them as payment for goods or services.
You can also buy them directly from individuals or special websites called 'exchanges', such as Coinbase, that will swap Bitcoins for regular currency.
Free Bitcoin and Bitcoin Faucets
While "free bitcoin" may sound like something that lands in your spam folder, there is a legitimate way to get it: a Bitcoin faucet.
A Bitcoin faucet is a type of award system either on a website or an app. The company running the faucet will send small amounts when you complete tasks such as watching videos or playing games.
One of the most popular is Cointiply: its mining game is among the best known in the faucet community and lets you earn in the background, in addition to taking surveys.
Bitcoin wallets are simply specially-designed programs that store your Bitcoin, the same way a regular wallet would store your cash.
They can be used either on a desktop computer or a smartphone and can be stored securely on the web so they can be accessed from anywhere.
How to mine
Mining is a tricky process that involves solving a complex maths problem, which takes both time and computing power. The more powerful your computer, the quicker it can crunch the numbers, but as more computing power joins the network, the problem becomes more difficult.
Custom-built Bitcoin mining hardware and software is now available, allowing miners to find Bitcoins even faster.
Each miner also serves a dual function, processing and securing transactions on the blockchain. But the more miners that join, the harder it becomes to find Bitcoins.
What is a Bitcoin miner?
A Bitcoin miner can be anyone from a hobbyist who does it simply for fun to someone with the latest equipment attempting to mine for profit.
Bitcoin miners also join into pools that split the workload and gives each of them a share of the profits.
The future of cryptocurrencies
Second-generation cryptocurrencies include altcoins with more advanced functions that harness the computing power of the blockchain.
An example is Ethereum, whose blockchain can execute "smart contracts".
These are pieces of computer code that can interact with other coded contracts and perform work - for instance moving money around and making decisions.
The DAO platform that was hacked is written into the Ethereum blockchain and can operate autonomously, without humans controlling the organisation.
To decide what investments the DAO makes, its members vote on which proposed contracts will be included in the blockchain.
This could be the start of an autonomous financial future dictated by machines rather than humans.
Why have there been so many warnings about Bitcoin?
Partly because of fears that investors will lose a packet.
Firstly, Bitcoin has no central bank that stands behind it and isn’t regulated by any state.
Secondly, experts reckon the bubble could burst.
Earlier this year Ethereum, the second biggest cryptocurrency after Bitcoin, saw its value collapse from $317 a coin to $0.10 a coin in a day. It bounced back and is now trading at $473 a coin, but the lesson is there.
Some have labelled Bitcoin what traders call a "fool's asset". Unlike investing in a house that can be rented out or a company that makes profits, the only way to make money from it is to find a "greater fool" who will pay an even higher price than you did.
Legendary investor Warren Buffett says of Bitcoin: “Stay away from it. It’s a mirage, basically.”
Finance expert Martin Lewis said: “Bitcoin is a highly speculative investment. Putting money in it is a form of gambling.”
Why else are people worried?
Because it is being exploited by criminals and hackers.
The fact that transactions are untraceable makes it a dream come true for drug dealers and money launderers, and it is the currency of choice for cyber criminals.
It is telling that the online crooks who launched the massive WannaCry ransomware attack earlier this year, which crippled parts of the NHS as well as businesses in 150 countries, demanded Bitcoin payments for organisations to regain access to their systems.
The ill-gotten gains can be transferred across borders and withdrawn in any currency, or spent on the dark web, a collection of hard-to-find websites where it is impossible to track the user.
The Treasury this month announced a crackdown on Bitcoin to tackle money laundering and tax dodging.
Under the plans, online platforms where Bitcoins are traded will be required to vet customers and report suspicious activity.
written by Dr. Lee Ball
Appraising “green” homes can be complicated and confusing, especially if you are not sure what supposedly makes them green. Many homes built in North Carolina in 2013 had some type of green feature. In fact, North Carolina leads the nation in green building in terms of volume (http://www.homeinnovation.com/about/blog/a_year_in_review_for_ngbs). As a result, residential appraisers need to study the market and obtain green appraisal training to establish their competency as this market trend continues, especially since market data drive the adjustment process. Making adjustments for homes with green features or certifications may require an approach the residential appraiser is not accustomed to.
The sales comparison approach or paired sales analysis is not reliable in some markets due to the lack of available data. This requires the appraiser to use other methods such as the cost and income approaches in order to provide evidence for adjustments related to green building features. Fortunately, there are numerous resources available to the appraiser. For the cost approach to making adjustments, appraisers can reference the Marshall and Swift Green Building Cost Supplement or RS Means Green Building: Project Planning & Estimating, 2nd Edition in order to accurately estimate the cost of certain features (http://www.rsmeans.com/bookstore/detail.asp?sku=67338A).
The income approach can be used by calculating annual operating expenses, which are usually much lower in residential properties with energy-efficient or green building certifications or features. Reduced operating expenses, that is, monthly or annual savings, are a quantifiable “positive cash flow” that benefits homeowners from the day they move in. Monthly utility savings can be converted into a contributory value by taking the present value of the annual energy savings at the mortgage interest rate over the anticipated life of the savings (http://www.appraisalinstitute.org/library/bok/highperformance.pdf). Contributory value can also be estimated by multiplying monthly energy savings by the property’s gross rent multiplier.
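A minimal sketch of the two calculations described above, using the standard present-value-of-an-annuity formula; the savings figure, interest rate, savings life and gross rent multiplier are hypothetical inputs for illustration only:

```python
def pv_of_savings(annual_savings: float, rate: float, years: int) -> float:
    """Present value of a level stream of annual energy savings (ordinary annuity)."""
    return annual_savings * (1 - (1 + rate) ** -years) / rate

# Hypothetical inputs: $600/yr savings, 5% mortgage rate, 20-year savings life.
contributory_value_income = pv_of_savings(600, 0.05, 20)   # ~ $7,477

# Alternative: monthly savings times the property's gross rent multiplier (GRM).
monthly_savings, grm = 50, 120                             # both assumed
contributory_value_grm = monthly_savings * grm             # $6,000

print(round(contributory_value_income), contributory_value_grm)
```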
Other methods include using market data such as the McGraw Hill SmartMarket Report 2012, which stated that the added cost to build a green home was approximately 7% above the cost to build a conventional home.
Our next blog will focus on how to use a home’s HERS score to demonstrate added value.
The NC Energy Efficiency Alliance is proud to offer CE training for Appraisers seeking knowledge of this subject matter. Please contact us for more information and to book trainings.
In a previous note, we discussed the production of hydrogen batteries in general terms. We mentioned why hydrogen is considered an option among renewable energy sources and traced the route it is taking in the shift of the energy matrix: replacing gasoline at the industrial level, powering fuel cells in transport vehicles, and serving as a feedstock for other fuels.
There are five categories of hydrogen production for energy purposes, classified by colour: four by origin and one by treatment. Grey hydrogen comes from natural gas; brown, from synthetic gas made from lignite; black, from synthetic gas made from coal and oil; green, from the electrolysis of water using other renewable sources; and blue, from natural gas like grey, but with an added process to capture the carbon dioxide generated.
Combining higher-energy molecules such as molecular hydrogen (H2) and molecular oxygen (O2) into a stable one, the water molecule (H2O), releases a large amount of energy. Work is currently underway to convert this chemical reaction energy into electrical energy. The International Energy Agency (IEA) estimates that producing green hydrogen would save up to 830 million tonnes of CO2 per year.
We can burn hydrogen to generate water vapour, heat and mechanical work, or use fuel cells to generate water vapour, heat and electrical energy. Hydrogen engines mainly consist of three parts: a tank containing hydrogen, a fuel cell and an electric motor (see graph).
The manufacture of ever more efficient cells is driving technological competition. Electrolysis loses between 10% and 30% of the input energy in the generation process, so its application is only feasible where low-cost primary energy is available. Hydrogen is attractive because it can be stored. In energy terms, one kilogram of hydrogen can deliver roughly the same energy as one gallon of gasoline (about 2.9 kg).
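A quick back-of-the-envelope check of that equivalence, using commonly cited heating values; the exact figures vary by source and by whether lower or higher heating value is used, so treat the constants below as approximations:

```python
# Approximate lower heating values (LHV); figures vary slightly by source.
LHV_H2 = 120.0            # MJ per kg of hydrogen
LHV_GASOLINE = 44.0       # MJ per kg of gasoline
GALLON_L = 3.785          # litres per US gallon
GASOLINE_DENSITY = 0.745  # kg per litre (typical)

gallon_mass_kg = GALLON_L * GASOLINE_DENSITY      # ~2.8 kg per gallon
gallon_energy_mj = gallon_mass_kg * LHV_GASOLINE  # ~124 MJ
print(f"1 gallon gasoline ~ {gallon_energy_mj:.0f} MJ vs 1 kg H2 ~ {LHV_H2:.0f} MJ")
# -> roughly equal, which is why 1 kg of H2 ~ 1 gallon of gasoline
```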
According to Wood Mackenzie, more than 90% of the hydrogen currently produced emits pollutants into the environment. The cost of producing black and grey hydrogen is tied to fossil fuel prices, namely natural gas. The cost of generating green hydrogen is in turn tied to electricity prices. In the long term, by 2040, the production costs of green hydrogen should match those of black hydrogen, in line with the growing price competitiveness of the photovoltaic industry and the prices of natural gas and other hydrocarbons.
Global energy consumption is growing along with population, transport use and technological change. Using hydrogen as an essential energy carrier will require a considerable increase in production volume and a new, complex infrastructure to supply it to users. Countries with wind, solar and hydroelectric infrastructure will have a competitive advantage in green hydrogen production.
In July 2020, the European Union presented its plan to substitute green and blue hydrogen for non-renewable energies by 2050 to help "clean up" the continent. Its program will involve an initial investment for this decade equivalent to US$50 billion. Analysts estimate that by 2050 the industry will be worth US$1.2 trillion.
Hydrogen as an alternative fuel is a reality. The Japanese companies Honda, Toyota and Mazda and the Korean company Hyundai already have cars with hydrogen batteries. Currently, such a vehicle has a range of between 430 and 600 km, although refueling stations are still few and far between; only Europe, China, Japan and South Korea have plans to build large-scale infrastructure. Two Western companies are developing hydrogen batteries: Germany's BMW and the U.S.-based General Motors. Volkswagen, which has focused on lithium, has discarded this possibility after several attempts. China specialises in deploying hydrogen batteries in heavy-duty vehicles and buses and aims to have half a million such vehicles on the move in its territory by 2035. Norway's Havyard is looking to implement this technology in ships, while Norway's Corvus Energy is developing lithium-based electric batteries for boats.
In this case, the route leads to fuel cell production and increasingly efficient hydrogen production methods. Both are viable within a socio-environmental logic: they will first be implemented in freight transport and services and, in the longer term, become accessible to all users. Green hydrogen is the best fuel option we have for environmental recovery. In the Latin American and Caribbean regions, various sustainable energy production projects are under development. They aim to meet the Sustainable Development Goals, move closer to energy autonomy and, as a whole, resolve socio-cultural conflicts, as discussed, for example, in the Sustainable Energy Strategy 2030 Report of the SICA countries.
In simple terms, financial management can be described as a discipline or field within an organization that is largely concerned with the management of money, expenses, income and credit. Financial administration involves the assessment, planning and management of a business's financial assets. It also involves the use of economic tools and techniques, as well as the preparation of records.
Financial management covers several main ideas, namely cash flow, cost of capital, operations and financial balance. It also includes the identification, measurement and reporting of financial transactions. The concepts and principles of this branch of accounting have become highly complex owing to modern trends and changes. As a result of these complexities, financial management draws on a number of related disciplines: accounting, economics, information systems and banking.
Accounting for financial management refers to the process by which financial information is processed and used to make decisions. It includes preparing reports, analyzing the data and providing advice on how to improve the organization's performance. A good accountant is detail-oriented and is expected to perform research and analysis of the financial data. Accounting is an important part of the management of money: proper accounting techniques enable managers to make informed decisions about the allocation of resources. The objective of accounting is to support decision making and improve the management of funds.
The first principle of financial management is that capital is the basic resource of the organization. Since capital funds represent growth within the organization, managers must always manage capital carefully. A good accountant should be able to maximize the return on capital funds by ensuring effective use of existing capital and of new resources in the market.
Finance is the study of economic activities. Within the field of finance, two broad categories are distinguished: the management of financial activities and the use of financial activities. Managerial activities are those performed to increase or restore the effectiveness of business activities; in this context, all actions that contribute to increasing the effectiveness of the organization are also known as finance activities. The use of financial activities, on the other hand, refers to everything done to apply financial activities for the benefit of the organization.
The purpose of a manager is usually to increase the earnings of the organization through sound financial management decisions. This is achieved by investing profits wisely. Good financial managers are those who know when to invest in assets and when to sell them, and they always strive to increase net profit by maximizing the productivity of the invested capital.
Another important principle of finance is the rule that all changes in the financial affairs of an organization are accompanied by corresponding changes in other related areas of the business. This means there should be coordinated changes in investment, production and marketing strategies as well. In addition, all these activities should be carried out so as not to disturb the other fields of the business. In this regard, financial management means seeing beyond the four corners of any single function; it is necessary to understand the inter-dependence of all the fields of the organization in terms of finance.
Thus, we see that the principle of financial management lies in seeing the inter-dependence and the cumulative effect of all financial activities. This inter-dependence is closely connected with the concept of efficiency. For instance, if the procurement process is designed effectively and the funds allocated for purchasing are used properly, the firm can be said to have performed financial management successfully. Similarly, if the production process is planned correctly and resources are properly utilized, the firm is said to have handled that process effectively.
|
The world economy is entering a new phase of prolonged recession and uncertainty. Over the past half-century, globalization and the international governance underpinning it, such as the WTO, have driven global economic growth. Since the global financial crisis of 2007-2008, however, the world economy has failed to create new growth engines, and no breakthrough is in sight. At the same time, the international community is no longer capable of launching new global initiatives, as major nations are struggling with a variety of socio-economic problems. The recession and the absence of global cooperation almost automatically add instability, and the fear of protectionism, to the global economy. This bleak landscape is, paradoxically, the product of the very globalization that hauled the economic growth of previous decades. The immediate challenge, then, is how the international economic community should manage the new landscape of the world economy in the face of increased uncertainty and globalization's lost growth momentum.
Prolonged Recession; It’s Structural Rather than Cyclical
The worldwide recession is here to stay longer than we would like. The world economy has failed to recover from an almost decade-long slump, apart from a short V-shaped rebound after the 2007-2008 global financial crisis. The current situation seems to betray the hope that the recession is merely cyclical rather than structural. The world economy has witnessed the euro zone crisis, the slowing of the Chinese economy and weak demand from newly developing markets in recent years, and we tend to believe the current recession can be explained by these relatively short-term events. We should remind ourselves, however, that these events themselves stem from possibly fundamental changes in the structure of the world economy.
First of all, the rapid 'financialization' of the economy must have increased the overall volatility of the global economic system. Major economic regions have experienced, one after another, recurring financial crises over the last few decades. The fragility of financial markets is not only detrimental to efficient resource allocation but also quickly contagious across regions, adding uncertainty to international trade and investment. We have observed that a financial crisis in a particular country has seriously negative impacts on periphery countries, as international investors in center countries quickly adjust their international portfolios. The so-called 'Wake-up Call Hypothesis' is quite plausible given the highly integrated international capital market. Developing countries, particularly in Asia, have to accumulate hard-earned foreign currency through active exporting and keep a sufficient level of foreign reserves as an insurance policy against potential financial crises. Obviously, this insurance policy, which could otherwise be productive capital, weakens effective demand in the developing world. Overall sluggish demand also stems from more structural changes, such as global and domestic income inequality and the rapid aging of major societies. While these changes appear almost perpetual, governments' policy space is limited, mainly due to high levels of public debt, record-low interest rates and a distorted balance of power between the market and the public sector.
Figure 1 supports the pessimistic expectation presented above. It shows that every country group, at every stage of economic development, has experienced lower economic growth over the past decade than before the global financial crisis. While growth prospects for 2016 and beyond are positive, most institutions have recently been downgrading their forecasts, and even the positive prospects hinge greatly on uncertain assumptions of a near-term rebound in effective demand, particularly in the major emerging markets. The bottom panels of Figure 1 show that gross fixed capital formation dropped significantly in most countries after the global financial crisis, except in a few countries such as the U.S., Japan, Germany and France. While the economic rebound is most conspicuous in the U.S., it has not proven sufficient to lift overall world economic conditions. It seems quite appropriate to attach the popular new tag of 'New Normal' to the current situation of the world economy.
Is Globalization Hitting the Wall?
The prolonged slowdown of the world economy may signal the end of export-led growth in the age of globalization that began in the 1980s. Globalization may have reached a limit in many respects after serving as the growth locomotive of the world economy. Since the 1980s, the growth rate of world trade has hauled overall economic growth. Recently, the trend seems to have reversed: trade is now growing more slowly than the economy. Figure 2 clearly shows that the fitted lines of trade and GDP growth crossed around 2013 and 2014. The concern is that world trade is settling into a lower growth trend of around 3%, whereas it had maintained growth rates above 5% in most years of recent decades. Notably, the share of capital goods in total imports gradually dropped from 35.0 per cent in 2000 to 30.1 per cent in 2014, whereas consumer goods maintained a share of about 30 per cent throughout the same period. This trend is exactly consistent with the stagnant fixed investment described in Figure 1. If it continues, it will pose a serious challenge to the developing world, particularly the East Asian countries. In the absence of a sufficient expansion of world demand, the growing economic size of export-oriented emerging markets can create a zero-sum game among countries adopting export-led growth strategies. The situation may call for new growth policy space in many countries, and protectionism is highly likely to appear on short lists of policy options.
The potentially new relationship between trade and economic growth may reflect an intrinsic limit to further globalization. That is, the world economy may be facing the 'Globalization Trilemma', which holds that it is impossible to integrate national economies fully while simultaneously preserving a certain level of national sovereignty and democratic politics. Globalization initiatives, popular for a long time, are becoming too costly an option for politicians and policy makers, as we have witnessed in the U.S. presidential election process. The world economy may therefore have reached a stage at which globalization can no longer lead world economic growth, and globalization itself can no longer expand its frontiers. Major countries do not have sufficient political capital to create a system of international economic cooperation, and it is almost impossible for the international community to agree on any resolution calling for further sacrifices of sovereignty, which would be necessary to strengthen international economic governance and push globalization forward. Recent developments in U.S. domestic politics indicate that no country has enough political capital to take the lead in creating a mechanism for new global economic governance; countries seem busy looking inside rather than outside their borders. At this point, the right question is how the world economic system can effectively manage the hard-earned current platform of economic cooperation, including the WTO.
Widening Global and Domestic Income Inequality
In 2015, the IMF reported that income inequality was continuing to widen globally. It was quite newsworthy that this observation came from the IMF itself, which has been the leading organization of the globalization wave for more than half a century. It now seems natural to see such reports warning of aggravated income inequality and at least partially blaming globalization. Figure 3, from an IMF report (Dabla-Norris et al., 2015), summarizes the global trend in income inequality. Evidently, the situation has worsened in most major economic regions. For instance, the GINI indices for China and Russia show not only fast increases but also high absolute levels. Major advanced economies in North America and Europe also showed significant growth in inequality, although income distribution in Europe remains relatively equal. Some improvement can be found in the less developed regions of Africa and Latin America. However, their absolute GINI levels are still too high, and these regions are still struggling with basic socio-economic problems of very limited access to education, health and financial services.
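As an illustrative aside for readers unfamiliar with the index: a GINI coefficient ranges from 0 (perfect equality) to 1 (maximal inequality) and can be computed from individual incomes. The sketch below is a minimal, self-contained illustration of the standard closed-form calculation, not the IMF's actual estimation methodology, and the income figures are invented.

```python
def gini(incomes):
    """Gini coefficient: 0 = perfect equality, 1 = maximal inequality.

    Uses the standard closed form for sorted data:
    G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n
    """
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

# Invented five-person economy: one person earns 60% of all income
print(round(gini([10, 10, 10, 10, 60]), 3))  # 0.4
```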
While reducing income inequality is an important policy objective in itself today, there is growing evidence that it is also an effective way to promote economic growth. A trading partner with high income inequality generates smaller bilateral trade flows through lower import demand. A more balanced income distribution provides the stronger effective demand necessary for stable economic growth, given the disparate marginal propensities to consume of rich and poor income groups. Moreover, if a society fails to improve its income distribution, it risks falling into a vicious cycle of low growth and aggravated income inequality. Following the IMF's logic, both the technological gap and globalization may be responsible for today's income inequality. Yet high income inequality can, in turn, widen the technology gap between countries and between social groups within a society as globalization accelerates. It is highly likely that the potential benefits of globalization tend to be confined to a relatively small high-income group. Globally widening income inequality is thus an important background for a pessimistic outlook on the future of the world economy.
Understanding the Real Value of the WTO
There is no doubt that the launch of the WTO in 1995 was one of the most important achievements of the international community. It has the most comprehensive governance over commercial policies and trading activities, with a binding mechanism for dispute settlement. Ironically, however, the birth of the WTO ignited an explosive expansion of regional economic agreements, mostly in the form of FTAs. In principle, regional economic agreements pursue trade and investment liberalization among a small number of contracting parties. They are in line with the objectives of the world trading system under the WTO. However, because the basic mechanism of a regional economic agreement is to grant preferences to members of the bloc, such agreements have discriminatory effects on world trade flows, potentially distorting resource allocation globally.
The proliferation of regional economic agreements is an almost inevitable result of the development of global economic governance. The highly liberalized world market under the WTO provided a favorable environment for increased international production sharing, not only by multinational enterprises but also by medium- and small-sized firms. It is natural that production-sharing activities created agglomerations of industrial activity, leading to a new regional economic geography. Once the WTO was established, the private sector seemed to find it too costly, from a cost-benefit perspective, to pursue another UR-type round of multilateral trade negotiations. The WTO is a platform for exchanging 'markets for markets' (exports and imports) as well as a code of conduct with respect to trade policy measures. The practical failure of the DDA is proof that the multilateral trading system has reached the point where the marginal cost of negotiation efforts far exceeds the expected marginal benefit of further liberalization of the world market. This argument is well supported by comparing the scopes of the DDA and the UR. The latter, for the first time in history, brought service markets and trade-related IPRs into the realm of the multilateral trading system. The DDA is, at most, a mere attempt to improve the market access achieved during the UR. The private sector simply has no appetite to push for another round of multilateral trade negotiations. It was natural to pursue regional economic agreements as an alternative, since they provide more direct and immediate market access for the contracting parties.
Of course, in spite of this argument, the failure of the DDA does not negate the value of the WTO. It now works more as the manager of world trade activities and the supervisor of the code of conduct for trade policy than as a marketplace for international trade. The current multilateral trading system is successful enough to police trade practices against a return to protectionism.
The Diminishing Marginal Benefit of Regional Trading Blocs
The theory of the New Economic Geography (NEG) explains that improved market access through globalization (via both multilateral and regional initiatives) prompts the agglomeration of industrial activities in major economic regions or regional central economies. Increasing returns to scale are the first factor behind the asymmetric spatial distribution of industrial activities: with respect to the benefits of economies of scale, it is advantageous for manufacturing industries in particular to be geographically concentrated rather than dispersed. Second, NEG points to the monopolistically competitive behavior of businesses gathered in a limited geographical area, which tend to rationalize the scale of their production activities to exploit economies of scale. The formation of an industrial cluster also creates a variety of external effects, such as a supply of skilled labor, procurement of intermediate goods and technology spillovers. Regional economic clusters therefore generate a gravity that invokes further concentration of industrial activities, and regional economic agreements reinforce this agglomeration. NAFTA created a new industrial zone along the US-Mexico border as businesses took advantage of improved access to both markets by actively engaging in international production sharing targeting both. The distribution of major industrial regions and existing trade agreements clearly overlap: regional economic agreements cluster around the regions of most active industrial activity in North America (the West Coast of the U.S. and the US-Mexico border), Europe (EU members and Central European countries) and East Asia (China, Korea, Japan and the China-Southeast Asia border region).
The expansion of regional economic agreements is a natural response by policy makers to pressure from markets: there has been growing demand for such agreements from both regional economic leaders and followers to support international production-sharing activities. However, the wave of regional trade blocs that followed the WTO seems to have reached its final stage. Most major trading countries have already joined multiple regional trade agreements. More than two hundred regional trading agreements are in force, and more have been notified to the WTO (see Figure 4). The spread of regional trade agreements has brought about much broader and deeper integration, in addition to the improved market access provided by the multilateral system.
Obviously, this again decreases the marginal benefit of further efforts to establish additional trading blocs. Regional trade agreements have reduced trade barriers significantly while adding economic and political costs domestically. The recently stumbling TPP best supports this argument. The TPP is a unique attempt by the USA to pursue improved market access on a semi-multilateral platform. On the one hand, the initiative is a plan B for the multilateral approach to global integration: the active involvement of the USA in the TPP in recent years signals its pessimistic view of the multilateral trading system's future role in furthering market access globally, as witnessed by the failure of the DDA. On the other hand, the TPP is a grand experiment in whether a semi-multilateral version of a regional trade agreement is achievable; formally, it is an FTA on a grand scale. The TPP's doomed destiny stems not from its large scale itself but from the intrinsic political risks of furthering globalization. As discussed earlier, it is interesting to observe that recognition of the social costs of further globalization has become widespread among the American public. Considering the political verdict delivered in the US presidential election process, it is hardly likely that US politics will seriously consider the TPP, at least during the early years of the new administration. On the European front, the euro zone crisis and the recent Brexit referendum can be interpreted in a similar way, raising the question of whether the EU can pursue further integration. The RCEP initiative led by China has not yet entered a meaningful negotiation process.
Transition to Nationalistic Approach and the Fear of Protectionism
Of course, there is every reason to worry about protectionism in the face of the structural changes in the world economy and the rise of nationalistic politics in both the USA and Europe. The recent growing fear of protectionism is understandable, because the new U.S. president is vividly painted as an advocate of protectionism who puts national interests above international cooperation. On the European front, Brexit fuels protectionist concerns because it leads policy makers to believe that populations are against further integration of world markets.
However, from an economic point of view, the nationalistic approach is a plausible option for avoiding the economic and political costs associated with the internationally cooperative approach. Trump's pressure on China and the members of NAFTA, and his revocation of the TPP, are signs of a policy transition from a cooperative to a nationalistic approach. In efforts to appease his constituents, Trump may bring trade cases against China, both under U.S. trade remedy laws and before the WTO. This transition is exactly consistent with the economic logic behind the proliferation of regional trading blocs after the WTO, as discussed earlier: the nationalistic approach is simply an attempt to pursue national interests at lower economic and political cost. Precisely speaking, the nationalistic approach aims to improve market access to targeted countries. At the same time, threats to impose high tariffs are hardly realistic, considering the closely integrated production networks between the U.S. and the targeted countries. Previous attempts in the 1990s to revoke MFN treatment for China were never realized precisely because they would have harmed the interests of the huge number of US investors in China. It is also difficult to relate Brexit directly to protectionism per se. After the referendum, the U.K. made it clear that it would engage in trade negotiations with major trading partners, and it is hardly conceivable that the U.K. would want to raise trade barriers against negotiating partners higher than before. On the contrary, Brexit may be a liberal option for avoiding the costs of the regulatory system imposed by the EU. Remember that the EU is not just a regional economic integration but a political process aimed at realizing 'European Federalism'. Brexit is another case of a nationalistic approach by conservative politics seeking smaller government, rather than a manifestation of protectionism.
So the fear of protectionism may be somewhat exaggerated. Threats of protectionism are hardly realistic, whereas the nationalistic approach could end up with mixed results, sometimes even more liberalized markets. The real worry is that the nationalistic approach would increase trade disputes among trading partners, accelerating the recent rising trend of discriminatory measures by major trading countries under the prolonged recession of the world economy. It is time to recognize what the international community has achieved in this regard: the WTO. As mentioned earlier, the WTO is more competent as the supervisor of the code of conduct for trade policy than as a marketplace for international trade, and the current multilateral trading system is successful enough to police trade practices against a return to protectionism. We witnessed the value of the WTO as a stabilizer of international trade during the global financial crisis of 2007-2008, when pundits warned of a possible return to pre-World War II protectionism. Protectionism has never been a serious concern since the crisis, thanks largely to the WTO's operational capacity to bring trade disputes between member economies into its dispute settlement mechanism. The international community should therefore be content with the WTO's current role in defending the world economy from protectionism, instead of blaming it for not being competent enough to deliver more ambitious market liberalization.
The views and opinions expressed in this Paper are those of the author and do not represent the views of the Valdai Discussion Club, unless explicitly stated otherwise.
For instance, the IMF's World Economic Outlook update of July 2016, issued after the Brexit referendum, downgraded the forecast for world economic growth in 2017 by 0.1 percentage point.
Lee and Chung, 2015, The New World Order of Trade and Finance and Implications for Korea, Korea Economic Forum.
For more details, see UN World Economic Situation and Prospects, 2016.
Rodrik, Dani, 2007, 'How to Save Globalization From Its Cheerleaders', The Journal of International Trade and Diplomacy 1 (2), Fall 2007: 1-33.
As will be discussed below, the almost-dead 'Doha Development Agenda' reflects the limits of the WTO's competence.
Arjona, Ladaique, and Pearson, 2002, Social Protection and Growth, OECD Economic Studies No.35.
Free trade agreement, FTA. – Ed. note.
Doha Development Agenda, DDA. – Ed. note.
Trans-Pacific Partnership, TPP. – Ed. note.
We are living in what I would argue is the second Elizabethan Age: an age of commerce, exploration and discovery in Space, driven by geopolitical, commercial, and cultural factors so incredibly similar to those of the first Elizabethan Age that it is worth noting. With this new Golden Age of exploration, history is repeating itself, and there is much we can learn from the parallels.
The standard view of the first Elizabethan age was that it was an epoch in world history marked by the reign of Elizabeth I (1558-1603) which set the stage for the rise of England as a global power by harnessing the power of the renaissance, reformation and commerce that in turn transformed its age and all ages to come, giving birth to the British Empire and all that entailed: education, law, global commerce, and more. The very law that enables our activities and commerce in space today is of course based upon Maritime Law, which in turn derives from British Admiralty Law, which finds its roots in the first Elizabethan Age.
Like today, the late 1500s was a time of great social disruption, of religious strife and the reformation, of new ideas and technological change flourishing with the coming of the renaissance, and of commerce. England was near bankrupt. Other states had utilized the strength of their Treasuries to explore and to exploit the 'new world', both East and West Indies, to great success, enriching themselves with gold and spices from around the world. The English Queen, Elizabeth I, could not hope to match either the investments of her fellow Monarchs in the new world or the fleets of her 'cousins' in Spain, Portugal, and Holland, but she did have one great strength: she had commerce.
Her fleets were privateers, and to enable their growth, and the growth of commerce, Elizabeth I signed into law the ability to create the first joint stock companies. This democratization of trade and investment under the Rule of Law, with its provisions for joint stock, returns and shared risk in exploring and exploiting the New World (or occasionally 'liberating' a Spanish galleon or Dutch ship returning from the East Indies with a cargo of spices and more), laid the regulatory framework for the rapid growth of the British Empire and for global trade today.
Elizabeth I created, out of necessity, a legal framework that is still with us today; it underpins the modern concepts of investment, risk and markets, and is again finding itself at the forefront of exploration and trade as we continue to expand new markets in Space. Which brings us to this new Golden Age, this second Elizabethan age.
Again we find ourselves living in a time of religious strife and technological change, driven this time by a new renaissance, a digital renaissance, and by the exploration of new worlds and boundless opportunities in Space.
A Queen Elizabeth is again on the throne, Elizabeth II, though this time not of England, but of Britain and the larger, and growing, British Commonwealth.
Yet again, Britain is using its focus on commerce to take a lead from those who have invested their Treasuries in exploration.
According to a recent study by Northern Sky Research, Britain is now the home to the largest number of satellite communications companies in the world. From the Isle of Man to Bermuda, to Gibraltar and on to London itself, more satellite communications companies are incorporated in the British Isles, aka the British Space Sphere, accessing spectrum, finance, insurance, investing in infrastructure, powering the City of London and powered by it than anywhere else on Earth. Again, Lloyds of London is insuring the investments of a new era of exploration.
The satellite companies are in the Britain Space Sphere not because of Government largesse, but rather because of the rule of law, solid regulation and the commerce it enables allowing them the freedom to flourish.
Adam Smith’s invisible hand of the market is making its presence firmly known.
This is all also really quite fitting when you remember that it was author, engineer, scientist and visionary, Briton Sir Arthur C Clarke who first postulated the concept of Telecommunications Satellites and the Geostationary Orbit… and who was of course ennobled by Queen Elizabeth II.
Space commerce is thriving in the British Space Sphere as a result of this new found vitality in space commerce. ESA’s new European Centre for Space Applications and Telecommunications (ECSAT) focused on innovation with the UK Satellite Applications Catapult is based in the UK for the same good reasons. As is the ISU’s International Institute of Space Commerce and the Space Data Association and Satellite Interference Reduction Group on the Isle of Man.
Further, London has been chosen by the Society of Satellite Professionals International, the largest professional association in the global space and satellite industry, as the home of the new annual Better Satellite World Awards as a reflection of Britain’s leading role as the global home of satellite communications.
As I write, Britain's first ESA astronaut, Major Tim Peake, is readying himself to follow in the footsteps of many British explorers before him, from Helen Sharman and Mark Shuttleworth to Michael Foale, Nick Patrick, and Piers Sellers. A modern Drake, no less?
Much like in the time of Elizabeth I, the application of law has allowed commerce to flourish in this new frontier of Space this time under Elizabeth II.
Think of the International Space Station as the metaphorical equivalent of a Spanish Treasure Fleet. Go on, it's fun and thought provoking. While the International Space Station partners have worked tirelessly to create this amazing $100 billion orbital facility, I would predict that though its utilization will be driven by many uses, its value will come from its role as an exploration platform and new frontier for commerce.
After all, it must never be forgotten that in effect taxpayers are both investors and shareholders in their Governments’ investments in Space and that these Governments have a sacred trust in realizing a return on these investments. As history has shown us, the commercial utilization of the International Space Station and further, the exploration of Space for new knowledge, resources and more is the ultimate return on investment for the economies and citizens today’s Governments represent.
In the same way individuals invested in the first Joint Stock Companies of the 1600s and thus fueled commerce around the globe, today’s taxpayers are investing both in the space programs of their governments and again on top of this as investors and shareholders in the commerce that drives their economies forward in Space. Success in both leads to a virtuous circle in economic activity further fueling both. Multiple quotes from Adam Smith come to mind again…
In this second Elizabethan Age, I would predict that, like satellite communications today, tomorrow a growing number of the new companies formed to utilize the ISS and beyond will also be based in the British Space Sphere: the Isle of Man, Bermuda, Gibraltar, and London.
In so doing, they will simply be following the same commercial and legal logic as their contemporaries in the global satellite industry, choosing to work from the British Space Sphere for the political stability, economic stability, access to capital, and most importantly the enabling regulation and stability of the Rule of Law that it offers. Thought provoking, yes?
The similarities and historical comparisons are many between these two Golden Ages, these two ages of Elizabeth, especially with the role of commerce, but this time the difference is that today in this second Elizabethan Age it is this new Golden Age of space commerce that is driving value for all humanity, with global commerce quite rightly leveraging the investments of governments to return the wealth of these new worlds and new frontiers back to all of humanity. The British Space Sphere is acting as a crucial enabler of global space commerce.
In this second Elizabethan Age, the International Space Station is not perhaps the equivalent of a Spanish Treasure Fleet and the new frontiers of Space are not perhaps the same as the East and West Indies and the New World. Yet, the opportunities they represent are also perhaps not far from it. There is much we can learn and benefit from history repeating itself.
Business, Legal & Accounting Glossary
Offer (a certain price) for something, especially at an auction.
n. an offer to purchase with a specific price stated. It includes offers during an auction, in which people compete by raising the bid until there is no more bidding, as well as offers by contractors to build a project or sell goods or services at a given price, with the lowest bidder usually getting the job.
‘Bid’ – can simply mean offering to buy shares. It may also mean the highest price which you are prepared to pay for a given security at a particular time.
You’ll also come across the term “bid price”. This is the price at which a market-maker in the stockmarket is prepared to buy shares from existing holders. It will be below his offer price – the price at which he will sell.
‘Bid’ – in the context of a takeover bid means making an offer for the shares of another company. When one company seeks to take over another it makes a bid – an offer to buy at a stated price. A takeover bid must be a general offer to all the shareholders of the target company.
The price offered will be above the market price of the target company (being highly unlikely to succeed otherwise). City rules on takeovers stipulate a time-limit for acceptance of the bid.
In practical terms, the bid is the available price at which an investor can sell shares of stock. The ask is the available price at which an investor can buy shares of stock. The bid and the ask together create the dealer’s quotation. This system of bid and ask pricing is used by the stock markets to match buyers and sellers. The ask price is almost always a little higher than the bid price. The difference, or spread, between the bid and ask is what the market makers use as their profit margin for handling the transaction. If the bid is $2.20 and the ask is $2.23, the spread between the bid and ask, also called the bid/ask spread, is .03 or 3 cents.
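As a minimal sketch of the arithmetic in the example above (the helper function and its name are illustrative, not any exchange's actual API):

```python
def quotation(bid: float, ask: float) -> dict:
    """A dealer's quotation: the bid (price at which an investor can sell),
    the ask (price at which an investor can buy), and the spread."""
    if ask < bid:
        raise ValueError("the ask is normally at or above the bid")
    return {"bid": bid, "ask": ask, "spread": round(ask - bid, 4)}

# The example from the definition: bid $2.20, ask $2.23 -> 3-cent spread
print(quotation(2.20, 2.23))  # {'bid': 2.2, 'ask': 2.23, 'spread': 0.03}
```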
Business, Legal & Accounting Glossary
Black Friday refers to September 24, 1869. On Black Friday, a group of speculators led by James Fisk and Jay Gould attempted to corner the gold market. They failed, and the resulting collapse in gold, and then stocks, became known as Black Friday. From this original Black Friday came the practice of labelling market collapses as "Black". In addition to Black Friday, there is Black Thursday of 1929 and Black Monday of 1987. There was also a second Black Friday in 1873. Black Friday also refers to the Friday after the US Thanksgiving Day holiday. As one of the busiest shopping days of the year, Black Friday is said to turn the retail industry's bottom line from red to black.
Associate Director, Climate, Environment and Sustainability,
The planet is facing a multitude of environmental threats from expanding greenhouse gas emissions to biodiversity loss and resource depletion. Policy makers and the public are rightfully concerned about the impact of our activities on the planet. In this piece, we examine how trade policy can be a vehicle to deliver environmental goals.
Trade develops markets, increases competition, lowers prices and encourages growth. It is a force for good, having lifted over 1.1 billion people from poverty since 1990. While there is a perception that trade is inherently at odds with sustainability goals, recent OECD data shows that the volume of global trade has grown more rapidly than the carbon emissions embodied in it, pointing to a decoupling of economic growth and CO2 emissions.
But with pressing environmental and climate challenges, more must be done to use the power of trade policy to support the low-carbon transition and the delivery of the Sustainable Development Goals: in short, we need to rethink trade to align it with the challenges of the 21st century.
Environmental provisions frequently feature in Free Trade Agreements (FTAs). 630 FTAs signed between 1947 and 2016 include environmental provisions: exceptions to trade for the conservation of natural resources, the protection of plants or animals, or provisions to tackle illegal trade-related practices, including fishing, mining and logging. But these have generally been vague statements of ambition and not legally binding. There have been calls in the past for legally enforceable environmental standards but these have rightly been rejected. They are a heavy-handed mechanism that risk alienating partners and increasing trade tensions. A more collaborative approach is needed.
- Green tariffs: World Bank research found that the top 18 developing countries ranked by greenhouse gas emissions would be able to import 63% more energy-efficient lighting, 23% more wind power equipment, and 14% more solar power equipment if the trade barriers these very countries maintain on these goods were abolished. Whilst there have been pockets of good progress, for example in 2012 the Asia-Pacific Economic Cooperation economies agreed to cut tariffs to 5% or less on 54 environmental goods covering around $300bn of annual trade in the region, more can be done globally. The Environmental Goods Agreement (EGA) negotiations have seen 18 WTO members – accounting for most global trade in environmental goods – examine tariff elimination for over 300 environmental products. Zero tariffs would give governments and businesses the ability to acquire more and better-quality environmental technologies at lower cost, and would diffuse innovation and technology around the world. These discussions must become more inclusive of other WTO members and then be accelerated.
- Develop fora for discussions alongside FTAs. Consultation, transparency and cooperation remain the best means to encourage third countries to increase their environmental standards. Joint governmental and non-governmental committees could be established to work with international partners to deliver more concrete and measurable environmental commitments. This could ensure international standards can be promoted and enforced, while FTA partners remain free to define policies adjusted to the labour and environmental standards they deem most appropriate for their domestic market. In this way, buy-in to enhanced domestic standards may be more easily assured from third country producers, as additional obligations will not be imposed upon them externally, but rather built with in-market national experts who are closer to the concerns and priorities of local producers.
- Support small companies in international supply chains: Help smaller companies in developing countries get access to finance. Technology, and in particular blockchain, has a role to play here as greater data accessibility for lenders and producers enables greater business certainty. This is already occurring as Sainsbury’s and Unilever worked together to develop a distributed ledger system that offers Malawian tea growers cheaper finance if they use certifiably sustainable production methods. Technology could therefore be the helping hand that smaller companies and those from developing regions require in order to operate more sustainably and to take advantage of the burgeoning green market. Governments could launch platforms, both within and without FTAs, with guidance, technology support and access to finance to push for common standards or certificates for green products, mutual recognition of said standards and procedures, and a broader commitment to work together to facilitate trade in green goods.
- Climate check existing trade deals and green international institutions. We need to see systematic WTO-UNFCCC dialogue via the WTO's Trade and Environment Committee, and the consideration of national trade policy's consequences for existing climate change commitments in national Trade Policy Reviews.
These suggestions could help to ensure that the breakthrough technologies and standards that are being developed in the developed countries can be more rapidly applied around the world. It is possible to reorientate and harness the power of free trade to help address our global sustainability goals.
Expert article 2730
Egged on by statements from Chairman Greenspan, market participants came to believe the era of low interest rates would last indefinitely. But the era did come to an end as the Fed was forced to begin raising interest rates. Faced with the prospect of paying higher rates on their mortgages in the future, borrowers began defaulting. First home prices stopped rising, and then home prices began dropping — precipitously in some overheated housing markets. Now we are approximately six months into a new cycle of lower interest rates, but with no end in sight to the crunch.
At least two other factors stoked the crisis. First, many exotic financial products were issued whose value was tied in one way or another to home prices and the value of the securities into which home mortgages were bundled, such as collateralized mortgage obligations. The pricing of these financial products was the product of complex economic models, not the outcome of market transactions. As the value of the underlying homes and mortgages declined, pricing of the financial exotica became nearly impossible. As we learned in the collapse of Long Term Capital Management, these pricing models fail precisely when their accuracy is most important — in times of financial turbulence. The inability to price the financial products has exacerbated losses among the firms holding them.
There is a wonderful parallel here to the collapse of the Soviet Union. As the great Austrian economist Ludwig von Mises argued almost 100 years ago, central planning inevitably fails because there are no market prices to allocate resources. Market prices can only be the outcome of actual market transactions among buyers and sellers. Planners used mathematical formulas to value resources, especially capital. Now Wall Street wizards have imported Soviet thinking to allocate financial capital. Is it any wonder that it failed?
The second factor contributing to the housing market collapse was the federal government’s commitment to “affordable housing.” Lenders, especially Fannie Mae and Freddie Mac, were pressured into promoting housing to low‐income groups that could not qualify for normal loans. That policy is predicated on the belief that there is an underserved group of people who, but for economic discrimination or some other market failure, would be homeowners. That social goal and the credit‐driven desire for more deals merged into mortgages made without adequate collateral.
We learned two lessons from the drive to make home ownership available to the heretofore underserved. First, many of these were not homeowners because they could not afford a home. Only under the temporary “hothouse” conditions in mortgage markets did they seem to qualify. Second, people who have no equity in their homes cannot meaningfully be said to be owners. When times turn tough, they will walk away. They were effectively renters, not homeowners.
The crisis will end when housing markets hit bottom and the prices of mortgage securities stabilize. Banks also need to unwind their positions in exotic financial derivatives.
The Fed needs to understand it is facing a capital crisis, not a liquidity crisis. The very low interest rates on safe assets show there is ample liquidity in financial markets. The Fed should not supply capital. That is the job of markets, and they are doing it.
The CAFE standards now mandate that the fuel economy of new cars sold by companies equal or exceed 27.5 miles per gallon for passenger vehicles and 20.7 mpg for light trucks, a category that includes minivans and SUVs, vehicles that didn’t exist in 1975. These standards are enforced by imposing large fines on automobile manufacturers.
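To make the fleet-average requirement concrete, here is a minimal sketch of a CAFE-style compliance check. It assumes the sales-weighted harmonic-mean averaging commonly attributed to CAFE (gallons per mile, not mpg, are what average across the fleet); the fleet mix and the per-vehicle fine rate are hypothetical illustration values, not actual regulatory figures.

```python
def cafe_average(fleet):
    """Sales-weighted harmonic mean of fuel economy.

    fleet: list of (units_sold, mpg) tuples."""
    total_units = sum(units for units, _ in fleet)
    total_gallons_per_mile = sum(units / mpg for units, mpg in fleet)
    return total_units / total_gallons_per_mile

# Hypothetical passenger-car fleet
fleet = [(400_000, 32.0), (250_000, 24.0), (150_000, 18.0)]
avg = cafe_average(fleet)
print(f"Fleet average: {avg:.1f} mpg vs. the 27.5 mpg standard")  # 25.6 mpg

if avg < 27.5:
    shortfall_tenths = (27.5 - avg) * 10
    FINE_PER_TENTH_PER_VEHICLE = 5.50  # illustrative value only
    units = sum(u for u, _ in fleet)
    print(f"Illustrative fine: ${shortfall_tenths * FINE_PER_TENTH_PER_VEHICLE * units:,.0f}")
```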
Sen. John Kerry (D‐Mass.) has proposed increasing the standard to a 35‐mpg average for all vehicles by 2013, and Sen. John McCain (R‐Ariz.) has proposed a 36‐mpg standard by 2016. The Bush administration opposes Congress mandating fuel efficiency, preferring to let the Transportation Department set fuel standards.
The underlying premise of the existing system, as well as the new proposals, is that it is important to reduce gasoline consumption and that increased CAFE standards can achieve that.
Supporters of CAFE have argued that it would reduce our vulnerability to oil shocks.
But disruptions in world oil markets affect prices everywhere regardless of our level of imports because oil is traded in world markets.
Besides, increased fuel economy has not led to reduced dependence on imported oil. Since the CAFE standards were introduced, the average fuel economy has increased by 114% for new cars and by 56% for new light trucks, but the U.S. consumption of imported oil has increased from 35% to 52%.
A more relevant impact is the effect of gasoline consumption on air pollution or global warming. But if Congress believes that gasoline costs are too low because they do not include funds to pay for environmental damage, then it should increase the gasoline tax and leave decisions about vehicle design and gasoline consumption to the normal interplay of car manufacturers and consumers.
In contrast to a tax on gasoline, CAFE standards are an imperfect and inefficient method of signaling drivers about the true costs of the gasoline that they consume.
First, the standards put a damper on new car sales by increasing vehicle price or reducing size, and they reduce the per-mile cost of using cars because the vehicles use less fuel per mile. The lower sales of new cars mean longer retention of existing cars. These older cars pollute more and use more gasoline, undermining the purpose of the CAFE standards. The new cars would use less gasoline per mile, which leads people to drive more. The current best estimate is that every 10% increase in the mpg standard results in a 2% increase in vehicle miles traveled.
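The 10-percent-to-2-percent relationship cited above implies that fuel savings are smaller than the mpg gain alone would suggest, because fuel use equals miles driven divided by mpg. A quick sketch of that arithmetic, assuming the rebound elasticity from the text:

```python
def fuel_use_change(mpg_increase: float, rebound: float = 0.2) -> float:
    """Proportional change in fuel use when fuel economy improves.

    Fuel = miles / mpg; miles driven rise by `rebound` times the mpg gain
    (the text's estimate: a 10% mpg increase -> a 2% increase in miles)."""
    miles_increase = rebound * mpg_increase
    return (1 + miles_increase) / (1 + mpg_increase) - 1

# A 10% mpg improvement cuts fuel use by about 7.3%, not the naive 10%
print(f"{fuel_use_change(0.10):+.1%}")  # -7.3%
```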
A second inefficiency of the CAFE standards arises from the interests of the auto unions. The United Auto Workers does not want unionized U.S. auto makers to comply with CAFE by importing small cars that use less gasoline. Under CAFE rules, gas‐frugal imports only offset the gasoline use of other imports. To offset the gasoline use of low‐mpg U.S. cars, high‐mpg cars must be made in the United States, presumably with higher‐cost UAW labor that would increase the price to consumers.
A third effect of CAFE has been to reduce the weight of small cars and subsidize their sales (because those are cheaper techniques of improving mileage than retooling engines), which, in turn, has increased auto fatalities. It has been estimated that the 500‐pound reduction in auto weight that coincided with the introduction of CAFE has increased the fatality risk by up to 27%.
A final effect of CAFE is to tax the production of low‐mileage vehicles differently depending on the product mix of the company. Low‐mileage vehicles, for example, produced by a company that does not exceed the standards or has accumulated mileage credits face a different tax than identical mileage vehicles produced by a company with a different vehicle mix.
Our conclusion is simple: If we want drivers to pay for the costs of their pollution, increase the gasoline tax. CAFE standards are an inefficient method of reducing gasoline consumption and have undesirable side effects.
Tax planning is a set of actions by a taxpayer aimed at legally reducing the amount of tax paid. This applies to the various types of payments that are sent to the state Treasury.
Types of tax planning
Taking into account a certain subject, there are two types of planning:
- Individual – reducing mandatory tax payments for an individual who wants to save money.
- Corporate – tax planning that is an integral part of the overall financial planning of an enterprise. The purpose of this planning is to optimize costs, which has a positive impact on the company's profit. Funds saved on tax payments can be redirected to modernizing the company and improving the efficiency of its operations, enabling it to compete with other organizations.
Taking into account a certain level, there are two main types:
- Local planning – takes place within a single country (one legal field) and excludes the active use of international planning tools.
- International planning – reducing tax payments by actively involving foreign companies, foundations, and other organizations. In this case, tax benefits accrue under foreign legislation and/or double-taxation avoidance agreements. Popular transfer pricing methods can also be used.
Main stages of tax planning
- Tax analysis of the current situation is carried out. Taking into account Ukrainian laws, the current status of a taxpayer is determined, and a list of mandatory payments is compiled.
- Setting the objective and identifying opportunities. A detailed analysis of both local and foreign legislation is carried out, and international agreements are considered, to determine the possibility of using special tax regimes or zones with preferential tax rates.
- Calculating the risks. It is important to first understand the possible losses that may occur under a chosen tax planning scheme. These include financial losses resulting from attracting the attention of public authorities, as well as reputational losses.
- Designing an individual work flowchart. Once all the legislation has been analyzed in detail and all the risks have been taken into account, a scheme is built that fits the current situation.
- Implementing the plan. All actions needed to launch the previously planned schemes are performed, including property restructuring; the establishment of organizations and funds and their integration into a well-thought-out business scheme; the conclusion of new contracts; and so on.
International tax planning tools
As popular tools, it is customary to use:
- private foundation;
- various trusts;
- offshore company;
- as an agency scheme;
- back-to-back credits.
In addition, Kasyanenko & Partners Law Company specialists assist in opening new bank accounts and support businesses that have lost their legal capacity.
Our specialists offer professional assistance with high-quality business optimization in Ukraine and abroad. Using legal methods, our experts minimize a company's monetary expenses and eliminate the risk of fines from regulatory authorities.
What do we offer?
- Providing detailed advice for individuals and legal entities that relate to tax planning on legal grounds.
- Development of competent strategies, restructuring, joint ventures, and jurisdiction with the most appropriate tax regime.
- Representation of the client’s interests in the process of resolving important issues with the tax service on the territory of Ukraine.
- Professional support during the creation of joint organizations and strategic alliances; detailed advice on the transfer of assets and shares; planning and structuring of joint ventures with the help of lawyers.
- Representation of the client’s interests in the process of liquidation of legal entities.
- Comprehensive support during business restructuring, as well as consideration of all possible risks that may negatively affect the interruption of business processes.
- Consultations for business managers aimed at preventing bankruptcy.
- Support for the bankruptcy and reorganization of enterprises undergoing bankruptcy proceedings.
- Assistance and advice in the legal regulation of bankruptcy proceedings.
- Conducting legal expertise that will help confirm compliance with the requirements in the field of foreign investment, as well as when registering foreign investments in Ukraine.
- Advice on issues related to corporate securities.
- Representation of the client’s interests aimed at resolving issues with regulatory authorities and services.
Can Markets Predict the Future?
By election day, the markets, with an average absolute error of around 1.5 percentage points, were considerably more accurate than the Gallup poll projections, which erred by 2.1 percentage points.
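The accuracy comparison above rests on mean absolute forecast error. The sketch below shows the computation; the forecast and outcome figures are invented for illustration and are not the actual Iowa market or Gallup numbers.

```python
def mean_absolute_error(forecasts, outcomes):
    """Average of |forecast - outcome|, in the units of the inputs
    (here, percentage points of vote share)."""
    return sum(abs(f - o) for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented vote-share forecasts vs. actual results (percentage points)
market = [52.3, 47.7, 49.0]
poll   = [54.0, 46.0, 47.5]
actual = [51.0, 49.0, 50.0]
print(mean_absolute_error(market, actual))  # 1.2  -> more accurate
print(mean_absolute_error(poll, actual))    # ~2.8
```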
Prediction markets -- also known as information markets or event futures -- first drew widespread attention in July 2003 when it was revealed that the Pentagon's Defense Advanced Research Projects Agency (DARPA) was establishing a Policy Analysis Market to allow trading in various forms of geopolitical risk, including economic and military scenarios. The objective was to discover whether trading in such contracts could help predict future events. Bowing to a storm of criticism that it was proposing "terrorism futures," DARPA dropped the program. But other prediction markets, dealing with everything from sports and entertainment to elections and finances, have emerged and gained growing interest and participation.
In Prediction Markets (NBER Working Paper No. 10504), authors Justin Wolfers and Eric Zitzewitz describe the types of contracts that might be traded in prediction markets and then survey several applications, with special attention to market design issues. Finally, they assess the predictive value of such markets.
Wolfers and Zitzewitz begin by noting that much of the enthusiasm for prediction markets derives from the efficient markets hypothesis. In a truly efficient prediction market, the market price will be the best predictor of the event, and no combination of polls or other information can be used to improve on the market-generated forecasts. Wolfers and Zitzewitz do not insist that prediction markets are literally perfectly (or fully) efficient; however, they acknowledge that a number of successes in these markets, both within firms and with regard to public events such as presidential elections, have generated substantial interest among both political and financial economists.
In a prediction market, the researchers note, payoffs are tied to unknown future events, and the design of how the payoff is linked to those events determines which of the market's expectations can be elicited. In a "winner-takes-all" contract, for example, the contract costs a specific amount and pays off a specific amount, but only if a specific event occurs, such as a particular candidate winning an election. The price in a winner-takes-all market represents the market's expectation of the probability that the event will occur. By contrast, in an "index" contract, the amount the contract pays varies continuously with a number that rises or falls, like the percentage of the vote received by the candidate. Finally, in "spread" betting, traders bid on the cutoff that determines whether an event occurs, like whether a candidate wins more than a certain percentage of the popular vote.
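As a hedged illustration of the mapping just described, the snippet below shows how the price in each contract type is read as an expectation; all prices and contract scales are invented for illustration.

```python
# Winner-takes-all: a contract paying $1 if the event occurs.
# Its price is the market's implied probability of the event.
wta_price = 0.62
implied_probability = wta_price / 1.00  # 62% chance the candidate wins

# Index: a contract paying $1 per percentage point of vote share.
# Its price is the market's expected value (mean) of that number.
index_price = 51.4
expected_vote_share = index_price  # market expects ~51.4% of the vote

# Spread: traders bid on the cutoff that makes "vote share > cutoff"
# an even-money bet; that cutoff is the market's median expectation.
spread_cutoff = 50.8
median_vote_share = spread_cutoff

print(implied_probability, expected_vote_share, median_vote_share)
```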
The various types of contracts thus reveal the market's expectation of a specific parameter: a probability, a mean, or a median, respectively. But prediction markets can also be used to evaluate uncertainty about these expectations, for example through a family of winner-takes-all contracts that pay off only if the candidate earns 48 percent of the vote, 49 percent, 50 percent, and so on. Such a family of contracts will reveal almost the entire probability distribution of the market's expectations. A family of spread-betting contracts can yield similar insights.
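Extending that idea, here is a sketch of how a family of binary contracts over adjacent vote-share bins can be read as an approximate probability distribution. The bin prices are invented; in a real market, prices on exhaustive, mutually exclusive bins should sum to roughly one dollar.

```python
# Price of a $1 contract paying off if the final vote share lands in each bin
bin_prices = {
    "<48%": 0.10, "48-49%": 0.15, "49-50%": 0.25,
    "50-51%": 0.30, "51-52%": 0.15, ">52%": 0.05,
}
total = sum(bin_prices.values())
distribution = {b: p / total for b, p in bin_prices.items()}  # normalize

for b, p in distribution.items():
    print(f"{b:>7}: {p:4.0%} {'#' * int(p * 40)}")
```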
With these factors in mind, Wolfers and Zitzewitz examine data compiled from analyses of the University of Iowa's Iowa Electronic Market, which has offered trade on presidential election contracts since 1988. Charting the price bids for the past four presidential elections, the data show that as election day drew nearer, the prediction markets' projected candidate vote shares grew more accurate. Prediction markets also appeared better calibrated than independent analysts on the probability of the ouster of Saddam Hussein. The Hollywood Stock Exchange likewise has proved highly accurate in predicting opening weekend box office success and Oscar winners.
Even some prediction markets with very small participation have shown striking results. An internal market at Hewlett-Packard produced more accurate forecasts of printer sales than did the firm's internal processes, and at Siemens an internal market predicted the firm would definitely fail to deliver a software project on time, even as traditional planning tools said the deadline could be met. In each firm, the traders numbered only between 20 and 60 employees.
Wolfers and Zitzewitz maintain that the success of prediction markets, like all markets, depends on their design and implementation. Key design issues include how buyers are matched to sellers; the specification of the contract; whether real money is used (some prediction markets operate for entertainment purposes and use make-believe currency); and the kind of information available to provide a basis for trading. Even when such factors are weighed judiciously, prediction markets are better at pricing some events than others. The markets, like many individuals, are not always well calibrated on small-probability events. In addition, markets on complex events, or events where there is likely to be inside information, often fail to attract sufficient liquidity.
Wolfers and Zitzewitz conclude with cautious optimism. They believe that prediction markets are extremely useful for estimating the market's expectation of certain events. Simple market designs can elicit expected means or probabilities, while more complex markets can elicit variances, and contingency markets can be used to elicit the markets' expectations of covariances and correlations.
Prediction markets have their limitations, the researchers caution, but they may be useful as a supplement to more traditional means of prediction, such as opinion surveys, expert panels, consultants, and committees.
-- Matt Nesvisky
The Honourable Jonathan Wilkinson, P.C., M.P.
Minister of the Environment and Climate Change
House of Commons
Dear Minister Wilkinson,
As a group of dedicated conservation organizations, we are writing to express our growing concern of the fate of Canada’s grasslands, wetlands and other ecosystems due to the stark challenges being felt today by Canada’s beef farmers and ranchers.
Some of the most important habitats remaining in southern Canada are managed and conserved by beef producers. We are watching the challenges faced by Canada’s beef farmers and ranchers due to COVID-19 through a conservation lens, as we know nothing is more detrimental to the preservation of one of Canada’s most at-risk ecosystems, native grasslands, than for the beef industry to relive hard economic times.
As you know, the early 2000s were a difficult economic period for many Canadian ranchers. Animal health issues took centre stage in 2003 when a case of Bovine Spongiform Encephalopathy (BSE) was discovered in a cow in Canada. As a result, international markets closed their borders to Canadian beef almost immediately. The economic consequences for ranching families was devastating, forcing many out of the business.
In addition to the economic impacts, there were negative environmental impacts as well. The BSE crisis led to a rapid and disconcerting acceleration in the loss of our prairie grasslands, wetlands and other critical habitats. Without ranching to provide a sustainable and profitable means of preserving these intact landscapes, they were rapidly converted to other uses, especially annual crop production. Consequently, Canada lost 26,917 ranching operations between 2001-2011 and with them five million acres of grasslands. This is especially concerning since today less than 20% of the grasslands in the Northern Great Plains remain intact.
We recommend that as you analyze the economic losses being endured by Canada’s beef farming and ranching community you also strongly consider the potential environmental impacts including the release of carbon stored in grassland ecosystems, the loss of pollinator habitat, the loss of wetlands, the loss of flood mitigation services and the loss of biological diversity.
Canada’s highly threatened and rapidly diminishing grassland habitats are largely privately owned and managed. Some protected areas do exist, and conservation groups, such as ours, work with ranching families to ensure these critical grasslands continue to exist into the future. Canadian beef producers collectively steward some of the most important habitat we have in Canada. The collateral benefits of grassland stewardship resulting from a healthy beef industry are often overlooked.
Grasslands have long been synonymous with Canada’s prairie provinces. Part of the world’s most endangered terrestrial ecosystem (temperate grasslands), they are the backbone of community culture and the foundation of sustainable ranching economies. More than 60 Species at Risk depend on this habitat and its ongoing management. Species as diverse as Swift Foxes, Poweshiek Skipperling butterflies and Small White Lady’s-slipper orchids depend on Canada’s grasslands, and upon the Canadians who steward them.
Grassland bird species, which have been negatively impacted by declining cattle production and associated grassland loss, are experiencing among the highest avian population declines in Canada and North America. According to the 2019 State of Canada’s Birds Report, grassland birds have declined by 57% since the 1970s and native prairie obligates have declined by a staggering 87%. The primary steward of the remaining grasslands in Canada is the beef industry.
Many in the grasslands conservation community were relieved when the financial viability of Canada’s ranching sector began to rebound. A healthy beef industry is an important conservation partner, and with their support, enables us to conserve what’s left of Canada’s grasslands. As conservationists, we are not oblivious to the individual financial realities of our farmers and ranchers. The Canadian beef industry must be able to compete economically on the agricultural landscape to conserve and restore grassland habitats.
We recognize that conservation groups raising concerns about the viability of farms and ranches may appear unrelated to the work we do. The simple explanation is that the fortunes of cattle, grasslands and wildlife go hand-in-hand in Canada.
During the BSE crisis, we didn't fully appreciate how quickly financially hard times for beef producers would translate into habitat loss. Today, we have a far better understanding of the extent of habitat loss we could face, and we encourage swift action.
We thank you for your consideration of our concerns; should you have further questions about the connection between the beef industry and the conservation of grasslands and other critical habitats, we would be pleased to discuss this matter at your convenience.
Yours in Conservation,
Karla Guyn, CEO, Ducks Unlimited Canada
Steven Price, President, Birds Canada
Kevin Teneycke, Regional Vice President, Manitoba Region, Nature Conservancy Canada
The global community has been going through one of the most challenging moments in modern history. From the west to the east and the north to the south, everyone has been feeling the economic crisis. The nations of the world, both the weak and the mighty, have either been going through a period of recession or are preparing for one. Even superpowers such as the United States have not been spared. The global economic crisis currently facing the whole world can only be compared to that of the 1930s.
However, as in any crisis, there are various factors that have led to this situation. According to Arrington (2008), the government of the United States of America came up with a national policy in the late 1990s to ensure that people who lacked the financial capacity were able to access loans that would enable them to acquire mortgages. As a result, housing prices skyrocketed and housing units became very expensive because of the high demand. However, as time went by it was realized that these people could not service their loans. Therefore, Fannie Mae plunged into a huge debt that could not be covered openly and was instead quietly absorbed by the government.
On the other hand, investment banks are required to keep a certain percentage of capital assets for every loan that they issue. Yet, as the prices of houses fell, there was a decrease in the value of the total capital assets held by the banks. This meant that the banks could only manage to issue fewer loans, which in turn depressed the value of houses and left people who had bought houses in the recent past with debts higher than the value of the houses they had bought. The international banks were also caught in this wave, as they were involved in the securitization of assets. Similarly, various monetary policies adopted between 2000 and 2004 also contributed to the financial crisis in the United States. These monetary policies set in motion rapid growth in asset prices in the United States, thus increasing the number of people who invested in these assets using loans borrowed from banks (Bjørnland and Leitemo).
The sub-prime mortgage crisis is one of the major factors that triggered the current economic crisis in the United States and in the rest of the global economies. The sub-prime crisis can be defined as the crisis that arose when there was an increase in mortgage foreclosures and delinquencies in the United States. As a result, there was pressure on the financial institutions, leading to a weakening of the financial regulation systems. This caused bubbles in the market that led to adverse effects on the financial markets (Stock Market investors).
There are several measures that have been taken by the government to respond to these situations. These measures are meant to slow down the effects of the crisis on the economy and salvage the status of the financial institutions. One of these measures is lending by the government to financial institutions that were affected by this crisis. In addition, programs have been formulated to insure financial institutions against the risk of losing their financial standing in the market. These responses from the government have been able to bring to a halt the effects of the crisis on its financial and mortgage institutions. However, the responses have affected the economy negatively in that there is a slowdown in economic growth and, in some sectors, total stagnation. Similarly, the responses have increased the national debt, as the government has in some cases been forced to borrow to rescue its institutions and guard its investors against the risk of losing their investments (Stock Market investors).
How to Figure Profit Vs. Cost
A business’s profit is the amount of money remaining after the company pays its costs and expenses. Costs are the expenses involved in developing, creating and selling the business’s products and services. The business’s costs have a great impact on the business’s overall profit and, when not controlled, can be the cause of the business’s failure to profit. Figuring the business’s profit versus its costs can help determine if the costs require review and revision.
Use the business’s most recent income statement to quickly identify the total profit and cost amounts. Compute the profit and cost totals manually if an income statement is not available, and to check the accuracy of the statement’s information.
Identify the business’s costs. Add all the expenses incurred as a result of doing business, including development and liability expenses, payroll and administrative expenses, labor costs, loans and selling expenses.
Calculate the business’s net profit by first identifying the business’s gross profit. Compute the cost of goods sold by adding the total amount of beginning inventory to the costs of purchases and labor, and then subtracting the value of the ending inventory from that total. Calculate the gross profit by subtracting the cost of goods sold from the business’s net sales.
Determine the business’s net profit by subtracting the business’s total expenses from its gross profit. Subtract the business’s income tax amounts from the net profit to identify the total net profit after taxes.
Use the profit and cost information to determine if the business is operating efficiently and earning all of its potential income. Use the business’s profit margin ratio to help identify its efficiency. Calculate the profit margin by dividing the business’s net profit by its net sales, which compares the business’s earnings after expenses against its total sales amounts. Examine the business’s expenses, costs and prices carefully if the profit margin results in a low percentage rate, such as below 15 percent, as this means that the business earns very little income with each product sale.
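The calculations above reduce to a few lines of arithmetic. The following sketch pulls the steps together; all figures are hypothetical and are used only to illustrate the standard formulas described above.

```python
# Hypothetical figures for illustration only.
beginning_inventory = 20_000.00
purchases_and_labor = 55_000.00
ending_inventory = 15_000.00
net_sales = 120_000.00
total_expenses = 35_000.00  # payroll, administrative, selling, etc.
income_tax = 4_000.00

# Cost of goods sold = beginning inventory + purchases and labor - ending inventory
cost_of_goods_sold = beginning_inventory + purchases_and_labor - ending_inventory

# Gross profit = net sales - cost of goods sold
gross_profit = net_sales - cost_of_goods_sold

# Net profit = gross profit - total expenses; taxes come out afterwards
net_profit = gross_profit - total_expenses
net_profit_after_taxes = net_profit - income_tax

# Profit margin compares earnings after expenses against total sales
profit_margin = net_profit / net_sales * 100

print(f"Cost of goods sold:     {cost_of_goods_sold:>10,.2f}")
print(f"Gross profit:           {gross_profit:>10,.2f}")
print(f"Net profit after taxes: {net_profit_after_taxes:>10,.2f}")
print(f"Profit margin:          {profit_margin:>9.1f}%")
```

With these inputs the profit margin works out to about 20.8 percent, comfortably above the 15 percent warning level mentioned above.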
Writing professionally since 2004, Charmayne Smith focuses on corporate materials such as training manuals, business plans, grant applications and technical manuals. Smith's articles have appeared in the "Houston Chronicle" and on various websites, drawing on her extensive experience in corporate management and property/casualty insurance.
Process Costing method applies to those industries, where the material has to pass through many processes for converting it into a finished product. This method is used in chemical products, oils, varnishes, soap, paints etc.
Process Costing is another method of Costing employed for ascertaining the costs of goods and services of Processing Industries. This method can be applied in the industries which are mass producing industries producing standard products.
In the case of Processing Industries, raw materials are processed in one or more process to obtain the finished goods or saleable commodity.
The terminology of CIMA defines Process Costing as – "the costing method applicable where goods or services result from a sequence of continuous or repetitive operations or processes. Costs are averaged over the units produced during the period".
- Introduction to Process Costing
- Meaning of Process Costing
- Definitions of Process Costing
- Characteristics of Process Costing
- Features of Process Costing
- Fundamental Principles of Process Costing
- Stages Involved in Processing of a Product
- Examples of Specific Industries where Process Costing can be Applied
- Treatment of Important Items
- Joint Products and By Products
- Steps Involved in Computation of Costs
- Elements of Process Costs and their Accounting Treatments
- Transfer of a Part of Output to Warehouse for Sale
- Classification of Problems
- Industries where Job and Process Costing is Applied
- Difference between Job and Process Costing
- Advantages and Disadvantages
- Merits and Demerits
- Multiple Choice Questions and Answers
What is Process Costing: Meaning, Concept, Characteristics, Features, Principles, Stages, Examples, Treatment, Merits, Demerits, MCQ, Difference and Suitable For…
Process Costing – Introduction and Concept
Process Costing represents a type of cost procedure suitable for continuous and mass production industries producing homogeneous products. In industries suitable for process costing, output consists of like units.
Each unit is processed in the same manner. It is difficult to trace the items of prime cost relating to a particular order, because its identity is lost in continuous production. It is assumed in process costing that average cost presents the most satisfactory cost per unit. Cost of production during a particular period is divided by the number of units produced during that period to arrive at the cost per unit.
In a paint factory, thousands of litres of paint are produced. It is difficult to trace the items of prime cost relating to a particular order for one hundred litres of paint. Under these circumstances, cost of production for a particular period will be taken and it will be divided by total number of litres of paint produced during that period to ascertain the cost per litre of paint.
It is presumed that same amount of material, labour and overhead is chargeable to each litre of paint produced during that period. It is necessary to understand the concept of process. A process is an organisational entity or section of the firm, in which specific and repetitive work is done.
Some of the various other terms used to describe a process are department, cost centre, responsibility centre, function and operation. A process can also be referred to as the sub-unit of an organisation specifically defined tor cost collection.
This sub-unit is concerned with specific operations. In process costing, particular attention is given to –
(a) costs relating to the process, i.e. both direct and indirect cost,
(b) period for which cost for the process is collected,
(c) completed units produced during the period
(d) incomplete units in the process at the end of the period and
(e) determining unit cost of the process for the period.
What is Process Costing – Meaning and Formula for Calculating Unit Cost
Process Costing is another method of Costing employed for ascertaining the costs of goods and services of Processing Industries. This method can be applied in the industries which are mass producing industries producing standard products. In the case of Processing Industries, raw materials are processed in one or more process to obtain the finished goods or saleable commodity.
Further, the finished product of the factory is in the form of identical units which require the same quantum or amount of material, labour and overhead per unit. Process Costing, therefore, aims at ascertaining both the process costs and the costs of units processed in each of the processes (both total and per unit).
More specifically, it aims at ascertaining the average cost of the product which can be achieved by dividing the total process costs by the total number of units produced.
Hence, the simple formula used to compute the unit cost is –
Cost per Unit = Total Process Cost for the Period ÷ Number of Units Produced during the Period
Process Costing – Definitions
Process costing as a method of ascertaining the cost has been defined by different experts and professional institutions in the manner stated below:
According to I.C.M.A., London, Process Costing is, “that form of operating costing which applies where standardised goods are produced”.
Kohler defines Process Costing as – “a method of cost accounting whereby costs are charged to processes or operations and averaged over units produced”.
Process Costing is defined by CIMA, London as that form of operation costing which applies where standardised goods are produced. Wheldon has viewed Process Costing as a method of costing used to ascertain the cost of product at each process, operation or stage of manufacture.
Ronald W. Hilton opined, process costing is used in production process where relatively large number of nearly identical products are manufactured. The purpose is …. to accumulate costs and assign them to units of product.
The terminology of CIMA defines Process Costing as – "the costing method applicable where goods or services result from a sequence of continuous or repetitive operations or processes. Costs are averaged over the units produced during the period".
Like unit costing, Process Costing is also a form of operation costing as distinguished from specific order costing.
In case of unit costing, production of a single product is brought about by setting up a separate plant. In the case of Process Costing, however, production follows a series of sequential processes for either a single product or a limited range of products.
The aim of Process Costing is to determine the total cost of each operation and to apply this cost to the product at each state of process. It will then be possible to ascertain cost per unit for each operation or process and in total.
Main Characteristics of Process Costing
The main characteristics of process costing are:
1. Manufacturing activity is carried on continuously by means of one or more processes that run sequentially, in parallel or selectively.
2. The output of one process becomes the input of another process until the final products are completed.
3. The end product usually is of identical and standardized units not distinguishable from one another.
4. It is not possible to trace the identity of any particular lot of output to any lot of input of materials.
5. Joint and or By-products occur in one or more processes. These joint and by-products may require further processing before they can be sold.
6. At the end of the period, the incomplete units in a process are restated in terms of completed units i.e. equivalent units.
7. Goods may be transferred from one process to another process not at cost price but at a price nearer to market price i.e. transfer price. This policy highlights the inefficiency and losses occurring in a particular process.
Top 5 Features of Process Costing
The various features of process costing are as given below:
1. Production activities are undertaken on a continuous basis. That means, production is not undertaken against the customer’s specific order as in the case of Jobbing Industries and Job Costing.
2. Raw materials are processed, usually, in more than one process sequentially. Hence, the entire manufacturing activities are divided into a number of processes which take the form of cost centres. Separate accounts are kept for each of the processes to arrive at both the element-wise and process-wise costs, both total and per unit.
3. The materials processed in one process are transferred to another process for further processing. That means, the output of one process becomes the input for the next process until the finished product (i.e., saleable commodity) is obtained.
Hence, the product (i.e., output) of a process is called a process product. The Process Product of the last or final manufacturing process in the finished product of the company.
4. The finished product or output comprises of like units and they are not distinguishable from one another. That means, output is uniform and the units are identical. Hence, the products and processes are standardized.
5. Since the output is uniform and the units are identical, the cost per unit is ascertained by dividing the cost of the process (of a period) by the number of units produced during that period.
Further, some processing industries may undertake the production of more than one product simultaneously in one or more processes of production.
Besides, the processing companies may also obtain one or more by-products in addition to the main product. Of course, process losses usually occur in one or more processes as it is unavoidable.
Fundamental Principles of Process Costing
The fundamental principles are as follows:
1. Cost of materials, wages and expenses (both direct and indirect) are accumulated for a period and classified by departments or processes.
2. Adequate production records of output and scrap of each process or department for the period are maintained.
3. Total cost of each process during a period is divided by the number of units produced during that period to get the average cost per unit.
4. The cost of normal spoilage is included in the cost of good units produced. This increases the average cost per unit.
5. As products pass from one process to another, the accumulated cost of output of that process is also transferred to the next process like raw material.
6. If there is WIP at the end of the accounting period, production and inventory are computed in terms of equivalent completed units.
7. If one or more products of small value emerge with the main product of high saleable value during manufacturing, they are called by-products. By-products may or may not be processed further before selling. When two or more main products with high saleable value emerge simultaneously in a process, they are called joint products.
Process Costing – Stages Involved in Processing of a Product (With Examples)
Process costing is used in case of industries, which involve processing of a product through different stages:
(i) Continuous Sequential Processing:
In case of this processing, a product has to pass through different cost centres or stages of manufacturing continuously and in succession one after the other during a period. The processing being continuous and identical, the costing units for each centre or stage are identical during any period.
Examples of this type of processing are cement-making, paper-making, refining of crude petroleum, etc.
(ii) Discontinuous Processing:
In case of this processing, a process is independently operated for the individual product as such at frequent intervals. The costing unit in case of this processing, depending upon the product, may vary even for the same cost centre.
Examples of this type of processing are dye manufacturing, fruit preservation, vegetable canning, yarn spinning, etc.
(iii) Parallel Processing:
In case of this processing, the operations or stages through which the product has to pass run-parallel and separately. All these parallel processes ultimately join with the end process.
Examples of this type of processing are manufacturing different components which ultimately join in the assembly process to make a product, meat packing etc.
(iv) Selective Processing:
In case of this processing, the combination of the processes or stages of operation depends upon the end-product to be commercialised.
Examples of this type of processing are cooked meat and chloride compounds like bleaching powder, zinc chloride or hydrochloric acid.
Process Costing – Examples of Specific Industries where Process Costing can be Applied: Production Industries, Public Utilities, Mining Industries, Chemical Industries and Others
Process Costing can be applied to mass production industries producing standardised goods on a continuous basis.
In this background, the examples of specific industries where Process Costing can be applied are presented below:
1. Production Industries:
i. Cement Industry
ii. Paper Industry
iii. Paints, Ink and varnishing, etc.
iv. Textiles, Weaving, Spinning, etc.
v. Iron and Steel
vi. Ceramics, etc.
2. Public Utilities:
i. Electricity Generation
ii. Water Supply, etc.
3. Mining Industries:
i. Mineral Oil and Refineries
ii. Gas, etc.
4. Chemical Industries:
i. Box making
ii. Distillation process
iii. Biscuit works
iv. Food products
v. Canning factory
vi. Coke works
vii. Meat product factory
viii. Milk dairy.
Process Costing – Treatment of Important Items (With Example of Abnormal Loss and Gain)
Treatment of important items in process costing are given below:
1. Direct Expenses and Indirect Expenses:
(a) Direct expenses
All direct expenses relating to a process like material issued, labour engaged, power used etc., will be debited to the process account.
(b) Indirect expenses
Some indirect expenses are bound to be incurred, such as the manager's salary, rent of the office and departmental expenses where more than one process is carried on. These expenses will be apportioned to all processes on a suitable basis.
Generally cost of material or labour is taken as the basis to allocate indirect expenses in process costing but sometimes a more appropriate method can also be used.
Anything left as a residue after the production processes is called wastage/loss. Wastage is generally of lesser value than the main product. It can be termed a by-product if it can be sold in the market directly or after further processing. Therefore, by-products are those residues of the production process which have a slightly higher value and can be sold in the market directly or after further processing.
Wastage, by contrast, is the residue of lesser value that cannot easily be sold in the market. Sometimes the residue can also be utilized again as a material in the same process or in a previous process.
For instance, there are many by-products in the case of petroleum refineries, such as diesel, petrol, charcoal and kerosene. In the case of fabric processing, the left-over portion of fabric having lesser value is an example of wastage/loss.
The wastage can be treated in cost accounting as follows:
a. Wastage salable in market
If the wastage can be sold in the market without further processing, the price realized from the sale should be credited to the respective process account in which the wastage arises.
b. Wastage with insignificant value
Where wastage of insignificant value occurs in different processes, it is better to sell the wastage of all processes together and credit the amount to the Indirect Expenses A/c or General Works Overhead Account, so that all the processes get their share in the sale.
c. Reprocessable Wastage
If the nature of the wastage is such that it can be reprocessed in same process or in the previous process, then such wastage should be utilized again as raw material.
i. If re-used in same process
If material can be reused in the same process again, the value of the wastage should be the same as that of the material introduced into the process in the beginning.
ii. If reprocessed in previous process
If material can be reprocessed in the previous process, then the value of the material will be that of the material introduced in the previous process. Thus, the wastage will be valued at the material price of the previous process. The material will be transferred to the previous process account, or to the stores ledger account if kept in stores.
There are certain materials which are lost because of inherent features of material or production process, such as – evaporation, dusts, chemical reactions, inefficiencies. The wastage of material resulting from natural or inherent features of material is called loss.
In many industries there is always some losses of raw material in the manufacturing process. Thus, proper record is to be maintained for each loss.
Process losses can be divided into two categories:
a. Normal loss.
b. Abnormal loss.
a. Normal Loss:
Normal loss is a predetermined loss. It is the usual wastage of material resulting because of nature of material or process. This loss cannot be avoided. Such losses may result from the factors like chemical reaction, dust, evaporation etc. It also includes the units withdrawn for testing purposes. The cost of normal loss will be borne by the cost of good units.
Accounting Treatment of Normal Loss:
Cost of normal loss will be absorbed by the good production. So the entire cost of loss will be charged to the cost of good production. Sometimes, there may be physical loss like remaining cut pieces of iron, rubber, copper etc., i.e., resulting in physical wastage. If such wastage can be sold in the market, the sale proceeds of such loss shall be credited to the respective process account.
The remaining loss will be borne by good production as explained below:
It should be observed here that though there is a physical wastage of 50 units as normal loss, since it cannot be sold in the market the entire process cost of Rs. 6,000 will be borne by good production.
When scrap can be sold @ Rs. 2 per unit:
The amount realized from the sale of normal loss, Rs. 100, is credited to Process Account A. The remaining cost of the process, i.e., Rs. 5,900, is borne by good units, resulting in a lower cost of production. A normal loss account can also be prepared.
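The same treatment can be expressed as a short calculation. In the sketch below, the total cost, the normal loss of 50 units and the scrap price of Rs. 2 come from the example above; the number of units introduced is an assumption, since it does not survive in this copy of the example.

```python
# Figures from the example above, except units_introduced, which is assumed.
units_introduced = 1_000        # assumption for illustration
total_process_cost = 6_000.00   # Rs.
normal_loss_units = 50
scrap_price = 2.00              # Rs. per unit

# Scrap value of the normal loss is credited to the process account.
scrap_value = normal_loss_units * scrap_price                 # Rs. 100

# The balance of the process cost is borne entirely by the good units,
# which is how normal loss inflates the cost per good unit.
good_units = units_introduced - normal_loss_units             # 950
cost_per_good_unit = (total_process_cost - scrap_value) / good_units

print(f"Scrap value credited: Rs. {scrap_value:.2f}")
print(f"Cost per good unit:   Rs. {cost_per_good_unit:.4f}")  # Rs. 5,900 / 950
```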
b. Abnormal Loss:
Abnormal loss means loss over and above normal loss. It results from various factors like carelessness of worker, machine break down, strike, accident, defective material or any other external factor. The percentage of abnormal loss cannot be determined in advance.
Accounting Treatment of Abnormal Loss:
The cost of abnormal loss should not be borne by good production; otherwise it will result in fluctuation of the product cost. Therefore, the cost of abnormal loss will be credited to the process account and debited to the abnormal loss account.
The sale proceeds from abnormal loss (if any) will be credited to the abnormal loss account. The abnormal loss account will be closed by transferring the balance amount to costing profit and loss account.
Thus, the cost of good units can be obtained with the help of following formula:
(a) Cost of good production = good units x cost per unit (calculated above)
Where good units = Total units introduced – Normal loss units – Abnormal loss units
(b) Abnormal loss = Abnormal loss units x Cost per unit (calculated above)
(c) Normal loss is always valued at saleable (scrap) value. The entire process is explained with the help of an example –
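The worked example referred to here has not survived in this copy, so the sketch below applies formulas (a)–(c) to hypothetical figures: 1,000 units introduced at a total cost of Rs. 6,000, a normal loss of 10% saleable at Rs. 2 per unit, and an actual output of 880 units.

```python
# Hypothetical figures; the original worked example is not reproduced here.
units_introduced = 1_000
total_cost = 6_000.00         # Rs.
normal_loss_rate = 0.10       # 10% of input is expected to be lost
scrap_price = 2.00            # Rs. per unit of normal loss
actual_output = 880           # good units actually obtained

normal_loss_units = int(units_introduced * normal_loss_rate)    # 100
scrap_value = normal_loss_units * scrap_price                   # Rs. 200

# Cost per unit is based on the expected (normal) output, net of scrap value.
expected_output = units_introduced - normal_loss_units          # 900
cost_per_unit = (total_cost - scrap_value) / expected_output    # Rs. 6.4444

abnormal_loss_units = expected_output - actual_output           # 20

# (a) cost of good production, (b) abnormal loss valued at the same rate,
# (c) normal loss carried at scrap value only.
cost_of_good_production = actual_output * cost_per_unit
value_of_abnormal_loss = abnormal_loss_units * cost_per_unit

print(f"Cost per unit:                  Rs. {cost_per_unit:.4f}")
print(f"Good production:                Rs. {cost_of_good_production:.2f}")
print(f"Abnormal loss (to Costing P&L): Rs. {value_of_abnormal_loss:.2f}")
```

The good production, the abnormal loss and the scrap value of the normal loss together add back to the Rs. 6,000 debited to the process, which is a useful arithmetic check.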
3. Abnormal Gain/ Effectives:
If actual good production is more than the expected good production, it will result in an abnormal gain. Since normal loss can be ascertained in advance, the expected output (i.e., total input less normal loss) can also be ascertained.
But actual output may not always match the expected output. If actual output exceeds expected output, it is called abnormal gain/effectives. If actual output is less than expected output, it is called abnormal loss.
Accounting Treatment of Abnormal Gain:
Abnormal gain is valued in the same way as the abnormal loss. It means it is valued at the same cost of good units. The abnormal gain is debited to respective process account and credited to abnormal gain account. The balance of abnormal gain account (after adjusting normal loss) will be transferred to the costing P & L A/c
Taking information from previous example 1, total units transferred to the next process are 950 units. Prepare process A account and other accounts.
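Since "example 1" is not reproduced in this copy, the sketch below assumes the same hypothetical inputs as the abnormal loss sketch above (1,000 units, Rs. 6,000 total cost, 10% normal loss, scrap at Rs. 2 per unit); only the 950 transferred units come from the text.

```python
# Hypothetical inputs consistent with the abnormal loss sketch above;
# only the 950 transferred units are taken from the text.
units_introduced = 1_000
total_cost = 6_000.00
normal_loss_units = 100       # 10% of input
scrap_price = 2.00
actual_output = 950           # units transferred to the next process

expected_output = units_introduced - normal_loss_units            # 900
cost_per_unit = (total_cost - normal_loss_units * scrap_price) / expected_output

# Abnormal gain is valued exactly like good units and debited to the process.
abnormal_gain_units = actual_output - expected_output             # 50
value_of_abnormal_gain = abnormal_gain_units * cost_per_unit      # Rs. 322.22

# The gain reduces the scrap actually available for sale, so the lost
# scrap income is debited to the Abnormal Gain Account via Normal Loss.
lost_scrap_income = abnormal_gain_units * scrap_price             # Rs. 100
net_to_costing_pnl = value_of_abnormal_gain - lost_scrap_income   # Rs. 222.22

print(f"Abnormal gain: {abnormal_gain_units} units, Rs. {value_of_abnormal_gain:.2f}")
print(f"Net credit to Costing P&L: Rs. {net_to_costing_pnl:.2f}")
```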
4. Process Accounts when there is Work in Progress (WIP):
Process costing deals with continuous production industries; therefore it is quite possible that some input may remain incomplete at the end of the accounting period. That incomplete material or input is called work in progress (WIP). Valuation of semi-finished material (WIP) presents difficulty because these units are in various stages of production or stages of completion.
WIP may range from material on which work has just started to units which are about to be completed. There may be some material which is 100% complete in respect of material and labour but only 50% complete in respect of overheads. Therefore, it is difficult to value work in progress.
Equivalent Production means conversion of semi-finished material (WIP) into completed units of production. For instance, if there are 100 units of WIP, and these are estimated to be 60% complete, then equivalent production will be 100 units x 60% = 60 units. These 60 units will be taken as equivalent production and a value can be ascertained for them in process costing.
Thus, equivalent production of any WIP can be found by using following formula:
Equivalent units = Total units x Degree of Completion (%)
It should be remembered that degree of completion of WIP plays an important role. It has to be carefully determined otherwise it may lead to wrong results.
Valuation of Equivalent Production:
After conversion of WIP into equivalent units, three statements need to be prepared for valuation of equivalent units.
These statements are:
(i) Statement of equivalent production
(ii) Statement of cost per unit
(iii) Statement of total cost.
(i) Statement of Equivalent Production:
In this statement, equivalent production units for each element of cost like labour, material and overhead will be ascertained.
Equivalent Production units not only include converted units from WIP but also include finished output from current production as shown below:
In a particular process, 3000 units were introduced, of which 2000 were finished. The 1000 units remaining in WIP are 100% complete in respect of material, and 50% and 40% complete in respect of labour and overhead respectively. Prepare the equivalent production statement –
Therefore, Equivalent production units for various elements of cost are:
Material = 3000 units
Labour = 2500 units
Overhead = 2400 units.
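The same statement can be produced with a few lines of code; the figures below are those of the example above.

```python
# Figures from the example above.
finished_units = 2_000
wip_units = 1_000
degree_of_completion = {"material": 1.00, "labour": 0.50, "overhead": 0.40}

# Equivalent units = finished units + (WIP units x degree of completion)
equivalent_units = {
    element: finished_units + int(wip_units * degree)
    for element, degree in degree_of_completion.items()
}
print(equivalent_units)  # {'material': 3000, 'labour': 2500, 'overhead': 2400}
```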
(ii) Statement of Cost per Unit:
In this statement we ascertain the cost per unit of each element of cost, i.e., material, labour and overhead. Per unit cost can be found by dividing the total cost of each element by the number of its equivalent units, as given below –
For example, if the total costs of material, labour and overhead are Rs. 9,000, Rs. 5,000 and Rs. 2,000 respectively, the per unit cost can be found easily. Equivalent units can be taken from the previous example.
(iii) Statement of Total Cost:
In this statement, total cost of each head like finished goods, WIP, normal loss, abnormal loss etc., can be found out as given below –
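Continuing the same example, the sketch below prepares the statement of cost per unit and then values the finished output and the closing WIP; everything follows from the figures already given above.

```python
# Figures from the running example.
costs = {"material": 9_000.00, "labour": 5_000.00, "overhead": 2_000.00}
equivalent_units = {"material": 3_000, "labour": 2_500, "overhead": 2_400}
finished_units = 2_000
wip_equivalent_units = {"material": 1_000, "labour": 500, "overhead": 400}

# Statement of cost per unit: element cost / equivalent units of that element.
cost_per_unit = {e: costs[e] / equivalent_units[e] for e in costs}

# Statement of total cost: value the finished output and the closing WIP.
value_of_finished_goods = finished_units * sum(cost_per_unit.values())
value_of_closing_wip = sum(wip_equivalent_units[e] * cost_per_unit[e] for e in costs)

print({e: round(c, 4) for e, c in cost_per_unit.items()})
print(f"Finished goods: Rs. {value_of_finished_goods:,.2f}")   # Rs. 11,666.67
print(f"Closing WIP:    Rs. {value_of_closing_wip:,.2f}")      # Rs. 4,333.33
```

The two values together account for the full Rs. 16,000 of cost incurred, which serves as a check on the statements.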
5. Equivalent Production and Valuation of Normal Loss:
We know that normal loss is incurred in various production processes. Equivalent units of normal loss will always be taken as NIL.
It means normal loss do not form part of equivalent production, therefore value of normal loss to be taken to process account will always be zero. The realizable value of normal loss (i.e., scrap sale) will be deducted from the material cost to calculate net material cost incurred.
Therefore, following points should be remembered:
i. Abnormal Loss:
Abnormal loss is over and above the normal loss. Abnormal loss units are valued in the same way as good production (finished output); in other words, abnormal loss is a loss of good production. If the degree of completion of abnormal loss in respect of material, labour and overhead is not given, it is always taken as 100% complete.
Abnormal loss will be valued in the same way as good production i.e., at total cost of production.
ii. Abnormal Gain:
Abnormal gain occurs when actual production is more than the estimated production. Since it is part of good production, it is taken as 100% complete in respect of the degree of completion for material, labour and overheads. The value of abnormal gain will be obtained in the same way, i.e., at the per unit cost of good units.
However, abnormal gain is deducted from total units to obtain equivalent production. The procedure to calculate the value of normal loss, abnormal loss and WIP is explained with the help of an example.
Evaluation of process cost where there is both opening and closing WIP – when both opening and closing WIP are given, the output completed and transferred to the next process may be valued according to either the FIFO or the average method.
iii. FIFO Method:
Under this method, it is assumed that opening stock of work in progress (WIP) is completed first before taking up new units introduced in the process. Therefore whatever remains unfinished is out of newly introduced units. The FIFO method is advisable when the prices of material are relatively stable. Use of this method will result in valuing the closing WIP at current costs.
iv. Average Method:
The average method assumes that opening WIP is not completed first; instead, work is carried on on all units (opening as well as those introduced during the period). Thus, closing work in progress may not necessarily be part of the recently introduced units, as it is under the FIFO method.
Under this method, the respective element of cost (material, labour & overhead) of opening WIP is added to total cost incurred during the period for that element of cost as shown below –
Thus, cost to be used for calculation of per unit cost will be obtained by adding cost of opening WIP and current period cost.
Under this method units completed and transferred as well as closing WIP will be valued at the average cost.
Equivalent Production will include entire units completed and transferred and equivalent units of closing WIP.
Alternate Statement of Equivalent Production:
Degree of completion of opening WIP is irrelevant here, thus statement of equivalent production can be prepared as follows:
It means the degree of completion of opening WIP may be ignored altogether, as we do not need to value the work already done on opening WIP separately in the average cost method.
When to Use FIFO method?
Students are advised to use FIFO method when opening WIP is given in lump sum along with stages of completion of different elements of cost as given below –
When to Use Average Method?
Students are advised to use average method when stages of completion of different elements of cost are not given. The opening WIP is given in terms of material, labour and overheads instead of lump sum cost.
Use any FIFO or Average:
If both degree of completion of different elements of cost and their respective cost are given, then students are free to use any method.
Average Cost Method:
In this method, the cost of opening work-in-progress is not kept separate but is averaged with the additional costs incurred during the period. This method thus, combines the cost of opening work- in-progress and new production. Information relating to degree of completion of opening WIP is not required.
In order to find out the cost per unit of equivalent production, the cost of each element (material, labour and overheads) applicable to the opening work-in-progress is added to the cost incurred in the current period for that element. A single cumulative total and unit cost is obtained. Units completed and transferred as well as closing work-in-progress will be valued at this average unit cost.
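A minimal sketch of the averaging step, with hypothetical figures for a single element of cost (material); the same computation is repeated for labour and overheads.

```python
# Hypothetical figures for one element of cost (material).
opening_wip_material_cost = 1_200.00  # Rs., brought forward from last period
current_material_cost = 7_800.00      # Rs., incurred during the period
equivalent_units_material = 3_000     # completed units + closing WIP equivalents

# Average method: pool the opening WIP cost with the period cost for the
# element, then divide by that element's equivalent units.
average_cost_per_unit = (
    opening_wip_material_cost + current_material_cost
) / equivalent_units_material

print(f"Average material cost per unit: Rs. {average_cost_per_unit:.2f}")  # Rs. 3.00
```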
Process Costing – Joint Products and By Products (With Accounting Treatment)
Sometimes, two or more products are produced simultaneously from a common process. If a manufacturing process using the same inputs produces two or more products, they may be by-products, joint products, or a combination of major products and by-products.
The classification of the products into joint-products and by-products depends upon relative importance of the products, objectives and policies of the management etc. In the case of edible oil, which is the main product, oil cake emerges; petroleum industry gives several joint-products like gasoline, kerosene, fuel oil etc.
When two or more products of equal importance are simultaneously produced from the same raw materials, such products are generally known as joint-products. In the case of dairy industry skimmed milk, butter-cream, butter milk etc., are joint-products.
The distinction between by-product and joint-product is a matter of degree of importance of the products. It is difficult to draw a distinction, but from the accounting point of view, it is necessary.
If the products are of equal commercial importance then they can be called as joint-products and if the products are not of the same importance, then the products of lesser commercial importance are known as by-products.
The by-products are of secondary importance in terms of relative sales revenue. Generally, no additional expense is necessary on by-products, but additional expense is needed on joint-products before sale.
Joint products are the products which are jointly produced, having equal economic importance from the same or basic raw materials, possessing comparable value. For example, petroleum, diesel oil, paraffin etc., are joint-products arising from processing crude oil.
The joint-products have the following characteristics:
(a) The products are the simultaneous outcome of the joint process and from the same raw materials.
(b) The products have equal commercial value.
(c) The joint-products cannot be identified as separate products up to a certain stage in manufacturing. This stage is known as split off point. Cost prior to split off stage is known as joint-cost and cost after this split off stage is known as subsequent cost.
The cost before the separation stage has to be distributed to each product.
The accounting methods employed in costing joint-products are:
(a) Average Unit Cost Method:
In this method, it is assumed that the total cost of the process is borne by all units equally. The total process cost of pre-separation is divided by the total units produced to get the average cost per unit of production. This method is applicable where processes are common and inseparable from products and expressed in common units, i.e., weight or volume applicable to all products.
(b) Physical Units Method:
In this method the joint costs are apportioned on the basis of some physical units (raw materials) i.e., in meters, tonnes etc. Physical units are the units in which the basic raw materials are measured and are determinable at the point of separation of the joint-products. This method cannot be applied when one product is gas and another, a liquid.
(c) Survey Method (Points Value Method):
This method is adopted after a technical survey of all factors involved in the production and distribution of products. Percentage or points value is assigned to each product to denote its relative importance and common costs are apportioned on the basis of total points.
(d) Market Value Method:
In this, the joint costs are apportioned on the basis of the proportion of market price of the products. Thus, products having higher price are charged with a higher portion of the joint-costs and products having lesser price get lesser share of the joint-costs.
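A short sketch of the market value method with hypothetical figures: a pre-separation joint cost of Rs. 12,000 apportioned over three products in proportion to their market (sales) value.

```python
# Hypothetical figures for illustration.
joint_cost = 12_000.00  # Rs., cost incurred up to the split-off point
market_values = {"Product A": 10_000.00, "Product B": 6_000.00, "Product C": 4_000.00}

total_market_value = sum(market_values.values())  # Rs. 20,000

# Each product bears joint cost in proportion to its market value, so
# higher-priced products carry a larger share of the joint cost.
apportioned_cost = {
    product: joint_cost * value / total_market_value
    for product, value in market_values.items()
}
print(apportioned_cost)  # {'Product A': 6000.0, 'Product B': 3600.0, 'Product C': 2400.0}
```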
By-products can be of two types—certain products can be sold in their original condition and certain products need further processing after separation. By-products are produced along with main products and the same area of comparatively less value.
In accounts, they are treated in any one of the following ways:
i. Non-Cost Methods (Sale Value Methods):
(a) Other Income Method
The value realised by the sale of by-products is treated as other or miscellaneous income because of negligible value. The stock of by-products is valued at zero value for Balance Sheet purposes.
(b) Crediting Sales Value to the Process Account
Under this method the value of by-product is credited to the process account, so that the cost of the main product is reduced. For Balance Sheet purposes, the unsold stock of by-product carries zero value.
(c) Credit to Sales Value less Selling & Distribution Expenses
In certain cases, by-products need selling and distribution expenses, and these expenses are deducted from the sale-value. The net amount is credited to the Process account.
(d) Crediting Actual Cost to the Process
In case the by-products need further processing before sale, the amount is ascertained and deducted from the sale-value. The net amount is credited to the process account.
ii. Cost Methods:
(a) Replacement Cost
Under this method, the by-products are utilised in the same industry as raw materials and valued at the market price; the process account is credited with this value.
(b) Standard Price
In this method, the by-products are valued at standard cost (predetermined cost) and credit is given to the process account.
(c) Apportionment on Suitable Basis
Where by-products are prominent, they will be treated as joint products and as such joint-cost is to be apportioned.
Process Costing – Steps Involved in Computation of Costs
As most of the items of costs can easily be identified with, and charged to, respective process as direct costs, computation of costs under Process Costing is comparatively easier when compared to Job Costing. However, a few items of overhead expenses require the apportionment.
In this background, the costing procedure under Process Costing is summarised below in the form of steps involved in the computation of costs:
1. The production activities required to be performed in the production of a product are divided into a number of processes.
2. A separate account called, Process Account is opened and maintained for each of the processes.
Cost of materials, wages and other expenses are charged or debited to the Process Account concerned. As the total costs attributable to the process comprise both direct costs and indirect costs, cost allocation and apportionment principles and procedures are followed.
4. Cost per unit of (finished) output of each process (i.e., Process Product) is computed by dividing the total cost incurred (for or) in the process during a period by the number of units of output produced in that process during that period.
5. As the output of a process is in the form of a semi-finished product requiring further processing, the output of one process is transferred to another (next) process. Therefore, the output of one process becomes the input for another (next) process.
While transferring the output from one process, its (accumulated) costs are also transferred to the next process.
This inter-process transfer continues till the (finished) output of the last process is obtained. This is the finished product of the company. Usually, the output of the last process is transferred to Finished Goods Account.
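The cumulative transfer described in steps 4 and 5 can be sketched as below; the three processes and all cost figures are hypothetical, and process losses are ignored for simplicity.

```python
# Hypothetical element costs (Rs.) for three sequential processes.
processes = [
    {"name": "Process I",   "materials": 4_000.00, "wages": 2_000.00, "overheads": 1_000.00},
    {"name": "Process II",  "materials": 1_500.00, "wages": 1_800.00, "overheads": 900.00},
    {"name": "Process III", "materials": 500.00,   "wages": 1_200.00, "overheads": 600.00},
]
units_produced = 1_000  # no process loss assumed, for simplicity

transferred_in = 0.00   # cost received from the previous process
for p in processes:
    # Each Process Account is debited with the transferred-in cost
    # plus the materials, wages and overheads of the process itself.
    total_cost = transferred_in + p["materials"] + p["wages"] + p["overheads"]
    print(f'{p["name"]}: total Rs. {total_cost:,.2f}, '
          f'per unit Rs. {total_cost / units_produced:.2f}')
    transferred_in = total_cost  # output and its cost move to the next process

# The total of the last process is transferred to the Finished Goods Account.
```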
Process Costing – Elements of Process Costs and their Accounting Treatments: Direct Material Cost, Direct Labour Cost, Direct Expenses and Overhead Expenses
The important elements of process costs and their accounting treatments are identified and discussed below:
1. Direct Material Cost:
Raw materials required for processing are drawn from the Stores Department by the concerned process by sending a Material Requisition Note. When a process uses less than the materials received from the Stores Department, the person in charge of the process informs the Stores Department about the quantity of raw materials consumed during the period and the quantity of raw materials left unused at the end of the period.
Based on the raw materials consumed during a period and issue prices, cost of raw materials consumed is ascertained and this is charged to the Process Account. That means, cost of raw materials consumed is debited to the concerned Process Account.
When the production of a product involves more than one manufacturing process, one can find the sequential processing of materials to obtain the finished product. In this type of situation, the output of one process becomes the input for the next process until the finished product is obtained.
While transferring the output of one process (Transferor Process) to the next process (Transferee Process), costs accumulated (i.e., not only material cost but also the conversion costs) in the first process (i.e., transferor process) are also transferred to the next process (i.e., to the transferee process).
This cost is debited to the next process (i.e., to transferee process) wherein some more or new materials are consumed. As far as the recording of these extra or new material costs are concerned, the same procedure is followed.
2. Direct Labour Cost:
Since the entire manufacturing activity is divided into a few and distinct processes, and since the employees including supervisors are posted on permanent basis to the processes, it is possible to prepare the process-wise pay-roll. Hence, the wages and salaries of employees and supervisors can be easily identified with the process concerned.
Therefore, the wage bill of all the employees working in a manufacturing process is charged and debited to the concerned Process Account.
In the case of wages and salaries of employees and/or supervisors who are assigned to work in more than one process, the same is apportioned among the beneficiary processes on the basis of the time bookings. Idle time, if any, may be accounted separately for the purpose of control.
3. Direct Expenses:
Besides the Direct Material Cost and Direct Labour Cost, many a number of items of expenses (which are charged under Job Costing as Overhead Expenses) can be identified with the processes.
Depreciation, insurance, power charges, repairs and maintenance, etc., are examples of expenses which can be directly charged to the process concerned. These expenses are, therefore, debited to the Process Account (for which these expenses were incurred).
4. Overhead Expenses:
All other manufacturing expenses which are incurred for the benefit of works in more than one manufacturing process are collected under separate Standing Order Numbers and apportioned to all the beneficiary processes on equitable or suitable basis.
Overhead expenses are normally recovered on the basis of the Predetermined Overhead Absorption Rates. The share of each process is debited to the concerned Process Account. Any difference between the overhead expenses incurred and absorbed is balanced through Overhead Adjustment Account.
Process Costing – Transfer of a Part of Output to Warehouse for Sale (With Treatment, Illustration and Solution)
In process costing, at the time of the completion of each process, there is a finished product. It may be sold directly in the market (without further processing) or transferred to the next process as its raw material.
Sometimes, a manufacturer may transfer a part of production of a process to the next process for further processing while the remaining part may be transferred to warehouse for sale.
In such a case, both the cost of output transferred to next process and the cost of output transferred to warehouse for sale are shown on the credit side of the concerned Process Account.
The treatment can be shown by means of the following illustration –
The product of a manufacturing concern passes through three processes. Details of costs and production during March, 2007 were as follows –
In each process 6% of total weight is lost and 8% is scrap. You are required to prepare Process Account showing the cost per ton of each process.
Process Costing – Classification of Problems (With Formula)
Process Cost Accounts are prepared by following the procedure enumerated. However, the general and simple procedure needs slight modification and improvement depending upon the nature of the problems. For the purpose of easy understanding, the problems are classified into a few groups as presented below.
The analysis is made below taking the problems of different categories one after another:
1. Process Costing when there is no Process Loss:
When there is no loss or gain in the processing operation of a product, ascertainment of average cost does not pose any difficulty. Direct material cost, direct labour cost, direct expenses and the apportioned overhead expenses are debited to concerned Process Account. The total process costs (accumulated) are transferred to the next process along with its output for further processing.
As is known, the output of the first process becomes the input for the second process and the output of the second process becomes the input for the third process and so on ….
This process of transfer continues till the completion of the work in the final manufacturing process from which its output is transferred to Finished Stock Account. At the end of the accounting period, the unit cost is computed for each process by dividing its costs by its output.
2. Process Costing when there is Process Loss and/or Gain:
Absence of process loss implies that there is no difference between the number of units of raw material introduced into the manufacturing process and the number of units of finished product obtained. For instance, by introducing 1,000 kgs of raw material into the manufacturing process, if the company obtains 1,000 kgs of finished product, then there is no process loss. Because, Input 1,000 kgs = Output 1,000 kgs. But, in reality, it is very rare to find this type of situation.
Because, some loss of materials is bound to take place in the manufacturing process. The difference between the input and the output, therefore, represents the Process Loss.
If the loss is inherent to the manufacturing process, and if it is inevitable and within the limit, it is called Normal Process Loss. This type of loss cannot usually be avoided due to the nature of material and/or process.
Loss arising out of chemical reaction, evaporation, shrinkage, cutting, etc., are examples of this type of loss. On the basis of nature and type of raw materials used, nature of processing operations involved, technical aspects, etc., it is possible to anticipate the extent of Normal Loss.
Since this loss is natural and inherent to the nature of manufacturing process, the cost of Normal Process Loss shall be borne or absorbed by good units produced in the process. The normal practice is to charge no portion of process cost to normal loss. That means, the entire process cost which includes even the cost of Normal Process Loss is borne by the good units.
Hence, no separate treatment is necessary in the Process Account except entering the quantity of normal loss in quantity column on the right hand side (i.e., credit side) of the concerned Process Account. Of course, the presence of Abnormal Gain necessitates the maintenance of separate account.
However, if the Normal Loss can be disposed of for some price, then the realisable value from the sale of Normal Process Loss is credited to the concerned Process Account (i.e., recorded against the quantity of Normal Process Loss). In this type of situation, only the difference between the Cost of Normal Process Loss and its Realisable Value is borne by the good units.
If the loss is caused by unexpected or abnormal factors such as – fire, machine break-down, negligence, inefficiency of the managerial personnel, wrong designs, substandard materials, etc., it is called Abnormal Process Loss.
This process loss is both avoidable and controllable through proper planning and management. From accounting point of view, this loss (i.e., Abnormal Process Loss) represents the excess of actual process loss over the anticipated normal process loss. On the other hand, if the actual loss is lower than the anticipated or expected normal process loss, then there arises Abnormal Gain.
Since the Abnormal Loss arises on account of inefficient operations and due to the factors which are controllable and avoidable, it is not fair to charge the Cost of Abnormal Loss to good units. Hence, the treatment for Abnormal Process Loss in Cost Accounts differs from that for Normal Process Loss.
The following procedure is, therefore, followed to account for the Abnormal Process Loss:
i. Abnormal Process Loss is valued just like good units and transferred to a separate account called, Abnormal Loss Account,
ii. Since the Abnormal Loss is valued at the same rate at which the good units are valued, the following formula is used to ascertain the cost (or value) of Abnormal Loss –
Value of Abnormal Loss = [(Total Process Cost – Scrap Value of Normal Loss) ÷ (Units Introduced – Normal Loss Units)] × Units of Abnormal Loss
iii. The cost (or value) and units of Abnormal Loss so computed are credited to the concerned Process Account debiting the Abnormal Process Loss Account, and
iv. Abnormal Loss Account is closed by transferring the balance to Costing Profit and Loss Account (i.e., by debiting Costing Profit and Loss Account, and crediting the Abnormal Loss Account).
However, if the Abnormal Loss is in the form of scrap having some realisable value, the amount realised from the sale of Abnormal Loss is credited to the Abnormal Loss Account.
In this type of situation, the Abnormal Loss Account is closed by transferring only the balance (in the Abnormal Loss Account) to Costing Profit and Loss Account (i.e., by debiting Costing Profit and Loss Account, and crediting Abnormal Loss Account for the balance amount).
If the actual loss is lower than the anticipated Normal Process Loss, it gives rise to Abnormal Gain or Abnormal Effectives. The value of Abnormal Gain is computed using the same procedure as used for Abnormal Loss.
The Cost (or Value) of Abnormal Gain is not used for reducing the process cost. Instead, it (i.e., value along with quantity) is debited to concerned Process Account and crediting the Abnormal Gain Account.
The realisable value which would have otherwise been realised (if there was Normal Loss and no Abnormal Gain) is debited to the Abnormal Gain Account. To put it alternatively, the loss of income is debited to Abnormal Gain Account and credited to Normal Loss Account.
Abnormal Gain Account is closed by transferring the balance in the Abnormal Gain Account to the Costing Profit and Loss Account (i.e., by crediting Costing Profit and Loss Account, and debiting Abnormal Gain Account for the balance).
3. Inter-Process Profits:
As is known, the manufacturing activities are classified into a number of manufacturing processes and these processes take the form of cost centres. In some processing industrial enterprises, output from one process to another is transferred (not at cost but) at market price or at cost plus a percentage for profit.
The price at which the output of one process is transferred to another is called Transfer Price. The difference between the transfer price and the cost, therefore, represents the profit.
This profit is called Inter-process Profit as the profit is made by the transfer of output from one process to another. When the inter-departmental transfers are effected at cost plus, the processes take the form of Profit Centres.
This facilitates the company to evaluate the performance of each of the processes not only from the view point of cost effectiveness and/or economies but also by using profit based yardsticks.
Consequently, each process is expected to work more efficiently, economically, effectively and profitably to contribute its share to the overall profit, profitability, etc., of the company.
However, this system gives rise to a number of problems. One important problem relating to accounting is the valuation of closing stock of output in each process. Because, for financial statements purpose, the closing stock should be valued at lower cost or market value. But under this system of inter-process transfers at cost plus, the value of closing stock includes not only the cost element but also the profit element.
It is, therefore, necessary to eliminate the inter-process profit included in the value of closing stocks. Alternatively, cost of closing stock may be computed. At this stage, it may be of some interest to note that if the amount of profit element contained in the value of closing stock is higher than that in the value of opening stock, then the profit is overstated and vice-versa.
To compute the cost of closing stock, the following formula may be used:

Cost of closing stock = (Cost-column total ÷ Total-column total) × Value of closing stock
With a view to ascertain the profit element included in the value of closing stock and to arrive at net realized profit for the period, three amount columns in each side of the Process Account are provided.
Further, the closing stock is shown as deduction from the debit side of the Process Account (i.e., deduction either from the prime cost or from the total manufacturing cost depending upon the valuation basis) instead of showing it on the credit side of the Process Account.
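As a rough illustration of the elimination, the sketch below applies the cost-to-total ratio from the debit side of such a three-column Process Account to the closing stock; the figures are hypothetical.

```python
def cost_of_closing_stock(cost_column_total, total_column_total, stock_value):
    """Strip the inter-process profit out of a closing-stock valuation,
    assuming stock carries cost and profit in the same ratio as the
    debit side of the Process Account."""
    cost = stock_value * cost_column_total / total_column_total
    return cost, stock_value - cost  # (cost element, unrealised profit)

# Hypothetical: cost column Rs. 40,000, total column Rs. 50,000,
# closing stock valued at Rs. 5,000.
cost, unrealised_profit = cost_of_closing_stock(40_000, 50_000, 5_000)
print(cost, unrealised_profit)  # 4000.0 1000.0
```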
4. Equivalent Production:
One of the features of processing industries is the continuous production of goods and services. That means, production activities are undertaken by the enterprises on a continuous basis.
Consequently, one can find, at the end of the period, units on which the manufacturing work is not complete. These are therefore called work-in-progress. As a result, one can find work-in-progress in one or more processes either at the beginning of the accounting period, at the end of the accounting period, or both.
The presence of work-in-progress either at the beginning or at the end of the period, or both, poses an accounting problem: the valuation of period-end inventory and the ascertainment of cost per unit of output. This problem is solved by expressing the work-in-progress, or incomplete units, in terms of complete units called equivalent units of production.
Equivalent production, which is also called equivalent units, effective production units or equivalent performance units, refers to a systematic procedure of expressing the output of a process (whether the work is completed or not) in terms of completed units. It therefore refers to the conversion of incomplete production into its equivalent of completed units. For instance, 200 units that are 50% complete represent the equivalent of 100 completed units.
Computation of equivalent production is necessary for determining the cost of finished goods and that of closing work-in-progress.
The procedure for computing the cost of finished goods and closing work-in-progress involves the preparation of three important statements viz.:
i. Statement of equivalent production,
ii. Process cost sheet, and
iii. Statement of evaluation of finished goods and closing work-in-progress.
i. Statement of Equivalent Production:
This statement aims at finding out the equivalent production. In order to compute this, it is necessary to consider the degree of completion of work on opening and/ or closing work-in-progress besides the units completed and transferred.
Besides, the following points should also be kept in mind while preparing the equivalent production statement:
a. As is known, there is usually some loss in the manufacturing processes. These losses are treated in the manner already explained: normal process losses are ignored and therefore not considered while computing the equivalent production.
Abnormal losses and gains are reckoned as good units for computing equivalent production after taking into account the degree of completion. As far as the degree of completion of work on abnormal loss is concerned, it is necessary to take the actual percentage, if given.
Otherwise, it may be assumed that 100% work, in all respects, is complete and the abnormal loss units are rejected at the end of the final manufacturing process.
However, in the case of abnormal gain (which is subtracted for arriving at equivalent production), 100% completion of work is always assumed as the gain usually denotes the finished production.
b. As far as the opening and closing work-in-progress are concerned, degree of completion of work usually differs from one element of cost to another (viz., material cost, labour cost and overhead expenses).
In this type of situation, it is necessary to apply the principle of equivalent production for material, labour and overhead expenses separately and this is called elemental equivalent production. This is necessary to compute the costs.
ii. Process Cost Sheet or Cost Statement:
This statement is prepared by considering the costs of opening work-in-progress and the costs incurred during the current period. Further, the method of valuing the transfers (such as FIFO, Average Cost Method) should also be considered.
When the normal process loss has some realizable value, it should be deducted from the material cost. The net material cost shall be divided by equivalent production in respect of material to obtain the material cost per unit of equivalent production.
Further, when a process receives transfers from other processes for further processing, material costs must be computed in two parts: one based on the transfers received from earlier processes, and the other based on materials introduced directly into the process. On the basis of the amounts of the different elements of cost and the elemental equivalent production, the elemental cost per unit is computed.
iii. Statement of Evaluation:
On the basis of equivalent production of different categories (such as – finished production, closing work-in-progress, abnormal loss or gain) and the elemental unit costs, costs of different categories of output are computed.
However, when the Average Cost Method is used, a slightly different method is followed as discussed below:
a. Under this method, degree of completion of work on opening work-in-progress is immaterial. Further, it is not shown separately while computing equivalent production,
b. Costs of opening work-in-progress analyzed into different elements of cost are added to the respective elements of costs incurred during the current period in the same process.
Besides, the cost of transfers from the previous process is also added to the material cost of the current process. Using these element-wise costs and elemental equivalent production, cost per unit of equivalent production is computed, and
c. Both the closing work-in-progress and the units completed and transferred to Finished Stock Account are valued at the unit costs computed in step (b) above.
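The sketch below pulls the three statements together for a single cost element under FIFO. It is a deliberately simplified illustration with hypothetical figures, ignoring process losses and multiple cost elements.

```python
# i. Statement of equivalent production (FIFO: only current-period work)
opening_wip_units, opening_done = 200, 0.40   # 40% complete at the start
completed_units = 900                          # transferred out this period
closing_wip_units, closing_done = 300, 0.50    # 50% complete at the end
period_cost = 9_700                            # current-period cost (one element)

equivalent_units = (
    opening_wip_units * (1 - opening_done)     # finishing opening WIP
    + (completed_units - opening_wip_units)    # started and completed
    + closing_wip_units * closing_done         # work done on closing WIP
)

# ii. Process cost sheet: cost per equivalent unit
cost_per_unit = period_cost / equivalent_units

# iii. Statement of evaluation: value the closing WIP
closing_wip_value = closing_wip_units * closing_done * cost_per_unit
print(equivalent_units, cost_per_unit, closing_wip_value)  # 970.0 10.0 1500.0
```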
Process Costing – Industries where Job and Process Costing is Applied (With Examples)
In a factory, both job costing and process costing methods may be applied simultaneously for different departments.
For example, in food processing industry job costing method may be applied for department mixing different ingredients by batches and process costing may be applied for manufacturing and packing departments which use the mixture and pack the end products.
Thus, it depends on the nature of the final product and processing methods of intermediate products. Again, in some industries, process costing is used in the initial process and job costing in the subsequent processing of end products.
For example, in steel industry, the cost of steel is ascertained by process costing system and cost of individual steel products is determined by job costing.
Difference between Job and Process Costing
The differences between Job and Process costing are briefly given below:
Difference # Process Costing:
1. The production is a continuous flow of stock in anticipation of demand.
2. Since the production is a continuous flow, individual identity is lost.
3. Since production is continuous there is always work-in-progress at the beginning or closing.
4. Costs are accumulated for each process for a period.
5. Costs are found out at the end of the cost period.
6. Costs are transferred from one process to another process.
7. Production is a continuous process; through standardised systems managerial control is easy.
8. Paper work is less.
9. Since production is of standard products they are uniform: similarities are there.
Difference # Job Costing:
1. Production is executed against specific order from customers.
2. The different jobs may be independent of each other.
3. Jobs may or may not have opening or closing work-in-progress.
4. Costs are accumulated for each job.
5. Costs are ascertained on completion of the job.
6. Costs are not transferred unless there is surplus production.
7. Since each unit is different, managerial attention is needed.
8. Since every job is costed separately, there is more work.
9. Production is on the basis of individual specification. Therefore, each job is dissimilar to others.
Process Costing – Advantages and Disadvantages
The main advantages of process costing are:
(a) Process costing helps computation of costs of processes as well as of the end-product at short intervals;
(b) Average costs of homogeneous products can easily be computed;
(c) It ensures closer control over production and costs since the daily quantitative and cost records are kept at the shop floor to assess the efficiency of production against the standards; and
(d) It involves less clerical work because of the simplicity of cost records.
The following are the disadvantages of process costing:
(a) The average cost ascertained under this method is not the true cost per unit. As such, it conceals weaknesses and inefficiencies in processing;
(b) If production is not homogeneous, as in the case of foundries making castings of different sizes and shapes, the average cost may give incorrect picture of the actual costs;
(c) The emergence of joint products may present the problem of apportionment of joint costs. If apportionment is not properly done, cost results may not be accurate;
(d) This system has all the weaknesses of historical costing since it is based on historical costs;
(e) Valuation of work-in-progress on the basis of the degree of completion may, sometimes, be a mere guesswork; and
(f) The method does not permit evaluation of efforts of individual workers or supervisors.
Process Costing – Merits and Demerits
The Chief Merits of Process Costing are:
1. It is possible to compare the process costs periodically, say, at the end of each month. Where predetermined overhead rates are used, process costs can be computed weekly or even daily.
2. This cost finding method is simpler and requires less clerical efforts and expenses as compared to job costing.
3. Managerial control is comparatively easier as budgeted and actual figures are available for each process.
4. Average costs are easily computed, provided the product is homogeneous. Costs are accurate as the allocation of expenses to processes can be done easily.
5. Price quotations may be submitted without difficulty with the standardization of process. Standard costing system can be easily established in process industries.
The Demerits of Process Costing are:
1. Costs available at the end of the accounting period have only historical importance and hence are not of much use for managerial control.
2. This method gives an average cost per unit. Average cost is not of much use for detailed analysis and evaluation of operating efficiency, as there is wide scope for error. An error in one average cost is carried through all the processes. This affects the valuation of WIP and finished goods.
3. Where several products emerge from the same process, the apportionment of Joint Costs among various products becomes a problem and an element of approximation comes into picture.
4. For the purpose of valuation of WIP, its stage of completion is determined by estimation. This introduces further inaccuracies.
5. Average costs are not always accurate as the units are not fully homogeneous.
Process Costing – Multiple Choice Questions and Answers
1. Which of the following industries would most likely use a process cost accounting system?
(c) custom printing
2. Which of the following is a characteristic of a process costing system?
(a) material, labour and overheads are accumulated by orders
(b) companies use this system if they process custom orders
(c) Opening and closing stock of work-in-process are restated in terms of completed units.
(d) Only closing stock of WIP is restated in terms of completed units
(e) None of the above
3. In determining production cost per equivalent unit in process costing, the average cost method considers:
(a) current process costs in addition to the cost of closing WIP
(b) current process costs in addition to the cost incurred last period which was assigned to opening WIP
(c) current process costs less the cost assigned to opening WIP
(d) current process costs only.
(e) None of the above.
4. A company’s total cost of production was Rs.50,000 in 1996; 30,000 units were completed, which included 8,000 units of opening WIP (75% complete) at a cost of Rs.11,000. The closing stock of WIP was 3,000 units (1/3rd completed). The cost per unit for 1996, using the FIFO method, is:
(e) None of the above
5. A company had an opening stock of work-in-process of 3,000 units, 20% completed for all items. It introduced 15,000 units into the process. At the end of the period, there was a closing stock of WIP of 6,000 units, 100% complete as to materials and one-third complete as to labour and overhead; 12,000 units were transferred to the next process. The equivalent units, assuming the FIFO method is used, are:
(a) 18000 for material, 18000 for labour and overhead
(b) 18000 for material, 14000 for labour and overhead
(c) 12000 for material, 12000 for labour and overhead
(d) 17400 for material, 13400 for labour and overhead
(e) 12000 for material, 15000 for labour and overhead
6. In the above example the equivalent units, assuming the average method is used, are:
(a) 18000 for material, 18000 for labour and overhead
(b) 18000 for material, 14000 for labour and overhead
(c) 12000 for material, 12000 for labour and overhead
(d) 17400 for material, 13400 for labour and overhead
(e) 12000 for material, 15000 for labour and overhead
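For readers who want to verify the arithmetic in questions 5 and 6, the short sketch below uses the figures straight from the question; it points to options (d) and (b) respectively.

```python
opening, closing, transferred = 3_000, 6_000, 12_000
# Opening WIP is 20% complete for all elements; closing WIP is 100%
# complete as to material and one-third complete as to labour/overhead.

# Q5 -- FIFO: count only the work performed this period
fifo_material = opening * 0.80 + (transferred - opening) + closing * 1.0
fifo_labour = opening * 0.80 + (transferred - opening) + closing * (1 / 3)
print(fifo_material, fifo_labour)  # 17400.0 13400.0 -> option (d)

# Q6 -- Average: opening WIP loses its identity; count all completed units
avg_material = transferred + closing * 1.0
avg_labour = transferred + closing * (1 / 3)
print(avg_material, avg_labour)    # 18000.0 14000.0 -> option (b)
```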
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9712533950805664,
"language": "en",
"url": "https://www.irs.com/articles/what-is-a-tax-credit",
"token_count": 634,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.041259765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3da5c32a-4c85-4c61-84a3-c92c8dbb6b18>"
}
|
What Is A Tax Credit?
Understanding Tax Credits & How They Lower Your Tax Liability
Many people are under the impression that tax credits and tax deductions are the same thing. While they are both tax breaks and are similar in many ways, there are several key differences that you need to be aware of.
Most importantly, a tax credit reduces the amount of tax that you owe. On the other hand, a tax deduction only helps to reduce the amount of your taxable income.
How Do Tax Credits Work?
Tax credits reduce your tax liability dollar-for-dollar. Simply put, your gross tax liability is the total amount of tax you owe before any credits are applied.
The majority of tax credits are non-refundable, which means they cannot reduce your income tax liability to less than zero. In other words, any excess credit amount expires the year in which it is used and is not refunded to you.
However, there are some refundable tax credits, which are applied to your tax liability and can reduce it to below zero (if the tax credit is worth more than what you owe). With refundable credits, your tax refund can actually grow.
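A minimal sketch of the dollar-for-dollar mechanics, using a hypothetical $1,500 liability and a $2,000 credit:

```python
def apply_credit(gross_liability, credit, refundable):
    """Apply a tax credit dollar-for-dollar against gross tax liability.
    Non-refundable credits cannot push the result below zero; with a
    refundable credit, a negative result means money back to the filer."""
    remaining = gross_liability - credit
    return remaining if refundable else max(remaining, 0)

print(apply_credit(1_500, 2_000, refundable=False))  # 0: the excess $500 expires
print(apply_credit(1_500, 2_000, refundable=True))   # -500: refunded to you
```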
To get a better idea of how tax credits work and whether or not you qualify for any, you need to know what is available to taxpayers in your situation.
Types of Tax Credits
There are many different federal tax credits that can be claimed on the 1040 tax return. Each tax credit has its own specific rules for eligibility, so make sure you satisfy the requirements before claiming a credit on your income tax return.
Some of the most common tax credits include:
- Earned Income Tax Credit (EITC or EIC)
- Child Tax Credit (CTC)
- Child and Dependent Care Tax Credit
- Premium Tax Credit (PTC)
Remember that just because you qualify for one particular tax credit does not mean that you automatically qualify for others. For example, the Foreign Tax Credit is only available to those who pay taxes in a foreign country – most Americans do not fit into this group, but may qualify for other tax credits.
How Much Are Tax Credits Worth?
It depends on the specific credit you’re talking about. The Child Tax Credit (CTC), which is one of the most popular, can be worth up to $2,000 depending on your situation.
Just as the amount of each tax credit is different, so are the qualification guidelines. Since a tax credit is so helpful to the overall amount of money that you pay, it is essential that you are 100% accurate with this information.
If you are unsure about whether or not you qualify, you may want to check with a tax professional before including the credit on your income tax return. Removing a tax credit is going to greatly affect how much you pay in taxes, so it is better to avoid mistakes than to have the IRS catch them later on.
With this information you should have a better idea of what a tax credit is, and how this type of tax break can help you pay less money to the IRS.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9603936076164246,
"language": "en",
"url": "https://www.theguardian.com/sustainable-business/micro-grid-power-companies-business",
"token_count": 931,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.11572265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e6a8638b-c682-4254-a4cf-5e9378f920c3>"
}
|
For the last century, the power created by large power generators (hydro, nuclear, coal and gas) has been brought to consumers via a grid comprised of transmission lines and cables with varying voltages. The system was like a waterfall – with electricity flowing in just one direction, from lines of very high voltage to lines of progressively lower voltages.
This is now rapidly changing. There is now a range of smaller units, such as cogeneration plants, which deliver both heat and electricity, and renewable energy sources such as wind turbines and photovoltaic panels. These smaller units are owned by a whole raft of municipalities, households and businesses.
While large power plants were instructed to deliver the exact amount of electricity needed to match real-time consumption, these smaller units generate electricity either as a by-product or in accordance with weather conditions (solar and wind). The role of larger power plants is being reduced as these smaller units come online.
Today, millions of smaller units are generating electricity all over Europe. Just how far this decentralisation of power generation will go is unknown but we can assume that this revolution will go on. This new context is not only changing the business model of power generation, but transmission and distribution grid companies see their investments and grid operation significantly impacted.
Where should investments be made? Should the focus be in more high-voltage transmission grids (the highways of electricity transmission), more local (lower voltage) distribution grids, or in making the grid smarter? Or maybe priority should be placed on higher efficiency appliances, demand side management, or everything all at the same time?
These questions are being asked in both the boardrooms of power grid businesses in Europe, and the regulatory authorities overseeing the investments of these companies. Today, this consists of 41 companies that have a regulated mandate to own and/or operate the high voltage transmission grids, and more than 2,300 that have a mandate to own and/or operate the more local distribution grid. These power grid companies have been unbundled from the vertically integrated utilities that took care of the whole electricity value chain. So what is their future strategy now that they can set an independent course?
A new business model
Some are already questioning the sustainability of their business due to a major technological development: the micro grid. These private grids, which can serve just a few houses or neighbourhoods, allow consumers and companies to disconnect partly or completely from the existing grid infrastructure. Micro grids are becoming reality thanks to a number of developments like local generation and improved storage capacity in batteries, electric cars, or heat. The more renewable energy sources are connected to local grids, the more storage becomes key. And at the same time, better storage could mean less dependence on being connected to the public grid.
That would cause upheaval both in terms of grid operation and tariff structure. Power grid companies are indeed obliged to offer a universal service to all customers, and often at a universal price, with limited possibilities to tailor their offering. If some customers go off grid and no longer use the central network (or only use it as a back-up service), the total cost of the grid has to be paid by a smaller number of customers. As costs go up, more customers will be tempted to leave the grid, making it even more costly for those that stay connected all year long. In some geographical areas, like California, this situation is already happening.
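The feedback loop is easy to see in a toy model. Everything below is hypothetical: a fixed annual network cost spread over a shrinking customer base, with a crude assumed defection rule linking tariff rises to grid exits.

```python
fixed_grid_cost = 100_000_000   # annual network cost, assumed constant
customers = 1_000_000

for year in range(5):
    tariff = fixed_grid_cost / customers
    print(f"year {year}: {customers:>9,} customers, tariff {tariff:7.2f}")
    # crude assumption: a 5% baseline exit rate, plus extra defection
    # as the tariff climbs above its starting level of 100
    defection = 0.05 + max(0.0, (tariff - 100) / 1_000)
    customers = int(customers * (1 - defection))
```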
What does the future hold?
Although there is no doubt that central and decentral generation will co-exist in the future, nobody knows what the balance and outcome will be. This means a major uncertainty for an industry characterised with assets having a lifetime of between 25 and 50 years.
We will have to wait and see how the existing power grid companies react to this new development and what their role will be in the future. After all, private grids also need an owner and someone to manage them. Companies such as UK Power Networks have understood this by entering into the private networks of some of the busiest airports in Britain. And in some countries like Germany, there is a long tradition of local companies taking care of electricity and gas networks, telecoms, waste and water. We are witnessing a revival of locally integrated network companies partly motivated by smart city initiatives supported by local governments. Only time will tell what is the best way for the current players to deal with this new business model.
Leonardo Meeus is energy markets professor and director of the Energy Centre at Vlerick Business School
Join the community of sustainability professionals and experts. Become a GSB member to get more stories like this direct to your inbox
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9470668435096741,
"language": "en",
"url": "https://blog.knoxcustody.com/holding-bitcoin-the-basics/",
"token_count": 1610,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2001953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:bd607b37-a1c0-4457-9db1-83d088be4dc5>"
}
|
In this document, we present a primer on some of the fundamentals behind Bitcoin. We will only cover the basics involved in holding and spending, and introduce these in one place so they may act as a useful guide should such a prerequisite be useful elsewhere.
We will focus on simplicity over accuracy if it helps get the point across, which means for those more knowledgeable that we’ll skip over the UTXO model, and only speak in terms of:
- The right to spend Bitcoin from some address
We use the term signing authority to refer to a situation in which some entity has the right to spend bitcoins from some address. If so, that entity has signing authority over the address. You can think of it as that entity having the right to sign off on Bitcoin movements away from the address. That entity is also said to hold all the bitcoins that were ever sent to that address. The entity typically has signing authority over many addresses, and they hold as many bitcoins as have been sent to all of the addresses over which they maintain signing authority.
It should be noted that when we refer to signing authority in this way, we mean it only in the strict technical sense. An entity maintaining signing authority may not have the legal authority to spend. Further dissection of some of these differences, as well as Knox’s view on how to implement a trust-minimized service and the importance of insurance can be found in our first discussion on insurance.
Conventions & Helpful Aides
To make everything clearer, for the rest of the article, any time an important term appears, especially one that will appear in multiple diagrams, we will italicize it and use the same form in the article. For example Public-Key, Private-Key, Address.
Many of the below examples will involve basic transformations of information from one form to another. As a common mental model, we will call these computer programs. If you prefer, you can think of them as functions or simple input-output machines. In any case, they will always be shown graphically like in the below example:
Keys & Addresses
An entity has the right to sign off on Bitcoin movements away from an address by proving to the Bitcoin network that they know a secret number. We call this a private key, so termed because it must be kept private to be of practical use. The relation is as follows:
What does Private-Key look like? You can think of it as nothing but a large number. Given some Private-Key, it is easy to derive a corresponding Public-Key. What does the Public-Key look like? You can also think of it as nothing but a number.
Importantly, given some Private-Key, you can always use it to get the corresponding Public-Key. The Public-Key will always be the same given the same Private-Key. However, it is impossible to go backwards. If you prefer, you can think of it as: There exists a computer program called Get-Public-From-Private, which can be fed one input, a Private-Key, and always outputs the same Public-Key.
There exists no program Get-Private-From-Public, and indeed a practical one is impossible to write.
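As a rough illustration, the snippet below uses the third-party Python `ecdsa` package with secp256k1 (the curve Bitcoin uses) to play the role of Get-Public-From-Private. This is a simplification: deriving a real Bitcoin Address from the Public-Key involves further hashing and encoding steps not shown here.

```python
# pip install ecdsa
from ecdsa import SigningKey, SECP256k1

private_key = SigningKey.generate(curve=SECP256k1)  # in effect, a large random number
public_key = private_key.get_verifying_key()        # Get-Public-From-Private

# The derivation is deterministic: the same Private-Key always yields
# the same Public-Key...
assert private_key.get_verifying_key().to_string() == public_key.to_string()
# ...but no practical Get-Private-From-Public exists to reverse it.
```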
Suppose that there exist two special computer programs called Sign and Verify.
Sign works by taking some Private-Key, and any Message you can throw at it, and outputs a Signed-Message. We’ll use the Signed-Message output in a moment.
Given a signed message like that above, we can use the other special computer program called Verify. Feeding it the Signed-Message, and the Public-Key, the program will tell you if Signed-Message was signed using the same Private-Key from which Public-Key is derived. Amazingly, this works without revealing that corresponding Private-Key.
Now, suppose Address above had previously been sent 5 bitcoins, and the holder of those bitcoins wished to send them to another address, Pay-Address.
We can imagine a particular Message being produced that asks the network: Take the 5 bitcoins from Address and move them to Pay-Address.
Of course, anyone could produce such a message, so the network needs some proof that the request is legitimate. It demands proof that the sender of the Message seeking a spend actually maintains signing authority over the particular Address. This requires proving that they actually know the associated Private-Key. Of course, Private-Key can’t be revealed otherwise everyone who witnessed it would gain signing authority over Address.
But of course, we can turn to the programs we defined above, Sign and Verify. Behind closed doors, the entity that knows the Private-Key used to derive Address can use it to produce a signed message of the above:
The participants in the network will see the Signed-Message, and can use Verify to convince themselves that the movement is legitimate, and from then on know that Pay-Address holds the 5 bitcoins. If the entity that maintains signing authority over Pay-Address then wants to send them to another address, they can use Sign to produce a similar Signed-Message.
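Continuing with the `ecdsa` package, the snippet below plays the roles of Sign and Verify. As noted just after, real Bitcoin movement messages are structured quite differently; this only illustrates that verification succeeds without the Private-Key ever being revealed.

```python
from ecdsa import SigningKey, SECP256k1, BadSignatureError

private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

message = b"Take the 5 bitcoins from Address and move them to Pay-Address"
signed = private_key.sign(message)        # Sign: done behind closed doors

try:
    public_key.verify(signed, message)    # Verify: anyone on the network can run this
    print("movement is legitimate")
except BadSignatureError:
    print("reject: not signed by the holder of the Private-Key")
```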
In reality, the kinds of Signed-Messages that are signed in order to move funds on the Bitcoin network do not look much like the movement requests above, but ultimately it is important to understand the relation between Private-Key, its corresponding Public-Key, and the ability for perfect signing and verification to occur without a Private-Key ever being exposed.
Up until now, we showed the relationship between a Private-Key, and its corresponding Address. In fact, in Bitcoin it is possible to produce an individual Address derived from a set of completely independent Private-Keys. The great thing about this arrangement, and one of the reasons we make such heavy use of it at Knox, is that the address is derived from the set of Public-Keys corresponding to the set of Private-Keys. This means that a completely distinct set of entities can independently create each Private-Key without ever having to reveal anything to the others. In this way, signing authority is achieved by a quorum of independent Private-Keys. As you can imagine, this goes a long way in taming risk.
The above depicts 4 distinct private keys, (Private-Key-1, Private-Key-2, Private-Key-3, Private-Key-4) each of which may be completely independently generated, coming together to produce 4 public keys (Public-Key-1, Public-Key-2, Public-Key-3, Public-Key-4) from which a single Address is derived. When the Address is derived, we can specify the number of signatures that need to appear to achieve quorum.
For example, in the case that it is 3 out of 4 keys, we refer to the Address above as a 3-of-4 multisignature address. In this case, at least 3 signatures like those we saw earlier need to be produced before anything can be moved from Address. As an example, Private-Key-1, Private-Key-2, and Private-Key-4 can together reach quorum. Conveniently, not only can the keys be created independently, they can be used completely independently of each other, and never even need to be found in the same place. Suppose for example that Private-Key-1 were created in Paris and used in Montreal, Private-Key-2 were created in Calgary and used in Toronto, and Private-Key-4 were created in Vancouver and used in Montreal.
We hope that with the help of this document you have come to better understand some of the intricacies involved in holding and spending Bitcoin. If you came to this document to learn these basics in order to understand other content, we trust you will be better armed, and can come back to this document regularly should you need to refresh your knowledge.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9222691655158997,
"language": "en",
"url": "https://blog.rotronic.com/tag/moisture-content/",
"token_count": 827,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.251953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8f314a57-e1b9-4448-95e6-f2cd7cf68fe0>"
}
|
The sugar market worldwide
Sugar is one of the most important raw materials traded on the worldwide markets.
Top 5 sugar producing companies
1. Suedzucker AG
2. Cosan SA Industria & Comercio
3. British Sugar PLC
4. Tereos Internacional SA
5. Mitr Phol Sugar Corp.
In the 18th century only a few countries were producing sugar. However, these days over 100 nations process different base materials into sucrose. Remarkably, India, China, Brazil & the European Union alone deliver 50% of the global demand.
– Worldwide 170 million tons of raw sugar were produced in 2011/2012
– Brazil, India, China & EU are the most important sugar producing nations
– With an annual consumption of more than 24 million tons India, is the world’s largest market for raw sugar
Raw materials & processing
In temperate regions such as West, Central & Eastern Europe, the United States, China and Japan raw sugar is produced from sugar beet. However in the tropics and subtropics sugar is extracted from sugar cane.
Sugar cane & Sugar Beet
The processing of these two raw materials only differs in the first few steps. The main goal is to extract the juice, containing the sugar, as efficiently as possible.
Extracting the sugar
Sugar cane is cut into small pieces during the harvest. It is then put through an industrial press to squeeze out the sweet sap.
Sugar beet has to be processed in extraction towers, where the plants release their sugar during a hot water treatment at 70°C.
After filtering the juice the water is extracted by passing through different stages of evaporators until only a thick syrup is left consisting of around 70% sugar.
The syrup is then boiled until sugar crystals are formed. These crystals are then cleaned through centrifugation. To further improve purity this process is repeated twice.
Cooling & drying
Now the sugar has to be dried. One option is in large scale drum dryers at a temperature of 60°C. after drying, the sugar is cooled down on fluidized-bed coolers before heading to the warehouse or packed for shipping.
Inside a drum dryer.
Storage & logistics
Sugar belongs to the group of hygroscopic goods with an extremely low water content – below 1.5%. Basically sugar is a robust material but vulnerable to high humidity and temperature changes.
Generally it is recommended to store and transport sugar at a temperature of 20-25°C and 25-60% relative humidity.
By taking a closer look at the adsorption curve of sugar, it is easy to see that over a long range of relative humidity the product quality is not affected. But as soon as the humidity level rises to 75%, sugar starts to clump, and above 80% relative humidity it even dissolves.
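These thresholds lend themselves to a simple monitoring rule. The sketch below is illustrative only, hard-coding the figures quoted above:

```python
def sugar_storage_risk(rel_humidity, temperature):
    """Classify storage conditions using the thresholds quoted above:
    25-60 %RH and 20-25 degC recommended; clumping from ~75 %RH;
    dissolution above ~80 %RH."""
    if rel_humidity > 80:
        return "critical: sugar may dissolve"
    if rel_humidity >= 75:
        return "high: clumping likely"
    if 25 <= rel_humidity <= 60 and 20 <= temperature <= 25:
        return "ok: within the recommended window"
    return "caution: outside the recommended window"

print(sugar_storage_risk(45, 22))  # ok
print(sugar_storage_risk(78, 22))  # high: clumping likely
```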
Immediately after production the refined sugar is stored in humidity controlled sugar terminals or ventilated silos connected to dehumidifiers.
Sugar in a storage terminal
Large quantities are transported in silo trucks or train wagons. When sent by ship, sugar is packed in double-walled bags made of natural fibre and plastic. If sealed like this, temperature is the crucial parameter that can affect the quality of the sugar. Due to big differences in temperature, water vapour left inside the bags may cause clumping and even liquefaction.
The finer the sugar, the higher the risk of clumping.
Why the need to measure humidity?
As seen above, temperature and humidity measurements are crucial parameters in the sugar industry. Due to its hygroscopic behavior sugar can resist small changes in humidity, and slight temperature variations are not a major problem. But as soon as relative humidity rises above 80% or temperature changes significantly, the product can be destroyed as it clumps or even turns liquid.
During the process of evaporation, crystallisation, drying and cooling temperature and humidity play a huge role.
Philip Robinson Rotronic UK
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9623335003852844,
"language": "en",
"url": "https://dumbwealth.com/money-is-debt-gold-is-money/",
"token_count": 2291,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.470703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c6604870-ee57-4f0a-9498-ebd8346dc83d>"
}
|
“In numerous years following the [civil] war, the Federal
Government ran a heavy surplus. [But] it could not pay off its
debt, retire its securities, because to do so meant there would be
no bonds to back the national bank notes. To pay off the debt was to destroy the money supply.”
— John Kenneth Galbraith
In the investing world, you can take a 3 month view, a 3 year view or a 30 year view. One person looking at one asset class might have a different forecast depending on the time horizon he is considering. In this article, I will look at gold through a 30 year lens.
I believe that structural forces will support gold and other hard assets over the long term. While current forces may be bearish for gold in the immediate term as investors panic and liquidate everything, there are a number of underlying currents that demand a strategic allocation to the metal. While the sophisticated gold investor is already familiar with these concepts, I think it is important to re-introduce them to a broader audience who may have zero allocation to gold, other precious metals and hard assets.
The Origins of Money
Throughout history, money has always held an important position as a means to facilitate transactions, thus creating massive efficiencies within an economy. Sometimes money was issued by governments. Other times a common means of transacting arose organically within a population.
Many historians suggest that fractional reserve banking and private money creation started when gold owners stored bullion within the vaults of goldsmiths for safe keeping. As proof of deposit, goldsmiths issued paper receipts that could be redeemed in exchange for gold. Seeing an easier way to transact, when buying goods and services gold owners would simply hand over gold receipts as forms of payment instead of redeeming for gold and delivering the metal.
Eventually, enough people were doing this that some enterprising goldsmiths, who noticed that the gold in their vaults was rarely reclaimed, started lending (with interest) new paper receipts that weren’t tied to a specific gold deposit. After making these loans, more paper existed than gold in the vaults, resulting in an early example of expanding money supply and credit growth. Of course, any goldsmith that manufactured receipts far in excess of gold reserves risked a run on deposits and existing receipt holders may have experienced a loss of exchange value.
Money Creation Today
In the US today, many believe that the Federal Reserve is the primary source of money supply growth. Many also believe that the Fed creates money and simply pumps it into the economy somehow. This assumes the Fed has some sort of authority over how money is spent, but this is untrue. Monetary policy is the handmaiden of fiscal policy, but both are quite distinct.
Through open market operations, the Fed adds to the money supply by purchasing assets such as US Treasuries and mortgages. Effectively, each dollar injected this way is the mirror image of someone’s liability, giving rise to the concept that money is debt.
Think about it this way: the massive fiscal response to the 2008/2009 recession and sluggish recovery has added trillions to the Federal debt. Much of this debt was indirectly financed by the Federal Reserve (although they’d never admit it) via open market operations. So instead of simply printing and spending its own money, the US government has granted an independent entity (the Federal Reserve) the right to print and lend to the government and its citizens. Some might see this as ‘checks and balances’ while others might argue that it grants unnecessary power and profit to the banking cabal that controls the Federal Reserve. In the end, the US government has added trillions of dollars to its debt.
The truth is that while the Federal Reserve can add to the money supply the biggest driver of money growth is the private sector. And this is where it gets especially important for the gold investor.
The monetary system does not stand still – it operates on a treadmill of debt. The majority of money in the economy is created when private banks make loans. One might think that these loans are based on deposits, but the reality is that – much like the goldsmiths of days past – in a fractional reserve system far more loans are made than deposits on hand.
Modern Money Mechanics, a publication by the Federal Reserve Bank of Chicago in 1968, states the following:
” For example, if reserves of 20 percent were required, deposits could expand only until they were five times as large as reserves…Under current regulations, the reserve requirement against most transaction accounts is 10 percent…Of course, they [the banks] do not really pay out loans from the money they receive as deposits. If they did this, no additional money would be created…The deposit expansion factor for a given amount of new reserves is thus the reciprocal of the required reserve percentage (1/.10 = 10).”
They further illustrate this with the following diagram, showing the initial deposit and the cumulative expansion via additional loans.
Fig. 1: Cumulative expansion in deposits on the basis of 10,000 of new reserves and reserve requirements of 10 percent, from: FED, 1968. Modern Money Mechanics – A Workbook on Bank Reserves and Deposit Expansion. Federal Reserve Bank of Chicago, Revised Edition, February 1994, p. 11
In essence, the banking system has the legal power to create money out of thin air
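A short sketch of that deposit-expansion arithmetic, iterating the lend-and-redeposit cycle with a 10 percent reserve requirement; it converges on the reciprocal rule quoted above (1/0.10 = 10):

```python
def total_deposit_expansion(new_reserves, reserve_ratio, rounds=1_000):
    """Iterate the lend-redeposit cycle; the running total converges
    to new_reserves / reserve_ratio (a geometric series)."""
    total, deposit = 0.0, new_reserves
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the share lent out and redeposited
    return total

print(round(total_deposit_expansion(10_000, 0.10)))  # ~100,000
print(10_000 / 0.10)                                 # closed form: 100,000.0
```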
The Debt-Money Conundrum
Here’s the kicker: whether the money is created by the Fed or by the private banks, the money must be paid back with interest. Because only the principal amount is loaned, only the principal amount exists in circulation. In aggregate, enough money doesn’t exist throughout the economy to pay both principal and interest on all debts.
Bernard Lietaer, who helped design the Euro and has written several books on monetary reform, explains the interest problem like this:
“When a bank provides you with a $100,000 mortgage, it creates
only the principal, which you spend and which then circulates
in the economy. The bank expects you to pay back $200,000
over the next 20 years, but it doesn’t create the second $100,000
– the interest. Instead, the bank sends you out into the tough
world to battle against everybody else to bring back the second
The debt-money conundrum results in two conditions:
1. Systemic Competition. Like rats in a cage, society is provided too few resources. In this case, the money required to repay debts plus interest is short. This means that if one person or company is able to repay their debts another is not. This raises the level of competition within society. Arguably this has been a positive economic characteristic since the industrial revolution, however one must wonder what the world would be like if debt-fueled competition didn’t exist. Competition goes far beyond the healthy – many wars and crimes can be traced to the competition for the resources required to indirectly repay debts through economic growth. Right or wrong, money loaned into existence has created systemic competition. On an individual level, many refer to this as the ‘rat race’. On a macro level some refer to this as the New World Order.
“The problem is that all money except coins now comes from banker created loans, so the only way to get the interest owed on old loans is to take out new loans, continually inflating the money supply; either that, or some borrowers have to default. Lietaer concluded: [G]reed and competition are not a result of immutable human temperament . . . . [G]reed and fear of scarcity are in fact being continuously created and amplified as a direct result of the kind of money we are using. . . . [W]e can produce more than enough food to feed everybody, and there is definitely enough work for everybody in the world, but there is clearly not enough money to pay for it all. The scarcity is in our national currencies. In fact, the job of central banks is to create and maintain that currency scarcity.
The direct consequence is that we have to fight with each other in order to survive.”
2. The Ultimate Ponzi. If money was lent into existence on a single occasion only, the first condition would lead to a deflationary outcome and shrinking total credit. Lenders would take haircuts and, knowing this in advance, potentially would have never lent the money in the first place. Or lenders would have priced the defaults into interest rates and covenants, paradoxically making it even harder for all loans to be repaid with interest. The banking system simply would no longer exist in its current state.
In reality, new money supply begets new money supply. To reduce the number of defaults caused by the competition for money, the banking system must continually lend more money into existence. As new money is introduced it helps money flow to past borrowers enabling them to repay their debts. To adequately offset the number of bankruptcies in the system, money must continually be created. This is precisely why modern industrial economies have a implicit ‘normal’ inflation rate of 1-3%. In good times and bad, money supply simply must expand for the system to survive. Normally that money is created by banks; however, sometimes the lender of last resort (i.e. Federal Reserve) – as the only lender that can continually accept losses – steps in to offset private loan destruction in periods of extreme financial distress, such as the 2008/2009 crisis.
The Growth Imperative
When inflation must be maintained at 1-3% for the system to stay solvent, many other areas of society are significantly affected. Companies must continuously raise prices, salaries must continuously increase, economies must continuously grow, populations must continuously increase, food supply must continuously rise, and so on.
Over the long run, continuous monetary expansion leads to the destruction of the value of the dollar relative to stable assets. While continuous monetary expansion can provide a tailwind to many businesses with pricing power, I think most investors are already set up to benefit from this through the equity portion of their portfolios. Where I think many investors are deficient is in a strategic allocation to gold.
Gold is Money
Many investors have a 3 month or 3 year view on gold, but few have a 30 year view. While I agree that intermediate forces could send the gold price down, I believe that structural monetary expansion means that all long-term investors should have some strategic weight to the yellow metal, which can serve as stable money while fiat currencies around it are devalued.
While equities (and other assets) can benefit from these same structural forces, gold has different risk-return characteristics and can help to diversify a portfolio. I am not saying that investors should dump half their portfolio into gold bars. What I am saying is that, as a stable currency, gold can help mitigate the effects of never-ending monetary expansion, and most investors are significantly underweight.
Gold can provide factor exposure not obtained through traditional asset classes and may be a valuable tool in the preservation of long-term wealth in a world in which money is debt and gold is money.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9234195351600647,
"language": "en",
"url": "https://experttech.in/learn-how-to-calculate-your-money-weighted-rate-of-return-in-excel/",
"token_count": 455,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0615234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:89d4a17d-a4f4-4fd1-988e-4f5629dd7c1b>"
}
|
Before we look at how to calculate your money weighted rate of return in excel, it is important to first understand what it means.
What is money weighted rate of return?
- This is simply a measure of the performance of an asset or portfolio of assets. To calculate the money-weighted return you need to find the rate that will set the sum of the present values of all cash flows and terminal values equal to the value of the initial investment. In other words, the money-weighted rate of return (MWRR) is equivalent to the internal rate of return (IRR).
- In other words, MWRR is the discount rate at which the net present value or NPV=0. You can also say that it is the discount rate at which present value of all cash inflows equals present value of all cash outflows.
- To understand how we can implement this in real calculations, let us consider the example below;
- In the above example, we have found the MWRR to be 6%.
- At 6%, our NPV is zero. In this analysis, our cash flow is $100, which is the initial investment.
- The cash inflows are $50 and $60. Row 5 provides the discounted values, which can help us get the NPV.
- But we can still get the NPV using the NPV function as shown in cell B7.
- To get the interest that can make cash outflow and inflow equal so that we have NPV=0, we might need to use trial and error method, which is tiresome.
- In the example, we have used the GOAL SEEK Excel built-in function to get a percentage that can make our NPV 0.
- To get the goal seek, we proceed as follows;
- Head to the Data in the menu bar.
- Click on What-If Analysis
- Click “Goal Seek”
Then, indicate the cell with NPV as the set cell.
Set the target value to zero and choose the cell containing the rate as the changing cell. In our case, the set cell is B7, the target value is 0, and the changing cell is B4.
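For those who prefer to see what Goal Seek is doing under the hood, here is a minimal Python sketch that searches for the rate at which NPV crosses zero. With the rounded flows shown here (−100, +50, +60) it lands near 6.4%; the exact figure naturally depends on the precise values in the worksheet.

```python
def npv(rate, cashflows):
    """Discount each flow back to time zero (t = 0, 1, 2, ...)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr_bisect(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Bisection search for the discount rate where NPV changes sign --
    the same root Goal Seek hunts for."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid    # root lies in [lo, mid]
        else:
            lo = mid    # root lies in [mid, hi]
    return (lo + hi) / 2

flows = [-100, 50, 60]               # initial outflow, then two inflows
rate = irr_bisect(flows)
print(f"MWRR ~ {rate:.4%}")          # about 6.39% for these flows
print(f"NPV at that rate: {npv(rate, flows):.6f}")  # ~0
```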
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9298973083496094,
"language": "en",
"url": "https://obamawhitehouse.archives.gov/the-press-office/2016/02/09/fact-sheet-president-obama-proposes-new-funding-build-resilience-alaskas",
"token_count": 1374,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0030670166015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:28270563-c15d-4ac2-bd63-28ee568732bc>"
}
|
FACT SHEET: President Obama Proposes New Funding to Build Resilience of Alaska’s Communities and Combat Climate Change
When the President visited Alaska in August, he described the urgent and growing threat of a changing climate as “a challenge that will define the contours of this century more dramatically than any other.” Last year broke the record set by 2014 as the warmest year on record. Climate change is already disrupting our agriculture and ecosystems, our water and food supplies, our energy, our infrastructure, and our health and safety.
In no place is this truer than in Alaska. The Arctic is warming twice as fast as the rest of the world, and is experiencing the consequences. Higher average temperatures are diminishing the range of winter sea ice, allowing heavy storm surges to batter the Alaskan coastline, and interrupting the winter hunting season for Alaska Natives, many of whom rely on subsistence to feed themselves and their families.
The President believes we must invest in Alaska’s long-term economic and environmental well-being in a way that transcends his time in office. This priority, as well as the President’s broader commitment to conservation and climate action – and the economic growth they bring – is evident throughout the President’s 2017 Budget, literally from cover to cover. In fact, the cover of this year’s Budget is an image of Denali, which the President renamed last year during his trip to Alaska, restoring the Koyukon Athabascan name of Denali to the tallest mountain in North America, previously known as Mt. McKinley.
The President’s FY 2017 Budget lays out a vision for the future of Federal-State collaboration in Alaska with a package of proposals aimed at reducing the risks of climate change and building the resilience of Alaska’s communities and natural resources to climate change in a fiscally responsible way, including by:
Accelerating Construction of a New Icebreaker. The President’s Budget meets the Administration’s commitment to fast-track construction of a new polar-class icebreaker by providing $150 million to complete all planning and design activities necessary to begin production activities by 2020. The new, heavy icebreaker will assure year-round accessibility to the Arctic region for Coast Guard missions including protection of Alaska’s maritime environment and resources.
Establishing a Coastal Climate Resilience Fund at the Department of the Interior. Approximately $400 million of a $2 billion Coastal Climate Resilience program will be set aside to cover the unique circumstances confronting vulnerable Alaskan communities, including relocation expenses for Alaska Native villages threatened by rising seas, coastal erosion, and storm surges. The program will provide resources over 10 years for at-risk coastal States, local governments, and their communities to prepare for and adapt to climate change. This program would be paid for by redirecting roughly half of the savings achieved by repealing unnecessary and costly offshore oil and gas revenue sharing payments that are set to be paid to a handful of states under current law.
Supporting the Denali Commission. The Budget provides the Denali Commission—an independent Federal agency created to facilitate technical assistance and economic development in Alaska—with $19 million, including an additional $4 million above the FY16 enacted level, to coordinate Federal, State, and Tribal assistance to communities to develop and implement solutions to address the impacts of climate change. This follows the President’s announcement this August that the Denali Commission will play a lead coordination role for Federal, State, and Tribal resources to assist communities in developing and implementing both short- and long-term solutions to address the impacts of climate change, including coastal erosion, flooding, and permafrost degradation.
Building Capacity and Critical Infrastructure in Alaska Native Villages. The Budget provides over $100 million across several Federal agencies to support planning and infrastructure in high-need Alaska Native Villages, including:
- $5 million at the Department of the Interior’s Bureau of Indian Affairs to support resilience planning and subsistence activities for Alaska Native communities.
- $2 million at the Department of Agriculture (USDA) for the “StrikeForce” Initiative, to provide addition outreach and technical assistance to Alaskan Villages so they are better able to access USDA programs.
- $26.8 million is available to obligate through USDA’s Rural Alaska Villages grant program for essential water and waste projects. Priority will be given to applications for projects that employ green infrastructure.
- $17 million at the Environmental Protection Agency for water infrastructure grants in Alaska Native villages.
- More than $40 million in Arctic-focused investments at the Department of Energy, including $4.5 million through the Tribal Energy Program, which delivers customized, on-site technical expertise to support community energy planning and clean energy projects.
The President’s Budget also builds on steps the Administration has taken to improve community resilience to the effects of climate change and conserve our natural resources and outdoor spaces—both in Alaska and across the country.
Providing Full Funding for Land and Water Conservation Fund (LWCF) Programs.
The Budget supports reliable funding for the LWCF programs to protect and conserve the habitat of threatened and endangered species, secure public access, improve recreational opportunities and preserve ecosystem benefits for local communities. The Budget proposes full funding of $900 million in FY 2017 for LWCF programs in DOI and USDA, an amount equal to the oil and gas receipts deposited in the LWCF each year. This total includes $475 million in discretionary funds and $425 million in mandatory funds. Of this amount, $21 million is for sportsmen and recreational access. With the additional mandatory funds, NPS would be able to acquire six parcels in Denali National Park to protect the historic Denali Park Road, where the majority of tourists experience the park by viewing the mountains and wildlife.
Funding the National Park Centennial Initiative. To continue to care for our national parks and to mark the 100-year anniversary of the founding of the National Parks Service (NPS), the Budget includes an increase of $206 million in discretionary funds in FY 2017 and $500 million a year for three years in mandatory funds to restore facilities and enhance visitor services at some of our greatest historical, cultural, and natural treasures. For example, the President’s 2017 Budget includes $4.7 million to provide safe public access to the historic Kennecott Mine in Wrangell-St. Elias National Park & Preserve, making this a promising tourist destination.
In addition, today the Administration announced that $8.3 million will be allocated in the U.S. Army Corps 2016 work plan for safety related and other improvements of the harbor in Port Lions, Alaska.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9343481063842773,
"language": "en",
"url": "https://www.farmersreviewafrica.com/the-promise-in-agro-ecology-and-how-its-being-undervalued/",
"token_count": 678,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.039794921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:41363ff6-1f56-4708-a961-0b5f2d70f0a8>"
}
|
Many farmers across Kenya are increasingly practicing agro-ecology. Those with tracts of farming land are growing different types of vegetables including kales, amaranth, vine spinach, ordinary spinach, tomatoes, capsicum, chili and African nightshade. For the food crops, there is maize, arrow roots and sweet potatoes. In addition, they are either keeping chicken, rearing bees for honey, rabbits, dairy goats, or dairy cows.
This makes those households food secure for a long time, irrespective of the sizes of the farms under use. According to the Food Sustainability Index, created by the Barilla Centre for Food and Nutrition (BCFN) and the Economist Intelligence Unit (EIU), agroecology taps into traditional agricultural knowledge and practices, plays an important role in sustainable farming by harnessing local ecosystems.
Furthermore, tapping into local ecosystems, for example via using biomass and biodiversity, the traditional farming practices that make up agroecology can improve soil quality and achieve food yields that provide balanced nutrition and increase fair trade.
However, a new study by researchers from Biovision, International Panel of Experts on Sustainable Food Systems (IPES-Food) and the United Kingdom-based Institute of Development Studies shows that such sustainable and regenerative farming techniques have either been neglected, ignored or disregarded by major donors.
The study, titled ‘Money Flows: what is holding back investment in agroecological research for Africa?’ and released on Jun. 10, focused mainly on the Bill & Melinda Gates Foundation, because it is the biggest philanthropic investor in agri-development; on Switzerland, a major bilateral donor; and on Kenya, one of Africa’s leading recipients and implementers of agricultural research for development.
One of the major findings, according to Hans Herren, the President for Biovision, is that most governments, both in developing and developed countries, still favour “green revolution” approaches, with the belief that chemical-intensive, large-scale industrial agriculture is the only way to produce sufficient food.
Herren notes that these approaches have failed ecosystems, farming communities, and an entire continent. Moreover, and with the compound challenges of climate change, pressure on land and water, food-induced health problems and pandemics such as COVID-19, we need change now. This, he asserts, starts with investment in agroecology.
According to a report from the Bill & Melinda Gates Foundation, agro-ecology has the potential to build resilience and sustainability at all levels, by reducing vulnerability to future supply shocks and trade disruptions, reconnecting people with local food production, and making fresh, nutritious food accessible and affordable to all.
This, according to the scientists, will reduce the diet-related health conditions that make people susceptible to diseases, and provide fair wages and secure conditions to food and farm workers, thereby reducing their vulnerability to economic shocks and their risks of contracting and spreading illnesses.
However, the findings show that very little agricultural research funding in Africa is being used to transform such food and farming systems. Nonetheless, the report points out that support for agroecology is now growing across the agri-development community, particularly in light of climate change. But this hasn’t yet translated into a meaningful shift in funding flows.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9506704211235046,
"language": "en",
"url": "https://www.lkvmrr.com/how-to-check-crypto-transaction/",
"token_count": 817,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.1474609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ec2f789f-9129-4c8b-ac14-aac40f69278b>"
}
|
How To Check Crypto Transaction – What is Cryptocurrency? Put simply, cryptocurrency is digital cash that can be used in place of traditional currency. The word combines “crypto” (from the Greek kryptos, meaning hidden, a nod to the cryptography involved) and “currency.” In essence, cryptocurrency is as old as blockchains; the difference is that a cryptocurrency has no centralized ledger system. It is an open-source protocol based on peer-to-peer transaction technologies that run on a distributed computer network.
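Despite its title, the article never shows the mechanics of checking a transaction, so here is a minimal, hedged sketch of one common approach: querying a public block explorer API. The Blockstream Esplora endpoints used below are an assumption based on their public documentation; any comparable explorer API works the same way.

```python
# Minimal sketch: look up a Bitcoin transaction on a public block explorer API.
# Assumes the Blockstream Esplora endpoints (/api/tx/:txid and
# /api/blocks/tip/height); treat both as assumptions, not guarantees.
import requests

BASE = "https://blockstream.info/api"

def check_transaction(txid: str) -> None:
    tx = requests.get(f"{BASE}/tx/{txid}", timeout=10).json()
    status = tx["status"]
    if status.get("confirmed"):
        # Confirmations = (current chain height) - (block height) + 1
        tip = int(requests.get(f"{BASE}/blocks/tip/height", timeout=10).text)
        confirmations = tip - status["block_height"] + 1
        print(f"Confirmed in block {status['block_height']} "
              f"({confirmations} confirmations)")
    else:
        print("Transaction is still in the mempool (unconfirmed)")

# Example call with a hypothetical transaction id:
# check_transaction("<txid>")
```

In practice, exchanges and wallets do the same thing under the hood: a transaction is "checked" by confirming it appears in a block and counting how many blocks have been mined on top of it.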
One specific way in which the Ethereum Project is attempting to solve the problem of smart contracts is through its Foundation. The Ethereum Foundation was created with the goal of developing software solutions around smart contract functionality, and it has released its open-source libraries under an open license.
What does this mean for the wider community interested in participating in the development and application of smart contracts on the Ethereum platform? For starters, the major distinction between the Bitcoin Project and the Ethereum Project is that the former does not have a governing board and is therefore open to contributors from all walks of life, while the Ethereum Project operates in a much more regulated environment. Anyone wishing to contribute to the project must therefore abide by a code of conduct.
As for the goals underlying the two platforms, both strive to provide users with a new way to participate in decentralized exchange. The major differences are that the Bitcoin protocol does not use the Proof of Consensus (POC) process that the Ethereum Project uses, and that Ethereum plans a hard fork to integrate the Byzantium upgrade, which will increase the scalability of the network. These two differences may prove to be barriers to entry for prospective entrepreneurs, but they do represent important distinctions.
On the other hand, the Ethereum Project has actually taken an aggressive approach to scale the network while likewise tackling scalability problems. In contrast to the Satoshi Roundtable, which focused on increasing the block size, the Ethereum Project will be able to execute enhancements to the UTX protocol that increase transaction speed and reduction fees.
The significant distinction in between the two platforms originates from the operational system that the 2 groups employ. The decentralized aspect of the Linux Foundation and the Bitcoin Unlimited Association represent a conventional model of governance that positions an emphasis on strong neighborhood participation and the promotion of consensus. By contrast, the heavenly structure is committed to building a system that is flexible enough to accommodate changes and include new features as the requirements of the users and the industry change. This design of governance has been adopted by numerous dispersed application groups as a method of handling their jobs.
Another major difference between the two platforms comes from the fact that the Bitcoin community is mainly self-sufficient, while the Ethereum Project anticipates the participation of miners to subsidize its development. The Ethereum network is open to contributors who add code to the Ethereum software stack, forming what are known as “code forks.” This feature increases the level of participation preferred by the community, and the design also differs from the Byzantine fault model embraced by some other consensus algorithms.
As with any other open source innovation, much controversy surrounds the relationship between the Linux Foundation and the Ethereum Project. The Facebook team is supporting the work of the Ethereum Project by offering their own framework and developing applications that incorporate with it.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.952032744884491,
"language": "en",
"url": "https://www.thinkadvisor.com/2019/05/29/5-states-where-income-inequality-increased-the-most/",
"token_count": 396,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1513671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3a431f65-c37f-45a8-b852-26819d061b42>"
}
|
Here’s a look at five U.S. states where a key measure of income inequality suggests that inequality increased rapidly between 2007 and 2017.
The U.S. Census Bureau tries to summarize income inequality by computing “Gini index” figures for states, and for other geographic areas it includes in its surveys.
A Gini index, or Gini coefficient, is a tool for measuring how different the numbers in a collection of numbers are from each other. In a state where every household had the same income, the Gini index would be 0%. In a state where one household had all of the income, the Gini index would be 100%.
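For readers who want to reproduce the idea, here is a minimal sketch of the computation on a toy income list; the formula is the standard rank-based one, and the data are made up.

```python
# Minimal sketch: compute a Gini coefficient (0% = perfect equality,
# 100% = one household holds all income) via the rank-based formula
# on sorted values.
def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted / (n * total) - (n + 1) / n) * 100

print(gini([50_000] * 5))          # 0.0  -- everyone earns the same
print(gini([0, 0, 0, 0, 250_000])) # 80.0 -- one household has everything
```

With more households, the second case approaches 100%, matching the extremes described above.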
We recently posted an article about the five states that had the highest Gini index figures in 2017 — the latest year for which figures are available.
(Related: 5 States Where the Haves Have the Most)
This time, we took another approach: We looked at how much Gini index figures had changed for each state between 2007 and 2017, then ranked states by the size of their Gini index change.
The rate of change ranged from a decrease of 0.4 percentage points, in Wyoming, up to an increase of more than 4 percentage points, in one state. The median rate of increase was 1.5 percentage points.
States with growing income inequality might be good target markets for products and services that appeal to high-income or low-income prospects more than to middle-income prospects.
To see the five states with the greatest growth in their Gini index figures between 2007 and 2017, see the data cards in the slideshow above.
To create this slideshow, we used Gini index of income inequality data from the U.S. Census Bureau’s 2007 and 2017 American Community Survey results databases.
The datasets, and data filtering tools, are available here.
— Read The 5 States Where Death Is Most Unfair, on ThinkAdvisor.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9513049125671387,
"language": "en",
"url": "https://business-accounting.net/t-accounts/",
"token_count": 1613,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.023681640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:0d5e451f-e1d2-4f7b-b1ea-32f6f98a389b>"
}
|
This results in the elimination of the accounts payable liability with a debit to that account, as well as a credit to the cash (asset) account, which decreases the balance in that account. A debit is an accounting entry that results in either an increase in assets or a decrease in liabilities on a company’s balance sheet. In fundamental accounting, debits are balanced by credits, which operate in the exact opposite direction.
Thus, the use of debits and credits in a two-column transaction recording format is the most essential of all controls over accounting accuracy. Accountants record increases in asset, expense, and owner’s drawing accounts on the debit side, and they record increases in liability, revenue, and owner’s capital accounts on the credit side.
Liability, revenue, and owner’s capital accounts normally have credit balances. To determine the correct entry, identify the accounts affected by a transaction, which category each account falls into, and whether the transaction increases or decreases the account’s balance.
Let’s illustrate the general journal entries for the two transactions that were shown in the T-accounts above. We now offer eight Certificates of Achievement for Introductory Accounting and Bookkeeping. The certificates include Debits and Credits, Adjusting Entries, Financial Statements, Balance Sheet, Income Statement, Cash Flow Statement, Working Capital and Liquidity, and Payroll Accounting.
Is a T-account the same as the general ledger?
T-accounts are important because they let an accountant analyze financial transactions by category of account rather than by date, and visualize what is happening, which is useful when doing adjusting entries.
The general ledger is usually printed and stored in an organization’s year-end book, which serves as the annual archive of its business transactions. Jane wants to buy a $5,000 hot tub but doesn’t have the money at the time of the sale. The hot tub company would invoice her and allow her 30 days to pay off her debt. During that time, the company would record $5,000 in their accounts receivable.
How to Use Excel as a General Accounting Ledger
In a T-account, their balances will be on the right side. Using depreciation, a business expenses a portion of the asset’s value over each year of its useful life, instead of allocating the entire expense to the year in which the asset is purchased. This means that each year that the equipment or machinery is put to use, the cost associated with using up the asset is recorded.
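The article does not name a depreciation method, so as a rough illustration here is a sketch of straight-line depreciation, the simplest way to spread an asset's cost over its useful life; the figures are purely illustrative.

```python
# Minimal sketch: straight-line depreciation schedule.
# Each year expenses an equal share of (cost - salvage value).
def straight_line_schedule(cost, salvage_value, useful_life_years):
    annual_expense = (cost - salvage_value) / useful_life_years
    book_value = cost
    schedule = []
    for year in range(1, useful_life_years + 1):
        book_value -= annual_expense
        schedule.append((year, annual_expense, book_value))
    return schedule

# A $12,000 machine with a $2,000 salvage value over 5 years:
for year, expense, book in straight_line_schedule(12_000, 2_000, 5):
    print(f"Year {year}: expense ${expense:,.0f}, book value ${book:,.0f}")
```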
This is the system in which you record an account receivable. Anyone analyzing the results of a business should compare the ending accounts receivable balance to revenue, and plot this ratio on a trend line. If the ratio is declining over time, it means that the company is having increasing difficulty collecting cash from its customers, which could lead to financial problems. When recording an account payable, debit the asset or expense account to which a purchase relates and credit the accounts payable account. When an account payable is paid, debit accounts payable and credit cash.
Why are T-accounts used in accounting?
A T-account is an informal term for a set of financial records that uses double-entry bookkeeping. The title of the account is then entered just above the top horizontal line, while underneath debits are listed on the left and credits are recorded on the right, separated by the vertical line of the letter T.
The credits and debits are recorded in a general ledger, where all account balances must match. The visual appearance of the ledger journal of individual accounts resembles a T-shape, hence why a ledger account is also called a T-account. The general ledger comprises all the individual accounts needed to record the assets, liabilities, equity, revenue, expense, gain, and loss transactions of a business. In most cases, detailed transactions are recorded directly in these general ledger accounts. In the latter case, a person researching an issue in the financial statements must refer back to the subsidiary ledger to find information about the original transaction.
When Jane pays it off, the receivable is cleared and the amount is recorded as cash received. The amount of money owed to a business by its customers for goods or services provided is accounts receivable. Accounts receivable is recorded on your balance sheet as a current asset, implying the account balance is due from the debtor in a year or less. Most companies allow for a portion of their sales to be on credit. Often, a business offers this credit to frequent or special customers who receive periodic invoices.
Expenses normally have debit balances that are increased with a debit entry. Since expenses are usually increasing, think “debit” when expenses are incurred. In a T-account, their balances will be on the left side. The bottom set of T accounts in the example show that, a few days later, the company pays the rent invoice.
Since the service was performed at the same time as the cash was received, the revenue account Service Revenues is credited, thus increasing its account balance. A T account is a way to organize and visually show double-entry accounting transactions in the general ledger account.
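As a rough illustration of that double-entry discipline, the sketch below posts balanced journal entries and accumulates a debit and a credit column per account, mirroring a T-account; the account names and amounts are illustrative, not from the article.

```python
# Minimal sketch of a double-entry journal: every entry must have equal
# debits and credits, and each account keeps a left (debit) and right
# (credit) column, just like a T-account.
from collections import defaultdict

ledger = defaultdict(lambda: {"debits": 0.0, "credits": 0.0})

def post(description, debits, credits):
    if abs(sum(debits.values()) - sum(credits.values())) > 1e-9:
        raise ValueError(f"Unbalanced entry: {description}")
    for account, amount in debits.items():
        ledger[account]["debits"] += amount
    for account, amount in credits.items():
        ledger[account]["credits"] += amount

# Buy $5,000 of equipment on credit, then pay the invoice in cash:
post("Purchase equipment", {"Equipment": 5000}, {"Accounts Payable": 5000})
post("Pay supplier", {"Accounts Payable": 5000}, {"Cash": 5000})

for account, cols in ledger.items():
    print(f"{account:17s} Dr {cols['debits']:8,.2f} | Cr {cols['credits']:8,.2f}")
```

Note how the accounts payable account is debited when paid, exactly as described above, and the balance check enforces the "at least two accounts" rule.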
The rate at which a company chooses to depreciate its assets may result in a book value that differs from the current market value of the assets. As noted earlier, expenses are almost always debited, so we debit Wages Expense, increasing its account balance. Since your company did not yet pay its employees, the Cash account is not credited, instead, the credit is recorded in the liability account Wages Payable. A credit to a liability account increases its credit balance. Whenever cash is received, the asset account Cash is debited and another account will need to be credited.
How to Calculate Credit and Debit Balances in a General Ledger
An account’s assigned normal balance is on the side where increases go because the increases in any account are usually greater than the decreases. Therefore, asset, expense, and owner’s drawing accounts normally have debit balances.
You may find the following chart helpful as a reference. With the accrual accounting, you record a transaction whether cash has been received or not.
Revenues and gains are recorded in accounts such as Sales, Service Revenues, Interest Revenues (or Interest Income), and Gain on Sale of Assets. These accounts normally have credit balances that are increased with a credit entry.
T- Account Recording
In practice, T-accounts are not typically used for day-to-day transactions, as most accountants will create journal entries in their accounting software. The T-account is also helpful in tracking debits and credits to find accounting errors in journal entries. For different accounts, debits and credits may translate to increases or decreases, but the debit side must always lie to the left of the T outline and the credit entries must be recorded on the right side. The major components of the balance sheet—assets, liabilities and shareholders’ equity (SE)—can be reflected in a T-account after any financial transaction occurs.
Whenever an accounting transaction is created, at least two accounts are always impacted, with a debit entry being recorded against one account and a credit entry being recorded against the other account. There is no upper limit to the number of accounts involved in a transaction – but the minimum is no less than two accounts.
Another way to visualize business transactions is to write a general journal entry. Each general journal entry lists the date, the account title(s) to be debited and the corresponding amount(s) followed by the account title(s) to be credited and the corresponding amount(s).
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.953545093536377,
"language": "en",
"url": "https://everything-business.com/how-to-protect-your-investments-from-inflation/",
"token_count": 577,
"fin_int_score": 5,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.126953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6a606d57-375b-42d9-ac0f-188d0bf79c7b>"
}
|
Inflation is the rate at which the prices of goods and services increase. On the other hand, deflation shows how the prices of commodities decrease. Both deflation and inflation are economic factors that each investor should consider when planning any investment move. In times of inflation, all prices tend to rise, from a loaf of bread to a new car, and over time purchasing power falls drastically. Inflation means too much money in the system. However, if assets and investments appreciate at a rate higher than inflation, the adverse effects of the phenomenon are neutralized. Investors should therefore understand how to invest in ways that maintain, or even increase, their purchasing power.
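As a rough illustration of that loss of purchasing power, the sketch below discounts a fixed cash amount by a steady inflation rate; the 3% rate and $10,000 figure are illustrative assumptions, not numbers from the article.

```python
# Minimal sketch: how inflation erodes the purchasing power of idle cash.
def real_value(nominal, annual_inflation, years):
    return nominal / (1 + annual_inflation) ** years

for years in (5, 10, 20):
    print(f"${10_000:,} held for {years} years at 3% inflation "
          f"buys what ${real_value(10_000, 0.03, years):,.0f} buys today")
```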
Invest In Stocks
Many people do not have confidence in owning stocks. However, acquiring some equities is a proper way to combat inflation. If a company's projects do not generate returns higher than their cost, the firm will be hurt by inflation. You should invest in industries where prices increase naturally during inflation. Commodity resource companies, like oil, metals, and grains, benefit from pricing power during inflation, and you should favor the lowest-cost producer. Consider businesses, such as healthcare services and commodity industries, that hold the strongest profit margins. Dividends raise the overall return of a portfolio, so do not underestimate the power of dividends in times of inflation.
Invest In A Home
When done with the right intentions, real estate is a profitable investment. Real estate investors realize the hidden value of properties by planning to purchase and hold for some years before selling. Real estate investment can be disappointing if you expect returns within weeks or months; it requires an extended waiting period. Although there are various types of mortgages, the principle of paying one off is the same: paying a set amount every month for 10 to 15 years will leave you with a loan-free asset, and the property will continue to appreciate with time. If you borrow at a fixed rate of 5 percent today and rates rise to 9 percent five years later, your cost of debt is cheaper than that of someone borrowing for the same type of house in the future.
Invest In Yourself
An effective way of dealing with future uncertainty about rising prices is investing in yourself. Investing in yourself can help combat inflation by raising your future earning power. This means acquiring a quality education and continuing to gain skills that will match the ones needed in the future. We know that the higher the level of education, the higher the pay and the better the chances of getting a new job. With more skills, you inflation-proof your salary by qualifying for higher-level positions in times of inflation. Investing in yourself is an effective way to combat any kind of economic instability.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9317464232444763,
"language": "en",
"url": "https://odi.org/en/publications/pathways-in-the-paris-agreement-for-ending-fossil-fuel-subsidies/",
"token_count": 144,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.384765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:0a4d8032-5efa-4d94-a9a7-790ee362aa96>"
}
|
The Paris Agreement, which was agreed in December 2015, sets the framework for immediate actions and long-term strategies to prevent dangerous climate change. This includes opportunities to address a significant obstacle to the Low Carbon Transition – subsidies and public finance for fossil fuels. Taking steps to end public subsidies for high carbon energy is critical for meeting one of the key goals of the agreement: "making financial flows consistent with a pathway towards low greenhouse gas emissions and climate resilient development".
Our analysis highlights a number of key pathways within the Paris Agreement that governments can use to support the phase out of fossil fuel subsidies. This briefing sets out some examples of those pathways and highlights how governments can pursue them as a means to transition away from fossil fuels.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9572007060050964,
"language": "en",
"url": "https://policyadvice.net/insurance/insights/average-american-income/",
"token_count": 4504,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1044921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a553ae79-51cd-4842-a7a1-d45239094cdc>"
}
|
What Is the Average American Income in 2021?
Did you ever wonder how good your monthly or yearly income is compared to fellow Americans?
Knowing how much the average American makes can help you figure that out.
Before digging into details, let’s check some impressive stats.
The Average American Income — Interesting Stats and Facts
- The median household income in the US in 2019 was $68,703
- The average wage in 2019 in the US was $51,916.27
- $19.33 was the median wage per hour in the US in 2019
- The top 1% wage earners in the US contribute 20% of American annual income
- There are 34 million people below the poverty line in the US in 2019
- Full-time working women in 2019 had median earnings of $47,299.
- Full-time working men in 2019 had median earnings of $57,456
- The 35-44 age group is the highest-income age group
- In 2019, Maryland had the highest median household income in the US, at $95,572/year
Average American Income In The US
What is the average American individual income?
The real median personal income in the US in 2019 was $35,977. (ALFRED)
What is a good salary in 2021?
The median salary for workers in the United States in the first three months of 2020 was $49,764 per year. Any amount above that should theoretically be considered a good salary; however, it is not as easy as that. What is considered a good salary in one city may not be so in another. Other factors that determine a good salary are the type of job, level of education, and sadly, even gender and race.
(Source: The Balance Careers)
What percentage of Americans makes over 100k?
About 34.1% of Americans earn an annual salary of over $100,000. Around 15.5% of the population earns between $100,000 and $149,999; about 8.3% of the population earns between $150,000 and $199,999; and about 10.3% of the population earn over $200,000.
What is middle-class income in the US?
According to a research study released in 2018, about 52% of US adults have middle-class income. This income ranges from around $48,500 to $145,500.
(Source: PEW Research Center)
What is considered middle-class in 2021?
According to PEW Research, middle-class Americans are those who have an annual income that is two-thirds to double the national salary average, adjusted for household size and cost of living. For example, for a family of three, the middle-class income can be anywhere between $40,100 and $120,400.
In another research, the range for a middle-class household of three is between $53,413 and $106,827.
(Source: PEW Research Center, US News)
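A minimal sketch of that two-thirds-to-double rule is below. The $60,200 reference median is back-solved from the article's $40,100 to $120,400 band for a family of three, and the square-root household-size adjustment is my assumption; PEW's actual cost-of-living adjustment may differ.

```python
# Minimal sketch: PEW-style middle-class band, two-thirds to double the
# median income, scaled by household size (square-root equivalence scale).
def middle_class_range(national_median, household_size, reference_size=3):
    scale = (household_size / reference_size) ** 0.5
    adjusted_median = national_median * scale
    return (2 / 3) * adjusted_median, 2 * adjusted_median

low, high = middle_class_range(60_200, household_size=3)
print(f"Household of 3: ${low:,.0f} to ${high:,.0f}")  # ~ $40,100 to $120,400
```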
The median household income in the US in 2019 was $68,703.
That is an increase of 6.8% from 2018. Comparing the per-person figures with the household figures, roughly two earners, on average, contribute to each household's income.
Since 2014, the average US household income (median) has been increasing.
If you look at household income growth trends, US median household income has generally been rising since 2014: it was $58,001 in 2014, $50,987 in 2015, $62,898 in 2016, $63,761 in 2017, $64,324 in 2018, and $68,703 in 2019.
The Northeast region is the richest in the US. States like Maryland, New Jersey, and Massachusetts were at the top of the table.
In 2019, Maryland had the highest median household income in the US, with $95,572. Following it is the District Of Columbia with $93,111, while Mississippi, with $44,787/year, has the least average income.
Average American Income Demographics
Full-time working women in 2019 had median earnings of $47,299.
An increase of 3% from 2018. While for men, the increase is only 2.1%. When you look at real numbers, it stands at $57,456.
Full-time working men in 2019 had median earnings of $57,456
If you want to compare the average male annual income vs. average female annual income, the median value for full-time working male stands at $57,456, and for females, it is at $47,299. However, as per average American salaries statistics, there is no difference in men vs. women’s earning ratios in 2018 and 2019.
Average Income by Age
Age is an essential variable while considering the average income in any country. As experience is a crucial determinant of average salary, we decided to present numbers on that front.
Based on data from the US Bureau of Labor Statistics for the third quarter of 2020, 35-44 is the highest-income age group.
Age Group Monthly Median Average Wage
65+ years of age: $4024 per month
55–64 years of age: $4432 per month
45–54 years of age: $4616 per month
35–44 years of age: $4516 per month
25–34 years of age: $3672 per month
20–24 years of age: $2516 per month
Source: Bureau of Labor Statistics. The BLS publishes these numbers every quarter, as median weekly wages; here they have been converted to monthly figures to give a complete picture of the impact of age on average monthly wages in the US.
According to a report provided by the US Census Bureau, the real median household income for the United States in 2017 was $61,372.
An interesting aspect worth noting here is that this represents a record high, considering the fact that the income has always been lower. For instance, the previous record put the average household income at $60,000 annually, and this was before the economic crisis, back in 1999. Over the last couple of years, the numbers have fluctuated quite a bit before hitting this threshold, as determined by statistics concerning the average American salary.
Source: US Census Bureau
Average Americans Salaries
What is the average American Salary?
The average wage in 2019 in the US was $51,916.27, and the average median wage was $34,248.45.
There is a big difference between the SSA's average wage and median wage figures. The averages are bigger because high earners pull them up. There is also a difference between the average wage and average household income in the US: wages are just the income individuals earn from their jobs, while household income also includes capital gains and dividends and counts all earners in the household.
Income levels in the US
How many Americans are poor?
There are 34 million people below the poverty line in the US. The poverty rate in the US decreased from 11.8% in 2018 to 10.5% in 2019. Since 2014, the poverty rate in America is declining every year. It decreased from 14.8% in 2014 to 10.5% in 2019. When you look at the percentage points, it might not seem that big, but take a look at real numbers. In 2018-2019, America had 4.2million lesser people below the national poverty line.
What is the lowest paying job?
According to the Bureau of Labor Statistics, the lowest-paid people are the Combined Food Preparation and Serving Workers that earn a median pay of $22,140. As you can see, this is the main reason why fast food and restaurant workers are always at the center of higher living wage debates. With an hourly rate of about $10 (and even less in some states), this category of workers are the lowest paid in the country.
(Source: Go Banking Rates)
What jobs make the most money per hour?
According to the Bureau of Labor Statistics, jobs in the medical and health field have the highest hourly salary rates. Doctors earn roughly an average salary of $89 per hour and the hourly wage is higher for some specialties and lower for others. For example, anesthesiologists have an average pay of about $113 per hour, while a general dentist has an hourly pay of $77.
(Source: Bureau of Labor Statistics)
How much did the average person make a year in 2020?
A US worker typically earns about $94,700 per year. The lowest median American salary is about $24,000 while the highest average salary is $423,000, although the actual maximum salary is much higher. This salary includes housing, transport, and other benefits. Salaries also vary drastically between different industries and job titles.
(Source: Salary Explorer)
What job makes the most money per month?
Anesthesiologists have the highest US wages with a salary potential of $411,000 per year. Anesthesiologists play a crucial role during surgical procedures and their training takes four to six years of residency plus fellowship program or private practice.
(Source: Career Addict)
What annual wage is considered rich?
If you want to be considered rich in the United States, you need to have a net worth of at least $2.3 million, accordion to a Charles Schwab wealth survey.
What percentage of the US population lives paycheck by paycheck?
An estimated 63% of Americans say they have lived paycheck to paycheck since the coronavirus pandemic lockdown in March 2020, according to one salary statistic. Only 53% of the survey's respondents said they were not living check to check before the pandemic, and about 44% said they were living beyond their means before the pandemic even started.
(Source: Highland Solutions)
What percent of Americans make less than 20,000 a year?
According to Statista, about 9.1% of Americans make under $15,000 and an additional 8% of Americans have an annual pay between $15,001 and $25,000. The BLS states that people who earn an annual wage below $20,000 are considered impoverished.
What careers make you rich?
A-list celebrities make tens of millions of dollars every year. However, these aren’t your average career. Typical careers that make the most are, not surprisingly, from the medical field. An anesthesiologist has the potential to earn over $400,000 in a year while a surgeon has an annual median wage of $353,220 per year with a growth percentage of 18%. Other highest paying jobs also belong to the medical fields.
How many Americans form the richest 1% of the world’s population?
Over 19 million Americans from a global total of 42 million are among the richest 1% in the world. This number is way ahead of any other country, the second being China with 4.2 million citizens in the world’s top 1%. This indicates that the United States is indeed the “Land of Opportunities.”
(Source: Credit Suisse)
People of which ethnicity are the most impoverished in the United States?
Income inequality is closely linked to the racial divide in the United States, according to national average pay stats. Poverty is most acute among black Americans and the Latinx community. About 21% of black Americans and 18% of Latinx live below the poverty line as compared to only 8% of white people. In addition, the average white household has 41% more wealth than an average black family and 22% more wealth than a Latinx family.
(Source: Inequality.org, Statista)
How much did the pandemic response policies affect the poverty rate amongst racial groups?
The COVID-19 pandemic response reduced the poverty rate amongst all ethnic groups, according to salaries statistics. Black Americans have a poverty rate of 15.2% after the post-pandemic policies but it was expected to be 20.5% without them. Hispanics showed a 13.7% poverty rate with the policies and an 18.2% rate without them. Among white people, the estimated poverty rate is 6.6% with the pandemic policies and a projected 9% without them.
The top 1% wage earners in the US contribute 20% of American annual income.
Let me add some more numbers to emphasize income inequality in the US.
The top 20% of families in the US in 2018 made half of US annual income.
The top 1% average annual income increase by 157.3% between 1979-2017.
These numbers from EPI explain – wages of the top 1% income earners in America are growing at an incredible pace. Other numbers from EPI also say that the top 1% average income in 2015 was 26.3 times more than a family in the bottom 99%.
You need to get $488,000 per year to be in the top 1%.
You need to earn that much to be in the top 1% of income earners in the US. Based on data curated by Bloomberg, the bar is even higher on a global scale: you need $744,400 to be in the top 1% of income earners globally.
You need $2 million to be in top 0.1%
In the same way, according to Bloomberg, you need a $10 million annual income to be in the top 0.01% of earners.
Wealthiest 20% own 80% of all household wealth in the US.
The top 20% of income earners hold 80% of the wealth in the US, and the fact that the top 1% own 40% of the wealth further emphasizes the scenario of income inequality. The disparity sounds even worse when you add that the bottom half of earners own only 2% of the wealth.
Average American Income by State
Below, you will see more about the average household income (median) for each US state, for 2017, based on data compiled by the US Census Bureau:
Income Distribution in the US
The following chart, based on data provided by Statista, helps paint a better picture of how income is distributed in the United States according to the percentage of households in specific income brackets. [Chart not preserved in this extract.]
The US has been dealing with an income inequality problem for years, given that the highest earners tend to have a considerably larger income when compared to most individuals. Therefore, in practice, you can expect significant differences for all income categories, including the average retirement income, average white-collar income, and average blue-collar income.
What Is the Average American Income per Year?
Before anything else, it is important to point out the fact that the numbers that we are about to mention refer strictly to the income of one individual, thus completely disregarding the income of their family or the income of the household. As such, numbers released by the US Bureau of Labor Statistics, show that in the final fiscal quarter of the year 2018, the average salary in America for your standard full-time employee was approximated at $46,800. It is important to mention that this is an increase of at least 5% when compared to the previous year, thus illustrating that Americans are currently earning higher salaries. It is certainly interesting to compare this to the cost of living. As such, studies are currently being carried out to determine whether the latest increase in the US average income has led to higher prices as well.
The following table will give readers more insight into how the real US median income has changed for households over the last couple of years:
[Table: real median household income by year; the data rows were not preserved in this extract.]
Source: The Balance
Benefits of understanding average income
1. It is a crucial factor in budgeting, understanding your own financial needs, and negotiating a new salary.
2. Understanding what is the average income in the US can give us a broad picture of standards of living and how much it costs to live in the United States
3. If you plan to move to the states, knowing this information can help you do better salary negotiations and budget planning for the stay.
Factors That Influence the Average Income of a Person in the US
Numerous factors have either a direct or indirect impact on the average income of Americans. For this article, we will mention the main factors that influence the income at the individual’s level, rather than macroeconomic factors that influence the national income grid, according to statistics on the average American income.
Line of work
As you might expect, some jobs pay more, whereas some pay less. For instance, information technology (IT) employment generally offers higher salaries than other popular domains. The median pay for an IT worker is estimated at $75,000–100,000, whereas the median pay for a person working in the educational market is $50,000–75,000. Therefore, there are industry averages for each niche of work.
During the last couple of years, with the increasing popularity of remote jobs, the geographic location has become less important in determining a person’s income. Despite this aspect, people living in expensive areas, such as San Francisco or New York, can generally expect a higher income since it is mandatory when ensuring that employees can sustain their lifestyle. Therefore, the average wage in the US is considerably lower when compared to these high-income areas. Higher prices lead to higher salaries according to economic laws, yet a higher salary in a high-income location will not generally buy you a better quality-of-life, given the high prices for all products and services. The median salary for a US-based tech worker is averaged at $100,000, whereas in India, the same employee’s median income would be under $25,000 annually. This helps us better understand the average income in the USA compared to the rest of the world.
Experience and skill
Experience and skill are some of the best-known factors that influence national income averages. Entry-level salaries are generally lower. That said, pay increases flatten out after 15 years of experience unless employees are promoted into managerial positions. Looking at the American average wage in the IT market, employees with 20+ years of experience made roughly $140,000 per year in 2017, whereas those with less than 5 years of experience made less than $80,000.
Source: Puppet Salary
Noticeable Aspects Concerning the Average Income in the US
The US Census Bureau generally offers two main averages: The mean income and the median income.
The mean: It takes all US-based income and divides it by the number of people who reported their earnings (your standard average). The issue here is that, as in most of the world's countries, there is massive income inequality, meaning that people who earn a lot of money drive the average up (see the sketch after the next item).
The Median: It represents the point at which 50% of people make more, and 50% of people make less — it’s better suited for determining a country-wide average.
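A quick sketch of the difference on made-up incomes shows why the median is the better "typical" figure; a single very high earner pulls the mean far above the middle household.

```python
# Minimal sketch: mean vs. median on a skewed income distribution.
from statistics import mean, median

incomes = [28_000, 35_000, 41_000, 52_000, 60_000, 75_000, 1_500_000]
print(f"Mean:   ${mean(incomes):,.0f}")    # pulled up by the one high earner
print(f"Median: ${median(incomes):,.0f}")  # the middle household: $52,000
```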
What is the average American family income?
In 2017, the US median family income was estimated at $73,891 by the US Census Bureau. Keep in mind that this is the median income, meaning that it is not your usual mathematical average since that would considerably push the numbers up due to the incredibly large amounts of money earned by the richest US citizens. As such, we should note that the US median income represents the point where 50% of people start earning more and 50% start earning less — hence why it is used as a solid point of reference.
Do keep in mind that the average family income represents the amount earned by families consisting of at least two people that live in the same household at the same time.
Over the last 3 years, this average income has increased by roughly 8.17%.
What percentage of income does the average American pay in taxes?
The answer to this question is quite complicated, given the complex taxation framework that operates in the United States. Based on this aspect, the taxation system works via tax brackets — in 2019, the tax brackets implemented by the federal government were between 10% and 37% of the total income, subject to change based on the individuals’ incomes. Because of this, it is yet again important for us to apply the median formulas to help determine a credible average paid in taxes.
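A minimal sketch of how bracketed (marginal) taxation works is below; the thresholds approximate the 2019 single-filer brackets and should be treated as illustrative rather than authoritative.

```python
# Minimal sketch: marginal tax on income that spans several brackets.
BRACKETS = [  # (upper bound of bracket, marginal rate) -- approximate 2019
    (9_700, 0.10), (39_475, 0.12), (84_200, 0.22), (160_725, 0.24),
    (204_100, 0.32), (510_300, 0.35), (float("inf"), 0.37),
]

def federal_tax(taxable_income):
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable_income <= lower:
            break
        # Tax only the slice of income that falls inside this bracket.
        tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

income = 60_000
print(f"Tax on ${income:,}: ${federal_tax(income):,.0f} "
      f"(effective rate {federal_tax(income) / income:.1%})")
```

Note how the effective rate (about 15% here) lands well below the 22% top marginal bracket this income reaches, which is why the median figures quoted below are lower than the headline bracket rates.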
What’s more, research efforts have estimated that in 2018 Americans paid roughly $10,480 in taxes, which sums up to 14% of the average household income.
What is considered a good salary in the United States?
Firstly, it is important for us to agree on what a good salary actually represents, since there are numerous conflicting opinions on this matter. First off, "good" is above average: it should afford people enough money to cover living expenses, food, transportation, utilities, apparel, and credit payments, while leaving a little wiggle room for savings and occasional purchases. Therefore, a good salary is certainly higher than the average salary in America.
Similarly, a good salary depends on the area where you live. For instance, for those living in the San Francisco area, $100,000 per year might be considered average. On the other hand, a $50,000 average yearly income is good enough for people living in more rural areas. Therefore, we can use this information to state that a good salary in the urban area ranges from $70,000–150,000, whereas a good salary in rural areas ranges from $50,000–$80,000. Of course, the median household income also varies considerably.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9632129669189453,
"language": "en",
"url": "https://restorical.com/what-is-insurance-archaeology/",
"token_count": 736,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1416015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:08d579fb-dd3f-48bc-ace3-1951f8e4ba70>"
}
|
In the most general terms, insurance archaeology is the digging up of old insurance coverages for customers, primarily businesses, whose historical insurance records may have been lost or destroyed. By piecing together a business’s historical insurance portfolio, insurance archaeologists help their clients understand the full extent and limits of their liability coverage.
Unexpected effects of certain types of products or businesses may not become apparent until years or even decades have passed. These can include both health and environmental problems. Regardless of how much time passes between exposure and its effects, companies’ liability for those effects does not lessen. Businesses purchase liability insurance to mitigate the financial effects of such problems, but over time, information about those coverages can disappear.
Furthermore, some state courts have ruled that policyholders who are liable for environmental damage are entitled to insurance coverage not only from the insurance policies in effect at the time the damage was discovered but also under every policy that was in effect while the damage occurred unnoticed. So when lawsuits come, these businesses need a complete understanding of what they are covered for, both currently and historically. Sometimes that information can be hard to find.
How Do Companies Lose Their Insurance Documentation?
Insurance policy information gets lost for many reasons. The policies in question are general liability ("slip and fall") policies, traced back to a company's first use of general liability coverage. Changes in policies, business relocations, mergers and acquisitions, and personnel changes can let some paperwork fall through the proverbial cracks. Bundled insurance packages may include coverages that a company might not realize it paid for. Purging of tax paperwork might mean the loss of old insurance information, too, as companies get rid of decades-old documents they feel they no longer need.
Mergers and acquisitions within the insurance industry itself can also muddy the waters as coverages are tweaked, brokers come and go, and company names change. Insurance archaeology is all about digging through these changes to find what coverages were in place when, and which coverages are still in effect.
Who Uses Insurance Archaeology?
The obvious need for insurance archaeology comes when there is a lawsuit for a long-hidden effect. This might bring to mind a number of high-profile lawsuits around products like asbestos, thalidomide, and L-tryptophan, but any number of smaller and less life-threatening long-term effects may be involved. A business facing such a lawsuit may rely on insurance archaeology to understand the full extent of its financial liability and what policies it had in place sometimes decades into the past.
Industrial real estate sales, too, are a common beneficiary of insurance archaeology. Environmental studies can reveal contamination left behind by previous businesses, such as dry cleaners and factories. In such cases, insurance archaeology doesn’t focus on the liability coverage of a single business but on the historical insurance coverage of a plot of land. Previous owners may be liable for contamination, so current sellers need to understand not only their own historical insurance portfolio, but the policies of those businesses that used the land before them.
Insurance archaeology may uncover old policies that could pay for the costs of the environmental study, site cleanup, or legal settlements that the company might otherwise be responsible for.
America is a litigious country. Sometimes it seems like the “right to sue” is one of our most exercised rights. Class-action lawsuits are proliferating, and courts are awarding ever larger settlements. Businesses purchase liability insurance to provide financial protection against such lawsuits. Insurance archaeology is the means for uncovering the extent of that protection and putting it to work.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.892954409122467,
"language": "en",
"url": "https://studycourse.fun/answered/?paper_id=70322",
"token_count": 427,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.03466796875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2dd7918b-8cf9-412c-8150-caa7a27017bf>"
}
|
A firm is planning to build one of two types of plants. The short-run total cost of Plant A is C_A = 80 + 2*Q_A + 0.5*Q_A^2, while the short-run total cost for Plant B is C_B = 50 + Q_B^2.
(a) (3 points) Determine the marginal cost functions for each of the 2 plants, and plot them in one graph.
(b) (4 points) If an output of 8 units is planned, which plant should be built? How large of an output is required to justify building Plant A?
(c) (3 points) Suppose that the firm already has built both plants. If planned output is sufficiently large, the firm should use both facilities, and just one (which one?) if the planned output is sufficiently small. Explain why.
(d) (5 points) Suppose planned production is 22 units. How should the firm divide the output between the two plants so as to minimize the overall expense of production?
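Since the question hinges on differentiating the two cost functions, here is a hedged worked sketch (my own derivation, not the site's paid solution): MC_A = 2 + Q_A and MC_B = 2*Q_B follow directly from the given costs, and a fixed total output is divided most cheaply by equating the two marginal costs.

```python
# Worked sketch for part (d): split 22 units between the plants.
def total_cost(qa, qb):
    return (80 + 2 * qa + 0.5 * qa**2) + (50 + qb**2)

# Equate marginal costs: 2 + qa = 2*qb with qa + qb = 22
#   => 2 + (22 - qb) = 2*qb  =>  qb = 8, qa = 14, MC = 16 at both plants.
qa, qb = 14, 8
print(total_cost(qa, qb))  # 320.0

# Brute-force check over every integer split of 22 units:
best = min((total_cost(q, 22 - q), q) for q in range(23))
print(best)  # (320.0, 14) -- confirms the analytic split
```

The same marginal costs settle part (b): at 8 units, Plant B costs 50 + 64 = 114 versus Plant A's 80 + 16 + 32 = 128, and equating total costs shows Plant A is only justified for outputs above 10 units.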
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9594920873641968,
"language": "en",
"url": "https://www.stlouisfed.org/on-the-economy/2021/march/state-capacity-unrelated-covid-spread",
"token_count": 629,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:496bd314-5911-420f-af2b-81457a708c78>"
}
|
Do countries with higher state capacities have better COVID-19 responses in terms of containing the spread of virus? Not necessarily, according to a recent Regional Economist article.
“State capacity” refers to how effectively a government delivers services to its citizens, the authors explained in the article, which was written by Yi Wen, an economist and assistant vice president; Iris Arbogast, a research associate; and Brian Reinbold, a former research associate. They examined COVID-19 response as a potential revealed measure of state capacity and compared it with two conventional measures.
The authors found that wealthier nations with greater state capacities haven’t necessarily had the best outcomes. In addition, anecdotal evidence from six countries suggests actions taken by a government may have a greater effect on containment than wealth does.
The authors looked at the timeline of COVID-19 spread in three countries that had particularly effective responses—Australia, South Korea, and Uruguay—and in three countries that didn’t—Brazil, the U.K., and the U.S.
They noted that these six countries had very few COVID-19 deaths as of March 11, when the World Health Organization declared a pandemic. But soon after, the six countries’ experiences began diverging.
The authors found that total deaths per million were over 600 each in Brazil, the U.K., and the U.S. by Oct. 1, while the rate didn’t exceed 35 in the other three countries. (See table below.)
[Table: total deaths per million by country, as of April 15 and Oct. 1; the data rows were not preserved in this extract.]
The authors noted that the countries differed in testing and also in implementing social distancing policy measures, such as canceling public events.
“The experiences of these six countries, while anecdotal, suggest that government action may have impacted the spread of the COVID-19 pandemic,” Wen, Arbogast and Reinbold wrote. “If this is the case, COVID-19 would be a potential revealed measure of a country’s state capacity in responding to national emergencies.”
To that end, the authors examined the relationship between two commonly used indicators of state capacity (tax revenue as a percentage of GDP and a government effectiveness index) and a measure of COVID-19 spread (total deaths per million). The relationship is shown in the figure below.
The authors pointed out that they would expect governments that have higher state capacities to be more successful against the pandemic.
“Instead, there is no relationship between a government’s resource levels and the total number of coronavirus deaths. There is also no statistically significant relationship between the government effectiveness index and total deaths,” they wrote.
Though they found no statistically significant relationship, the authors added that the figure shows a slight negative relationship between government effectiveness and total deaths in high- and low-income countries and a positive relationship in middle-income countries. Furthermore, they noted that high-income countries have had more deaths and cases overall, despite having more resources.
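For readers who want to replicate the shape of this exercise, the sketch below computes a simple Pearson correlation between a state-capacity measure and deaths per million; the six data points are invented placeholders, not the authors' data.

```python
# Minimal sketch: correlate a government effectiveness index with
# COVID-19 deaths per million (statistics.correlation needs Python 3.10+).
from statistics import correlation

effectiveness_index = [1.6, 1.4, 0.9, -0.2, 1.5, 0.5]  # hypothetical values
deaths_per_million  = [35, 20, 610, 640, 625, 30]       # hypothetical values

print(f"Pearson r = {correlation(effectiveness_index, deaths_per_million):.2f}")
```

An r near zero on real data would reproduce the authors' finding of no statistically significant relationship.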
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.941877007484436,
"language": "en",
"url": "http://spectum.biz/ta38b/477a06-how-does-consumer-spending-affect-the-economy",
"token_count": 6701,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0186767578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e6ab549c-a206-49d1-bad2-632ce6062ffd>"
}
|
Consumer spending is the single most important driving force of the U.S. economy. It makes up more than 70 percent of the economy, usually drives growth during economic recoveries, and is the largest part of aggregate demand at the macroeconomic level, which makes it one of the biggest determinants of economic health. The 70 percent figure is widely repeated: the U.S. economy is commonly described as predominantly driven by consumer spending, accounting for approximately 70 percent of all economic growth.
How consumer spending is measured
Consumer spending is measured in several ways. The most comprehensive is the monthly Personal Consumption Expenditures (PCE) report. The Consumer Expenditure Survey, released each year by the Bureau of Labor Statistics, and retail sales figures are other components of the picture. Sentiment is tracked separately: the Consumer Confidence Index measures how confident people are about the future, including their expectations of inflation, although the questions asked of consumers (for example, whether jobs are currently plentiful, not so plentiful, or hard to get) are more about business conditions than spending attitudes. Each month the Conference Board also releases its Leading Economic Indicators for the United States and nine other countries. Understanding how these confidence surveys work can help investors make better decisions; two sources of uncertainty shown to affect consumers' future expectations are inflation and unemployment.
What drives consumer spending
There are two components of consumer spending: induced consumption, which is affected by the level of income, and autonomous consumption, which is not. Durable goods such as cars are bought infrequently, and consumers are more likely to buy them during an expansion than during a recession; more frequently, we buy non-durable goods such as gasoline, groceries, and clothing. Other key influences include changes in households' real disposable income (Yd), the level of and changes in employment and job security, and relative prices: prices, affected by the rate of inflation, naturally have a significant impact on spending. Income per person reveals whether each person's standard of living is improving, because it tells you how much each person has to spend; aggregate income measures might rise just because the population increases.
Spending, confidence, and the business cycle
When consumers spend, companies profit and the economy does well. If the economy is strong, consumers have more purchasing power and money is pumped into the thriving economy. As income increases, so does demand; if manufacturers ramp up to meet demand, they create jobs, workers' wages rise, and that creates more spending. It's a virtuous cycle leading to ongoing economic expansion. If the economy is struggling, the reverse is true. Sentiment has a powerful ability to cause fluctuations, because if consumers' view of the state of the economy is bad, they become reluctant to spend. When consumer confidence declines, consumers grow less certain about their financial prospects and begin to spend less money, which in turn hurts businesses; businesses would eventually go bankrupt and lay off workers, and the government would then have no one to tax. It becomes a self-fulfilling prophecy that's hard to stop. (See "What Caused the Great Depression?") Damaged consumer spending also hits two other major parts of a typical economy: less tax revenue is collected and less money is deposited into banks.
Expectations matter as well. If people are confident, they are more likely to spend now, and if consumers expect ever-increasing prices, they will also spend more now, which feeds inflation. That is why the Federal Reserve targets a 2 percent inflation rate, and why warding off inflation is a primary mandate of the nation's central bank. The housing market is closely linked to consumer spending, too: when house prices go up, homeowners feel better off and more confident. Interest rates cut the other way. Consumers with large mortgages (often first-time buyers in their 20s and 30s) are disproportionately affected by rising interest rates; if you're just treading water now, you'll likely start drowning if rates rise.
The Keynesian view and its critics
We hear so much about the consumer because the media and political pundits still live under the spell of Keynesian economics, which teaches that demand creates supply. John Maynard Keynes, one of the most significant economists of the 20th century, advocated government spending to stimulate the economy even if the government must run a deficit to do so, and Keynesian theory says the government should stimulate spending to end a recession. In response to the financial crisis, the federal government increased spending markedly in order to stimulate growth, and researchers have since measured the response of household spending to the economic stimulus payments disbursed in mid-2008. Supply-side economists, by contrast, believe the government should cut business taxes to create jobs.
Critics of the consumer-centric view go further. Granted, personal consumption expenditures represent 70 percent of gross domestic product, but GDP only measures the value of final output; it deliberately leaves out a big chunk of the economy, namely intermediate production or goods-in-process at the commodity, manufacturing, and wholesale stages, to avoid double counting. By a broader measure that adds this intermediate output back in, dubbed gross domestic expenditures (GDE), consumption represents only about 30 percent of the economy, while business investment (including intermediate output) represents over 50 percent. On this view, business and investment spending are the true leading indicators of the economy and the stock market; after all, the consumer did not come up with the idea of personal computers, SUVs, fax machines, cell phones, the Internet, or the iPhone. Some empirical work echoes this: household spending raises GDP in the short run but does not support growth, and may even reduce it, in the long run. In the same spirit, increased savings can actually stimulate the economy even if consumer spending is anemic: when people save more, interest rates fall and businesses can afford to replace their old equipment with new tools, spend more on research and development, or develop new production processes. In the long run, new business strategies and spending patterns increase productivity and lower prices to consumers, which means consumers' purchasing power increases. As the St. Louis Fed concludes, "A higher saving rate does mean less consumption [in the short run], but it could also result in more capital investment and, ultimately, a higher rate of economic growth."
Debt and other drags on spending
Consumer debt begins to negatively affect the health of the economy when it forces consumers to spend less. If consumers are to continue to drive the economy, they must be in a sound financial position; if they become overburdened with debt, they cannot. Headline statistics often lack context and end up misleading us about the true impact of consumer credit card debt on our economy: while the average credit card debt might be around $9,000, the median consumer credit card debt is much lower, at $2,200. In order to help manage your debt and do your bit for the economy, don't overcommit to debt, Professor Sgro advises. Shoplifting also hurts the economy, through profit loss, reduced consumer spending, job losses, and higher taxes. On the other side of the ledger, government spending on welfare benefits helps to reduce levels of inequality; benefits to the unemployed, for example, enable them to maintain a minimum income and avoid absolute poverty.
The pandemic and the consumer
Surveys tracking consumer behavior and decisions across 18 countries show the consumer adapting as COVID-19 cases continue to increase and the economy shrinks: people work and play from home because of restrictions, and, due to lower incomes, most avoid spending on vacations, luxury cars, and dining at expensive restaurants. According to a study from the National Bureau of Economic Research (NBER), the pandemic's effect could cut consumer spending by a very large amount in Q4. Personal income decreased 1.1 percent and consumer spending decreased 0.4 percent in November as federal economic recovery payments and pandemic-related assistance programs continued to wind down.
Consumer spending and GDP
Finally, consumer spending enters the national accounts directly. The expenditure equation is designed to measure total spending in the economy at a point in time and includes consumption spending plus investment spending plus government spending; it is typically abbreviated as C+I+G=GDP. Another relationship between consumer spending and GDP can therefore be seen in the way reduced consumer spending lowers the calculated GDP.
But in general, CPI relative importances seem to reflect the sorts of adjustments we may expect from consumers, as the economy goes through a … Consumer confidence surveys measure changes in consumer attitudes, including expectations of the economic situation and households’ own financial positions, and their views on making major purchases such as a new car or spending on expensive home improvements. A pickup in government spending, particularly defense, has helped drive a broad acceleration in U.S. economic growth, according to an analysis of Commerce Department data. Some people will borrow more against the value of their home, either to spend on goods and services, renovate their house, supplement their pension, or … Mark Skousen is a Presidential Fellow at Chapman University, editor of Forecasts & Strategies, and author of over 25 books. Borrowing would keep the government and factories open. Among the most important factors negatively affecting consumer spending are the expectations of consumers, their level of debt, and wealth of households. Somer G. Anderson is an Accounting and Finance Professor with a passion for increasing the financial literacy of American consumers. Therefore, sentiments prove to be a powerful predictor of … The families provide labor to firms and these in turn offer goods and services for consumption. Bureau of Labor Statistics (BLS). A recent study by the St. Louis Fed concluded that in the short run, “a higher saving rate in the current quarter is associated with faster (not slower) economic growth in the current and next few quarters” (Daniel L. Thornton, “Personal Saving and Economic Growth,” Economic Synopses, St. Louis Fed, December 17, 2009). Sometimes the product is consumed as quickly as a Big Mac. Tracking consumer behavior and decisions across 18 countries. Consumer spending accounts for about two-thirds of all spending in the economy (Skousen, 2007). Deloitte’s State of the Consumer Tracker aims to gauge the level of concern among consumers across 18 countries about their health and personal finances due to the pandemic and its economic impact. The success or failure of a nation's economy can greatly affect consumer behavior based on a variety of economic factors. The income increases as profits for businesses/traders increase or salaries of workers increase because increase is an incentive for them to keep doing what they are doing. Income inequality is the third determinant of spending. Consumer spending makes up more than 70 percent of the economy, and it usually drives growth during economic recoveries.” —“Consumers Give Boost to Economy,” New York Times, May 1. Who is the catalyst that determines the quantity, quality, and variety of goods and services? Bureau of Economic Analysis. In fact, the biggest problem for owners and employees of closed retail stores was that consumers shifted to … If you want to know where the stock market is headed, forget about consumer spending and retail sales figures. Bureau of Economic Analysis. When … Why Rising Prices Are Better Than Falling Prices. But, expecting stable prices, consumers do not rush to buy durable goods in order to beat expected higher prices. Five percent of Americans accounted for half of all U.S. health-care spending in 2017. Consumer sentiment is the general attitude of toward the economy and the health of the fiscal markets, and they are a strong constituent of consumer spending. 
"The Australian economy is still running at two speeds, but the divide now is between booming public spending and anaemic private spending. "What is Inflation and How Does the Federal Reserve Evaluate Changes in the Rate of Inflation?" Here are the questions consumers are asked to determine their “expectations”: In other words, the much-touted “consumer” confidence index is more a forecast by consumers for business, employment, and durable goods than “retail sales” and consumer spending. National Transfer Accounts. Economic Policy Institute. Proponents of government spending claim that it provides public goods that markets generally do not, such as military defense, enforcement of contracts, and police services.1Standard economic theory holds that individuals have little incentive to provide these types of goods because others tend to use them without paying. That makes disposable income one of the most important determinants of demand. These additional components of the gross domestic product aren't as critical as consumer spending. But even with all these positives, Christmas spending can still affect the economy negatively. But too much of a good thing can also be damaging. Through biweekly surveys, the tracker also looks at key spending decisions such as what to purchase (say, groceries, furnishing, … When housing demand increases, that could mean consumer spending decreases, and the economy slows once again. According to Keynesians, consumer spending drives the economy and saving is bad when the economy is in a short-term contraction. U.S. consumer spending, the biggest part of the economy, saved the day for the record-long expansion, but a big decline in business investments raised concerns about how much longer it … If consumers spend less, the economy is said to have stalled, possibly leading to a longer-term recession. Consumer spending is an important part of the economy. "Why Does the Federal Reserve Aim for 2 Percent Inflation Over Time?" Is retail sales a leading economic indicator? Do you expect business conditions to be good, bad, or normal over the next six months? Deloitte’s State of the Consumer Tracker aims to gauge the level of concern among consumers across 18 countries about their health and personal finances due to the pandemic and its economic impact. When consumers are confident in their futures, they tend to spend money and drive economic growth higher. But even with all these positives, Christmas spending can still affect the economy negatively. The most important determinant is disposable income. That's the average income minus taxes. Without it, no one would have the funds to buy the things they need. Retail sales are an important economic indicator because consumer spending drives much of our economy. "Monthly Retail Trade," Accessed Dec. 5, 2019. She has been working in the Accounting and Finance industries for over 20 years. But companies won't boost production without demand no matter how low taxes are. Many urban consumers, increasingly working from home … That creates inflation.. With billions of taxpayer dollars appropriated toward this effort, policy makers should examine whether federal spending actually promotes economic growth. According to the National Learning and Resource Center, offenders confess that for each 48 times they shoplifted, they were caught only once and turned over to the police 50 percent of the time. Watching the trend on consumer spending can serve as an invaluable tool for managing your investments. 
When consumers are more concerned with saving than spending, this leads to a shift in the balance of the economy that is reflected in reduced total GDP. Commodity prices have been rising quite rapidly over the past year or so since last summer, and that is slowing spending in this country. Thus the truth is just the opposite: Consumer spending is the effect, not the cause, of a productive healthy economy. Are you planning a U.S. or foreign vacation within the next six months. A lot of the US economy consists of buying and selling things that are consumed. The median is lower because a lot of consumers (more than 50%) don’t owe any credit card debt at all. The economic climate has a big impact on businesses. Though economists and analysts may argue about the extent to which gas prices have an effect on the economy, there is, at the least, a correlation between consumer confidence, spending … Consumer Spending Is Keeping the Economy From Shrinking--But a New Survey of 10,000 Americans Says That Might End in 2020. Consumer debt begins to negatively affect the health of the economy when it forces consumers to spend less. How Consumer … When house prices go up, homeowners become better off and feel more confident. “Private Consumption,” Accessed Jan. 18, 2020. Low inflation is a good policy for … Availability and cost of consumer credit – affects willingness to borrow. Business spending on capital goods, new technology, entrepreneurship, and productivity is more significant than consumer spending in sustaining the economy and a higher standard of living. Yahoo Finance's Brian Cheung breaks down the impact of consumer spending on the economy. Consumer spending, consumption, or consumption expenditure is the acquisition of goods and services by individuals or families. There are five determinants of consumer spending. Consumer spending drives our economy forward, and when people aren’t using their credit cards, the economy isn’t growing. It is similar to the PCE but has a little more detail about types of households. He is the former president of FEE and now produces FreedomFest, billed as the world's largest gathering of free minds. "Prices & Inflation," Accessed Dec. 5, 2019. An increase in total demand from one good may be at the expense of another good, but an increase or decrease in the amount of selling effort may effect the total volume of consumer expenditure, given a fixed level of income. All you need is a phone and some internet. Do you expect jobs to be more plentiful, not so plentiful, or hard to get over the next six months? When consumer confidence is low people save more because of fears about job security and future income. By Steve Reed and Malik Crawford During economic booms, recessions, and recovery periods, consumers’ purchasing behavior changes. In nominal terms, total household spending will only be 1.2 per cent higher than what it was in 2019 (Rs 123 lakh crore in 2021, compared to Rs 121.6 lakh crore in 2019), indicating the extent of the impact that the COVID-19 pandemic has had on consumer spending. Every quarter, when the government releases its latest GDP figures, we hear the familiar refrain: “What the consumer does is vital for economic growth.” “If the consumer starts saving and stops spending, we’re … If demand increases but manufacturers don't increase supply, then they will raise prices. 
, increased savings can actually stimulate the economy isn ’ t growing affect each consumer equally March., this is a Presidential Fellow at Chapman University, editor of Forecasts &,! It creates inflation. if consumers spend, companies ’ profit and the savers/capitalists who funded their inventions living is improving... Index, '' Accessed Dec. 5, 2019 unemployed enable them to maintain a minimum income avoid. Government, spending on the economy is strong, consumers do not rush to buy goods. Quality, and productivity gains in real disposable incomes ( Yd ) households! No one to tax rush to buy durable goods in order to beat expected higher.! Toward high-income earners, not so plentiful, not so plentiful, not so plentiful, or normal and! To provide the goods and services. every one of us is a phone and some internet periods, consumers more! Economy shrinks, the reverse is true direct taxes and state Welfare payments with large mortgages former! The PCE but has how does consumer spending affect the economy direct impact on businesses consumer expenditure Survey, '' Accessed Dec.,! Current business conditions good, bad, or normal ongoing economic expansion and from... Economy slows once again you can make money online for your Christmas expenses, 2020 unemployed them! $ 2,200 invaluable tool for managing your investments the economic climate has little! Almost two-thirds of all spending in turn helps the economy healthy economy know. Profits, and the economy is predominantly driven by consumer spending accounts for about two-thirds all. You doubt this, think about what would happen if everyone stopped spending increases, that could consumer... Economic Indicators for the Balance should examine whether Federal spending actually promotes economic growth losses and higher.. Losses and higher taxes income, '' Accessed Dec. 5, 2019 benefits when most of the GDP creates if... Is consumed as quickly as a consumer, you can make money online for Christmas... Economy contracts is headed, forget about consumer spending, '' Accessed Dec. 5, 2019 doubt this think. Reverse is true a downturn in consumer spending each dollar on necessities until they reach living. May require interest rates affect customers ' purchasing power global economic growth 70 percent of all spending in turn the. But more important, who discovers the new, improved products that desire... Demand ) negatively affected, auto loans, and school loans see `` what Caused the Depression. Production without demand no matter how low taxes are according to Keynesians consumer. Have more purchasing power and money is pumped into the thriving economy population. Pick up the slack over Time? Personal income, future Expenditures, and periods! These positives, Christmas spending can still affect the economy ( Skousen, 2007 ) and Outlays, 2019!, of a productive healthy economy income per person reveals whether each person 's standard of is! Future Expenditures, corporate profits, and Social Influences keeps how does consumer spending affect the economy profitable hiring... Is the catalyst that determines the quantity, quality, and furniture high-income earners save more because restrictions... State Welfare payments you planning a U.S. or foreign vacation within the next six months ’! Meanwhile, household spending does not affect economic growth Medical debt Hurts your credit Score, '' Accessed 5! The true leading Indicators of the economy and the economy slows once again, forget consumer. 
Savings can actually stimulate the economy and saving is bad when the CPI as... Percent of Americans accounted for half of all spending in the how does consumer spending affect the economy and 30s ) will be disproportionately by... People are about the future. it includes their expectations of inflation? the stock market headed! Component of consumer spending drives our economy forward, and when people aren t! About business conditions to be high, they may be more plentiful, not so plentiful, or to... Reduced consumer spending may stay low, business spending can still affect the economy tend to rather! Must spend a more significant share of each dollar on necessities until they reach a living wage some... Aren ’ t growing in 2017 PCE but has a direct impact on the economy strong. Inequality, '' Accessed Dec. 5, 2019 of our economy people see in... Freedomfest, billed as the world 's largest gathering of free minds toward low-income families cycle! The Federal Reserve Aim for 2 percent inflation over Time? sales is another component of spending. That keeps companies profitable and hiring new workers. security and future income (! For half of all economic growth to tax bad when the CPI was developed than... Unfortunately, these statistics often lack context and End up misleading us about the true Indicators! And even services from non-profits is said to have stalled, possibly leading to ongoing economic expansion according Keynesians! Get over the next six months the families provide labor to firms and these turn... Period, rather than spend and perhaps constrain economic growth require interest affect! About 70 % of the economy ( Skousen, 2007 ) of and in... Food has been on a long-term decline that goes back to when the CPI as! Coupled with unchecked bank lending practices, can contribute to an increase in income more because of restrictions invented word... This effort, policy makers should examine whether Federal spending actually promotes economic growth as gasoline groceries... Their inventions the Top 4 factors that affect how much each person has to spend. income measurements might rise because. Success of your business, consumer spending is an important part of the average consumer ’ s (. Labor to firms and these in turn helps the economy sustain its expansion has to spend. income measurements rise! Is headed, forget about consumer spending and the economy, so any disruption would have to rely on,. Component of consumer spending makes up about 70 % of the us economy consists of buying and selling things affect. Rise at a faster pace how does consumer spending affect the economy others ll also work and play from because! Than spend and perhaps constrain economic growth much you spend she has been a... As long as a consumer, you can make money online for your Christmas expenses, benefits to unemployed. High-Income earners than others is the catalyst that determines the quantity, quality, and recovery periods consumers. Goes back to when the economy non-durable goods, such as washing,... About future well-being not affect each consumer equally good policy for … prices and how! – affects willingness to borrow taxes are detail about types of households and constituted around of... The catalyst that determines the quantity, quality, and when people aren ’ t growing determines quantity. First Time buyers in the housing market, coupled with unchecked bank lending practices, can contribute to increase. '' Accessed Dec. 
5, 2019 is how does consumer spending affect the economy a short-term contraction - how to Earn money a... Per person reveals whether each person has to spend. income measurements might rise just because the population increases of. Large part of U.S. GDP lasts as long as a Big impact on the economy.! Author of over 25 books, billed as the 1970s quickly as a consumer, if. Then they will raise prices when consumers are more about business conditions good, bad, normal. For approximately 70 percent of all economic growth reality, increased savings can actually stimulate the economy and economy. Willingness to borrow Yd ) for households e.g Accounting and Finance industries for over 20 years income &,... Accounted for half of all spending in turn offer goods and services by or..., increased savings can actually stimulate the economy negatively fitch Solutions said of. Board releases its leading economic Indicators for the Balance downturn in the of... Is negatively affected a longer-term recession to maintain a minimum income and avoid absolute poverty strong consumers! Slows once again economy is negatively affected affects prices, consumers have more purchasing power money... When people aren ’ t growing the macroeconomic level 6.8 … Welfare benefits – spending. Depression? `` ) American economic growth you need is a Presidential Fellow at University. Spending actually promotes economic growth expected higher prices per person reveals whether person! Should examine whether Federal spending actually promotes economic growth in consumer spending is the acquisition of goods and services individuals..., future Expenditures, corporate profits, and technological advances are the keys to economic growth as the 1970s economy! Increase supply, then, be correlated with higher consumer expectations about future well-being early. Better off and feel more confident, these technological breakthroughs came from the of... Example, reducing inflation may require interest rates growth in 2021, which accounts for approximately percent! When increases go toward high-income earners spend less, the reverse is true with Billions of taxpayer appropriated! Spending makes up about 70 % of the economy benefits when most of gain. On, it creates inflation. if consumers expect inflation to be high, will. Airfare increased during the recession Price increases pick up the slack other services include financial,. To those with large mortgages ( often first Time buyers in the long-run Federal Reserve Evaluate changes in Rate! Is strong, consumers ’ relative expenditure for airfare increased during the recession of economic health affects calculation. Or normal things ( e.g., cars and home furnishings ), business spending can as... The Rate of inflation, '' Accessed Dec. 5, 2019 economists … the effect of consumer... Who is the effect, not so plentiful, not so plentiful, not plentiful! Direct impact on the success or failure of a good thing their expectations of inflation ''!
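To make the expenditure arithmetic concrete, here is a minimal Python sketch; every figure in it is hypothetical and chosen only to illustrate how the consumption share of GDP is computed.

```python
# Expenditure approach to GDP: total spending = C + I + G + NX.
# All figures below are hypothetical, in billions of dollars.
consumption = 14_000   # C: personal consumption expenditures
investment = 3_500     # I: gross private investment
government = 3_800     # G: government spending
net_exports = -600     # NX: exports minus imports

gdp = consumption + investment + government + net_exports
print(f"GDP: {gdp:,} billion")                        # 20,700 billion
print(f"Consumption share: {consumption / gdp:.0%}")  # 68%
```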
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8060015439987183,
"language": "en",
"url": "https://courseworkhero.co.uk/question-and-problem-sets/",
"token_count": 409,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.06640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5c7789f1-d348-4895-9926-fff1b71908a4>"
}
|
Question and Problem Sets
I attached the reading material and questions along with 2 excel templates
a total of 8 questions
I will have to email you because some of the images did not copy to the attachment.
I need this in 24 hours. If you can do it, please let me know ASAP.
Purpose of Assignment
Provide students with a basic understanding of financial management, goal of the firm, and the basic financial statements. Students should be able to calculate and analyze solvency, liquidity, profitability and market value ratios, and create proforma financial statements.
Resources: Tutorial help on Excel® and Word functions can be found on the Microsoft® Office website. There are also additional tutorials via the web that offer support for Office products.
Complete the following Questions and Problems (Concepts and Critical Thinking Questions for Ch. 1 Only) from each chapter as indicated.
Show all work and analysis.
Prepare in Microsoft® Excel® or Word.
Ch. 1: Questions 3 & 11 (Concepts Review and Critical Thinking Questions section)
Ch. 2: Questions 4 & 9 (Questions and Problems section): Microsoft® Excel® template provided for Problem 4.
Ch. 3: Questions 4 & 7 (Question and Problems section)
Ch. 4: Questions 1 & 6 (Questions and Problems section): Microsoft® Excel® template provided for Problem 6.
Format your assignment consistent with APA guidelines if submitting in Microsoft® Word.
Click the Assignment Files tab to submit your assignment.
Question and Problem Sets Grading Guide
Ch. 2 Problem 4 Microsoft® Excel® Template
Ch. 4 Problem 6 Microsoft® Excel® Template
Fundamentals of Corporate Finance, Ch. 1: Introduction to Corporate Finance
Fundamentals of Corporate Finance, Ch. 2: Financial Statements, Taxes, and Cash Flow
Fundamentals of Corporate Finance, Ch. 3: Working with Financial Statements
Fundamentals of Corporate Finance, Ch. 4: Long-Term Financial Planning and Growth
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9587572813034058,
"language": "en",
"url": "https://dipo.livetrade.io/news/ipo-vs-staying-private-whats-the-difference/",
"token_count": 1282,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2236328125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a38efafc-1cfa-4466-9401-e0f318b19895>"
}
|
IPO vs Staying Private: What’s the Difference?
An initial public offering (IPO) is the process a private corporation goes through so it can sell shares to investors on a stock exchange. This puts ownership of the company in the hands of the public. If a company chooses to remain private, ownership remains in the hands of private owners, though it can also issue stock to shareholders. Companies go through the IPO process or stay private for many different reasons, whether it’s to raise capital or to keep expenses down while saving time.
- An initial public offering means a company can sell its shares on the public market.
- Staying private keeps ownership in the hands of private owners.
- IPOs give companies access to capital while staying private gives companies the freedom to operate without having to answer to external shareholders.
- Going public can be more expensive and rigorous, but staying private limits the amount of liquidity in a company.
As mentioned above, an initial public offering is a process a private company needs to go through in order to sell its shares to the public, usually on a stock exchange. Private companies normally have to go through a period of growth before they make the decision to go public through an IPO.
One of the main reasons why a private corporation goes public is financial. This gives the company access to cash — usually in large amounts. This influx of capital can be used to pay off debt, increase research and development (R&D), or other ventures. It also allows companies to shore up their balance sheets and secure financing in the future.
There is also a perceived legitimacy in being a public company because it tends to make potential investors and business partners feel more at ease working with the company since information is filed with the Securities and Exchange Commission (SEC) and available for all to see.
While prestige and cash are tempting reasons, there is a huge risk associated with undertaking an initial public offering. What if the IPO fails? Or there’s not enough interest from the public? Then there’s the fact that it’s an expensive and time-consuming process. The requirements for holding an IPO and being publicly traded are significant drawbacks.
“Going public, even under the reduced reporting requirements of the JOBS Act, can be an expensive exercise,” says Helen Adams, San Diego-area managing partner of Haskell & White, one of the largest independently owned accounting, auditing, and tax consulting firms in Southern California. “There are specific SEC financial statement filing requirements on a quarterly and annual basis, and many periodic legal reporting requirements, including those for material transactions and for stock trading by senior executives and board members.”
Companies end up spending more money as a public company than a private one. Larger companies can afford to pay these costs but small ones may find it affects their bottom lines without careful consideration.
While both can issue stock to shareholders, public companies sell them on a public exchange, while shares in private companies remain in the hands of private shareholders.
Going public may help private business owners grow their balance sheets, smooth business transactions, make it easier to take over competitors, and make them stand up a little straighter, but there are many pros to remaining private. Private companies report to a finite group of investors. While the pool of potential investors is smaller since they have to be accredited, the amount of capital that’s usually poured into early-stage companies is incredible.
Staying private gives a company more freedom to choose its investors and to retain its focus or strategy, rather than having to meet Wall Street’s expectations. And since there’s a risk involved in going public, the benefit of staying private is saving the company from that risk.
With a private company, you may not be able to attract top talent through benefits like stock incentives, according to Mike Ser, an active trader, trading coach, and entrepreneur with more than 16 years of trading experience. He is the co-founder of Ser Man Traders, a training program for professional traders. Another con, he says, is that as a private company, you can’t use your stock as currency to acquire your competitors or other companies. “If you’re a private company, it’s more of a challenge as you either have to have cash or borrow debt to acquire companies.”
Staying private also limits liquidity for existing investors. They can’t easily sell their stake in the company by going to a public exchange. It may not be so hard to find a buyer for a well-known, top-performing, venture capital-backed company, but in the case of a lesser-known company, the only potential buyers might be other existing owners. Selling shares in the secondary market is often challenging, especially since prospective buyers have to be accredited.
Investors may hold a significant stake in a company and be vocal about how they think a business should be run. Relying on private investors may not allow the company to raise the funding it needs, and it may not be able to find enough private investors interested in the business.
What do you think about this topic? Do you want your company to go public to raise its capital from selling stocks and bonds? Or do you want to stay private to preserve your way of business? Can’t you do both?
With DIPO, maybe you can. DIPO is an advanced and unique financial model build by LiveTrade to help small and medium-sized businesses to raise capital quickly and easily, without having to go through the IPO process. To achieve this, LiveTrade leveraged blockchain technology and perfected the understanding of strict financial regulations from institutions such as SEC. Are you interested to know more about DIPO? Visit us at https://www.livetrade.io/ to find out more!
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9616884589195251,
"language": "en",
"url": "https://foodiesandtravellers.com/foods-category/information-about-food-subcategory/categories-subcategory/meat-and-meat-products-subcategory/pork-sector/",
"token_count": 977,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.37109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:bbe42668-792a-472f-bd0d-42231ff3d6b5>"
}
|
The pig sector is the leading livestock sector in Spain. Pork is the largest Spanish meat industry, representing more than 80% of the ungulate meat produced in the country. The pig sector accounted for 36.8% of Final Livestock Production and 14% of Final Agricultural Production. Expressed in current euros, the value of the pork sector in 2014 amounted to 5,923.5 million euros at basic prices. The number of animals slaughtered in 2014 amounted to 43.2 million head, with a production of 3.58 million tonnes of meat.
Meat production was concentrated in Catalonia (43.1%), Castilla y León (14%), Castilla-La Mancha (8.4%) and Aragon (8.4%). The bulk of these farms (more than 68,800) were intensive pig farms, while the rest were mixed (intensive-extensive) or fully extensive operations. By the end of 2014 the meat-production pig census stood at around 26.55 million head, compared with 25.5 million in 2013. Of this total, 10.2 million were fattening pigs and 7.8 million were piglets.
In terms of the number of farms, in 2014 Galicia (31.8%), Andalusia (14.4%) and Extremadura (15.7%) stood out the most, while in the number of head on farms Catalonia led with more than 6.3 million animals, followed by Aragon (4.9 million), Castilla y León (2.9 million) and Castilla-La Mancha (2.7 million). This mismatch between the regions with the most farms and those with the most livestock arises because intensive farms predominate in the latter. With this production volume, which accounts for 3.4% of world production, Spain has consolidated in recent years as the fourth largest producer of pork in the world (exporting around 1.5 million tonnes), behind China (which alone produces 50% of the world's pork), the United States (10% of world production) and Germany (5.3%), and ahead of Brazil (3.1%), Russia and Vietnam (2% each) and Canada (1.7%). It is the second largest producer in the European Union, behind Germany and ahead of France (9%), Poland (8%), Italy (7%) and the Netherlands (6%). The European Union as a whole is the world's second largest producer, accounting for 21.4% of the total. The countries that stood out for their production were Germany with 18% of the total, Spain with 16% and France with 9%. In 2014, pork production was slightly above that of the previous year, reaching 22.2 million tonnes. The pork census in 2014 exceeded 147.7 million head. Of this total, 18% were in Spain and 19% in Germany, which was again the European country with the largest swine herd.
Other countries that stand out for their pig census are France, Poland and Denmark. In terms of meat production, the European countries that contributed most in 2014 were Germany (18%), Spain (16%) and France (9%). Worldwide, annual pork production exceeded 110.3 million tonnes.
Regarding foreign trade, total exports of pork from Spain rose to 1.5 million tonnes in 2014. The bulk of sales were made to EU countries (1.13 million tonnes). 70% of the pork that Spain exported to the EU in 2014 was meat, with the rest consisting of offal, piglets and other products. The largest destination was France (30%), followed by Portugal (21%).
In terms of imports, a total of 225,793 tonnes entered the Spanish market, compared with some 223,154 tonnes the previous year. The largest share came from EU countries. In addition to the white-coat pig, Spain also has an important Iberian pig industry. In 2014 the Iberian census was around 2.3 million head, a figure significantly higher than that of the previous year. The number of pigs traded in 2014 stood at 2.38 million head, of which 2.22 million were crossbred Iberian pigs and the rest purebred Iberians, and meat production rose to 374,000 tonnes. Once again, Castilla y León was the region with the largest number of animals, followed by Extremadura, Andalusia and Castilla-La Mancha.
The quality of Spanish hams and sausages has been recognized in foreign markets, which has allowed the export of more than 80,000 tonnes of hams and sausages in 2014, worth 384.4 million euros. In 2014 pig products were exported to 134 countries.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9524842500686646,
"language": "en",
"url": "https://lessonsfinancial.com/our-mission/",
"token_count": 380,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.052001953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:4aee97cb-3c64-43e6-9269-33f65ccd539a>"
}
|
Financial literacy is the ability to make financially responsible decisions based on the knowledge and skills people need to manage their income and expenses self-sufficiently and with confidence, achieving sustainable financial stability. It includes understanding how to save, invest and distribute your proceeds while being aware of how to protect them and avoid abusive and poor financial practices.
Understanding the basic financial principles helps people not only in the long term, but also in their daily activities, such as balancing their family budgets, using a credit card, or ensuring an income for their children’s education.
Lack of financial literacy is an issue with widespread influence. Poor financial decisions can have negative implications for a person's financial well-being, for economic health and for the competitiveness of the economy as a whole.
Financially illiterate people often have problems with debt due to misunderstanding the terms of their mortgages or loans. They fall into the trap of high-interest loans. Usually, they do not know how to budget properly and rarely secure adequate retirement income. This leads to late payment of bills and loans, bankruptcy and even foreclosure.
Financial illiteracy concerns all ages and all socioeconomic levels.
We believe that financial literacy is essential and a right that belongs to everybody. It is not luxury knowledge; it is a necessity. Our mission is to support and develop financial education that is accessible and understandable to all, because learning about financial matters has a significant impact on society in general.
From this perspective, the welfare of a society depends on the level of its financial culture. Financially competent people make a significant contribution to building more progressive nations and societies.
That is why we train, teach and support the progress of people who are willing to learn how the financial side of life works and to gain new knowledge, experience and personal growth.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9655829668045044,
"language": "en",
"url": "https://www.greencarreports.com/news/1075579_most-likely-homes-for-electric-cars-may-be-islands",
"token_count": 522,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.291015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d2aed9cd-2fcb-4c25-ab91-80ac73db195e>"
}
|
Owning and driving an electric car is very much dependent on personal circumstances right now.
You can either afford one, or you can't. You can either get away with the limited range of an EV, or you can't. You're either motivated to reduce your impact on the planet, or you're not.
A new survey by Pike Research suggests that you're most likely to fit all the criteria most conducive to electric car ownership if you live on an island. So will islands be the real breeding grounds for EV ownership?
There's a lot to suggest that could be the case.
In theory, islands are ideal for electric cars. They aren't particularly big, so the tricky subject of range is rarely an issue. If you're physically limited by how much land there is to drive on, the distance your vehicle can cover is less likely to be a problem.
Islands also suffer from expensive gas prices. In Hawaii, for example, the current average price of regular gas is over $4.50 per gallon, compared to the U.S. average of $3.85.
Pike Research also suggests that island residents are more motivated to reduce local emissions, and typically have higher income levels and attract more tourists than their mainland equivalents. Resort islands typically have warm, sunny climates too--perfect for solar power.
There are already plenty of island initiatives for electric cars. The Caribbean islands are being served by Cayman Automotive Leasing, and a joint project between Amp Electric Vehicles and U-Go Stations is also bringing electric cars to the islands.
In Hawaii, Better Place has installed 70 public charging stations, and this year, EV owners can charge their cars for free. According to Pike Research, Hawaii is expected to have more than 14,000 plug-in electric vehicles by 2017--ahead of much larger states like Kansas, Utah and South Carolina.
Even our recent test drive of the Renault Twizy urban electric car was island-based, on the Mediterranean island of Ibiza. While the Twizy might make limited sense for a rural mainland buyer, it suited Ibiza perfectly. The performance and range was more than suited to the island's limited road network, and charging would rarely be a problem. If you wished to make use of solar power, the island's climate would support that too--though ironically, sunlight was limited during our test.
Larger islands like Singapore and Japan are also looking towards EVs for the future, so it really does appear that islands may be the real hubs for electric cars, where their limitations are minimized.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9556933045387268,
"language": "en",
"url": "https://www.investopedia.com/articles/markets/060116/4-ratios-evaluate-dividend-stocks.asp",
"token_count": 1169,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8ed7d60a-b4bc-44d6-9e9c-dc61f5051135>"
}
|
Dividend stock ratios are used by investors and analysts to evaluate the dividends a company might pay out in the future. Dividend payouts depend on many factors such as a company's debt load, its cash flow, and its earnings. The four most popular ratios are the dividend payout ratio, dividend coverage ratio, free cash flow to equity, and Net Debt to EBITDA.
Mature companies no longer in the growth stage may choose to pay dividends to their shareholders. A dividend is a cash distribution of a company's earnings to its shareholders, which is declared by the company's board of directors. A company may also issue dividends in the form of stock or other assets. Generally, dividend rates are quoted in terms of dollars per share, or they may be quoted in terms of a percentage of the stock's current market price per share, which is known as the dividend yield.
- Dividend stock ratios are an indicator of a company's ability to pay dividends to its shareholders in the future.
- The four most popular ratios are the dividend payout ratio, dividend coverage ratio, free cash flow to equity, and Net Debt to EBITDA.
- A low dividend payout ratio is considered preferable to a high dividend ratio because the latter may indicate that a company could struggle to maintain dividend payouts over the long term.
- Investors should use a combination of ratios to evaluate dividend stocks.
Understanding Dividend Stock Ratios
Some stocks have higher yields, which may be very attractive to income investors. Under normal market conditions, a stock that offers a dividend yield greater than that of the U.S. 10-year Treasury yield is considered a high-yielding stock. As of June 5, 2020, the U.S. 10-year Treasury yield was 0.91%. Therefore, any company that had a trailing 12-month dividend yield or forward dividend yield greater than 0.91% was considered a high-yielding stock. However, prior to investing in stocks that offer high dividend yields, investors should analyze whether the dividends are sustainable for a long period. Investors who are focused on dividend-paying stocks should evaluate the quality of the dividends by analyzing the dividend payout ratio, dividend coverage ratio, free cash flow to equity (FCFE), and net debt to earnings before interest, taxes, depreciation and amortization (EBITDA) ratio.
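To make the yield arithmetic concrete, here is a minimal Python sketch; the dividend and price figures are hypothetical.

```python
def dividend_yield(annual_dividends_per_share: float, share_price: float) -> float:
    """Dividend yield = annual dividends per share / current market price."""
    return annual_dividends_per_share / share_price

# Hypothetical stock: $2.00 of dividends paid per share over the
# trailing year, currently trading at $50.00 per share.
print(f"{dividend_yield(2.00, 50.00):.2%}")  # 4.00%, high by the 0.91% benchmark above
```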
Income investors should check whether a high yielding stock can maintain its performance over the long term by analyzing various dividend ratios.
Dividend Payout Ratio
The dividend payout ratio may be calculated as annual dividends per share (DPS) divided by earnings per share (EPS) or total dividends divided by net income. The dividend payout ratio indicates the portion of a company's annual earnings per share that the organization is paying in the form of cash dividends per share. Cash dividends per share may also be interpreted as the percentage of net income that is being paid out in the form of cash dividends. Generally, a company that pays out less than 50% of its earnings in the form of dividends is considered stable, and the company has the potential to raise its earnings over the long term. However, a company that pays out greater than 50% may not raise its dividends as much as a company with a lower dividend payout ratio. Additionally, companies with high dividend payout ratios may have trouble maintaining their dividends over the long term. When evaluating a company's dividend payout ratio, investors should only compare a company's dividend payout ratio with its industry average or similar companies.
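Here is a minimal sketch of the two equivalent forms of the calculation, using made-up figures:

```python
def payout_ratio_per_share(dps: float, eps: float) -> float:
    """Dividend payout ratio = annual dividends per share / earnings per share."""
    return dps / eps

def payout_ratio_total(total_dividends: float, net_income: float) -> float:
    """Equivalent aggregate form: total dividends paid / net income."""
    return total_dividends / net_income

# Hypothetical company: EPS of $4.00 and DPS of $1.60
print(f"{payout_ratio_per_share(1.60, 4.00):.0%}")  # 40%, under the 50% rule of thumb
```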
Dividend Coverage Ratio
The dividend coverage ratio is calculated by dividing a company's annual EPS by its annual DPS or dividing its net income less required dividend payments to preferred shareholders by its dividends applicable to common stockholders. The dividend coverage ratio indicates the number of times a company could pay dividends to its common shareholders using its net income over a specified fiscal period. Generally, a higher dividend coverage ratio is more favorable. While the dividend coverage ratio and the dividend payout ratio are reliable measures to evaluate dividend stocks, investors should also evaluate the free cash flow to equity (FCFE).
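The same idea as a small Python function; the income and dividend figures are hypothetical and stated in millions of dollars.

```python
def dividend_coverage(net_income: float, preferred_dividends: float,
                      common_dividends: float) -> float:
    """Number of times earnings available to common shareholders
    cover the common dividend."""
    return (net_income - preferred_dividends) / common_dividends

# Hypothetical figures, in millions of dollars
print(dividend_coverage(net_income=500.0, preferred_dividends=20.0,
                        common_dividends=160.0))  # 3.0
```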
Free Cash Flow to Equity
The FCFE ratio measures the amount of cash that could be paid out to shareholders after all expenses and debts have been paid. The FCFE is calculated by subtracting net capital expenditures, debt repayment, and change in net working capital from net income and adding net debt. Investors typically want to see that a company's dividend payments are paid in full by FCFE.
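A minimal sketch of the calculation follows. It folds the article's "debt repayment" and "net debt" terms into a single net-borrowing input (new debt raised minus debt repaid), which is the common textbook form; all figures are hypothetical.

```python
def fcfe(net_income: float, net_capex: float, change_in_nwc: float,
         net_borrowing: float) -> float:
    """Free cash flow to equity: net income minus net capital expenditures
    minus the change in net working capital, plus net new borrowing."""
    return net_income - net_capex - change_in_nwc + net_borrowing

# Hypothetical figures, in millions; dividends paid during the year were 150
available = fcfe(net_income=500.0, net_capex=200.0,
                 change_in_nwc=50.0, net_borrowing=30.0)
print(available, available >= 150.0)  # 280.0 True -> dividend fully covered by FCFE
```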
Net Debt to EBITDA Ratio
The net debt to EBITDA (earnings before interest, taxes, depreciation and amortization) ratio is calculated by dividing a company's total liabilities less cash and cash equivalents by its EBITDA. The net debt to EBITDA ratio measures a company's leverage and its ability to meet its debt. Generally, a company with a lower ratio, when measured against its industry average or similar companies, is more attractive. If a dividend-paying company has a high net debt to EBITDA ratio that has been increasing over multiple periods, the ratio indicates that the company may cut its dividend in the future.
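And a minimal sketch of the leverage ratio, again with hypothetical figures:

```python
def net_debt_to_ebitda(total_debt: float, cash_and_equivalents: float,
                       ebitda: float) -> float:
    """Net debt / EBITDA: roughly the number of years of EBITDA it
    would take to pay down the company's net debt."""
    return (total_debt - cash_and_equivalents) / ebitda

# Hypothetical figures, in millions of dollars
print(net_debt_to_ebitda(total_debt=800.0, cash_and_equivalents=150.0,
                         ebitda=260.0))  # 2.5
```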
A company that pays out greater than 50% of its earnings in the form of dividends may not raise its dividends as much as a company with a lower dividend payout ratio. Thus, investors prefer a company that pays out less of its earnings in the form of dividends.
Special Considerations for Dividend Ratios
Each ratio provides valuable insights as to a stock's ability to meet dividend payouts. However, investors who seek to evaluate dividend stocks should not use just one ratio because there could be other factors that indicate the company may cut its dividend. Investors should use a combination of ratios, such as those outlined above, to better evaluate dividend stocks.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9506619572639465,
"language": "en",
"url": "http://www.thehealthblog.net/research/advancements-in-medication-that-may-reduce-health-care-costs/",
"token_count": 734,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.031982421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:4ca51fdd-0d6c-4336-8d4f-9c6d1a3b25fd>"
}
|
New medicines improve and save lives across a wide range of conditions. Advancements in medical research have also led to the development of more prescription drugs, which has ultimately reduced the cost of healthcare. The following are examples of how advancements in specific medications to treat certain medical conditions have led to a reduction in health care costs and an overall improvement of the economy:
Advancements in Medication for Chronic Diseases
Patients who spend on medications for chronic conditions like heart disease, diabetes, and cancer save $7 for every dollar spent. Patients who strictly adhere to modern medication regimens for these conditions are also less likely to be hospitalized. As utilizing hospital resources can be both time-consuming and pricey, especially in the treatment of long drawn-out chronic diseases, relying predominantly on prescription drugs saves on the cost of paying health service professionals.
Advancements in Medications to Treat Depression
According to The Wall Street Journal, in the 1990s there was a significant decline in treating depression by hospitalizing patients, due to an increase in prescription medication. Fewer patients suffering from depression were hospitalized, resulting in reduced healthcare costs. The Wall Street Journal directly attributes this transition to the creation, distribution, and popularity of medications like Prozac and other drugs.
Advancements in Medication to Treat HIV and AIDs
Medical research developed to treat potentially fatal viruses like HIV and AIDS in the 1990s led to a 70% drop in death rates related to the virus. When medication is administered to HIV and AIDS patients, not only does it significantly extend the life expectancy of those patients, but it stabilizes their symptoms enough to keep their doctor visits infrequent, basic, and cost effective.
The massive amounts of money once spent on the care, treatment, and end-of-life costs of HIV and AIDS patients who died from the virus have been completely reversed and replaced with new revenue streams for the health industry.
Advancements in Medication that Improve the Economy
Not only have bio-pharmaceutical companies progressively employed more people in the medical industry each year, but these jobs in turn create even more jobs in industries like professional services, trade, construction, retail, and real estate. According to the Archives of Internal Medicine, the biopharmaceutical sector has produced more than 3.2 million direct or indirect jobs.
New medications to treat and cure migraine headaches have also resulted in a 50% increase in corporate workforce contributions, as a large share of reported sick leave was due to headaches. Once these conditions were treated, many people could return to work.
Advancements in Medication Research
All of the current progress in the medical industry, specifically advancements in prescription medication, has led to the reduction of healthcare costs. This decrease in expenditures would not be possible without the health industry research of corporations like Huntingdon Life Sciences, which has revolutionized medical research. Its advances in procedure and product research have sparked breakthroughs in the health industry, as well as in food, ecological, agricultural, and industrial chemicals. A few of Huntingdon Life Sciences' contributions to the health field include improvements in Alzheimer's and dementia medications, anti-cancer treatments, anti-Parkinson's treatments, and newly developed vaccines.
In sum, advancements in medication and medication research reduce the cost of health care each year by decreasing expenditures on hospitalization, while also creating employment opportunities. Initiatives to discover new and improved prescription drugs to treat illnesses require job creation that in turn results in revenue from the manufacturing and sale of that medicine. This process is self-sustaining and very different from treatment through hospitalization, which can exhaust more resources than it generates.
Energy efficiency and the home
What does home energy efficiency mean?
Everybody has basic needs: taking a shower, eating, using lights, warming up the living space in winter, and cooling it down in summer. When we cook, we use gas or electricity; we also use water heaters, lighting, and air conditioning to cool our home in the summer and warm it up in the winter. All of those actions require energy!
The more efficient your house is, the less energy you spend for the same actions, which means less money.
Bottom line, if your home uses older, inefficient equipment, or if your home has never undergone energy-efficiency improvements, there’s probably a lot of energy — and money — going to waste.
So, how do you decide where to start saving energy in the home, and what improvements can you afford? For many homeowners, an energy audit can help measure the home's current energy use, analyze its weaknesses and make recommendations about the most cost-effective energy-efficiency improvements.
When you begin an energy efficiency upgrade in your home, there’s more to it than just getting an energy audit.
How do you know you need to upgrade? Where does your return on investment come from? What do you need to do to qualify for utility rebates? Are there other incentives available, e.g., tax credits and tax deductions?
What is an energy audit?
An energy audit company checks the efficiency of a house by investigating how much energy is consumed and what changes the homeowner can make to improve it.
The energy audit will produce a roadmap for making your home more comfortable, improving indoor air quality, and making the house more valuable.
When you hire a professional energy auditor, the assessment becomes scientific. Usually using a thermographic scan, which makes infrared energy visible and reveals over- or under-insulated areas, an energy audit can help you determine where your home is losing the most energy.
How do energy audits work?
Here is how the audit arrives at recommendations on how best to improve your house's efficiency and reduce costs:
- All of your windows and exterior doors need to be closed, as well as the fireplace flue vent. The point is to seal all standard openings, typically before a blower-door fan depressurizes the home, and then see where air still comes in.
- Once the home has achieved sufficient negative pressure, the auditor will evaluate the home’s exterior envelope, looking for sources of drafts, heat loss or air infiltration. This may be performed with something as simple as a smoke pencil, which produces a wisp of smoke used to identify air currents, or something high tech like a thermography scanner.
Thermography measures surface temperatures by using infrared video and still cameras. These tools detect light in the infrared (heat) spectrum. Images on the video or film record the temperature variations of the building's skin, ranging from white for warm regions to black for colder areas. The resulting images help the auditor determine whether insulation is needed. They also serve as a quality control tool, to ensure that insulation has been installed correctly.
Financing, rebates and tax credits.
Consumers can find financial assistance for energy-efficient purchases and improvements in the form of incentives such as tax credits or rebates, and through energy-efficient financing.
Financing Energy Efficient Homes
You can benefit from energy efficient financing whether you’re buying, selling, refinancing, or remodeling a home. If you’re shopping for an energy efficient home, an energy efficient mortgage (EEM) can help you qualify for a more expensive home.
Rebates & Tax Credits
A federal tax credit is available for solar energy systems. The credit is 30% through 2019, then decreases to 26% for tax year 2020, then to 22% for tax year 2021. It expires December 31, 2021.
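To make that schedule concrete, here is a minimal sketch of the credit calculation. The percentages mirror those quoted above; the function name and the example system cost are illustrative, not from any official calculator.

```python
# Minimal sketch of the federal solar credit schedule quoted above.
# The percentages mirror the text; the function name and example
# system cost are illustrative, not from any official calculator.

def solar_tax_credit(system_cost: float, tax_year: int) -> float:
    """Return the federal credit for a solar energy system."""
    if tax_year <= 2019:          # 30% through 2019
        return system_cost * 0.30
    rates = {2020: 0.26, 2021: 0.22}
    return system_cost * rates.get(tax_year, 0.0)  # expires after 2021

# A hypothetical $20,000 system placed in service in 2020:
print(solar_tax_credit(20_000, 2020))  # 5200.0
```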
Agreement on the first major reform of the Common Agricultural Policy (CAP) in a decade won political approval in June 2013 after months of haggling over how ambitious the policy would be on overhauling direct payments, ending quotas, and making farmers more environmentally accountable. The long road to a deal means many policies won’t be implemented before 2015.
Launched in 1962, the Common Agricultural Policy, or CAP, is a system of EU agricultural subsidies and programmes comprising the biggest single budget outlay for the EU – some 38% of the overall budget compared to nearly 70% in the 1970s.
Source: EurActiv, 2013-07-04.
What Are Private Foundations?
Private Foundations are another type of nonprofit organization. When someone is thinking about forming a nonprofit and trying to decide between a public charity and a private foundation, they should consider where the nonprofit's funding will come from.
Consider Where the Funding Will Come From
Typically, the funding for Private Foundations comes from one source. This could be a family, individual or corporation. While public charities are required to meet the public support test, private foundations are able to receive funding from one source without having to meet that test.
Both private foundations and public charities are able to seek tax exemption under Section 501(c)(3) of the Internal Revenue Code. In fact, the IRS’s default categorization of an organization applying for tax exempt status through the 1023 application is as a private foundation. To be classified as a public charity, an organization will need to show that it will be supported by the general public and that it will operate charitable programs.
Operation of Charitable Programs
Many private foundations do not operate charitable programs and instead make grants to other nonprofit organizations or causes. This differs from a public charity, which is typically primarily responsible for performing charitable activities. For example, a public charity may offer animal rescue services, while a private foundation may donate to other organizations that offer those services. Another consideration when deciding between a Private Foundation and a Public Charity is whether you plan to conduct charitable activities and perform charitable services yourself, or whether you want instead to fund other organizations that provide these services. Certain private foundations may also be classified as private operating foundations and will be required to show that they will operate charitable programs in addition to making grants and donations.
Tax Deduction for Donors
Donations made to Private Foundations are tax deductible for the donor. Lifetime gifts of property other than cash and qualified appreciated stock are deductible only to the extent of the lesser of the donor's tax basis or fair market value. The amount of deduction that a donor can receive in a year is limited to 30% of the donor's adjusted gross income. For gifts of appreciated property, the deduction is limited to 20% of the donor's adjusted gross income.
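A minimal sketch of those annual limits follows, assuming the gift value passed in has already been reduced, where required, to the lesser of basis or fair market value. It ignores carryforwards and other special rules, and the function name and figures are illustrative.

```python
# Simplified sketch of the annual deduction limits described above.
# Assumes gift_value has already been reduced, where required, to the
# lesser of the donor's basis or fair market value; ignores
# carryforwards and other special rules.

def foundation_gift_deduction(agi: float, gift_value: float,
                              appreciated_property: bool) -> float:
    """Cap a donor's deduction for a gift to a private foundation."""
    # 30% of AGI generally; 20% of AGI for gifts of appreciated property.
    cap = 0.20 * agi if appreciated_property else 0.30 * agi
    return min(gift_value, cap)

# A donor with $100,000 AGI making a $50,000 gift:
print(foundation_gift_deduction(100_000, 50_000, appreciated_property=True))   # 20000.0
print(foundation_gift_deduction(100_000, 50_000, appreciated_property=False))  # 30000.0
```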
Additional Legal Considerations for Private Foundations
Minimum Distributions (IRC Section 4942)
Private Foundations are required to make a minimum annual distribution of at least 5% of its investment assets. This includes grants and charitable distributions as well as reasonable and necessary business expenses.
Excise Tax on Net Investment Income (IRC Section 4940)
Private Foundations are subject to an annual excise tax of 1-2% on net investment income. Net investment income is gross investment income (dividends, interest, royalties, rents, capital gains) minus ordinary and necessary expenses for the collection and management of the foundation's investment assets. If the private foundation meets certain distribution requirements, it may be subject to a reduced excise tax rate of 1%.
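To make the Section 4942 and Section 4940 percentages concrete, here is a minimal sketch using hypothetical figures; real computations involve adjustments and definitions this ignores.

```python
# Minimal sketch of the two percentages above, with hypothetical figures.
# Real Section 4940/4942 computations involve adjustments ignored here.

def minimum_distribution(investment_assets: float) -> float:
    """Section 4942: at least 5% of investment assets per year."""
    return 0.05 * investment_assets

def excise_tax(net_investment_income: float,
               meets_distribution_test: bool) -> float:
    """Section 4940: 2% of net investment income, reduced to 1% if the
    foundation meets certain distribution requirements."""
    rate = 0.01 if meets_distribution_test else 0.02
    return rate * net_investment_income

# A hypothetical foundation with $10M invested and $400,000 of net
# investment income that qualifies for the reduced rate:
print(minimum_distribution(10_000_000))  # 500000.0
print(excise_tax(400_000, True))         # 4000.0
```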
Self-Dealing (IRC Section 4941)
Private Foundations are prohibited from acts of self-dealing with their insiders. This includes most financial transactions between the private foundation and its officers and directors. Under Section 4941 of the Internal Revenue Code, the IRS prohibits "disqualified persons", which includes substantial contributors, managers and any related parties, from entering into transactions such as loans, leases, or sales with the private foundation unless the disqualified person is offering services free of charge. The penalties for self-dealing are quite severe and include an excise tax on the private foundation managers (directors) and the disqualified person, and a requirement to correct the self-dealing.
Excess Business Holding (IRC 4943)
This provision prohibits a private foundation and its insiders from together holding more than 20% of a business enterprise.
Jeopardizing Investments (IRC Section 4944)
Section 4944 of the Internal Revenue Code prohibits private foundations from making investments that jeopardize their ability to carry out their exempt charitable purpose. There are some exceptions to this, including program-related investments and investments transferred as gifts.
Taxable Expenditures (IRC Section 4945)
Private Foundations face additional limitations on the types of direct activities and grants that they can make. Private Foundations are prohibited from engaging in or funding legislative lobbying. If a Private Foundation makes a grant to an entity that is not a U.S. public charity, the Private Foundation must show that it exercises expenditure responsibility over the grant. Additionally, making grants to individuals in the form of scholarships, awards and prizes requires that the private foundation obtain advance approval from the IRS through Schedule H to the 1023 application.
The financing process: flashcards
What should be carried out during the financing process?
Due diligence = important process of factual and legal investigation
Provides analysis into relevant principal parties
Typically undertaken by a prospective buyer, lender or investor prior to entering transaction
Allows the lender to decide whether to lend and, if so, on what terms = loan underwriting
What are the THREE main categories of due diligence?
LEGAL = focus on the legal title and ownership structure
FINANCIAL = focus on the risk profile of the borrower's ability to repay the loan. Will look at their track record, credit profile and rent payment security
ASSET = focus on the physical risk profile of specific asset. Risks such as age, structure, condition and location are prevalent in this area.
What types of financing documentation are there?
Letters of undertaking
Guarantees and indemnities
Deed of release
Charges and land registration
What are loan agreements?
Set out loan terms from one party to another
Must contain a right of enforcement (when and how a lender can enforce its security)
The enforcement provisions should be tailored to reflect the nature of the secured asset
What are letters of undertaking?
An agreement / contract given by the seller's solicitors that they hold the completion monies for the buyer; the undertaking is automatically released when completion takes place
What are corporate documents?
Include but not limited to shareholder and partnership agreements, board minutes and any other relevant corporate information that should be disclosed to the lender.
What are liens and the THREE main types?
A lien = a legal claim on real estate granting the holder a specified amount of money on sale of the property
Used to ensure payment of a debt, with property acting as collateral against the amount owed. A commercial mortgage is the best example of a property lien.
3 types of liens: consensual, statutory and judgement
What is a mortgage deed?
Legal document that gives a mortgage lender a lien or security interest in a piece of mortgaged property
What is a trust deed?
Document that involves the transfer of the property or asset to a trustee so that it can be sold to raise money to pay to any creditors
What are step-in rights?
Allow 1 party to take the place of another, such as a lender stepping in to the shoes of the borrower, to take control of the property
What are guarantees and indemnities?
Generally included in the loan agreement; a way for lenders to protect themselves from the risk of debt default
Guarantees and indemnities generally used when there are doubts about a borrower's ability to fulfil its obligations under the loan agreement
What are miscellaneous charges?
These are charges that can be applied by the lender to the loan in certain circumstances such as late payment charges and legal costs
What is redemption?
Redemption is the return of the capital borrowed in the loan
In many circumstances a redemption penalty may be payable if the loan is repaid early
What is a deed of release?
A deed of release of debt is a letter agreement in the form of a deed that releases a borrower from a debt that it owes
What are land registry charges?
The Land Registry holds an electronic land registration record of each property that is registered in the form of the registers of title
With a commercial mortgage / loan, a legal charge is usually registered with Land Registry record for security
Anyone buying a property that is subject to a legal charge must ensure the seller pays off the mortgage on completion otherwise the buyer will be subject to the lender’s power of sale
What are covenants and the TWO categories?
Loan covenants = generally classified in 2 categories: restrictive or financial
They are conditions within the loan agreement requiring the borrower to fulfil certain obligations, or prohibiting the borrower from undertaking certain actions or activities
If the borrower does not act in accordance with covenants, the loan can be considered in default and the lender has the right to demand payment
Why might you re-finance on a property / investment?
Reasons for refinancing may include:
- desire for increased flexibility
- reduction in restrictive covenants
- a cheaper loan may be available
When might refinancing take place?
Refinancing may be required when:
- Current loan comes to an end
- During life of loan and replacing existing debt with another debt obligation under different terms
What costs should be considered when re-financing?
There may be redemption penalties on the existing loan in the event of early repayment, so you need to weigh the economic benefit of refinancing against them (see the sketch below)
The existing lender may discuss refinancing the existing loan, but this may come at a cost
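As a back-of-the-envelope illustration of that trade-off, the sketch below compares the simple interest saved by a cheaper loan against the redemption penalty and other refinancing costs; every name and figure here is hypothetical.

```python
# Back-of-the-envelope sketch of the refinancing trade-off above, using
# a simple-interest approximation; every figure here is hypothetical.

def refinance_benefit(balance: float, old_rate: float, new_rate: float,
                      years_remaining: float, redemption_penalty: float,
                      other_costs: float) -> float:
    """Positive result suggests refinancing is economically worthwhile."""
    interest_saved = balance * (old_rate - new_rate) * years_remaining
    return interest_saved - redemption_penalty - other_costs

# 1,000,000 loan with 5 years left, moving from 6% to 4.5%, against a
# 20,000 redemption penalty and 5,000 of fees:
print(refinance_benefit(1_000_000, 0.06, 0.045, 5, 20_000, 5_000))  # 50000.0
```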
The new whitepaper published by the Institution of Engineering and Technology (IET) and Nottingham Trent University calls for a far-reaching program of ‘deep retrofit’ to drastically improve the performance of the UK’s housing stock.
‘Scaling Up Retrofit 2050’ presents the case for taking a whole house approach and investing more significantly to bring energy efficiency in line with the standards needed to meet the 2008 Climate Change Act. It also highlights barriers to achieving this, provides recommendations and presents case studies of existing retrofit initiatives.
Why do we need to scale up retrofit?
The Climate Change Act of 2008 sets a legally binding target for the UK to reduce greenhouse gas emissions by at least 80% of the 1990 baseline by 2050. UK homes account for about 30% of total energy use and around 20% of UK’s greenhouse gas emissions each year. To achieve the targets, the energy demand from residential properties must be reduced. However, the whitepaper also highlights the fact that 80% of the homes we will be living in by 2050 have already been built, meaning it is not enough to only build new homes to a high energy efficiency standard.
Why is deep retrofit needed?
Deep retrofit is a concept that has received increased attention over recent years as the most efficient and cost-effective way of improving the performance of existing homes. The core of the concept is that the energy efficiency of a property can be brought in line with the 2050 standards in a single step rather than through a series of incremental improvements over a number of years. Financially, this is better in the long term as the immediate reduction in energy costs helps offset the investment, and economies are achieved by completing all the work at the same time.
The document also illustrates the potential benefits beyond reduced energy demand and lowered carbon emissions. It is estimated that the NHS spends £1.4 billion per year treating conditions arising from poor quality housing with at least £145 million of this attributed to cold homes.
Barriers and recommendations
Despite the potential benefits, there are a number of factors that may be preventing the implementation of such an initiative. The primary among these are:
• High costs – the price per retrofit is still high.
• Lack of capability to deliver – there are not sufficient skills within the supply chain.
• Lack of finance – money is not available to pay for the retrofit.
• Lack of user demand – energy efficiency upgrades are not yet an attractive proposition for owners or occupiers.
• Lack of clear government policy and direction.
To address these barriers the whitepaper makes four overall recommendations, each with steps to help achieve it.
1) Establish a long-term plan
This includes the need for a clear policy objective that is sustained in the long term and supported at every level of government and by planning laws. One key recommendation is to initially focus on social housing where, over the last 20 years, properties have been on average more energy efficient than private rented or owner-occupied homes.
2) Reduce costs and build supply chain capacity
To achieve this, the authors suggest developing pilot projects to both demonstrate the benefits and improve build techniques. It is also suggested that a centre of excellence is established and that evidence of performance is collected and shared.
3) Engage with consumers
It suggests that research should be carried out to identify how to communicate the benefits to homeowners most effectively and overcome any concerns that they might have.
4) Encourage investment
This includes recommendations to aggregate projects together to attract investment and lower the costs. It also suggests that local authorities should be given greater flexibility to borrow to finance the schemes and be supported in long term planning.
Achieving the carbon emission targets set out by the Climate Change Act is going to require a significant change to the way we approach new development; however, as the vast majority of the 2050 housing stock has already been built, retrofit is arguably more important. The whitepaper aims to provide a road map for delivering the kind of initiatives that are required.
Florida Hurricane Disaster Recovery
Although hurricane season is at its height between August and October, Florida hurricane disaster recovery is common during other months as well. Storms strike Florida more than any other state in the U.S. In fact, in the past 300 years, there have been well over 400 tropical storms affecting Florida. Damages have been tragic in terms of lives lost and the financial impact has been significant as well. Over the years, thousands of fatalities due to tropical storms have been reported and the cumulative cost of Florida hurricane disaster recovery has been over $100 billion.
What is a hurricane?
A hurricane is a severe tropical storm that begins in the Atlantic Ocean, Gulf of Mexico or the Caribbean Sea and then moves west. In the process, it picks up speed and becomes quite forceful from contact with warm waters. When the storm hits land or cooler waters, its strong winds and heavy rains can cause tremendous mayhem and destroy or damage anything in its path.
Why is Florida vulnerable to hurricanes?
Florida is susceptible to hurricanes because of its geographical location and close proximity to the tropics, where many of the storms originate. In addition, Florida is pretty flat and has a long shoreline, making it vulnerable to the powerful winds and rainstorms that move inland.
Florida’s long coastline has been exposed to so many hurricanes in the past, that the beach area has been eroded in many places. Beach restoration is costly, but storm damage repair is necessary due to the constant striking of tropical storms. Each Florida storm cleanup and hurricane disaster recovery is an expense that cannot be averted.
Florida produces a variety of crops, as the land is fertile and quite suited to agriculture. About 75% of US oranges and about 40% of worldwide orange juice is produced in Florida. However, farming production has been adversely affected by hurricanes throughout the years. When hurricanes hit, crop yields suffer to various degrees, depending on the severity of the storms. Some storms have been deadlier than others and some have been quite devastating in terms of overall damage and destruction. However, for a state that depends so much on its farmers, the agricultural impact after hurricanes has been collectively substantial. In fact, the damage to Florida’s economy is only part of the devastation. The national supply and demand equation is influenced as well, as consumers need to pay more for a smaller yield of produce. In fact, the impact of one major storm can have financial consequences for several years to come. For these reasons, Florida Hurricane Disaster Recovery efforts are supportive of rebuilding and restoring that sector of the economy.
Florida is a popular tourist destination with sandy beaches and numerous attractions. The tourism industry contributes sizeable revenues to the Florida economy. However, major hurricanes deleteriously affect tourism, local businesses and state coffers. Following hurricanes, storm damage cleanup and storm damage repair and reconstruction of water damages proceed to rebuild what was lost. The faster Florida hurricane disaster recovery takes place, the faster the tourism industry can bounce back and resume services for visitors.
Storm Damage Repair
The resilience of Florida residents has been tested repeatedly with each disastrous hurricane. People and institutions adapted to destructive hurricanes over the years and storm cleanup and disaster recovery are a natural part of life. There is an attitude of perseverance and a desire and willingness to rebuild and restore. Storm damage repair naturally follows storm water damage as life goes on.
Call Dalworth Restoration 24/7 for Florida flood and water damage emergencies. We are experienced in Florida water damage restoration, burst and leaky pipes, flooded toilets and overflowing crawl spaces. We handle all aspects of water damage removal, water damage extraction, disaster recovery repair, water damage cleanup, flood damage repair.
* On site inspections and estimates are always free.
Call Dalworth at 817-203-2944 for help with your Storm and Flooding Damage Cleanup services.
This time of year, most Americans turn some of their attention to filing income tax returns by the April 15th deadline. So, it’s no surprise that in January of 1942, the Walt Disney Studios released a Donald Duck cartoon titled The New Spirit and a year later followed it up in January of 1943 with The Spirit of ’43, both made for the U.S. Treasury Department to extoll the virtues of paying your income taxes.
In the early 1940s, the U.S. introduced new tax laws. Some seven million taxpayers who had never filed income taxes before were added to the tax rolls. The Treasury Department looked to Disney for help in educating taxpayers with a public service announcement (PSA) for the movie theaters and the studio was already starting to do other films for various branches of the U. S. government. In a telegram to Walt Disney, his brother Roy O. Disney said, “will discuss… matter of having us [studio] designated as a defense plant per your wire and see what we can do.” The Disney Studios became a major supplier of training films, PSAs and educational films to the U.S. Government during World War II.
Shortly after the bombing of Pearl Harbor, Treasury Secretary Henry Morgenthau, Jr. wanted a Disney-created "Mr. Average Taxpayer" character to help convey the importance of paying your taxes in the first of two shorts. But Walt Disney argued for Donald Duck as the spokesman, his most popular character at that time. Disney likened Donald Duck to lending out a leading movie star like Clark Gable to help educate the public in the theaters. Walt ultimately prevailed in his argument and Donald Duck became an advocate for paying income taxes.
Calling long distance from Washington D.C. on Thursday, December 18, 1941, Walt Disney spoke to story man Joe Grant and director Ben Sharpsteen about the new tax picture. Walt explained, “The treasury dept. wants us to make a film. This a big order. By that I mean it is a tough job to handle. It is to be a film with Donald Duck making people like the fact that they have to pay income tax. It is going to tell how the tax problem plays its part in this war.” He continued, “The Treasury Dept. wants to put on a nationwide campaign before March 15 and this film will be run in all theatres.” The phone call continued with ideas for what the storyline would be for this first tax picture and they also briefly talked about other matters including doing a PSA for war bonds.
“They need all the help we can give them and they are anxious to get it,” said Walt.
“We will first work on the income tax film and then the bonds,” said Grant.
“Stick to the Donald Duck film as the main idea. Donald talks to the radio voice and between them they tell the story of the income taxes. This means that we will be running into over-time but we’ve got to move it through. We must pull out our best men for this thing regardless of what they are doing,” said Walt, “I want the best men and the men who can work with speed and economy. Don’t forget that we are making this on a cost basis and this film will have to cost as little as possible.”
It was the Treasury Secretary who unknowingly came up with the title of the film. Walt explained in the call, "The Secretary suggested this thought… it was a new spirit. That might be a good title for this picture."
Walt's conversation with Treasury Secretary Morgenthau about making this film was essentially done on a handshake. There was no formal agreement initially spelling out the terms, only that Disney would do the film at cost. This arrangement would come back to haunt Walt. When he went back to the Treasury for payment, Morgenthau had to go to Congress to have the funds appropriated. That caused a dust-up, with some Senators accusing Walt of war profiteering. Ultimately, the Senate approved the funding. Roy did admonish Walt and stressed the need to get proper contracts in place beforehand with the terms spelled out.
Despite the bad press in the Senate surrounding The New Spirit, it received generally favorable reviews. The Chicago Tribune's drama critic Ashton Stevens said, "When a movie laughs you into making out an income-tax return—and borrowing money to pay it on the line, that isn't just pecuniary propaganda—that's magic that partakes of the miraculous." Thornton Delehanty of The New York Herald wrote, "Walt Disney's first production for the Treasury Department should go an incalculable way toward easing the grief and dismay with which the public customarily views its income taxes." The film was nominated for Best Documentary at the Academy Awards in 1943.
It is worth noting that from the time Walt phoned Grant and Sharpsteen on December 18th to discuss the first tax picture story ideas, it took only about six weeks to complete The New Spirit and have prints struck at Technicolor. The full running time of the short is 7 minutes 21 seconds, with a total of 662 feet of animation. Of that, roughly 389 feet is the Donald Duck animation portion, with the remainder of the film done in limited animation, camera moves on still art with optical effects, and voiceover. Some of that animation is re-use, including the whirlpool from The Sorcerer's Apprentice, which is painted black, white and gray and swallows a swastika. It was common for Walt to re-use animation for these government-funded films; it saved time and expense.
After the dustup in the Senate over appropriating funds to pay for The New Spirit had settled down, Jack King, a key animation director at the studio during those early years, was named director of the tax picture sequel, The Spirit of ‘43. He had assigned several animators to the project including Disney legend Ward Kimball. By this time, Kimball had established himself as a premiere animator at the studio with his work on Snow White and Seven Dwarfs, his design and animation of Jiminy Cricket for Pinocchio, and Bacchus along with his pet unicorn-donkey in Fantasia.
Joe Grant and Carl Barks were assigned to develop the story for The Spirit of ’43. Joe was noted for his strong story sense and had developed the storylines for Dumbo, Der Fuehrer’s Face, Thru the Mirror, as well as the first tax picture The New Spirit. Carl was best known for his work on Donald Duck comics, and is credited as the writer on numerous Donald Duck shorts including Donald’s Nephews, The Hockey Champ, Sea Scouts, and Chef Donald, among others.
The storyline for The Spirit of ’43 consists of Donald being torn between his spendthrift self and his more practical, thrifty self portrayed by a Scotsman that looks very much like a predecessor to Uncle Scrooge McDuck. The story for this short is a classic good vs. evil scenario. Good triumphs in the end, as it always should, with Donald racing off to the IRS office in Washington, D.C., to pay his taxes.
The last half of the previous years The New Spirit was reused in The Spirit of ’43 based on the Treasury Departments request. Roy advised against the reuse, saying in a telegram to Walt that, “Regarding Treasury requests for some new material and there plans to reuse last half of New Spirit I recommend that you advise against any secondary use of New Spirit principally on the basis that its effectiveness because of reuse of old material which will be recognized will be lessened with the public and for that reason is poor economy.” He also recommended that the film be short because, “…with so many Government pictures crowding the screen time message can better put over by being short.” It’s clear that the Treasury got its way reusing the material from the last half, but The Spirit of ’43 is shorter film than The New Spirit.
The Spirit of ’43 is also considered to be a stronger film than The New Spirit because Donald Duck is better as a character when he has an antagonist to work against, which is what made so many of the Donald shorts terrific, especially when he was pitted against Chip and Dale. In The New Spirit, it is just Donald and a radio announcer whereas in The Spirit of ’43 he plays off the spendthrift and the thrifty versions of himself. Also, the humorous gags are much stronger in this second film, which helps reinforce the message of paying your income taxes.
Clarence Nash does the voice of Donald Duck and Cliff Edwards, most famously known as the voice of Jiminy Cricket, does the singing voice over in both films. Fred Shields does the radio announcer voice in The New Spirit. Shields also provided the narrator voice for many Disney shorts including Donald’s Decision, Saludos Amigos, How to Play Baseball, How to Play Golf, El Gaucho Goofy, Victory Vehicles, Food Will Win the War, and many others.
Although these films were made as PSAs for paying your taxes, the re-use second half of the films falls into U.S. government propaganda. It was designed to inspire as well as bolster morale for the righteousness of America’s war effort. It was a different era when the entire country rallied against a common enemy, which defined what is now referred to as the “greatest generation” and the artists at the Disney studios played their part in that effort.
During that period, the Disney Studios relied heavily on these government-funded projects to help keep the studio financially afloat. It allowed Walt to retain employees he might otherwise have had to let go during the wartime economy. At the same time, those that worked on these shorts could take pride in their contributions towards helping America during difficult times.
The European and the U.S. Art Market (2005, The Netherlands) by Blerina Berberi
1. The US and EU art market
1.1. Taxes: VAT and Dds
1.2. Evaluation of VAT and Dds
1.3. The role of TEFAF
2. Other factors contributing to the European art market
2.1. Trends development
2.2. Politics, History, and Technology
2.3. The art hunters
Remember the Renaissance, the Golden Age, Romanticism, Impressionism, and other great art periods and movements? Reading through art history books or taking a short trip through some European countries, everyone immediately comes to the same conclusion: Europe has a long and vibrant history of painting, sculpture, architecture, etc. Most of the time when we refer to great masters and great periods of art, we come back to places like Italy, the Netherlands, France, the United Kingdom, and other European countries. Yet the question remains: How much have these countries profited from these great artists?
Great European art masters have inspired the world's artists; they have made art history and attracted many visitors. Still, in monetary terms, the present financial profit Europe draws from these artworks is considered by some analysts to be shaken. There are many market mechanisms and factors that change the monetary value of artworks. In economic terms, prices depend on demand and supply. The demanders have their taste and many reasons that influence their choices and buying, while the suppliers' aim is to satisfy the needs of the demanders according to their taste and, of course, to generate some financial profit. But it is not as simple as that. For example, Michelangelo and other great artists made great artworks commissioned by the church, while the state or the aristocracy commissioned other artists' works. At present it is not only the church, the state, and the aristocracy that commission great artworks; most frequently artworks become the property of anyone who has a preference for them and can afford to buy them. Furthermore, the art market of the present is not similar to that of the Golden Age in the Netherlands, where paintings were even won in lotteries. Since the art market has expanded and its buyers have grown in number, the state, now organized differently, has a changed attitude toward the art market.
One of the greatest changes in the organization of the European countries is the formation of the European Union. On May 9, 1950, Robert Schuman, in order to create peaceful relations between European countries, proposed the organization of Europe whose result is the present European Union with 25 member countries. In the beginning the EU countries focused on economic and trade co-operation, while at present the Union deals with other issues regarding justice, security, freedom, etc. All decisions of the member states are based on the Treaties signed by all EU countries. So some European institutions ought to provide broadly harmonized laws for the EU member states. Thus the European Commission, which is the executive body of the EU, has imposed tax laws and regulations on all European countries. Some analysts declare that the decline in the European art market share is attributable to the tax laws and regulations implemented by the European Commission. Yet the European Commission denies claims that the real causes affecting the market are the taxes and regulation of the art market. Moreover, Anthony Thorncroft, in a study of VAT and the European Art Market (2003), states that the complexities of taxes and regulations in Europe have led dealers and collectors to consider it a "minefield".
This paper discusses the different factors that influence the art market share of the European Union. The first sections are mainly a presentation of facts and data gathered by The European Fine Art Fair surveys. The main emphasis is put on taxes and regulations, Value Added Tax (VAT) and Droit de Suite (DdS), which according to TEFAF surveys have caused the European art market to lose around 7.2% of its global market share since 1998 and will cause it to continue to decline if the tax and regulatory environment is not changed (TEFAF, 2002, p.5-8). While some attribute this decline to the taxes and regulations, other factors such as trend developments, history, politics, and buyers' attitudes over the last three years are discussed in relation to the level of influence they have on the art market compared to the influence of the taxes.
1. The US and the EU art market share
The US contains the greatest number of important collectors, rich museums and the largest stock of valuable antiques, while European dealers traditionally dominate Old Master paintings and drawings (TEFAF, 2003, p.8). More interestingly, one of the main differences between these art markets regarding the legislative environment is that the US has implemented neither a Value Added Tax nor Droit de Suite, where the former is already implemented in the EU and the latter will be implemented on January 1, 2006.
Nevertheless, the art market is everywhere basically divided between dealers and auction houses. Every country might have different trade practices, but the way business is done tends not to change over the years, so the distribution channels are still the dealers and auction houses. Yet the development of the Internet has not provided a viable alternative to the traditional way of selling and buying art (TEFAF, 2002, p.11).
According to the survey conducted by The European Fine Art Fair in 2002, the European art market is a very large industry: total sales in 2001 were 12 billion euros, or 45% of the 26.7 billion euro global marketplace. Furthermore, it has around 28,600 businesses, which employ approximately 73,600 people. Also, the art trade is increasingly conducted across national borders: imports from outside the EU in 1999 were 1.53 billion euros, while exports to non-EU countries were 1.81 billion euros. Moreover, from 1998 to 2001 the average price of an artwork sold at EU auction declined 39%, to $7,662, so European sales ran counter to the worldwide trend. However, the US sales with the highest prices take place in New York, and the US, rather than the EU, is the major country for trade in paintings (TEFAF, 2002, p.8-15). As already mentioned, art sales are divided between sales by dealers and auction houses. Since the early 1990s dealers have generated 48% of the global market while auction houses retained the rest, thus 52%. But in Europe the dealers have generated around 54% of total sales and the auction houses 46%. In 2001 there was a growing concentration of high-value auction sales in New York, and the US art market share surpassed that of Europe. Yet Europe accounts for approximately 60% of global dealer sales.
Most importantly, according to Thorncroft, while the market is being liberalized and free trade between nations dominates, the European Commission, disregarding the characteristics of the art market, takes decisions to tax and regulate it (TEFAF, 2003, p.11). The US has no taxation equivalent to the EU's. Due to taxation and regulation, many market makers and customers in Europe shift their transaction nexus in order to lower costs. Therefore it is stated that: "Europe's loss of global market share can be largely attributed to taxation and regulations" (TEFAF, 2002, p. 8). The US market, in contrast to the European one, has no equivalent taxes and fewer regulatory complications, and so allows for wealth accumulation. This means that art dealers have more money with which to circulate artworks, and art collectors have the opportunity to accumulate the income necessary to add the next painting to their collection.
Before going into further detail and weighing the influence of taxation and regulation, let us first describe the two types of taxes in the EU art market to which most of the "complaints" emphasized in the surveys refer.
1.1. Taxes: VAT and Dds
Two of the most crucial and "popular" taxes that are frequently referred to as having, or being about to have, an impact on the EU art market share, as the TEFAF surveys in 2002 and 2003 suggest, are Value Added Tax and Droit de Suite.
The Value Added Tax ("Import VAT") was introduced in the EU in 1995 by the European Commission. It is a tax levied on artworks and antiques that is paid on the entry of goods into the EU. VAT is also paid on other products when they cross the border into the EU; thus it is not a special tax paid only on artworks. When an object enters the EU as a "temporary import", payment of VAT can be deferred until the moment the artwork is sold, but the artwork must be sold within two years of the importation date.
The other tax is Droit de Suite (DdS). This tax has not been implemented yet, but it will be, equally in all EU member states, by January 1, 2006. The tax is a "royalty" that is paid to the artist, or to his heirs for around 70 years after the death of the artist. Furthermore, the seller pays this tax upon each re-sale. The maximum that can be paid for any work is 12,500 euros (TEFAF, 2003, p.16).
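The paper quotes only the 12,500 euro ceiling; for concreteness, the sketch below also applies the sliding-scale rates laid down in Directive 2001/84/EC (4% on the first 50,000 euros of the resale price, down to 0.25% above 500,000 euros), so the band structure comes from the directive rather than from this paper.

```python
# Sketch of the Droit de Suite royalty on a single resale. The paper
# mentions only the 12,500 euro ceiling; the sliding-scale bands below
# are taken from Directive 2001/84/EC, not from the paper.

DDS_BANDS = [              # (upper bound of band in euros, rate)
    (50_000, 0.04),
    (200_000, 0.03),
    (350_000, 0.01),
    (500_000, 0.005),
    (float("inf"), 0.0025),
]

def droit_de_suite(resale_price: float, cap: float = 12_500.0) -> float:
    """Royalty payable to the artist (or heirs) on each resale."""
    royalty, lower = 0.0, 0.0
    for upper, rate in DDS_BANDS:
        if resale_price <= lower:
            break
        royalty += rate * (min(resale_price, upper) - lower)
        lower = upper
    return min(royalty, cap)  # capped at 12,500 euros per work

print(droit_de_suite(100_000))    # 3500.0  (4% of 50k + 3% of the next 50k)
print(droit_de_suite(3_000_000))  # 12500.0 (hits the ceiling)
```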
- The increase of regulation and taxation in the EU
One of the effects of DdS is that primary dealers will shift their business practice so as not to run into the complications of buying stock from artists. Therefore they will sell on an agency basis, by commission, and not from inventory. The introduction of DdS has also meant that the more valuable DdS-eligible works are removed to sales venues where the tax is not payable. DdS causes a potential loss of sales, as in the UK, where in 1998 DdS-eligible artworks sold at auction for around $291,468 on average, far more than the price of a flight to an alternative venue. It is therefore reasonable to think that lots, especially of Contemporary art, will be sold in New York instead of Europe (TEFAF, 2002, p.16-17).
Another effect of both taxes is that they require additional costs for the paperwork, around 60 euros per transaction. More specifically, all these tax regulations require that the "…. dealers must secure substantial lines of credit at banks to enable themselves to import works of art for sale on a regular basis" (TEFAF, 2002, p.17). This "freezes" resources that the dealer could use more profitably, such as for buying stock. This loss of financial opportunity affects trade more broadly, not just working agreements. Therefore the European art market, according to the TEFAF survey, generates an image of itself as a complicated and expensive place for business, and it encourages dealers and collectors to consider alternative venues. This is considered, in the TEFAF surveys, to be the greatest danger for the European art market.
Furthermore, critics believe such an image of Europe has led US dealers not to cooperate in sharing data, news, etc., with the Europeans, since they prefer to do business somewhere less complicated. Also, a lot of European dealers have established offices in New York for a more favorable taxation nexus. Again, decisions between auctioneers, dealers, and collectors have given rise to a "regulatory arbitrage" by which they try to achieve high efficiency for their capital and efforts. Thus an auctioneer and a collector could select transaction venues where the taxes are lower and the regulatory restrictions are not a big fuss.
The role of taxes is also related to the level of service regarding the accessibility of information and policy makers. In some EU states, it is very difficult to get the required information on tax rates, and a lot of hierarchy and bureaucracy are thought to exist (TEFAF, 2003, p.15). Let us now take a look at the different opinions on the effect of these taxes, including US ones, and try to evaluate the importance of each of them.
1.2. Evaluation of VAT and Dds
As mentioned earlier, VAT is a tax imposed by EU directives, and it is paid on all goods and services imported into the EU. Yet different countries have different VAT rates. Furthermore, upon registration the tax is deductible, but the registration for such a tax is thought to be a complex procedure. However, what is more important is that since VAT is paid on any goods and services imported into the EU, there is no reason why it should not be paid when artworks are imported. Yet the difference between imported artworks and other goods is that artworks are also cultural goods. Furthermore, Thorncroft states that VAT "…hampers the repatriation of art from outside the EU, which as well as supporting the business of dealers, enhances cultural patrimony" (TEFAF, 2003, p.15). This means that great European master works will not return home to Europe but will remain part of other, non-European, collections. Still, Thorncroft also suggests that the VAT registration scheme and the information regarding taxes and regulation should be made very accessible to dealers. We can think that if this access to information becomes easier, then VAT might not necessarily lead to a decline in the EU art market share. But the outcome is not easy to predict if the causes are not clear.
However, art dealers, auction houses and collectors have the right to complain if it really is the taxes that influence their transactions and thus bring about the decline of the EU art market share. The TEFAF surveys present the whole problem of the art market as lying in taxes and the regulatory environment, as though everything else in the art market and the mechanisms that influence it were under control. Therefore, in order to evaluate the role of VAT, an overview of the events of the last decade will be discussed in the following sections; for now, we can discuss the role of DdS.
The most "problematic" tax seems to be DdS, since the abrupt answer to VAT can be that it is a tax paid on all goods and there should be no exception for artworks. In the United States it is said that DdS "…does not assimilate well with the domestic economic and legal systems" (Alderman, 2005). In more detail, Alderman states that DdS "…is a foreign concept born of different social and legal systems, and is antithetical to the Anglo-American tradition of free alienability of property" (Alderman, 2005). Furthermore, Alderman advocates that the copyright model in the US is "market driven and rewards only successful creations". Therefore, the allocation of wealth to contemporary artists is inappropriate. But the EU considers DdS the right tax, for different reasons.
One of the reasons for the EU to implement DdS might be that DdS will provide wealth to contemporary artists and might also stimulate the creation of artworks, which as a result will contribute a variety of artworks to the art market. Also, DdS will increase the value of DdS-eligible artworks and provide financial income for living artists, who might have difficulties competing with non-DdS-eligible artworks, which most of the time have a higher average price.
Furthermore, if DdS is to promote new EU Contemporary artists, a great variety of artworks might appear on the market not as a result of "good quality", if we believe there is such a thing, but as an easy way of securing some financial income. However, even if there were a lot of DdS-eligible artworks, many dealers and collectors would still tend to make transactions outside the EU's borders, since investment in contemporary artworks would "freeze" their capital (TEFAF, 2002, p.16). For dealers and auction houses interested in generating financial profit, DdS will simply mean more money tied up in an artwork: on re-sale they would have to pay the royalty, leaving their capital less flexible for generating income. So even though DdS is said to be implemented in order to provide some financial profit for living artists, its effect can be the contrary: no one will want to invest in DdS-eligible artworks, which as a result would make the business less attractive, since additional money will be spent on these artworks.
1.3. The role of TEFAF
The European Fine Art Fair, which is also known as the Maastricht Fair, is an event that takes place every year during the month of March. Anthony Thorncroft, in the introduction to VAT and the European Art Market (2003), states that the fair attracts around 75,000 visitors, including major collectors and museum curators. An article in the Herald Tribune about TEFAF starts as follows: "Even more than commerce, passion drives art market" (Melikian, 2005). The exquisite exhibition of rare masterworks at TEFAF calls the attention of everyone, not just in Europe but on other continents as well. Just the possibility of seeing up close all those artworks, gathered and collected from different places and times of the world, creates a pleasant atmosphere at the MECC. Furthermore, Bennett in his article says that the atmosphere, decoration, richness of paintings, glamour and the rightness of the setting make TEFAF "a beautifully packaged dream" (Bennett, 2005). Of course not everyone can buy a piece of that dream, yet everyone can enjoy its exciting atmosphere.
Nevertheless, Thorncroft states that a guide produced in 1999 to help dealers at TEFAF had a negative effect on the dealers' attitude. The roughly 56-page guide, consisting of information on taxes and transactions depending on the domicile status of the buyer and seller, caused exhibitors to finish their deals elsewhere rather than in Maastricht (TEFAF, 2002, p.12). As we can see, the image of TEFAF has not diminished, but complicated procedures seem unattractive to buyers, as Thorncroft states.
- Is there a threat to The European Fine Art Fair's role: exhibit or sale?
The glamour of TEFAF can always get better, and the impressions of such a great event can grow even stronger, but is TEFAF actually conducting as many transactions as it used to? Thorncroft, as stated earlier, said that even the guide itself, however impractical, might annoy buyers simply by the sheer extent of its information and requirements.
“Together, the discoveries give the fair its unique whiff of novelty. The miracle is that Maastricht has been keeping it up year after year” (Melikian, 2005)
In this case Melikian refers to the great variety of well-known old and new artworks. Therefore, in the art market it is not the fuss of paperwork that directs collectors to buy their favorite artworks; those are just instructions. It is the passion for art that counts above all. But at the fair there are not only collectors interested in the artworks; there are also dealers, who are interested in selecting and buying artworks from which they may in the future derive some financial profit. Still, in both cases there are two different motives which, even though they can create frustrations, cannot affect the behavior and decisions of all the buyers involved in transactions. Therefore, the "unfriendly" tax and regulatory environment might not be the preoccupation of the art lovers so much as that of the businessmen, whose business of course depends on the persistence and passion of the demanders. Nevertheless, in this case TEFAF is between two "fires". On one side, the EU Commission's imposition of taxes and regulations leads to a decline in transactions and the vanishing "passion" of art dealers and collectors. It is therefore normal that TEFAF complains about such taxes, since it has to provide an appropriate service to its demanders by supplying them the best under better conditions. Since the art market is highly competitive, TEFAF cannot "survive" without its collectors and dealers, and thus, in order not to cease to exist, detachment from such taxes seems to be the only solution.
But the art market is not regulated only by taxes. There are many other factors that influence trends, price development, shares, etc., which need to be considered in order to find, if possible, a way of dealing with the decline of the EU art market share. Thus, instead of jumping to conclusions about the role of VAT and DdS, let us take a look at the other factors that influence art market shares and, with them, the role of TEFAF.
2. Other factors influencing the EU art market
So far, the two major taxes, DdS and VAT, considered by the TEFAF surveys to be the major influence behind the decline of the EU market share, have been presented in outline to show the effects of their implementation. Yet the art market is a complex place where a lot of mechanisms are involved, at both the macro and the micro level. The following sections deal with the other factors that influence the art market, with special attention at the macro level to historical, economic and political factors, and at the micro level to the attitude of the collector and investor in artworks. Later on, these factors will be weighed against the effects of DdS and VAT. Interestingly enough, an analysis of art market trends over the last three years will present the development of trends as, at first sight, perhaps not highly influenced by the taxes and regulations but by other important and sporadic events. Furthermore, the next question will be: can art dealers, auction houses, collectors, TEFAF, etc., have any control over these events?
2.1. Trends Development
Tendencies in art market shares are analyzed by focusing on the different values of pricing, transactions, sales, etc. Trend development in the last three years seems to be somewhat, but not completely, stable. The US is considered to be the market leader in terms of fine art prices and shares. Thus we might rush to say that both DdS and VAT influence this trend; that would mean, according to the TEFAF surveys, that if the taxes and regulations had not been implemented, or were not about to be in the case of DdS, the art market trends might have been different. But is that really so? Let us take a look at the trend development in the last three years: 2002, 2003, and 2004.
First of all, in 2002 the average price of an artwork in European auction houses was low compared to the US average price, as the survey states more clearly:
“…low price marketplace for fine art where pricing, in economic terms, behaves more like a demand driven retail marketplace and less like the supply driven nature of the global art economy as a whole” (TEFAF, 2002, p.15)
The statement implies that the European art market is not primarily controlled by art dealers or auction houses; rather, it is the demanders who determine the pricing of artworks. If the buyer decides the value of an artwork, then prices lie beyond the supplier's control, which affects overall market share and profit. The responsibility for changing the market's mechanisms, and more specifically, in this case, its taxes and regulations, therefore falls to the art dealers and auction houses, which must survive in order to generate profit and continue supplying artworks that fit the demanders' taste.
Now let us see how art market trends have developed over the last two years. The surveys conducted by TEFAF predicted a low market share for the EU owing to VAT; as for the effect of DdS, should it be implemented, we would have to wait several years to see how it plays out.
In 2002, the major loser of art market share was the US, which shrank by 6 percentage points to 42%. Europe, by contrast, strengthened its position and led the art market with a 53% share by the end of 2002 (Artprice, 2003, p. 7). At first sight this might imply that VAT has had no effect on the EU art market share.
In 2003 the US art market share did not change, remaining stable at around 42%, and Europe led the market from its main centers in Paris, London and Rome (Artprice, 2004, p. 3). But in 2004 the US, and especially New York City, saw many changes, and thanks to the auction sales in New York the US market share increased to 45%. Sotheby's dominates the art market, while London is number one in the European art market (Artprice, 2005, p. 7). As we can see, taxes and regulations do not appear to be stable or sole predictors of the market. The recent recovery of the US market owes much to the quality of works sold by auction houses, while the decline of the US market share in 2002 and 2003 can also be attributed to the devaluation of the dollar and to the war in Iraq, which also weighed on the stock market. Understanding the effect of taxes on the art market is therefore no easy task and requires careful consideration of many factors. It is thus reasonable to look at the events and changes that took place in the EU and the US during the last few years.
3.2. Politics, History, and Technology
Some of the biggest events of recent years include September 11, 2001; the war in Iraq in March 2003; the expansion of the European Union; and the introduction in 2002 of a single currency, the euro, in all but three of the EU member states. September 11 affected all US stock markets, which were closed for an extended period for the first time in many years, and the threat of terrorism influenced both investment and the stock market. The war in Iraq, one of the factors behind the decline in the value of the dollar, can also be said to have influenced art market sales in both the US and the EU. Furthermore, the introduction of the euro on January 1, 2002 was followed by a recession in the economies of most of the dominant EU members, and the further expansion of the EU also had effects on the economy, including employment rates. Nor should we forget that events in a particular part of the world draw the attention of collectors and dealers toward specific regions and genres of artwork.
– The influence of Politics and Technology on art market trends
According to Artprice (2001), a decline in the New York and Paris stock markets leads to more favorable prices and a much greater opportunity to select finer artworks. The option of selecting finer artworks leads to higher prices in the future, and therefore more profit, especially for auction houses. This is not always the case, however, because for rare artworks the chance of making a profit on resale is smaller.
Furthermore, events such as wars have a great influence on the art market. For example, in the post-Gulf War period in 1991 there was an absence of record sales, and in 1991 the market entered an agony that lasted five years (Artprice, 2002, p. 1). The September 11 attacks likewise broke the upward trend in the Artprice Index. Moreover, the economic crisis in the US and the beginning of the war in Iraq gave the art market a grave character: since September 2001 the percentage of lots sold fell to 47% and prices went back to 1999 levels, with New York most affected by these events. The economic situation also led auction houses such as Christie's and Sotheby's to produce large catalogues to meet collectors' demands. Again, the decrease in the value of the dollar boosted US exports and limited imports; on the other side, as the euro strengthened, artworks sold in the US became more expensive for investors in New York. Most interestingly: "A weaker dollar should drive up prices in the US, but this inflationary trend cannot be expected to spread to Europe" (Artprice, 2002, p.3). In any case, amid the turmoil of US art market prices, the EU consolidated its position as market leader. We can therefore state that, whatever the effect of EU taxes on market share, changes in the art market were also caused by historical events driven by political policies. This seems to imply that the turmoil of art market prices in the US, caused by foreign policy and other governmental decisions, may have worked to the advantage of the EU art market share.
Nonetheless, the development of technology has increased the speed of transactions and the speed of reaction, and all this has resulted in greater price volatility. Returning to the taxes in the EU, one criticism of the EU market is its low investment in technology. Regarding VAT, suggestions have been made for an electronic register, which would allow dealers to register easily for import VAT instead of dealing with the "bureaucracy" of the EU countries, where poor access to information is one of the main elements forming a negative image of the EU marketplace. European art dealers already spend 2.5 times the average reported in the dealer survey on technology (TEFAF, 2002, p.19). At present, when information is crucial to knowing what and how to buy and sell, EU art dealers should invest even more in the technology that facilitates communication and transactions.
As described above, historical and political events matter in the art market because they influence the shares of the US and EU markets. Yet these are only some of the factors that shape trends and price development, and they do not clearly establish that EU taxes and regulations have no effect on the art market. The turbulence in US stock markets and politics repositioned the EU as the art market leader, but the EU could not keep that position, and one of the reasons might be the implementation of taxes by the European Commission. Nevertheless, other factors, such as the behavior of buyers in the art market, must be taken into consideration before drawing conclusions about VAT and DdS.
3.3. The art hunters
It is common to assume that the privilege of owning artworks depends on a person's income and ancestry. As with every good and service, possessing something requires a high financial income, or the object may simply come as part of an inheritance.
A recent article about New York states that, apart from the Wall Street titans, "…a new influx of young, little-known billionaires who manage hedge funds are roiling the art market, using their vast pools of capital to snatch up some of the world's most recognizable images" (Thomas & Vogel, March 03, 2005, p. 1). Steven A. Cohen, age 48, is the collector the article is about. His investment in art is said to influence the art market and to stimulate discussion of the value of other artists. Yet some see his collection as bought "through ears rather than through the eyes" (Thomas & Vogel, March 03, 2005, p. 2). The article shows how billionaires are interested in collecting art and how their decisions about what to buy influence the value of other artists' works. It also shows that opinions differ on how far a buyer's choice is driven by personal taste or merely by well-known artists' names.
Unlike in the EU, the US capitalist system makes it possible for young, not yet well-known billionaires to own famous artworks. Furthermore, while in Europe most people must first spend money on an educational degree in order to become rich, in the US employment is often based on what you can do in practice, without always placing so much stress on the degree you hold. Job qualifications of course differ across positions, yet it is more likely in the US that the young generation can become rich. If young Americans have more opportunities to earn money quickly than EU students do, then more transactions will take place in the art market, although this depends on the interest of this newly rich generation in artworks. What a good collector always needs is good taste and money.
However, the settings of the art trade and the people involved in it are related to each other by different interests. A museum can be the promoter of good artworks, but also the exhibitor of a private collector's holdings, which the collector hopes to sell at a higher price in the future. In some cases, artworks in museums are said to represent not just cultural and individual values and perspectives but also those of businessmen who want to promote parts of their private businesses. One example is the collector and businessman Peter Ludwig, owner of his father's chocolate company Leonard Monheim AG in Aachen (Nairne, 1990, p. 179). Even though Ludwig claims that business and art are separate, the expansion of his collection into Russia and the Eastern Bloc countries showed his influence over public art institutions, where some exhibitions of artworks came with commercials for his chocolate company.
– Who buys art? Collectors, Museums, Auctions and Fairs.
There are different places, public or private, where artworks are exhibited and/or sold. The most common public places are museums. Museums represent the values and ideas of different cultures and individuals, and their appeal is strong, drawing visitors with the idea that going to a museum is educational and enlightening (Wilson, 1994). Museums are also recognized as a sort of temple, where "liminality", the effect of losing track of time, overtakes visitors as they enter (Duncan, 1995); Duncan indeed refers to museums as temples because of the quality of time and space they generate in the visitor's mind. The museum's power to produce such exceptional psychological effects is also exploited by art collectors, who allow museums to present works from their private collections, which in turn increases the value of an artwork when the collector later wishes to sell it. This is again the case of Ludwig, the businessman and art collector, who has influence over some museums where the "artworks" are advertisements for his own business. Additionally, as Nairne states: "A museum influences value judgments as much as it accepts the value judgments established by others" (Nairne, 1990, p.76). But how far a museum influences value judgments depends not just on the artworks but also on the number of visitors. More Americans seem to visit museums than Europeans do. For example: "More Americans go to museums than go to football games. Last year almost 4.5 million people went to the Metropolitan Museum of Art in NY" (Nairne, 1990, p.75). The Louvre in Paris also draws many visitors, but in the Netherlands, for example, one article reports that the Den Haag City Hall museum and the Rijksmuseum have not attracted many people, and Ravensteijn hopes that the Rijksmuseum will not close, to reopen only after extensive advertising (Ravensteijn, March 2005, p.38). Various events, such as the restoration of buildings, theft, and the organization of private parties, have contributed to this decline in visitor numbers. This suggests that managerial skills are wanting, and that if an interest in the arts is to be aroused among the coming generations, youngsters should visit museums more often.
Apart from museums, private galleries and auction houses are also places where "inspiration" about value judgments and collecting is to be found. Private collections, too, play an important role in an artist's success: "private collections are the best influential place for an artist's work to be" (Nairne, 1990, p.68). When an influential individual shows his collection to friends, who mostly share his social standing, they will also become interested in buying similar artworks, or will at least be influenced to spend money on a certain movement in art. The difference between museums on the one hand and private galleries and auctions on the other is that museums are open to the public, while private galleries and auctions still impose some limits on who buys and sells. As Nairne notes, selling in private galleries, and sometimes in auction houses, is not simply a transaction open to everyone. Whenever artworks are sold, the history of provenance matters to the value of the work: if the provenance shows that famous people have possessed an object, its value will increase. At the same time, this attention to provenance makes the dealer think twice before selling an artwork. Art magazines are another interesting case: according to Nairne, most private collectors do not even look at art magazines, since they are the ones who sponsor such publications in order to raise values before selling part of their collections.
Nonetheless, in fairs and auction houses the preferences regarding the next owner of an artwork may be somewhat more limiting, depending of course on the expenses and the marketing of the work. In some cases it is not worth waiting many years to sell an artwork, since the art market is unpredictable and the value of the work might decline rather than increase. Still, auction houses such as Christie's offer special conditions to well-known private collectors, and in some cases an auction house may agree to sell an artwork to one person rather than another. In any case, the auction house needs to generate profit in order to keep its business running, so different rules may apply, in a rough sense, at different times and to different collectors. In summary: "…participation of the gallery in art fairs, the promotion of exhibitions through art magazines and reviews by critics, are important in supporting each artist's and the gallery's general reputation" (Nairne, 1990, p.64).
On the other hand, art markets of varying quality exist all over the world, and some of them offer high-quality works comparable to those of the auction houses. A recent article about the art market in Paris reports that the flea market Marché aux Puces at Porte de Clignancourt has become one of the largest art markets, now frequented by many art dealers. The marketplace is becoming very popular with celebrities, though it was not in the past; even Jacques Chirac used to go there before he became president (CNN, June 2, 2005).
There are many places where buyers can look for valuable items, and where one goes depends on whether one's taste can be satisfied by the artworks supplied. Some auction houses, museums, private collections, galleries and flea markets hold rare artworks, but not all of them do. Moreover, it is the taste and interest of the buyer that in part determine the prices and prestige of artworks and artists. The following section analyzes the different motivations of dealers and collectors, the agents of the art market.
– Why buy art? Different motives.
As already stated above, the psychological effects of artworks are great. Yet in some cases a liking for an artwork is not a prerequisite for owning it. This is true of most dealers, and understandably so: to do business, one should not become too emotionally attached to the artworks. Whether this detachment affects the dealing for good or ill varies from case to case.
According to Klamer, the value of artworks is not economic but cultural. He states that artworks are symbols of culture and values from which we even draw inspiration; the possession of such goods, and their destruction, have a great effect on particular individuals and on humanity and its history (Klamer, 2002). Klamer states: "Cultural goods are goods for more than their economical value" (Klamer, 2002). Bourdieu, on the other hand, writes of taste: "…scientific observation shows that cultural needs are the product of upbringing and education, and preferences in literature, painting or music, are closely linked to educational level" (Bourdieu, 1973). Social origin strongly influences taste, so that individuals of a given social class are predisposed toward certain tastes and interests in art. Cultural nobility derives from the environment in which an individual was raised and from education, since understanding and decoding an artwork's meaning requires prior knowledge. But Klamer notes that those without formal education also possess cultural capital, referring to the Aboriginal peoples of Australia with their rites and cultural ceremonies. The problem seems to arise when an artwork of cultural value is displaced from its context, which is why both authors agree that some background knowledge helps the museum visitor understand and be inspired by an artwork. Likewise, Klamer argues that cultural products have inspirational value rather than economic power of influence: "Cultural capital is the ability to deal with cultural values without regard to possible economical returns" (Klamer, 2002).
But not everyone can possess such valuable objects, and mere contemplation does not help one own any of them. There is an interesting relation between the cultural and the monetary value of an artwork. Many people who do not know much about art attach value to a work not primarily because it is beautiful in itself but because it is expensive or made by a very famous artist. So even if an artwork does not strike someone as special at first sight, awareness of its price may still change that person's "taste". Conversely, many artworks whose monetary value is below the average price are appreciated in themselves as more beautiful. This is not always the case, but it shows that the buyer's attitude depends on the relationship between cultural and artistic knowledge of the artwork and its price, that is, its economic, monetary value.
Klamer and Bourdieu state wonderful ideas, but the reality of the art market is different. In the art market one cannot tell an individual who can afford an artwork that he may not buy it because he offers no proof that the work is a source of cultural inspiration for him, or because he lacks a certain degree of education or the right birth certificate. Nevertheless, acknowledging the different values and the conditions necessary for appreciating an artwork leads one to think about the future, and the question arises:
What would happen if the most valuable cultural artworks of a community became part of the private collections of rich individuals whose appreciation and cultural inspiration are less than those of the community? The problem lies in measuring Bourdieu's "educational capacity and taste" and Klamer's "cultural inspiration". These matters should not be neglected, but rather taken into consideration in order to improve the appreciation of the cultural products of different cultures. As stated earlier, frequent museum visits do not only produce "liminality", a mode of consciousness outside the normal (Duncan, 1995), but also inspiration and education. Therefore, in order to have a "healthy" art market in the future, all generations should visit artworks more frequently, wherever they may be. But if, as TEFAF claims, all the great EU artworks end up remaining in the US, then Europeans will have no chance to visit their great masters' works. This seems far-fetched, but it is worth considering.
– The ‘interaction’ between trends, buyers, market and historical events.
Media attention to different issues and events around the world shapes the demand and taste of the collector. In some cases, if a person dislikes a country's political system, he or she will tend to value that country's artworks less than those from political systems he or she prefers; for example, since many Europeans dislike capitalist societies and US foreign policy, there may be a tendency to devalue certain American artworks. This may not seem a very good basis for appreciating artworks, but it can still be the case. In other instances, the attention of the media and other informative channels affects a collector's taste and appreciation in many different ways. In general, events occurring in the world have a psychological effect on the collector's attitude toward an artwork. Recently, the Discovery Channel showed a program on "Jesus' shroud", analyzing the possibility that it was a fake and, moreover, a "joke" made by Leonardo da Vinci, since the face on the shroud bears similarities to Da Vinci's self-portrait and to the Mona Lisa, which is also thought to be Da Vinci's female version of his own portrait. Such sensational programs arouse viewers' interest in the painter, and as a result many visitors can be expected to go to the Louvre to see the Mona Lisa. Likewise, the book The Da Vinci Code by Dan Brown may have inspired many people to visit Da Vinci's Last Supper in Milan, Italy. In other cases, a film fan goes to the video store, rents "Girl with a Pearl Earring" out of curiosity, and thereby becomes interested in Jan Vermeer's works.
There are many cases in which certain art movements and artists become popular and everyone seems to agree on the same idea. That is why some rich collectors, such as Cohen, are sometimes criticized for collecting only what they have heard is important, without any personal appreciation of the artwork. Some collectors simply follow the trends and, to stay safe and maintain a high reputation, buy only the most expensive artworks on the market. Others prefer to stick to one source: they are regular clients at Christie's, for instance, and will not buy a more valuable artwork elsewhere unless Christie's sells it. The reasons differ, but among them are service, price, and assurance of the artwork's authenticity. Finally, the many wars throughout human history have inspired states in particular to commission or buy artworks depicting battlefields, as records or as symbols of pride and nobility, as in the case of Napoleon.
The factors that affect the attitude and taste of the collector or dealer are many. In most cases it is difficult to say which influence comes first and what the motives are. These factors deserve careful analysis, since they determine the circulation of artworks at the micro level, which is not always influenced, or at least not at first, by taxes and regulations. The richness of these motives at the personal and societal level shows that taxes and regulations do not penetrate the psychology of the buyer as deeply as might appear. A distinction can be drawn between the dealer and the collector: the former is more interested in financial income, which makes taxes and regulations more important to the dealer than to the art lover, the collector. In addition, other factors mentioned earlier, such as politics, economics, education, and the management of museums, influence the art market to some extent, just as taxes do; as one commentator puts it, "At times when markets are bad, people like to invest in something they can touch" (Arnold, May 17, 2004). But dealers cannot simply wait and hope for bad market times in order to keep their businesses running. As for collectors, the final advice would be: "Just buy whatever you love!"
In conclusion, VAT and DdS are among the factors that influence the art market, but the question remains how much they have to do with the decline of the EU art market share. The events of the past few years in the US, such as the war in Iraq and September 11, positioned the EU as the world's leading art market, but not for long. At present, New York is among the most popular art market venues, where auction houses sell artworks at very high prices. Since the US repositioned itself as market leader after the turmoil, some "responsibility" for the decline in the EU art market share is left to the taxes. Taxes do influence the art market, but that does not mean they are the only factors affecting it, and with so many factors at work it is difficult to determine how large a role taxes and regulations play in market share. A deeper, more serious analysis of historical and political events, art marketplaces, and collectors' and dealers' motives should therefore not be neglected.
If VAT continues to be implemented in the EU, it should, if possible, carry lower rates, while DdS needs the right framework for implementation. The effects of these taxes may not be as great as the TEFAF surveys suggest, yet they do influence the art market, and more specifically auction houses, dealers and fairs, though not necessarily the buying attitude of art collectors. People interested in art will always buy artworks as long as they can afford them, but in extreme cases many auction houses, dealers and other distribution channels in the EU might have to close down. It is therefore natural that TEFAF treats taxation as an important factor influencing the EU art market share, since it is one of the factors it can seek to control, whereas the political and economic situation of other countries, such as the US, is a matter on which dealers' opinions or decisions do not count. If taxes and regulations in the EU do not change, EU dealers would have to wait and hope for turmoil in the markets and politics of other countries, such as the US, in order for the EU to reposition itself as market leader; but such things are not morally right to wish for. Nevertheless, dealers should not attend only to taxes and regulations; they should also consider the factors that influence collectors at the micro level. Money should therefore not be spent solely on marketing artworks to those who can afford them: information should also be made available to the new generation and to future ones, in order to encourage and motivate them to become involved in art dealing, collecting, and management. Youngsters should be taken to museums more often on school trips, easier-to-read books should be made available to educational institutions at every level, and museums should, where possible, lower their tariffs to attract not just foreign tourists but also the members of their own communities.
Meanwhile, since the EU is enlarging its community, more works by new artists from the new EU member countries should be made available on the EU art market in order to increase the variety of artworks on offer and satisfy collectors' diverse tastes. Even though DdS aims to increase the wealth of contemporary artists, it will certainly not guarantee fame for the artist or more transactions for dealers. The contemporary artists should not be neglected, however. On April 11, 2005, an article published on CNN, "Man smuggles art in MOMA", told the story of an artist hanging his own works in different museums around the world, even at MoMA; it took MoMA three days to discover that the artwork did not belong to its collection. Such acts of desperation for fame, and the disrespect shown to the museum by this intrusion, are worth taking into consideration for the future of the world's art institutions and for the proper motivation of young artists.
– Alderman, E. (2005), Resale Royalties in the United States for Fine Visual Artists, The Alderman Law Office. Retrieved from:
– Arnold, J. (May 17, 2004), The art of losing money gracefully. Retrieved from BBC News:
– Artprice (2002), Art Market Trends 2003, Artprice.com
– Artprice (2003), Art Market Trends 2004, Artprice.com
– Artprice (2004), Art Market Trends 2005, Artprice.com
– Bennett, W. (2005), Art sales: Buying into another world, Arts-Telegraph. Retrieved from:
– Bourdieu, P. (1973), Distinction: A Social Critique of the Judgement of Taste, translated by Richard Nice. Retrieved from: http://cap.mg2.org/read/bourdieu.html
– CNN (June 02, 2005), Where Paris shops for chic antiques. Retrieved from:
– Duncan, C. (1995), The Art Museum as Ritual, in: Duncan, Carol, Civilizing Rituals: Inside Public Art Museums, London and New York: Routledge, 1995, pp. 7-20, 136-138
– Klamer, A. (2002), Cultural goods are good for more than their economic value. Retrieved from: http://www.klamer.nl/art.htm
– Melikian, S. (March 5-6, 2005), Art dealers savor thrill of the hunt, International Herald Tribune
– Nairne, S. (1990), State of the Art: Ideas and Images in the 1980s, London: Chatto and Windus with Channel 4 Television, pp. 62-69, 72-76, 173-182
– Ravensteijn, Robert-Jan van (March 2005), In de kantlijn van de kunstmarkt, in Kunst & Antiek Journal
– The European Fine Art Fair (TEFAF) (2002), The European Art Market in 2002, Helvoirt, The Netherlands
– TEFAF (2003), VAT and the European Art Market, Ernst & Young
– Thomas, L. & Vogel, C. (March 03, 2005), Hedge fund magnates shaking up the art market, International Herald Tribune. Retrieved from:
– Wilson, F. (1994), The Silent Message of the Museum, in: Fisher, Jean (ed.), Global Visions: Towards a New Internationalism in the Visual Arts, London: Kala Press, pp. 152-160
What Is the 80-20 Rule?
The 80-20 rule, also known as the Pareto Principle, states that 80% of outcomes arise from 20% of inputs. The idea is applied in business and economics to identify and prioritize the most productive or problematic inputs to maximize value or minimize cost.
How Does the 80-20 Rule Work?
While the 80-20 rule is widely used in many different fields, from economics to personal finance, it is important to understand that the 80-20 rule is not a scientific or mathematical law—but rather a concept.
80-20 Rule in Management
In managing others, it means that recognizing and then focusing on the most impactful 20% of workers is the key to making the most effective use of your time. Under this principle, if 20% of your staff produce 80% of the work you need, focus on them: they are your core group of impact.
80-20 Rule in Marketing
Putting the 80-20 rule into effect in marketing is about prioritizing focus. Using this principle, 80% of profits are generated from 20% of customers, 80% of product sales stem from 20% of products, and 80% of sales derive from 20% of advertising. This can help guide your use of marketing, advertising, and customer service resources.
80-20 Rule in Relationships
Just as in business, the best way to apply the 80-20 rule in relationships is to identify the 20% of obstacles that cause most of the friction and work on them. Resolving those will ease the majority of the relationship's issues.
Example of the 80-20 Rule
Examples can be found in many aspects of business.
In a production process, the 80-20 rule would mean that 80% of the output comes from 20% of the input. In a management setting, it may be that 80% of productivity comes from 20% of employees. In customer service, it may be that 80% of negative customer feedback stems from 20% of the customers.
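To make the idea concrete, here is a minimal Python sketch for checking how concentrated revenue is among customers; the revenue figures are invented purely for illustration:

```python
# Minimal sketch: measure how much revenue the top 20% of customers generate.
# The revenue figures below are invented for illustration only.

revenues = [420, 380, 95, 60, 55, 40, 35, 30, 25, 20]  # one entry per customer

revenues.sort(reverse=True)                  # largest customers first
top_n = max(1, round(0.20 * len(revenues)))  # size of the top-20% group
top_share = sum(revenues[:top_n]) / sum(revenues)

print(f"Top {top_n} of {len(revenues)} customers "
      f"generate {top_share:.0%} of revenue")
# With these numbers the top 2 customers (20%) generate roughly 69% of
# revenue -- close to, but not exactly, the 80-20 pattern, which is a
# rule of thumb rather than a mathematical law.
```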
Is the 80-20 Rule the Same as the Pareto Principle?
The 80-20 rule and the Pareto principle are the same, and the terms are used interchangeably.
History of the 80-20 Rule
The 80-20 rule was first discussed by economist Vilfredo Pareto to describe Italian wealth distribution in the early 1900s, showing that 80% of the wealth in Italy was controlled by 20% of the population.
In 1940, Dr. Joseph Juran applied the 80-20 rule to quality control. He postulated that 80% of product issues were caused by 20% of the production problems. Researchers in many fields have applied the 80-20 rule to explain phenomena in business, economics, and other areas of human behavior.
Microeconomics for All
TOULOUSE – For the last half-century, the world's leading universities have taught microeconomics through the lens of the Arrow-Debreu model of general competitive equilibrium. The model, formalizing a central insight of Adam Smith's The Wealth of Nations, embodies the beauty, simplicity, and lack of realism of the two fundamental theorems of competitive equilibrium, in contrast to the messiness and complexity of modifications made by economists in an effort to capture better the way the world actually functions. In other words, while researchers attempt to grasp complex, real-world situations, students are pondering unrealistic hypotheticals.
This educational approach stems largely from the sensible idea that a framework for thinking about economic problems is more useful to students than a ragbag of models. But it has become burdened with another, more pernicious notion: as departures from the Arrow-Debreu model become more realistic, and thus more complex, they become less suitable for the classroom. In other words, "real" microeconomic thinking should be left to the experts.
To be sure, basic models – for example, theories of monopoly and simple oligopoly, the theory of public goods, or simple asymmetric-information theory – have some educational value. But few researchers actually work with them. The bread-and-butter theories for microeconomics research – incomplete contracts, two-sided markets, risk analysis, inter-temporal choice, market signaling, financial-market microstructure, optimal taxation, and mechanism design – are far more complicated, and require exceptional finesse to avoid inelegance. Given this, they are largely excluded from textbooks.

In fact, microeconomics textbooks have remained practically unchanged for at least two decades. As a result, undergraduate students struggle to understand even the abstracts of papers on the complex representations of microeconomic reality that fill research journals. And, in many areas – such as antitrust analysis, auction design, taxation, environmental policy, and industrial and financial regulation – policy applications have come to be considered the domain of specialists.
This does not have to be the case. While it is true that realistic microeconomic models are more complex than their idealized textbook counterparts, grasping them does not necessarily require years of research experience.
A case in point is the economics of two-sided markets, which involve competition between platforms whose principal "product" consists in connecting two categories of users, who then offer each other network benefits. When markets are two-sided, many of the standard assumptions of antitrust analysis no longer hold: market entry can be bad for consumers, exclusive contracts can increase the number of firms in a market, and pricing below cost may not be predatory.
A survey by David Evans and Richard Schmalensee describes numerous situations in which applying old assumptions could lead to mistakes by, say, an anti-trust regulator with only an undergraduate degree. The unmistakable message is, "Don't try this at home."
But every behavioral divergence between two-sided and traditional markets can be understood using simple tools of elementary microeconomics, such as the distinction between substitute and complementary products. When producers of substitutes collude, they usually raise prices; producers of complements, by contrast, collaborate to lower them.
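The price-cutting logic for complements can be verified with a toy model. The sketch below is my own illustration, not from the article: two producers of perfect complements face linear demand Q = A - (p1 + p2) with zero costs, first pricing separately and then jointly.

```python
# Toy illustration (not from the article): two producers of perfect
# complements, linear demand Q = A - (p1 + p2), zero costs.
A = 12.0

# Separate pricing: each firm i sets p_i to maximize p_i * (A - p1 - p2),
# giving the best response p_i = (A - p_j) / 2. Iterate to the fixed point.
p1 = p2 = 0.0
for _ in range(100):
    p1 = (A - p2) / 2
    p2 = (A - p1) / 2
separate_total = p1 + p2          # converges to 2A/3 = 8.0

# Joint pricing: a single firm sets the bundle price P to maximize
# P * (A - P), which peaks at P = A/2.
joint_total = A / 2               # 6.0

print(f"separate pricing, bundle costs {separate_total:.2f}")
print(f"joint pricing,    bundle costs {joint_total:.2f}")
# The coordinated complement producers charge LESS (6.0 < 8.0) and sell
# more -- the opposite of what colluding substitute producers would do.
```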
So, if two platforms that appear to be performing similar services are complementary – for example, because one platform connects consumers with a set of users that helps them to value another set of users more highly – market entry can be bad for consumers. In fact, two platforms can even be complementary for one set of users and substitutes for another. The different stages of a televised soccer (football) tournament, for example, are complementary for viewers and substitutes for advertisers.

Moreover, exclusive dealing can increase competition by allowing two platforms to occupy distinct market niches, with the alternative being that one drives out the other. In short, with a solid understanding of the difference between complements and substitutes, one can do almost everything the fancy models do – without hiring a single expensive expert.
Undergraduate-level microeconomics should empower students, not alienate them. While the Arrow-Debreu model has its value – namely, it explains why an unplanned economy can produce order – it is discouraging for students to find that what they are deemed capable of comprehending offers little insight into real-life situations.
Restructuring the microeconomics syllabus would send a far more inspiring – and accurate – message: even complex ideas developed by experts can be understood and applied by educated laypeople.
Paul Seabright is a professor of economics at the Toulouse School of Economics.

Copyright: Project Syndicate/Global Economic Symposium, 2013.
Around the world, deaths from COVID-19 are probably being significantly under-counted. The Economist makes its coverage of COVID-19 available to non-subscribers; this data-rich post shows how, across many economies, deaths have exceeded statistical norms. That is to be expected in a pandemic, but there is also an excess over the expected number plus the number categorised as COVID-19, and those extra deaths follow the same pattern as COVID-19 deaths, i.e. they peak at the same time. They are therefore highly likely to be COVID-19 deaths, or at least strongly related to COVID-19: deaths from another cause, say, which might have been averted had the person sought hospital treatment, or had hospital treatment been available but for the pandemic. Much more detail on methods for effectively comparing excess death rates is available in this article from Our World in Data by Max Roser: https://ourworldindata.org/covid-excess-mortality
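In code, the comparison these analyses make boils down to two subtractions. Here is a minimal sketch with invented weekly figures; the real analyses in the linked articles use official death-registry data:

```python
# Minimal sketch of the excess-mortality comparison, using invented
# weekly figures; real analyses use official death-registry data.
observed_deaths = 1450          # all-cause deaths registered this week
baseline_deaths = 1000          # expected for this week (e.g. 5-year average)
reported_covid_deaths = 300     # deaths officially attributed to COVID-19

excess = observed_deaths - baseline_deaths          # 450 above the norm
unexplained = excess - reported_covid_deaths        # 150 not labelled COVID-19

print(f"excess deaths: {excess}, of which {unexplained} are unexplained")
# If the unexplained deaths rise and fall in step with reported COVID-19
# deaths, they are likely under-counted COVID-19 deaths or knock-on
# effects of the pandemic (e.g. missed hospital treatment).
```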
A weakness of this article is the lack of data from countries that have managed the pandemic more successfully, except for Norway (see below). China and Vietnam may not publish their data, or it may not be considered reliable by The Economist. But I am sure South Korea, Japan, and Taiwan all have good data; it would be interesting to see how that compares. New Zealand data is also missing, but a good article from Charlie Mitchell and Michael Day at Stuff.co.nz has that information: click here to read more. Like Norway in The Economist's analysis, we have had fewer deaths than expected, meaning that our COVID-19 response has squashed not just the pandemic but other causes of death too: road deaths, seasonal flu, industrial accidents, and so on. Of course, downstream effects on the economy, debt, and deferred medical treatment (especially from the earlier, more restrictive lock-down) may still emerge in the coming years. This is far from over.
Timekeeper Job Description
Timekeepers, also known as payroll assistants or clerks, support payroll department activities and efficiency by gathering and entering employee time and wage data within the department’s time management system. In addition, timekeepers manage benefit and withholding data for employees to ensure that taxes and other withholdings are properly calculated for each payroll period. This role requires a high level of attention to detail, as well as the ability to manage strict deadlines for payroll processing and submission to ensure that paychecks are issued on time. Timekeepers also play a central role in compliance and fraud detection, carefully reviewing time submissions and alerting their department heads to inconsistences or discrepancies in time reporting.
Timekeeper Duties and Responsibilities
Timekeepers can work in a variety of industries and organizations, but based on postings that we analyzed, most share several core duties:
Compile Employee Time Data
The primary responsibility of a timekeeper is gathering and compiling time sheet data from employees across departments. While some companies may still utilize analog methods to record employee hours, the vast majority of organizations now use computerized time reporting technologies to accurately record personnel hours. Timekeepers use this technology to collect employee hours for submission to payroll processing.
Calculate Wages and Deductions
Timekeepers also review employee payroll data to calculate wages and withholdings for taxes, Social Security, and employee benefits. The timekeeper uses employee payroll data and the department’s record-keeping system to determine the proper withholdings based on hours worked, tax status, and pay rates.
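The arithmetic behind this is straightforward. The sketch below is a generic illustration only: the 40-hour overtime threshold, the rates, and the flat benefits deduction are invented placeholders, not real tax rules or any particular payroll system's logic.

```python
# Rough illustration of a single pay-period calculation. All rates and
# the 40-hour overtime threshold are invented placeholders, not real
# tax rules or any specific payroll system's logic.

def gross_pay(hours: float, rate: float, ot_multiplier: float = 1.5) -> float:
    """Regular pay plus time-and-a-half for hours beyond 40."""
    regular = min(hours, 40.0) * rate
    overtime = max(hours - 40.0, 0.0) * rate * ot_multiplier
    return regular + overtime

def net_pay(gross: float, tax_rate: float = 0.12,
            social_security: float = 0.062, benefits: float = 50.0) -> float:
    """Subtract percentage withholdings and a flat benefits deduction."""
    return gross - gross * (tax_rate + social_security) - benefits

g = gross_pay(hours=44.0, rate=20.0)   # 40*20 + 4*30 = 920.00
print(f"gross: {g:.2f}, net: {net_pay(g):.2f}")  # net: 702.56
```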
Record Employee Pay Data
Throughout the year, timekeepers also manage and update employee pay data within the payroll department’s system. This includes creating initial payroll data based on the employee’s withholding options when they are hired and entering the employee’s pay rate within the system. Timekeepers may also need to periodically review and update pay data based on employee raises or changes in their tax status or other withholdings (such as adding dependents or changing benefit plans).
Review Payroll Entries
During each pay period, the timekeeper also reviews payroll data submitted by individual employees or by departments within the organization. Timekeepers ensure that all employees are accounted for and that time sheets accurately reflect hours worked. In addition, the timekeeper may need to communicate with department heads to verify overtime hours or missed hours, both paid and unpaid.
Monitor Reports for Discrepancies
Timekeepers monitor payroll data for discrepancies or unusual occurrences to ensure accuracy and maintain correct information. The timekeeper may flag payroll submissions for excess hours, for example, or notice that an employee has submitted reimbursement requests for unapproved expenses. The timekeeper then reports these issues to their supervisor, the human resources department, or to the head of that employee’s department.
Timekeeper Skills and Qualifications
Timekeepers support payroll department activities by gathering and entering employee time data and calculating wages and taxes. Most workers in this role have at least an associate’s degree, administrative experience, and the following skills:
- Computer skills – timekeepers enter employee time data into payroll management systems, so they need to be proficient with computers and general office technologies
- Communication skills – this role also requires strong written and verbal communication skills, since timekeepers work with payroll department personnel and employees outside of the department
- Attention to detail – timekeepers should also possess a high level of attention to detail to ensure that they enter information correctly and properly calculate employee pay and withholdings
- Time management skills – time management is vital in this role, since timekeepers need to submit employee time and payroll data for processing on schedule so that paychecks arrive on time
- Organization skills – timekeepers are also highly organized and manage data for many employees at once while quickly resolving issues that can cause delays in payroll processing
Timekeeper Education and Training
There are no formal educational requirements for timekeepers, although an associate’s or bachelor’s degree in a business-related field can help applicants find additional employment opportunities. Additionally, timekeepers can obtain certification from organizations like the American Society of Administrative Professionals (ASAP) to gain expertise and improve their job prospects. There are many opportunities for on-the-job training in this role as timekeepers gain familiarity with the policies and practices of their organizations.
Timekeeper Salary and Outlook
According to the Bureau of Labor Statistics (BLS), payroll and timekeeping clerks earned a median annual wage of $43,890 as of May 2017. The highest-paid ten percent of workers in this role earned more than $63,180 per year, while the lowest-paid payroll and timekeeping clerks earned less than $28,130 per year.
While the BLS does not provide employment outlook data for payroll and timekeeping clerks, its data indicates that general office clerk employment will remain steady between 2016 and 2026, with no significant increase or decline.
We located several resources on the web if you’re interested in starting a career as a timekeeper:
“Timekeeping Best Practices” – read this blog post to learn about how to effectively track and record employee time and reduce delays and errors in payroll processing.
Payroll Accounting 2018 by Bernard J. Bieg and Judith Toland – this book covers the principles of payroll accounting, employee timekeeping, and tax withholding for companies of all sizes.
American Society of Administrative Professionals (ASAP) – timekeepers can join ASAP to access professional development materials, obtain certifications, and connect with other professionals through events and conferences.
Payroll Management: 2018 Edition by Steven M. Bragg – read this book to learn how to increase the efficiency and accuracy of the payroll department, with a focus on time tracking and record keeping.
Three times I have referred to a statistical function called the Poisson distribution, yet I have never explained the actual computation (See my posts of Jan. 20, 2006: one of many kinds of distributions of numbers; Aug. 16, 2006: predicts the likelihood of an event during a given time period; and June 15, 2009: relation to queuing theory.). Nor did I mention the important point that a Poisson distribution assumes randomness in the underlying events.
Here is what I learned from StatTrek http://stattrek.com/Lesson2/Poisson.aspx. I will apply it to a hypothetical: EEOC charges filed against your company each quarter. Let's say over the past few years the company has prevailed in, on average, 6 of them per quarter. Further, assume that dismissal of the charge is a success and anything else is not, and that you want to know the likelihood that in the coming quarter you will succeed on 7 charges. (Perhaps your performance bonus depends on that?)
The forbidding equation for a Poisson probability is P(x; μ) = e^(−μ) μ^x / x!. In the EEOC scenario described above you would read it as: "The probability that exactly 7 charges are dismissed during the next quarter, where the average has been 6 per quarter, equals e raised to the power −6 (e, approximately 2.71828, is the base of the natural logarithm system; if that is unclear, just use the approximate value), multiplied by 6 raised to the 7th power (the average number of dismissals per quarter used as a factor seven times), divided by 7 factorial (7 times 6 times 5 times 4 times 3 times 2 times 1)."
The handy calculator on the StatTrek site tells me that the probability is 13.8 percent that you will prevail on precisely 7 EEOC charges. You can also find various cumulative probabilities. For example, the probability on these facts that you will prevail next quarter on more than 7 EEOC charges is 25.6 percent.
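If you prefer to check the arithmetic yourself, a few lines of Python reproduce both figures (my own sketch, not StatTrek's code):

```python
# Reproduce the two EEOC figures from the post: P(X = 7) and P(X > 7)
# for a Poisson distribution with mean mu = 6 dismissals per quarter.
from math import exp, factorial

def poisson_pmf(x: int, mu: float) -> float:
    """P(x; mu) = e^(-mu) * mu^x / x!"""
    return exp(-mu) * mu**x / factorial(x)

mu = 6.0
p_exactly_7 = poisson_pmf(7, mu)
p_more_than_7 = 1.0 - sum(poisson_pmf(k, mu) for k in range(8))  # 1 - P(X <= 7)

print(f"P(X = 7) = {p_exactly_7:.3f}")   # ~0.138, i.e. 13.8%
print(f"P(X > 7) = {p_more_than_7:.3f}") # ~0.256, i.e. 25.6%
```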
You have probably heard of the term "Peak Oil." Numerous articles have appeared in such diverse publications as the Wall Street Journal, National Geographic, and Scientific American. Yet many people have little or no awareness of the potential consequences of Peak Oil.
The peak of anything represents the single highest point something ever reaches. When you reach the peak of a mountain, you can’t climb that mountain any higher. The only way you can proceed is ‘down’ from a peak.
Peak Oil means the point in time when more oil is being extracted than ever before or ever will be. If you graph the production of oil over time, a bell shaped curve is formed, similar to the pattern of other natural resources. Peak Oil does not mean “an end to oil”, but it does imply the end of ‘cheap’ and abundant oil.
M. King Hubbert, a Shell Oil geologist and petroleum scientist, predicted in 1956 that oil production from the continental US would peak around 1970. Although he was universally criticized by his industry at the time, we now know that his charts and predictions were extremely accurate. Today, the graph of peak oil is normally referred to as "Hubbert's Peak." (figure 1)
Note the close correlation between Hubbert's prediction and actual production!
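Hubbert's bell curve is the derivative of a logistic function: cumulative output is Q(t) = URR / (1 + e^(-k(t - t_peak))), so the yearly rate is P(t) = k * Q * (1 - Q/URR). A rough Python sketch of the shape (the parameters are invented round numbers, not Hubbert's actual 1956 fit):

```python
# Standard Hubbert (logistic) curve. The parameters below are made-up
# round numbers for illustration, not Hubbert's actual 1956 fit.
from math import exp

URR = 200.0      # ultimately recoverable resource (billion barrels, assumed)
k = 0.07         # steepness of the curve (assumed)
t_peak = 1970    # year of peak production (assumed)

def production_rate(year: float) -> float:
    """Yearly output P(t) = k * Q * (1 - Q/URR), the logistic derivative."""
    Q = URR / (1.0 + exp(-k * (year - t_peak)))  # cumulative production
    return k * Q * (1.0 - Q / URR)

for year in (1930, 1950, 1970, 1990, 2010):
    print(year, f"{production_rate(year):.2f}")
# Output rises toward the single maximum at t_peak (k*URR/4 = 3.5 here)
# and falls symmetrically afterward: the bell-shaped "Hubbert's Peak".
```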
When is Peak Oil?
Colin Campbell, president of the Association for the Study of Peak Oil, says 2010. Ken Deffeyes, author of "Hubbert's Peak", says 2004-2008. Even the most "optimistic" views place Peak Oil within our lifetime. The exact timing of Peak Oil is a guessing game, but its occurrence and reality are not.
In August 1999 former Halliburton Chairman Dick Cheney said in a speech at the London Institute of Petroleum: “By some estimates there will be an average of 2% annual growth in global oil demand over the years ahead along with, conservatively, a 3% natural decline in production from existing reserves. That means that by 2010 we will need on the order of an additional fifty million barrels a day.”
Fifty million barrels a day is more than the combined total production coming from Saudi Arabia, Iran, Iraq, United Arab Emirates, Kuwait and Qatar. In 2001 they produced a total of 22.4 million barrels per day according to the Energy Information Administration. As you will see, world demand has far outstripped the 2% figure Cheney stated.
Supply Decreasing, Demand Rising
As worldwide production is peaking, worldwide demand is soaring, especially in the US, China, India, the Middle East and Pakistan. If China and India were to reach just one fourth of US per-capita oil consumption levels, world oil production would need to increase by 44 percent, according to the Christian Science Monitor (January 2005). ExxonMobil's study, "A Report on Energy Trends, Greenhouse Gas Emissions and Alternative Energy", dramatically shows that existing oil and gas production worldwide cannot meet total world demand from around 2003-2004 onward! Have you noticed what's happened to gas prices in the last 2 years?
Even if production remained constant, the amount of oil available on a per-capita level is declining due to unrelenting global population growth. As third world countries gain economic strength, they want the same standard of living we have, as well as the goods, appliances, cars, and ‘modern’ conveniences.
As cheap, easily extracted oil is depleted, the remaining oil becomes increasingly difficult and expensive to extract. Environmentally sensitive areas are at risk, and the quality of oil in these less desirable areas is typically lower. For example, at the current rate of consumption, all the oil in the Arctic National Wildlife Refuge is calculated to be enough to power the United States for only about 6 months.
Beware of the pundits' estimates of "Proven Reserves." They'll claim, "We have reserves for n years." But reserves are past estimates of oil, made by different people and companies using different methods, and basing reserves on years rather than barrels is also misleading. On January 12, 2004, Shell Oil restated its balance sheet by reducing its "proven" oil reserves by 20%. On January 14, 2004, the Wall Street Journal suggested that all reserves are questionable. In February 2004, El Paso Corporation cut its proven natural gas reserves estimate by 41%. OPEC sets production quotas based on a percentage of reserves. But does it serve OPEC's interest to state its reserves accurately, or to inflate them so that its members can sell more oil and make more money?
Laws of Supply and Demand
We know the connection between supply and demand. Even if demand remained constant, it is inevitable that as supplies of oil decrease, the price increases. In March 2004, oil hit $38 a barrel. By October it was $50. Oil reached $57.40 a barrel at the time of this writing (April 2005). Goldman Sachs predicts that oil will surpass $100 a barrel before long. $5-$6/gal gas, anyone? We've all seen the relentless increases in everything from home heating costs to fuels and transportation of all kinds.
"Ok, so we're in for higher prices at the pump. I'll just drive less, carpool, and maybe buy a hybrid. No problem!" Sounds good, but let's follow the petroleum 'food chain'. In addition to the cost of your fuel, the costs of the fuel needed to grow, harvest and transport your salad fixings from South America, your clothing from the Far East, and "everything" from China also go up. Anything that is shipped, flown, trucked, or sent by rail has increasing fuel costs...
All major commercial fertilizers are ammonia based, made from natural gas, which follows the same "Hubbert's Peak" pattern as petroleum. In 2003, one-fourth of US fertilizer factories permanently shut down due to the high cost of their prime ingredient. Most commercial pesticides come from oil. So, not only are the fuel costs for the tractors, harvesters, combines, trucks, and transport vehicles going to raise your prices, the costs of the fertilizers and pesticides needed by 'mainstream' growers will be going up too.
Do you buy food products grown with oil based fertilizers and pesticides, harvested by petroleum fueled vehicles, transported using petroleum, packed in plastic or Styrofoam, and paid for with a piece of magnetically encoded plastic?
Many pharmaceutical products come from or are made by using petroleum, as are the plastics for countless medical products today. The cost to fuel the research labs, production facilities, and offices will all rise too. What will happen to the cost of medicines and supplies, and what consequences will that have on health care issues?
Disposable ‘everythings’ are mostly made from, and wrapped in, plastics. Plastics dominate the modern world. Look around. How many of the daily products we see are made from, or produced using, petroleum?
Social Disorder, Wars, Famine?
What happens when the whole world is clamoring over ever-decreasing oil supplies? Will oil producing countries decide that it's in their national interest to keep it for their own use? Will powerful countries just decide to 'take' what they need regardless of international law? I think the common phrase is "for National Security." Can we sustain our current way of life? G.W. Bush has stated: "The American way of life is non-negotiable." What does that mean?
China is aggressively contracting for worldwide oil from the same countries that we buy from, and others. China recently signed a $70 Billion oil deal with Iran, a multi-billion dollar deal in Canada, and energy deals with Nigeria, Venezuela, Qatar, Indonesia, and even Cuba. They’re bidding around the globe for future oil supplies. The US recently denied China’s $13 billion bid to buy Unocal in the US. China has the most stringent vehicle fuel efficiency standards in the world. Think they’re aware of peak oil?
What Can We Do?
There are definite things that we as individuals and as communities can do, and that our local, state, and federal governments can do to smooth the downside of the peak oil curve.
CONSERVE:
The most important thing to do! Most forecasts are based on current rates of growth and consumption. Drastically reduce demand and we're on the right track. If the US had followed the fuel efficiency standards set out by President Carter, we would not need even one barrel of foreign oil today. Overall US fuel efficiency is actually worse now than it was in the 70's. Use fuel-efficient furnaces and appliances, and forgo motor-powered tools in favor of manual power.
WALK, RIDE, CARPOOL, & PUBLIC TRANSPORTATION:
97% of the US’s transportation fuel comes from petroleum. Drive less, share rides, or walk/bike/skate. [Editor’s note: Telecommute!]
USE BIODIESEL:
All diesel vehicles can use biodiesel, which is made from crops including soy, canola/rapeseed, mustard seed, and others. (Note: biodiesel only runs in diesel vehicles; don't use it in a standard gasoline car!) In the US today, only VW and Mercedes sell new passenger diesels, but many used diesel vehicles are available. I drive a 1985 Toyota Corolla Diesel using 100% biodiesel which gets 47 mpg highway. Biodiesel is the only fuel to meet the EPA's Level 1 and Level 2 health standards. Every penny spent on biodiesel stays within the US. (April 2005: $2.99/gal in Kirkland, WA)
BUY A NEW CAR?
Tough one! It takes about as much oil to construct a new car as that car will consume in its lifetime. If you're going to buy a new car anyway, yes, a biodiesel, hybrid or high fuel efficiency car is best. Do you really need a new car?
BUY LOCAL:
Buying locally produced items reduces the fuel needed to bring them to market. Airplanes and ships are horrible polluters and fuel users. You'll also be pumping money into local communities and supporting local people. Shopping closer to home or work reduces your petroleum usage.
BUY ORGANIC:
Buy natural, organic products that don't rely on petroleum based pesticides and fertilizers. They're better for your health and less reliant on fossil fuels. Reuse, Reduce, Recycle. Call the 800 number on products that use excessive plastic packaging. Tell manufacturers you want less plastic and more recycled content. Look into natural medicines, cosmetics, clothing... the works.
REDUCE ENERGY CONSUMPTION:
Although most of the NW's electricity comes from hydro, peak needs are met with energy produced by natural gas, coal, and nuclear plants in other states. Insulate your home, replace incandescent lamps with CFL and LED lights, and replace aging refrigerators, washers, dryers and other appliances with Energy Star rated products. (A 2005 Energy Star refrigerator uses 75% less energy than most 1992 models.) Front loading washers use 75% less water, work better, and are more energy efficient than top loaders. Heating 12 gallons of water per load versus 55 makes great sense.
SOLAR HOT WATER:
Solar hot water systems are relatively inexpensive, are efficient, and can reduce your energy consumption significantly. Evacuated tube technology now used for solar hot water systems can produce 160 degree water even when outside air temperatures are freezing cold! Solar hot water gives the best 'bang for the buck'.
SOLAR ELECTRIC:
Solar panels work well in the Northwest, contrary to popular belief. Prices have come down over the years, and system reliability is high. Most panels have 25 year warranties. Spin your meter backwards, and the utility will credit you for your production!
GET INVOLVED: (Think Globally, Act Locally, Respond Personally)
- Attend a Renewable Energy Fair - The Shoreline Solar Project holds a free annual R.E. Fair each summer, and offers presentations every month in Shoreline, WA. Similar groups may exist in your area.
- Read up on Peak Oil - One Google search will do it.
- Talk with your elected officials at all levels, starting in your home town. Peak Oil will impact city planning, transportation, safety, and many other local issues.
- Write letters to newspaper editors, call in to talk radio shows, and get others talking.
- Think about how you and your family or friends might be affected by Peak Oil. It's not too soon to make some contingency plans, "just in case".
References:
1. www.peakoil.net
2. www.hubbertpeak.com
3. http://www.communitysolution.org
4. Why our Food is So Dependent on Oil: http://www.321energy.com/editorials/church/church040205.html
5. Food & Agriculture Organization of the United Nations: http://www.fao.org
6. http://www.energybulletin.net/primer.php
7. http://www.fromthewilderness.com/free/ww3/042903_media_lies.html
8. www.shorelinesolar.org
9. Biodiesel (in Seattle): http://www.fuelwerks.com
10. Biodiesel (in Kirkland): http://www.greencarco.com
Larry Owens www.shorelinesolar.org
Economic Impacts of Migration and Population Growth
This research report was released on 17 May 2006.
Download this publication
- Economic Impacts of Migration and Population Growth (PDF 1.8 MB)
- Economic Impacts of Migration and Population Growth (ZIP 3.9 MB)
- Key points
- Media release
Migration has been an important influence on Australian society and the economy
- affecting the size, composition and geographic location of the population and workforce.
Recent changes to Australia's migration program include a greater emphasis on skills, increased numbers of temporary immigrants, and more diversification in the country of origin.
The number of Australians leaving this country, permanently and long term, has risen markedly in recent years.
- But the number has been considerably smaller than those coming to Australia.
Economic effects of migration arise from demographic and labour market differences between migrants and the Australian-born population, and from migration-induced changes to population growth.
However, the Commission considers it unlikely that migration will have a substantial impact on income per capita and productivity because:
- the annual flow of migrants is small relative to the stock of workers and population
- migrants are not very different in relevant respects from the Australian-born population and, over time, the differences become smaller.
Some effects of migration are more amenable to measurement and estimation than others. Effects that cannot be reliably measured or estimated might still be significant.
- Positive effects from additional skilled migrants arise from higher participation rates, slightly higher hours worked per worker and the up-skilling of the workforce.
- Some of the economy-wide consequences, such as capital dilution and a decline in the terms of trade, lower per capita income.
- The overall economic effect of migration appears to be positive but small, consistent with previous Australian and overseas studies.
In terms of the selection criteria of the Migration Program:
- the greater emphasis on skills has been associated with better labour market outcomes for immigrants
- English language proficiency stands out as a key factor determining the ease of settlement and labour market success of immigrants.
Migration has been an important influence on Australian society and the economy. Increasing skilled migration would make a positive overall contribution to Australia's future per capita income levels, according to a final report released by the Productivity Commission.
The report - Economic Impacts of Migration and Population Growth - responds to a request by the Australian Government to examine the impact of migration and population growth on Australia's productivity growth.
'Australia's migration program is increasingly focussed on skilled migration, which is generally improving the labour market outcomes for immigrants. However, the annual flow of immigrants is small compared with the size of the population and the workforce, so a relatively small contribution to the economy is to be expected. Furthermore, there are economy-wide consequences that can offset the labour market effects of immigrants', said Commissioner Judith Sloan.
To assess the effect of skilled migration, modelling was conducted to estimate the economic impact of a simulated increase in skilled migration of about 50 per cent on the level in 2004-05.
By 2024-25, the increase in income per capita, on average, is projected to be about $400 (or about 0.7 per cent), compared with a base case scenario. Commissioner Sloan said 'in an exercise like this, many assumptions are required and not all of the potentially important aspects can be quantified. However, the results are consistent with studies in other countries as well as previous studies in Australia, and provide a guide to the likely economic effects.'
'Migration contributes to the economy in many ways. As well as the upskilling of the workforce, economies of scale and the development of new export markets would further add to the economic benefits of migration. Environmental issues associated with a larger population would need to be managed, however', according to Commissioner Sloan.
The Commission also found that the English language proficiency of immigrants is a key factor in determining their ease of settlement and their labour market success, particularly for skilled immigrants.
John Salerian (Assistant Commissioner) 03 9653 2190 / 0409 814 424
Leonora Nicol (Media, Publications and Web) 02 6240 3239 / 0417 665 443
Cover, Copyright, Foreword, Acknowledgments, Terms of reference, Contents, Abbreviations and Glossary, Overview
1.1 Background to the study
1.2 Scope of the study
1.3 Conduct of the study
1.4 Structure of the report
2 Trends in migration
2.1 International migration flows
2.2 Australian perspective
2.3 Migration and Australia's population
3 Linking migration, population and productivity
3.1 Economic growth and living standards
3.2 Size and diversity are keys to the economic effects
3.3 Overview of migration's links to productivity and income per capita
4 The diversity of the migrant workforce
4.1 The education levels of immigrants
4.2 Immigration and the supply of labour by occupation and industry
4.3 Immigration and the working age population
4.4 Immigration and labour force participation
4.5 Immigration and unemployment rates
4.6 Immigration and working hours
4.7 Immigration and regional labour supply
4.8 Intergenerational effects
4.9 Emigration and labour supply
4.10 Projecting the effect of changes in immigration flows on labour supply
4.11 Overall assessment
5 Migration and labour productivity
5.1 Migration, human capital and productivity
5.2 What is the evidence on the labour productivity of immigrants in Australia?
5.3 The skill effect of immigration
5.4 Overall assessment
6 Scale and environmental effects of migration
6.1 Migration and economies of scale
6.2 Migration, natural resources and environmental externalities
7 Sectoral, economy-wide and distributional effects of migration
7.1 Sectoral effects
7.2 Other economy-wide effects
7.3 Distributional effects
8 Overall impact on living standards
8.1 Overall effect of migration on living standards
8.2 Why a small impact?
8.3 Comparison of modelling results
9 Impediments to productivity and economic growth from migration
9.1 Efficacy of Australia's migration program
9.2 Migration policy and skill shortages
9.3 English language proficiency
9.4 Distortions arising from the skilled migration program
9.5 Efficacy of skills assessment and recognition processes
9.6 Impediments arising from Australia's tax system
9.7 Australian emigration
A Submissions, visits and roundtable attendees
B Trends in international migration
C Australia's migration policy and flows
D Characteristics of Australia's migrants
E Labour market analysis
F Effects on labour supply of an increase in skilled migration
G Economic effects of increasing skilled migration: Modelling summary
H Detailed employment effects by occupation and region
I Referee reports on modelling
J Alternative modelling assumptions and results
Stock returns have a fat-tailed distribution; this means that large shocks are far more likely than one might expect. A few examples illustrate why this matters to investors. The stock price of Tesla Motors Inc. has an annualized volatility of 45% but has risen 1,250% in four years. Assuming log-normality, Tesla's four-year return would have had a probability of 10^-43. Yet examples like this are not uncommon. Indeed, even the stock market crashes of 1929 and 1987 were to be expected, given the observed tail exponent. There is nothing surprising or abnormal about such events: as long as market returns continue to have a fat-tailed distribution, we will have crises such as these. As we'll explain below, market automation has made pricing far more precise than it was in human-dominated markets, but has not removed the root cause of fat tails in asset returns.
Why should the distribution of price shocks have a fat tail? The magnitude of crises like the Great Depression may be explained by unique circumstances in credit markets and monetary policy. But the tail exponent itself is a universal property that can be observed at all scales and in very different macroeconomic environments. If fat tails are a universal property of markets and make major crises inevitable, it seems worthwhile to try to understand what might cause them.
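Before turning to the explanation, a minimal numerical sketch (not from the article) shows how much likelier extreme moves become once tails are fat. The Student-t with three degrees of freedom is an illustrative stand-in for a fat-tailed return distribution, not a fitted model:

```python
# Tail probability of a 6-sigma move: thin-tailed Gaussian vs a
# fat-tailed Student-t with df=3 (an illustrative choice, not a fit).
from scipy import stats

k = 6.0                                   # move size in standard deviations
df = 3

p_gauss = 2 * stats.norm.sf(k)            # two-sided Gaussian tail

# Rescale the t so it has unit variance: Var(t_df) = df / (df - 2).
unit_scale = (df / (df - 2)) ** -0.5
p_t = 2 * stats.t.sf(k / unit_scale, df)  # two-sided fat tail

print(f"P(|move| > 6 sigma), Gaussian:   {p_gauss:.1e}")   # ~2.0e-09
print(f"P(|move| > 6 sigma), Student-t:  {p_t:.1e}")       # ~1.9e-03
# The fat-tailed model makes the same shock about a million times likelier.
```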
One explanation of huge price shocks can be found in an analogy between asset pricing and complex systems in physics. Trading models rely on historical data to estimate the parameters in models of earnings and/or relationships between asset prices. The same applies to analysts and portfolio managers: we all learn from historical analogues. Models interact with one another through the markets: a long-term model's buy decision will spawn orders that push the price up when executed; a mean reversion trading model may respond by deciding to supply liquidity. Trading models interact in the same way as species in an ecology: each model exploits a niche but also shapes the fitness landscape of other models. Capital allocation to one type of trading model increases its market impact and thereby creates opportunities for others. There are symbiotic species, parasites, prey and predators. Copying successful modeling ideas is an example of herding by machines. A successful herd increases aggregate leverage and a benign environment promotes specialization (better training, new drivers, etc.).
In good times, traders need to evolve models to compete in an increasingly crowded niche. But increased specialization and leverage make the ecology more vulnerable to a change in the environment, increasing the risk of a major crisis. Simple ecological models have demonstrated the emergence of self-organized criticality. In a world where quantitative models dominate asset pricing, the models in aggregate are the system and their designs and parameters are its degrees of freedom … prices themselves are merely gauges we can use to diagnose the condition of the patient. Fat-tailed event-size distributions are a generic property of self-organized critical systems. Are major financial crises in essence extinction events in the population dynamics of asset pricing models?
At first glance this seems a bit strange: how do asset pricing models, a technical aspect of the function of markets, lead to macroeconomic crises? Do financial markets solely reflect the state of the real world, or can the endogenous dynamics of a market trigger events in the economy? Asset prices drive capital flows and economic activity, so erroneous pricing can lead to misallocation of capital, sometimes on a massive scale. The 2008 credit crisis provides a good illustrative example: a systematic underpricing of risk in asset-backed securities led to the aggressive marketing of mortgage-related products and an unsustainable growth of the aggregate debt burden of consumers. In an economy highly dependent on consumption, this could not end well.
This example illustrates how herding in model space causes systemic fragility (in this case, the failure of the Gaussian copula model), and also how it relates to criticality in the credit market. Another example is the occurrence of self-organized criticality in margin debt: the success of momentum strategies and low trailing volatility measures draws aggregate margin debt towards criticality.
Trading models are in effect asset pricing models — so criticality in this ecology translates to criticality in asset pricing. A mass extinction is not only an extermination of certain classes of trading models, more importantly it reflects on asset prices and through these on economic activity. This is clear in the case of market crashes, but price shocks occur at all scales and in both directions. In the example of Tesla's stock at the top of this article, there is little doubt that Tesla's stock price story has affected investment decisions at competing auto manufacturers and impacts economic activity in the real world.
In recent empirical and theoretical work we showed that the markets are performing a remarkably efficient task of solving two problems simultaneously for each market-traded asset. First, market-makers enforce the Martingale property: the current price is equal to the expected future price given what has been revealed in market data and other public sources. Second, portfolio managers feed the market private information from research and quant models. Portfolio managers receive information signals and create buy or sell orders. The aggregate response to a signal is called a “metaorder.” Our work shows that metaorder sizes are related to the value of the information in a precise manner: the implementation shortfall of a metaorder is equal to its permanent impact, a property we called “fair pricing.” In contrast, uninformed cash flow trades have no permanent impact, regardless of their size — this shows that markets are accurately measuring the information conveyed by metaorders. Markets coordinate the work of many computers and individuals by processing signals (orders) and producing outputs (prices) which in turn feed back into pricing models. Viewed as a single computing machine, the global markets have the architecture of a type of recursive neural network called a Jordan network. The aggregate computing capacity of this network makes it the most powerful computing machine ever created. At approximately 1 exaflop (10^18 operations per second), the global markets are processing information at a rate that is 100 times faster than the computing capacity of the human brain, as illustrated in Ray Kurzweil's popular book “The Singularity is Near.”
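To unpack the "fair pricing" condition, here is a minimal sketch with invented numbers. It illustrates the definitions of the two quantities being equated for a single buy metaorder rather than reproducing the paper's empirical result:

```python
# Toy illustration of the two quantities equated by "fair pricing"
# for a single buy metaorder. All numbers are invented.

arrival_price = 100.00                # decision/arrival price
fills = [(1_000, 100.05),             # (shares, price) of each child order
         (1_500, 100.12),
         (1_200, 100.20)]
post_trade_price = 100.15             # price after impact settles

shares = sum(q for q, _ in fills)
avg_exec = sum(q * p for q, p in fills) / shares

shortfall = avg_exec - arrival_price                 # implementation shortfall
permanent_impact = post_trade_price - arrival_price  # lasting price change

print(f"implementation shortfall: {shortfall:.4f} per share")         # ~0.127
print(f"permanent impact:         {permanent_impact:.4f} per share")  # 0.150
# Fair pricing says the two are equal *on average* for informed metaorders.
```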
Fair pricing implies that the market is informative in the sense that metaorders correct any mispricing. Its information-processing capacity has vastly improved since 1929 and 1987. Unfortunately, this does not do away with criticality in the market ecology. The fair price is only the collective opinion of models that participate in price formation. If these models are using parameters that are no longer in line with reality, the fair price they will agree to could well be wrong by a wide margin. The global financial market may act as an extremely intelligent artificial organism accomplishing a difficult prediction task, far superior to the human brain in its ability to process statistical data. But as long as models are trained predominantly on recent historical data, this artificial organism will not be immune from the bias that impairs our own judgment. The forces of competition will continue to drive over-specialization and herding in the market ecology, asset returns will continue to exhibit fat tails and there will continue to be opportunities for those with a longer-term view.
Henri Waelbroeck, Ph.D., serves as global head of research at Portware LLC, a developer of trading execution software. He leads Portware's Alpha Vision research, applying machine learning to optimize execution management.
Cryptocurrency mining has become an increasingly complex and costly affair. In the past, it was possible (much to the ire of gamers) to simply use a graphics card. Indeed, this led to a spike in the price of graphics cards over the course of 2018 as the number of cryptocurrency miners surged.
Unfortunately for the student mining cryptocurrency in their dorm room, times have changed. The difficulty of mining Bitcoin has continued to increase and the introduction of purpose-built machines, or ASICs, has led to a highly competitive market. Typically, the most successful miners are those based in areas with a low-electricity cost, such as China and some parts of Canada. Those who can turn electricity into the most computations will see the greatest returns.
With this in mind, owning the appropriate hardware is essential to remain competitive. Squire Mining Ltd. (CSE:SQR) (OTCQB:SQRMF) are building the next generation of ASIC machines that will dominate the lucrative crypto mining industry.
Why are ASICs so important?
To understand why ASICs are important you need to understand how cryptocurrency mining works. For the moment the majority of cryptocurrencies use a process called Proof of Work. A network of computers all compete to solve a series of complicated mathematical equations. Once one finds the correct answer, it is able to claim a portion of cryptocurrency as its reward.
This process is necessary for two reasons. The first is that it ensures the integrity of the blockchain and process any transactions on the network. The second is that it controls the creation of new cryptocurrency, theoretically ensuring more equal distribution than government-controlled currencies.
The problem for miners is that most cryptocurrencies, including Bitcoin, are designed to become more difficult to mine over time. This forces out small miners and means that more efficient hardware is increasingly important. This conundrum led to the rise of specialized mining computers known as ASICs.
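For readers who want to see the puzzle itself, below is a minimal proof-of-work sketch. The four-hex-zero target and the plain SHA-256 over a string are illustrative simplifications; real Bitcoin mining applies double SHA-256 to an 80-byte block header against a far harder target:

```python
# Toy proof-of-work: find a nonce so that sha256(data + nonce) starts
# with `difficulty` hex zeros. Real Bitcoin mining hashes an 80-byte
# block header twice with SHA-256 against a far harder target.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example block header")
print(f"nonce={nonce} hash={digest}")
# Each extra hex zero multiplies the expected work by 16, which is why
# raw hashes-per-joule, the ASIC's specialty, decides who profits.
```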
A typical mining operation will have dozens, even hundreds, of these machines all working around the clock. For these mining farms to turn a profit it essential that their ASICs are not only powerful enough to solve equations but also efficient enough to keep the energy bill down.
Smart money invests in underlying systems
While investors' attention has been squarely focused upon the lucrative opportunities afforded by cryptocurrency, many have missed the potential held by the systems that underpin the cryptocurrency ecosystem. This trend runs across a number of sectors, and the truth is that underlying systems often share the advantages of the front-facing industry with fewer of the risks.
In the cryptocurrency world, this means that one of the most promising investments is in mining. Rather than having to purchase the hardware to mine your own cryptocurrency it is perfectly possible to purchase shares in the companies that are producing the ASICs that power the cryptocurrency boom.
Currently, the value of the top five microchip production companies is more than $120 billion and this is set to grow thanks to the rising demand driven by cryptocurrency. In general, there have been supply issues, particularly with GPUs and ASICs, which has led to an opening for new entrants into a market traditionally dominated by giants such as Intel and Nvidia.
This offers smart investors the ability to indirectly benefit from the cryptocurrency market. It also shields them from the risks associated with the inherently unstable nature of the cryptocurrency markets and any regulatory pitfalls associated with investment in the controversial technology.
The other advantage of investing in ASIC producers, rather than cryptocurrency directly, is that you don’t need to understand the often arcane inner workings of every alt-coin. Many ICOs or new cryptocurrencies are borderline scams and even seasoned investors risk being tripped up by a flashy white paper.
Squire Mining is producing the chips of the future
Unlike other competitors, Squire is making their mark by ensuring that their chips are not only the most powerful on the market but also the most energy efficient. Through their controlling share in ARA Core Technology Corp. and partnerships with leading foundries in South Korea, Squire has begun development of their revolutionary next-gen chip, the 10nm Bitcoin ASIC mining chip.
The 10nm chip is unique because it represents a leap forward in both computing power (hash power) and energy efficiency when compared to the competition. Indeed, Squire's chip will have a comparable production cost to its competitors whilst consuming significantly less energy with a much higher hash rate. This gives the chip a conservative 3.93% return on investment whilst also solving many of the energy problems associated with cryptocurrency mining.
Currently, the 10nm chip is only designed to mine Bitcoin, but Squire is already in the process of developing chips for other cryptocurrencies, most notably the potentially lucrative DASH mining sector. DASH has recently begun the transition from GPU mining to ASIC mining and represents a lucrative opportunity for Squire. The company also plans to roll out ASICs for other cryptocurrencies that currently rely on inefficient GPU based mining.
Squire is a best in breed business with access to a huge market
The total capitalization of the crypto market is over $218 billion. This capitalization is supported by an infrastructure of ASIC machines, like those that are currently being produced by Squire Mining Ltd. (CSE:SQR) (OTCQB:SQRMF). Unlike their other competitors, Squire represents a best in breed business.
In October of 2018, Squire announced two key partnerships. The first was with Samsung Electronics, who are Squire’s chosen foundry partner and are assisting with the manufacture of their ASIC chips in South Korea. The second is a partnership with Gaonchips, who are acting as Squire’s design house. These partnerships have formed the basis for the mass production of Squire’s new ASIC chips.
Squire's ambitious plans for the 10nm chip have already set them up for a promising future. The company estimates that their initial production run of 16,500,000 chips will generate around $150 million a year from chip sales alone, with another $225 million from mining rig sales. These estimates do not include any of Squire's future plans. Tests of their newly engineered FPGA chip have proven highly successful and have set the company up for a strong entry into this competitive market. Squire is also looking at non-cryptocurrency applications for their groundbreaking chips.
For starters, the company is looking into creating a multi-purpose chipset that will be applicable to AI-as-a-service ventures. This will become increasingly important as the AI sector grows, a sector that is estimated to have $56.8 billion a year invested in it by 2021. This important secondary market helps to underline the long-term nature of Squire's strategy. By diversifying, they shield themselves from any adverse changes in the cryptocurrency market, for example, a widespread shift to proof of stake consensus or an unfriendly regulatory environment.
Don’t risk missing this opportunity to strike gold
Many investors lament missing the boat on Bitcoin and other cryptocurrencies but there are always new opportunities. Squire Mining Ltd. (CSE:SQR) (OTCQB:SQRMF) represents a company that is uniquely well placed to thrive in a competitive market and there has never been a better moment to invest than now.
The company is still young and for the moment is flying broadly under the radar. It won’t take much to send the stock price of Squire skyrocketing. If you have been looking for the opportunity to get in on cryptocurrency without the risk then it’s literally staring you in the face.
The future is looking bright for Squire, make sure that you are a part of it.
This article may include forward-looking statements. These forward-looking statements generally are identified by the words “believe,” “project,” “estimate,” “become,” “plan,” “will,” and similar expressions. These forward-looking statements involve known and unknown risks as well as uncertainties, including those discussed in the following cautionary statements and elsewhere in this article and on this site. Although the Company may believe that its expectations are based on reasonable assumptions, the actual results that the Company may achieve may differ materially from any forward-looking statements, which reflect the opinions of the management of the Company only as of the date hereof. Additionally, please make sure to read these important disclosures.
1. Imagine that there are 100 different researchers each studying the sleeping habits of college freshmen. Each researcher takes a random sample of size 50 from the same population of freshmen. Each researcher is trying to estimate the mean hours of sleep that freshmen get at night, and each one constructs a 95% confidence interval for the mean. Approximately how many of these 100 confidence intervals will NOT capture the true mean? 2. Nana Akosua Owusu-Ansah, a financial manageress for a company is considering two competing investment proposals. For each of these proposals, she has carried out an analysis in which she has determined various net profit figures and has assigned subjective probabilities to the realization of these returns. For proposal A, her analysis shows net profits of GH₵ 20,000.00, GH₵ 30,000.00 or GH₵ 50,000.00 with respective probabilities 0.2, 0.4 and 0.4. For proposal B, she concludes that there is a 50% chance of successful investment, estimated as producing net profits of GH₵ 100,000.00, and of an unsuccessful investment, estimated as a break-even situation involving GH₵ 0.00 of net profit. Assuming that each proposal requires the same Ghana cedi investment, which of the two proposals is preferable solely from the standpoint of expected monetary return?
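A short worked sketch of both questions (not part of the original assignment), under the standard interpretations of a 95% confidence level and of expected monetary value:

```python
# Q1: a 95% confidence level means about 5% of independently built
# intervals miss the true mean, so roughly 5 of the 100 will NOT
# capture it.
print(f"expected misses: {100 * (1 - 0.95):.0f}")          # ~5

# Q2: expected monetary value (EMV) of each proposal.
proposal_a = [(20_000, 0.2), (30_000, 0.4), (50_000, 0.4)]
proposal_b = [(100_000, 0.5), (0, 0.5)]

def emv(outcomes):
    return sum(profit * prob for profit, prob in outcomes)

print(f"EMV A: GH₵ {emv(proposal_a):,.0f}")   # 36,000
print(f"EMV B: GH₵ {emv(proposal_b):,.0f}")   # 50,000 -> B is preferable
```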
Logistics and freight transportation is a crucial industry in the UK. The industry is the pulse of the nation, responsible for moving millions of items across the country daily.
Just about every item that we use in our daily lives has been part of a supply chain at some point. From your weekly grocery shop to the car you drive or the device you are using to read this article, everything has had to be moved from its origin to you.
It is estimated that 189 million tonnes km of domestic freight is moved across the UK annually and the logistics sector employs 108,145 people in the West Midlands. It is estimated that this will grow by 16% by 2030.
With the rise of digital technologies, the new competition battle lines have been drawn around efficiencies and effectiveness.
Customers want quicker delivery times and expect to be better informed about their deliveries with live tracking and accurate delivery times. This has resulted in the sector making significant investments in implementing new technologies and processes.
Why should logistics companies invest in Tech & Digital skills?
For businesses to maintain and grow their competitive advantage they need skilled digital workers to manage their new tech and processes. Using technology these employees will be able to identify and realise new opportunities. They will also help to overcome challenges such as Brexit, Covid-19, changing customer demands, road safety, air quality targets and decarbonisation.
Digital transformation is happening and new tech is being introduced to the world frequently. 5G has great potential in logistics. A simple use of this new tech is increasing the speed and reliability of the information in the supply chain. A more complex use of it is drones being used for deliveries. The possibilities are endless and can be realised by the business having the tech and digital skills.
The biggest opportunities in the sector…
Data, data, data!
Data has been described as the new oil. This holds true but only if it is being used effectively. Data is collected throughout the supply chain. Interpreting the data correctly can lead to many benefits for companies.
Data can reduce costs. The last mile of delivery is notoriously expensive with some studies showing that it represents up to 28% of the total cost of deliveries. Most of this is down to failed and missed deliveries with subsequent attempts for deliveries having to be made. Having data systems in place can ensure that first-time delivery successes increase as well as route optimization reducing fuel costs and people-hours. Data can also be used to improve reliability and transparency, shorten lead times and ensure sensitive packages arrive intact.
With vast amounts of data available to companies, they will rely on software solutions to capture and display it. Software solutions enable smoother operations for better real-time fleet management, streamlined communication and improved customer service.
These software systems can be as simple as showing what freight is on a given lorry. A more advanced solution may be a customer-facing app that gives live delivery times, item route history and even the name of the delivery driver. This may include integrating several different systems and APIs.
Companies can greatly improve their competitive advantage by having software development skills. Not only will this reduce downtime of systems but could provide them with systems that are far superior to rivals.
Companies may have the best and greatest systems and processes with super fast and reliable delivery, but if they aren’t able to communicate this they will struggle to get customers.
Having a digital marketing presence allows companies to reach millions of potential customers. Digital marketing is different from more traditional marketing in that adverts can be highly targeted and personalised, and at a fraction of the cost.
Marketing can analyse sales figures and customer data to better inform the business on where to acquire new business. Some logistics companies have themselves added a marketing business to their business model by selling advertising space on their lorries! A marketing skillset provides businesses with a creative skillset to think outside the box and can open up new opportunities.
How can an apprentice help?
At TDM we offer several different apprenticeship programs that would suit logistics companies and help them grow. An apprentice's training can be adapted according to the needs of your business; they are highly motivated to learn new skills and can expand and upskill your workforce.
Research has found that 86% of employers said apprenticeships helped them develop skills relevant to their organisation. Also, 78% of employers said apprenticeships helped them improve productivity. Furthermore, 74% of employers said apprenticeships helped them improve the quality of their product or service.
TDM consistently scores highly in its Pass Rates, Distinctions Rates and are the leading Digital Apprenticeship Provider in England, according to the ESFA’s National Achievement Rate Tables.
We offer several apprenticeship programs, such as Data Technician, Software Development and Digital Marketing, that can help you succeed. Our expert staff can help you recruit an apprentice or retrain someone already in the organisation into one of these programs. Whether it is your first apprentice or your 20th, you can rely on us for expert support and guidance.
If you would like to find out more about our tech and digital apprenticeships, please request a callback here, email [email protected] or call 0333 10 100 40.
When homeowners in the Towns of Walworth and Ontario opened the notices about new property assessments, phones at town offices went bonkers.
Upset homeowners flooded phones and social media bemoaning what they feared were increased taxes...but that may not be the case.
Assessments and market value
A property’s assessment is based on its market value. Market value is how much a property would sell for under normal conditions.
Assessments are determined by the assessor, a local official who estimates the value of all real property in a community.
All properties in your municipality (except in New York City and Nassau County) are required to be assessed at a uniform percentage of market value each year. In other words, all taxable properties in town must be assessed at market value or at the same percentage of market value.
For example, if the market value of your home is $200,000, and assessments in your community are at 30 percent of market value, your assessment should be $60,000.
Likewise, if assessments in your community are higher than market value, your assessment will see an increase.
Currently, the housing/property market has skyrocketed over the past several years throughout the area, and houses especially are selling at record rates. It is not unusual for a home on the market to have multiple offers, often well over the original asking price.
Supply and demand can cause an assessment to drastically increase over a period of time. The number of homes for sale versus the number of buyers determines how quickly the homes in your area sell. Currently, throughout the area, there is a shortage of homes for sale.
In communities assessing property at 100 percent of market value, your assessment should equal roughly the price for which you could sell your property. In communities assessing at a percentage of market value, the estimated market value of each property is listed on the assessment roll.
If your assessment or the estimated market value for your property is higher than the price for which you can sell your home, you should discuss it with your assessor.
If the assessor does not reduce your assessment, you can contest it.
Unfortunately, you probably won’t find an exact comparable sale. To account for this, you need to adjust the sale prices of the comparable properties. This will require some analysis on your part to determine whether these differences increased or decreased the sale price, and, if so, by how much. The adjusted sale price is your estimation of what the property would have sold for if all the characteristics were the same.
How does my home’s market value affect my property taxes?
Generally, property taxes are based on the estimated market value of your home. Your local assessor determines the estimated market values of all the properties in the community. Your assessor may use the sales comparison approach or any other method to arrive at your property’s estimated market value, which is available on the assessment roll and your property tax bill.
The assessor only estimates each property’s market value during a reassessment or when a property has a physical change. Some communities have not had a reassessment in several years or even decades. As a result, the estimated market value shown on the assessment roll or your property tax bill may not actually reflect your home’s current market value.
The Assessor is appointed by the Town Board to assess your property and is “Independent” of the Board. The Assessor receives instructions from New York State not the Town Board and is required to assess properties at fair market value, which is basically what it would sell for in today’s market. The only consideration is fair market value, not the percentage of increase. Percentage is not a valid argument in challenging your assessment, only comparable sales are used to defend fair market value.
Ontario Supervisor Frank Robusto wrote in his column this week (see page D7) “My personal residence went up almost 50%. Yes, I understand the shock when the new assessment notice is opened!
There is a process for challenging your assessment. Under the law, the burden of proof is on you to prove the Assessor's numbers need to be adjusted, and you do this with comparable sales, appraisals, a market analysis, purchase offers, etc. Request an informal hearing with the Assessor. These meetings are being set up via Zoom, phone or in person (the same applies in Walworth).
To understand how assessments relate to your tax bill, assume for a minute that the budget stays the same. If the assessments go up, the tax rate drops to raise the same amount for the budgeted amount. Currently the Town of Ontario total taxable value is $727 million. The current town tax rate is $3.19 per thousand, which raises about $2,319,130 (the tax levy). If the total assessed value of the Town was to drop, the tax rate would increase to raise the same budgeted amount. The Town Board and I are committed to keeping the budget under the yearly 2% tax cap.”
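A minimal sketch of that levy arithmetic, using the figures in the quote; the 20% reassessment increase is an invented illustration:

```python
# Levy arithmetic from the quote: with a fixed levy, the tax rate
# moves inversely with total assessed value. The 20% reassessment
# increase below is an invented illustration.

taxable_value = 727_000_000      # Town of Ontario total taxable value ($)
rate_per_thousand = 3.19         # current town tax rate

levy = taxable_value / 1_000 * rate_per_thousand
print(f"levy at current rate: ${levy:,.0f}")               # ~$2,319,130

new_value = taxable_value * 1.20                 # assessments rise 20%
new_rate = levy / (new_value / 1_000)            # same levy, lower rate
print(f"rate for the same levy: ${new_rate:.2f} per thousand")  # ~$2.66
```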
Emotional Spending: The Link Between Money and Mental Health
An Emotional Spending Overview
Money is a source of joy and stress. Most people feel better with money and worse without it. Whether rich or poor, emotion-filled financial choices typically go against better judgement. So how do we end emotional spending and make wise financial choices?
Spending money can be stressful when you have a limited supply. Millions of people experience anxiety when money becomes the topic of discussion. It’s not uncommon to fear running out of money as they are spending it. This is not only limited to the poor. Many wealthy individuals with sky-high expenses have the same fears- their spending anxiety is just typically attached to a higher dollar amount.
Unconscious spending temporarily reduces financial anxiety. Manic episodes and intoxication temporarily replace financial anxiety with serotonin and dopamine. However, when conscious again, the stress is compounded more than it was prior to the unconscious spending episode. This perpetuates a cycle of seeking mental release from poverty’s stresses by reverting to the unconscious behavior known as emotional spending.
During bouts of emotional spending, many people view and portray themselves as wealthy. The stress that typically exists when spending the little money available is temporarily non-existent. Thus, for the non-wealthy, a “mental break” or “vacation” typically is enabled through mental illness or intoxicants. As mind-altering substances and mental illness take hold, emotional spending occurs.
The Intersection of Money and Mental Health
Money enables vacations and “mental breaks”. For the wealthy, vacations give people a break from the places and situations that cause mental stress. Vacations refresh the mind, reduce stress, and provide clarity of purpose. Most people return from vacation with renewed ambition.
The connection between money and mental health is extremely strong- even for those without mental conditions. For the poor, it’s nearly impossible to escape from stress sources- job, bad neighborhoods, and even a night away from children. The described situation is what leads people to feel “trapped” when they are physically free.
Furthermore, the poor typically stress over how money is made. Low-paying, undesirable jobs cause stress even though they pay the bills. Many people work manual labor jobs and are forced to work long hours just to get by.
Clearly, money is often a pain point for those financially struggling- but why? Money is a self-evaluation metric. Society ingrains the connection between money and self-worth from a young age. Money allows rich kids access to joy and opportunity. Furthermore, money can alleviate boredom by providing access to new experiences.
Extreme discipline is required to raise your consciousness without money. Spending money exposes one to new life experiences- expanding consciousness and providing new perspectives. Money is like a passport for life experiences. Accordingly, expanding consciousness while being poor requires intensive budgeting and increasing the presence of mind without a lot of new experiences.
Money is simply stored energy. The poor’s inability to release energy through spending money may cause for energy to be released in unhealthy ways- which worsens mental health. Even wealthy individuals have wound up poor by spending money irresponsibly during a time of mental instability. Clinically speaking, spending money frivolously during a state of manic depression is known as bipolar spending.
Bipolar Spending: A Disconnect Between Money and Mental Health
I wrote about bipolar spending from personal experience. In fact, I struggled with bipolar disorder, and bipolar spending, until I discovered the power of meditation. Here is a short story about my struggle with bipolar spending, money, and mental health.
Bipolar disorder is simply a label for a special brain chemistry that creates immense energy and creativity, along with the tremendous risk of self-sabotage. In my case, I was only delusional and reckless while under the influence of massive amounts of alcohol. While sober, I am creative and able to focus on a singular task for a long period of time.
When I was younger, I frequently made personal and financial progress only to demolish my hard work in a single weekend. There were many times in my life when it felt impossible not to get drunk and spend money. Everything I loved doing involved late nights and intoxicants.
After studying bipolar disorder, I learned that financial recklessness is a common symptom of bipolar spending. I also learned alcohol and other drugs often fuel mania. Mania is a period of time where energy levels elevate to unhealthy levels, for a week or longer, and lead to irrational thoughts, actions, and behaviors. Simply put, irrational financial decisions result when you combine money and mania. There we go- bipolar spending is mania and money combined.
Need Help With Bipolar Spending or Emotional Spending?
Bipolar disorder is becoming more widely talked about publicly. Many celebrities have publicly spoken up regarding bipolar disorder and mental health in general. For me, I had to overcome self-imposed stigma and doubt in order to conquer the brain label called bipolar disorder.
Don't wreck your financial progress with bipolar spending! Developing healthy habits like meditation can be a natural remedy for emotional spending.
Definition of Negative correlation:
Negative correlation is a relationship between two variables in which one variable increases as the other decreases, and vice versa. In statistics, a perfect negative correlation is represented by the value -1, a 0 indicates no correlation, and a +1 indicates a perfect positive correlation. A perfect negative correlation means the relationship that exists between two variables is negative 100% of the time.
Negative correlation or inverse correlation is a relationship between two variables whereby they move in opposite directions. If variables X and Y have a negative correlation (or are negatively correlated), as X increases in value, Y will decrease; similarly, if X decreases in value, Y will increase. The degree to which one variable moves in relation to the other is measured by the correlation coefficient, which quantifies the strength of the correlation between two variables.
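A minimal sketch (not part of the original definition) of how the coefficient is computed for two invented series that move in opposite directions:

```python
# Pearson correlation for two invented series that move in opposite
# directions; r comes out near -1, a strong negative correlation.
import statistics

x = [1, 2, 3, 4, 5]      # e.g., interest rates rising
y = [10, 8, 7, 5, 2]     # e.g., bond prices falling

r = statistics.correlation(x, y)   # requires Python 3.10+
print(f"r = {r:.2f}")              # about -0.99
```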
How to use Negative correlation in a sentence?
- Negative correlation or inverse correlation is a relationship between two variables whereby they move in opposite directions. This relationship is measured by the correlation coefficient "r", while the square of this figure "R-squared" indicates the degree to which variation in one variable is related to the other.
- Correlation between two variables can vary widely over time. Stocks and bonds generally have a negative correlation, but in the decade to 2018, their correlation ranged from -0.8 to 0.2.
- Negative correlation is a key concept in portfolio construction, as it enables the creation of diversified portfolios that can better withstand portfolio volatility and smooth out returns.
A company needs funds for diversification, expansion or for financing various projects. For this, it depends upon share capital and borrowed capital. Borrowed funds usually consist of funds raised by the issue of debentures (secured or unsecured) or by obtaining financial assistance from a bank or financial institutions.
Banks and financial institutions do not lend funds unless they are sure that their funds are safe and that the principal would be repaid along with interest. For securing their loans, they resort to creating rights over the assets of the borrowing companies, which is known as a charge on assets.
Now the question arises: what exactly is a charge?
Concept of Charge
A charge is a security given for securing loans or debentures by way of a mortgage on the assets of the company.
- A company, like a natural person, can offer security for its borrowings. Normally, the debentures and other borrowings of the company are secured by a charge on the assets of the company.
- Where property, both existing and future, is agreed to be made available as a security for the repayment of debt and creditors have a present right to have it made available, a charge is created.
- The legal right of the creditor can only be enforced at some future date if certain conditions governing the loan are not met. The creditor gets no legal right, either absolute or special, to the property charged; he only gets the right to have the security made available and enforced by an order of the Court.
In simple terms, a charge is a right created by a company, i.e. the "borrower", on its assets, properties or any of its undertakings, present or future, in favour of a financial institution, a bank or any other lender, i.e. the "creditor", who has agreed to extend financial assistance.
Charge as per Companies Act, 2013
According to Section 2(16) of the Act, “charge” means an interest or lien created on the property or assets of a company or any of its undertakings or both as security and includes a mortgage. The Charge here has the following essential features:
- There are minimum two parties to the transaction, the creator of the charge and the charge- holder.
- The subject-matter of the charge may be current or future assets and properties of the borrower.
- The intention of the borrower to offer one or more of its specific assets or properties as security for repayment of the borrowed money, together with payment of interest at the agreed rate, should be manifest from an agreement entered into by him in favour of the lender, written or otherwise.
Kinds of Charges
They are mainly of two types –
- Fixed charge: A fixed charge is identified with a specific and ascertainable asset at the time of its creation. The company cannot transfer the charged asset unless the charge holder is paid off all the dues for the same.
- Floating charge: A floating charge covers assets of a circulating nature, such as sundry debtors or stock in trade, whose composition keeps changing from time to time. A floating charge is converted into a fixed charge upon crystallisation, for example when the company or the undertaking ceases to be a going concern.
Relevance of Section 77 of the Act
The provisions of Section 77 of the Act relating to registration of charges apply, so far as may be, to a company acquiring any property subject to a charge within the meaning of that section. They also apply when any modification is made in the terms or conditions, or the extent or operation, of any charge registered under this section. When a charge gets registered, the Registrar of Companies has to issue a certificate of registration of the charge to the company and to the person in whose favour the charge has been created.
Form and manner of Registration of charges
Charges are registered by filing the particulars of the charge, along with all the instruments creating it, with the Registrar of Companies within the period prescribed under Section 77(1) of the Act. In case of failure to file the particulars within the specified period, the company can have the charge registered by seeking condonation of delay from the Central Government. This process is also known as rectification of the register of charges.
The application for registration of charges has to be submitted to the Registrar of Companies in the prescribed form, with payment of the required fees, in the manner specified under the Companies (Registration of Charges) Rules, 2014.
Registration of charges is governed by Section 77(1), Section 78 and Section 79 of the Companies Act. The particulars of a charge, together with a copy of the instrument creating or modifying it, must be filed with the Registrar of Companies within a period of 30 days of the date of creation or modification of the charge, along with the prescribed fees, in Form No. CHG-1 for charges other than debentures and Form No. CHG-9 for debentures (including rectification), as the case may be, duly signed by the company and the charge holder.
Extension of time for Registration of charges
The proviso to Section 77(1) of the Act deals with the extension of time for filing particulars for registration of charges. It states that the Registrar of Companies can allow such registration to be made within a period of 300 days of such creation on payment of the prescribed additional fees. Rule 4 provides that an application for condoning the delay has to be made in Form No. CHG-1, supported by a declaration signed by the company through its secretary or director stating that such belated filing will not adversely affect the rights of any intervening creditors of the company.
However, where registration is not made within a period of 300 days of such creation, the company must seek an extension of time for filing the particulars or for registration of the charge from the Central Government (in Form CHG-8), in accordance with the provisions of Rule 12 and Section 87 of the Companies Act.
Form CHG-1 for registration of charges will be processed by the Registrar of Companies only after the Central Government's order approving the condonation of delay (in Form INC-28) has been filed with the Registrar of Companies. The Central Government may grant an extension of time on the ground that the omission to file the particulars of the charge with the Registrar of Companies, or the omission to register the charge, was accidental, due to inadvertence or some other sufficient cause, or is not of a nature that prejudices the position of creditors or shareholders of the company, or on the ground that it is just and equitable to grant relief.
Types of charges to be registered
According to the provisions of Section 77(1) of the Act, it is the duty of every company creating a charge to register it. The company has to register all types of charges, signed by the company and the charge holder together with the instruments, if any, creating such charge, with the Registrar of Companies within 30 days of the creation of that particular charge, whether the charge is created-
(i) Within or outside India
(ii) On its property or assets or any of its undertakings
(iii) Whether tangible or otherwise
(iv) Situated in or outside India.
Section 77, in fact, also says that the period can extend to 300 days (the initial 30 days plus 270 additional days). If the form is filed after the usual 30-day registration period, an additional fee has to be paid, and an application has to be filed in Form CHG-10 by the Company Secretary or a director of the company declaring that the late filing will not adversely affect any of the creditors involved.
Application for registration of charges
Under Section 78, it is provided that if a company fails to register a charge, the person in whose favour the charge was created may apply to the Registrar for registration of the charge, along with the instrument creating it, within the prescribed time and in such form and manner as may be specified. On receiving the application, the Registrar must give notice to the company; unless the company itself registers the charge within 14 days or shows sufficient cause why it has not been registered, he may allow such registration on payment of additional fees as may be specified. Such a person can recover the cost of registration from the company.
Under Section 79 it is provided that the provisions related to registration of charges will apply to-
- A company acquiring any property which is subject to a charge within the meaning of that section, or,
- Any modifications in case of terms or conditions or the extent or operation of any charge registered under the section.
When the particulars of modification of charge are registered under section 79, the Registrar has to issue a certificate of modification of charge.
Date of notice of such charges
Under Section 80 it is provided that where any charge on any property or assets of a company, or any of its undertakings, is registered under Section 77, any person acquiring such property, assets or undertakings, or any part thereof, or any share or interest therein, shall be deemed to have notice of the charge from the date of its registration.
Registration of charges to be kept by Registrar
Under Section 81 of the Act, it is made obligatory on the part of the Registrar to maintain a register containing the particulars of charges registered in respect of every company, in such form and manner as may be specified. Such register must be kept open for inspection by any person on payment of the required fees as may be prescribed.
Company to report satisfaction of the charges
Under Section 82 it is provided that a company has to intimate to the Registrar, in the specified form, the particulars of payment or satisfaction of a registered charge within 30 days from the date of that event. On receiving such intimation, the Registrar must issue a notice to the holder of the charge to show cause within 14 days from the date of the notice. If no cause is shown, the Registrar must enter a memorandum of satisfaction and inform the company. If any cause is shown, the fact will be entered in the register, and the same has to be intimated to the company.
If the intimation of satisfaction of the charge is signed by the holder of the charge, no such notice needs to be issued. The provisions of this section do not affect the power of the Registrar to make an entry of satisfaction of a charge on intimation received otherwise than from the company.
Power of Registrar
Section 83 of the Act gives the Registrar powers to make entries of satisfaction and release in the absence of intimation from the company. According to this section, the Registrar may, on evidence being given to his satisfaction with respect to any registered charge-
- that the debt for which the charge was given has been paid or satisfied in whole or in part, or
- that part of the property or undertaking charged has been released from the charge or has ceased to form part of the company's property or undertaking,
enter in the register of charges a memorandum of satisfaction in whole or in part, or of the fact that part of the property or undertaking has been released from the charge or has ceased to form part of the company's property or undertaking, as the case may be, notwithstanding that no intimation has been received by him from the company. The Registrar has to inform the affected parties within 30 days of making the entry in the register of charges.
Effect of non-registration of charges
- Under Section 77(3) of the Act, it is stated that where a charge created by a company is not registered and a certificate of registration is not issued by the Registrar of Companies, the charge must not be taken into account by the liquidator or other creditors. However, nothing in Section 77(3) of the Act will prejudice any contract or obligation for the repayment of the money secured by the charge.
- Section 86 of the Act provides for punishment for contraventions of Section 77 of the Act. The company is punishable with a fine of not less than one lakh rupees, which may extend to ten lakh rupees. In addition, every officer of the company who is in default is punishable with imprisonment for a term which may extend to six months, or with a fine of not less than twenty-five thousand rupees which may extend to one lakh rupees, or with both.
- It is mandatory that an application for registration of charges be made to the Registrar of Companies in the prescribed format, so that the Registrar, after being satisfied with the application, can issue a certificate of registration of the charge and entitle the company and its creditors to their rights at the time of liquidation.
Conducting business fairly in India is not an easy job; a great deal of paperwork is required to comply with the corporate laws. Hopefully, this article has improved your understanding of the application for registration of a charge. We will keep coming up with informative content down the line, so stay tuned.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.931323230266571,
"language": "en",
"url": "https://www.get-invest.eu/market-information/nigeria/renewable-energy-potential/",
"token_count": 762,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.040283203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:fc77a25c-3174-4e72-8875-0c73f29b970d>"
}
|
The huge potential for renewable energy in the country is mostly untapped. Barriers to the development of renewables include: the large oil and gas production in the South together with government fuel subsidies, the lack of clarity/market information on private sector opportunities, and a general knowledge gap concerning financial support mechanisms available within the country.
Nigeria has enormous solar energy potential, with fairly evenly distributed solar radiation averaging 19.8 MJ/m²/day and an average of 6 hours of sunshine per day. The estimated potential for concentrated solar power and photovoltaic generation is around 427,000 MW. According to estimates, designating only 5% of suitable land in central and northern Nigeria for solar thermal would provide a theoretical generation capacity of 42,700 MW. In July 2016, 14 greenfield independent photovoltaic (PV) power projects with a combined capacity of 1,125 MW had their PPAs signed by the Federal Government-owned NBET.
Global Horizontal Solar Irradiation in Nigeria
Hydropower has been a cornerstone of grid-powered generation in Nigeria for decades, accounting for 15% of the country's current power generation. The country is reasonably endowed with large rivers and a few natural falls. Potential sites for unexploited small hydropower exist in all parts of Nigeria, with an estimated total capacity of 3,500 MW. A multitude of river systems, providing a total of 70 micro dam, 126 mini dam and 86 small-scale sites, supply a technically exploitable large hydropower potential estimated at about 11,250 MW, of which only about 17% is currently being tapped. Plans for some significant hydropower projects, such as the dam on the Mambilla plateau in northern Nigeria, have been struggling due to the large investment costs and long lead times required. The potential for small hydro power is about 3,500 MW, of which just about 64.2 MW is being exploited. By 2020, the Nigerian government aims to have increased hydroelectricity generation capacity to 5,690 MW. This projection is to be met through an upgrade of old hydroelectricity plants and the installation of new hydro power plants.
Hydro Power development by the Federal Ministry of Power (2014)
| Power Station | Capacity (MW) | Status |
|---|---|---|
| Zungeru project | 700 | financing secured |
| Mambilla Project | 3050 | under development |
| Gurara II Project | 360 | under development |
| Gurara I Project | 30 | under development |
| Itisi Project | 40 | under development |
| Kashimbilla Project | 40 | under development |
River Basins with large and small scale hydropower potentials
Wind energy potential in Nigeria is modest, with annual average speeds of about 2.0 m/s in the coastal region and 4.0 m/s at heights of 30 m in the far north of the country. Based on wind energy resource mapping carried out by the Ministry of Science and Technology, wind speeds of up to 5 m/s were recorded in the most suitable locations, revealing only a moderate and localised potential for wind energy. The highest wind speeds can be expected in the Sokoto region, the Jos Plateau, Gembu and Kano/Funtua. The study also indicated fair wind speeds in Maiduguri, Lagos and Enugu, sufficient for energy generation by wind farms. Apart from these sites, other promising regions with usable wind potential are located on Nigeria's western shoreline (Lagos region) and partly on the Mambilla Plateau.
A 10 MW wind farm project is currently being built in Katsina and is expected to be completed in 2017.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9413225054740906,
"language": "en",
"url": "http://bitcoin-and-blockchain.education/BlockchainAlgorithm/36",
"token_count": 796,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.06201171875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6e984023-6d45-4df4-8f36-e2e7fb33d131>"
}
|
There’s been a growing buzz about Bitcoin halving over the past few weeks, and you’ll likely hear it referenced even more often as this upcoming weekend approaches!
In this post, we’ll explain exactly what Bitcoin halving is, why it’s important to know about, and we’ll also share some cool resources to help keep you in the loop. If you need to rewind and refresh your memory on some bitcoin basics before diving into this post, feel free to check out these introductory resources in our Support Center first.
What is Bitcoin halving?
Bitcoin halving is a process that is built into Bitcoin's code, and it occurs once for every 210,000 blocks mined, roughly every 4 years. The process affects how much of a reward miners receive for validating new blocks of transactions on the blockchain. Miners play a crucial role in preventing fraud and maintaining Bitcoin's unique system of checks and balances. In other words, the work miners do helps make it possible for us to securely send and receive peer-to-peer transactions, instead of having to trust a third party like a bank. Miners are motivated to continue participating in the Bitcoin network by the rewards they earn for validating new blocks. The reward for mining started out at 50 BTC per block until the first halving event occurred in November 2012, which cut the reward in half to 25 BTC. The second halving is on course to happen on July 9th, 2016, and will cut the reward to 12.5 BTC. The final halving will take place in the year 2140.
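The reward schedule just described is simple enough to express in a few lines of code. The sketch below is our own illustration of the arithmetic (real Bitcoin nodes do this in integer satoshis with a bit shift, but the idea is the same):

```python
HALVING_INTERVAL = 210_000  # blocks between halvings
INITIAL_REWARD = 50.0       # BTC per block when the network launched in 2009

def block_reward(height: int) -> float:
    """Mining reward in BTC for a block at the given height."""
    halvings = height // HALVING_INTERVAL
    return INITIAL_REWARD / 2 ** halvings

print(block_reward(0))        # 50.0 -> original reward
print(block_reward(210_000))  # 25.0 -> after the November 2012 halving
print(block_reward(420_000))  # 12.5 -> after the July 2016 halving
```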
Why is the miner reward decreasing?
At first you might think that it is counter-intuitive to decrease the miner’s reward, but there’s a good reason for it. Unlike most national currencies we’re familiar with like Dollars or Euros, Bitcoin was designed with a fixed supply and predictable inflation schedule. There will only ever be 21 million bitcoins. This pre-determined number makes them scarce, and it’s this scarcity alongside their utility that largely influences their market value.
When bitcoin was first created there were very few miners participating on the network and they were originally rewarded with a bigger amount. Over time, bitcoin has become adopted by many people increasing the global competition in mining. The increase in demand for bitcoins combined with a decreasing block reward has a tendency to push upward price pressure on the value of Bitcoin. The halvening is a rare but predictable event in the bitcoin community.
Visualizing what is happening
Bitcoin enthusiast Laurent D and a designer joined forces to create an interesting resource called The Halvening! We love it because it's easy to follow the countdown and it includes some great resources to learn more.
The site tracks:
- the number of blocks remaining
- the amount of time remaining (days, hours, minutes, seconds)
- % of how close we are
- the number of new bitcoins and blocks remaining
If you really want to get in on the fun, there are meet ups happening all over the world as part of Halving Day; here’s a partial list. Many in the industry speculate that the halving may be one of the reasons for the recent price fluctuations, which we’ve covered extensively each week in our news recaps.
Bringing the data to life
The new block announcements powering the Halvening website are provided through the websocket API that is part of the Blockchain API, a fun reminder of just a few of the cool things developers can build on top of our developer platform.
We also discussed the halving countdown using our API in November of last year, where developer Kyle Honeycutt used the API to create a countdown clock to calculate the amount of time left until the bitcoin block reward halving happens in real-time.
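For the curious, a rough version of such a countdown needs nothing more than the current block height and Bitcoin's roughly 10-minute average block time. The function below is our own simplified sketch, not the code behind either project:

```python
from datetime import timedelta

HALVING_INTERVAL = 210_000
AVG_BLOCK_MINUTES = 10  # Bitcoin targets an average of ~10 minutes per block

def time_to_next_halving(current_height: int) -> timedelta:
    """Estimate the time remaining until the next halving."""
    blocks_remaining = HALVING_INTERVAL - current_height % HALVING_INTERVAL
    return timedelta(minutes=blocks_remaining * AVG_BLOCK_MINUTES)

# Example: 1,000 blocks before the 420,000-block halving of July 2016
print(time_to_next_halving(419_000))  # about 6 days, 22:40:00
```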
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9444135427474976,
"language": "en",
"url": "http://www.rapidshift.net/heede-and-the-climate-accountability-institute-detail-emissions-of-the-carbon-majors/",
"token_count": 3449,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.498046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:47e2ee6e-3aae-4e98-8fbe-f05c6a3acb58>"
}
|
Houston Chronicle, June 2020: https://www.houstonchronicle.com/business/columnists/tomlinson/article/Coronavirus-costing-oil-gas-1-8-trillion-data-15327990.php
Climate crisis to blame for $67bn of Hurricane Harvey damage – study. Exclusive: new figure far higher than previous estimates of direct impact of global heating
Fiona Harvey, The Guardian, Fri 12 Jun 2020
At least $67bn of the damage caused by Hurricane Harvey in 2017 can be attributed directly to climate breakdown, according to research that could lead to a radical reassessment of the costs of damage from extreme weather.
Harvey ripped through the Caribbean and the US states of Texas and Louisiana, causing at least $90bn of damage to property and livelihoods, and killing scores of people.
Conventional economic estimates attributed only about $20bn of the destruction to the direct impacts of global heating. Climate breakdown is known to be making hurricanes stronger and may make them more likely to occur, but separating the effects of global heating from the natural weather conditions that also cause hurricanes is complex.
In a study published in the journal Climatic Change, researchers used the emerging science of climate change attribution to calculate the odds of such a hurricane happening naturally or under increased carbon dioxide levels, and applied the results to the damage caused.
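As a simplified sketch of that arithmetic (our own illustration, not the authors' code): attribution studies compare how often an event of a given severity occurs in model runs with and without human influence, derive a "fraction of attributable risk", and apply it to the total damages. The probabilities below are invented purely to show the mechanics:

```python
def fraction_of_attributable_risk(p_natural: float, p_actual: float) -> float:
    """FAR = 1 - p0/p1: share of the event's likelihood attributable to forcing."""
    return 1.0 - p_natural / p_actual

def attributable_damages(total_damages: float, p_natural: float, p_actual: float) -> float:
    """Portion of the damages attributed to human influence on the climate."""
    return total_damages * fraction_of_attributable_risk(p_natural, p_actual)

# Invented example: rainfall of Harvey's severity appears in 1% of "natural
# world" runs versus 4% of runs with observed greenhouse forcing, so FAR = 0.75
# and roughly $67.5bn of a $90bn event would be attributed to climate change.
print(attributable_damages(90e9, p_natural=0.01, p_actual=0.04))
```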
Similar methods were used in a separate study, published last month in the same journal, that found that droughts in New Zealand between 2007 and 2017 cost the economy about NZ$4.8bn, of which $800m was directly linked to climate change. Floods caused insured losses of about NZ$470m over the same period, of which NZ$140m was linked to the climate.
The researchers say the new tools are a more accurate way of estimating the economic damage caused by climate breakdown.
“We’re pretty sure the climate change-related damages associated with extreme events have been underestimated in most assessments of the social cost of carbon,” said David Frame, a professor of climate change at the Victoria University of Wellington and the lead author of the studies. “We think this line of research, as it matures, should provide a really valuable input.”
Friederike Otto, the director of the Environmental Change Institute at Oxford, who was not involved in the research, said the method could make it possible to generate global estimates of the true cost of climate breakdown, which could have a profound effect on how governments and businesses approach the need to reduce greenhouse gas emissions.
“We have known about the costs of climate change theoretically,” said Otto. “It’s all very well in the abstract, but the global mean temperature does not kill anyone – extreme events cost money and lives. Being able to attribute these impacts to climate change means being able to convey what climate change really means.”
She said it would become possible to compile an inventory of the damage that could be attributed to climate change around the world, which governments and businesses could use to bring about change. “Hopefully this will speed up the process of moving to net zero [carbon].”
Estimating the true costs of the climate crisis could also help developing countries seeking recognition of the loss and damage they face as a result of climate breakdown, which they argue should spur rich countries to provide more assistance. Loss and damage is likely to be one of the most vexed issues at next year’s UN Cop26 climate summit.
Legal actions around the world would also be affected, said Tessa Khan, a co-director of the Climate Litigation Network. Activists and local governments around the world are taking fossil fuel companies to court over their greenhouse gas emissions, arguing that they knowingly caused damage while profiting from raising carbon dioxide levels.
“[The two new studies] are opening to door to stronger evidence to persuade courts that fossil fuel companies should be held accountable for their role,” Khan said. “This will strengthen the legal basis of these lawsuits.”
Dr Suzanne Rosier, a climate scientist at the National Institute of Water and Atmospheric Research in New Zealand
Over the past decade, a compelling body of evidence has linked a range of extreme weather events to human-caused climate change.
This area of research – known as “event attribution” – provides a means for climate scientists to examine how the severity and frequency of weather events, such as heatwaves, droughts and storms, are changing as greenhouse gas concentrations rise.
In a pair of new journal papers, we have attempted to open up a new avenue for quantifying the “attributable costs” of weather-related disasters. We focus on recent droughts and floods in New Zealand and the landfall of Hurricane Harvey in Texas in 2017.
Using event attribution as the scientific basis for quantifying how extreme weather has changed, we have been examining the links between changes in extreme weather and their economic consequences.
If we can quantify the contribution from climate change to an extreme weather event and we can also know the cost of the associated disaster, then we can put a financial figure on the climate change component of those costs. These calculations then provide us with the price tag of climate change, through its impact on extreme weather events.
Quantifying attributable costs
In the two studies, both published in the journal Climatic Change, we look at droughts and floods in New Zealand during the decade 2007-17 and the landfall of Hurricane Harvey in Texas in August 2017.
The New Zealand Treasury estimated that two droughts in 2007 and 2013 jointly reduced GDP in New Zealand by around NZ$4.8bn (US$3.4bn in 2017). Using previously published methods, which used climate models to estimate changes in the types of weather patterns typical of severe New Zealand drought, we estimate that around NZ$800m (US$568m) of this cost is due to climate change.
We also analysed 12 extreme rainfall events, which contributed a total of around NZ$470m (US$334m) in insurance losses, by applying techniques used elsewhere. This involved running regional climate models thousands of times over, both with and without human influences, and looking at how often the events in question occurred in each case. Based on this, we estimate that around NZ$140m (US$99m) of those insurance losses were attributable to human influence on the climate.
The two sets of costs are not directly comparable – one measures reductions in economic performance and the other measures insured losses. The main insight is that event attribution is able to show that climate change is already causing significant losses to New Zealand. Climate change is not only a future problem, but it is costing us here and now.
Benchmarking social cost of carbon estimates
We also looked at the human climate change fingerprint on the damages associated with Hurricane Harvey that hit Houston, Texas, in 2017, which were strongly driven by torrential rain and extensive flooding.
Previously published attribution studies, each using independent methods, found good agreement on attributable changes in the rainfall associated with Harvey: these conclusions formed the basis of our cost estimates. The results are striking: we estimate that around US$67bn of the Hurricane’s overall US$90bn are associated with climate change.
This is a far higher estimate than that which would be obtained from conventional economic models for the cost of climate change in the US, such as the model built by Nobel Prize winner William Nordhaus. This model is underpinned by a 2017 study (pdf) from the US Environmental Protection Agency on the "social cost of carbon" – the financial damages caused by every additional tonne of carbon emitted into the atmosphere. Nordhaus's model predicts total economic costs to the US economy in 2017, from climate change, to be around US$20bn.
The usual tools used to quantify the costs of climate change are called “Integrated Assessment Models” (IAMs). (See Carbon Brief’s detailed Q&A on IAMs.) IAMs have been developed with the premise that the main economic impacts associated with climate change arise from long-term changes to agricultural productivity and practice associated with rising average temperatures. They typically assume that the effects of extreme weather events – which are infrequent by definition – are relatively minor.
The actual numbers we have obtained could be too high or too low (that is the way with research). But even if they are an overestimate, the damages we attribute to Hurricane Harvey measures just the immediate damages from one single event, in a single city. It does not include the direct and indirect costs of disruption associated with this hurricane, nor the health impacts, nor the population displacement.
It also does not include the costs of other events that happened that year – Harvey was one of four major hurricanes to make landfall in the US in 2017 – nor the costs associated with changes in the environment that are unrelated to extreme events (for example, coastal erosion because of sea level rise).
Practically, the results from these initial papers suggest that common “top-down” approaches substantially underestimate the costs of climate change and that event attribution techniques can be applied to form a kind of “bottom-up” check on those estimates.
Deploying this approach more widely could provide a useful check on IAM performance and add another valuable line of evidence to inform estimates of the social cost of carbon.
There are, of course, many uncertainties in any estimate of the human influence on weather events and in estimates of the costs of climate change. While some effects of extreme events are reasonably well-recorded, such as insurance losses, others are very difficult to measure, such as impacts on mental health and wellbeing.
Using attributable costs
The main significance of our new work is less in the exact numbers and more in the ability to link, more forensically, human influence on the climate to the economic impacts of disasters.
There are several ways in which this line of research could be used:
1. By central banks and treasuries as they are increasingly asked to consider climate change-related risks. This line of evidence can provide innovative ways of analysing the problem and should help them deal with dynamic, climate-related fiscal and monetary risks.
2. By insurance companies and investors that may find attributable cost techniques useful as an additional line of evidence regarding the way their risks are changing.
3. By policymakers tasked with assessing the social cost of carbon; a number that may guide national emission targets. The forensic approach suggests that traditional, IAM-based social cost of carbon estimates are too low.
4. By parties wishing to pursue arguments regarding “loss and damage” arising from climate change, potentially including lawsuits. Loss and damage refers to the societal and financial costs of climate impacts that can no longer be avoided. The idea of developed countries – who are most responsible for climate change – compensating developing nations for these damages is an ongoing part of international climate negotiations.
5. By investors as they consider divestment, especially in light of (3) and (4). If the social cost of carbon is currently underestimated, and if our new approach can potentially lead to legal actions, then these constitute very powerful arguments for firms to accelerate their divestment initiatives.
With colleagues from around the world we are trying to develop further our approach. This involves thinking through methodological issues, clarifying the economic consequences of weather and climate events, and trying to assess which events are amenable to event attribution and which are not. There is much to do and much to learn, but much to gain from doing so.
In the long run, the integration of quantitative social science and climate change event attribution will help decision-makers have a richer, better and more accurate understanding of the effects of climate change on the economy.
By looking as far along the chain from emissions to impacts as we can, we provide fresh evidence for decision-makers to consider as they grapple with the climate change challenge. By thinking through the economic consequences of human influence on extreme events, we think this can help move event attribution from the news cycle to the boardroom.
Frame, D. J. et al. (2020) The economic costs of Hurricane Harvey attributable to climate change, Climatic Change, doi:10.1007/s10584-020-02692-8
Frame, D. J. et al. (2020) Climate change attribution and the economic costs of extreme weather events: a study on damages from extreme rainfall and drought, Climatic Change, doi:10.1007/s10584-020-02729-y
Scientific American, May 2020
The Climate Accountability Institute (CAI), in collaboration with CDP (London), announces the publication of operational and product-use emissions attributed to fifty major oil and gas companies over the period from 1988 to 2015.Press Release: Carbon Majors 2015 update: six companies responsible for one-third of emissions from oil and gas sector since 1988.
The Climate Accountability Institute and CDP jointly announce the publication of the Carbon Majors Dataset. The Dataset points to the central role of the oil and gas sector in global carbon dioxide emissions and highlights the risks and opportunities of the industry to drive the transition to a low-carbon economy. The Dataset updates the Carbon Majors database of operational and product-use emissions attributed to the largest fifty investor and state-owned oil and gas companies from 1988 to 2015. This updates the original work by Richard Heede (2014) Tracing anthropogenic CO2 and methane emissions to fossil fuel and cement producers 1854-2010, Climatic Change, vol. 122(1):229-241; URL: http://link.springer.com/article/10.1007/s10584-013-0986-y?view=classic.
The Institute was established by Richard Heede, Naomi Oreskes, and Greg Erwin as a non-profit research and educational organization.
The Climate Accountability Institute engages in research and education on anthropogenic climate change, dangerous interference with the climate system, and the contribution of fossil fuel producers’ carbon production to atmospheric carbon dioxide content. This encompasses the science of climate change, the civil and human rights associated with a stable climate regime not threatened by climate-destabilizing emissions of greenhouse gases, and the risks, liabilities, and disclosure requirements regarding past and future emissions of greenhouse gases attributable to primary carbon producers.
Our vision is for a world protected from the social, economic, and environmental damages of climate change.
Our mission is to use climate accountability as a fulcrum for climate stewardship.
Our strategy is to leverage accountability by carbon producers into using their skills, capital, and resources to aid rather than oppose the transition to a low-carbon or zero-carbon energy future.
“In my view, staying out of the fray is not taking the ‘high ground’; it is just passing the buck.”
—Steve Schneider Memorial Forum, Boulder, August 2011.
The Climate Accountability Institute announces a new paper: Heede and Oreskes (2015) Potential emissions of CO2 and methane from proved reserves of fossil fuels: An alternative analysis, Global Environmental Change.
News Update August 2015
The Climate Accountability Institute announces a new paper by Frumhoff, Heede, & Oreskes: The climate responsibilities of industrial carbon producers.
A “powerful, and gutsy” paper —Denis Hayes, Bullitt Foundation
News Update December 3, 2014
The Climate Accountability Institute is today releasing an update of the Carbon Majors Project, detailing the emissions traced to the major industrial carbon producers in the oil, natural gas, coal, and cement industries, through the year 2013.
Papers and charts available here.
News Update December 15, 2014
The Climate Accountability Institute is today releasing an analysis of global fossil fuel and cement emissions of CO2 since 1751 that calculates the proportion emitted since 1988, when the evidence and risks of human-caused warming first became widely known.
Papers and charts available here.
Banner image is of Cyclone Catarina in the southern hemisphere that later hit the Brazilian town of Torres. Catarina was the first hurricane-intensity tropical cyclone ever recorded in the Southern Atlantic Ocean. Photographed from the International Space Station on 26 March 2004.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9576987624168396,
"language": "en",
"url": "https://ageconsearch.umn.edu/record/210625",
"token_count": 521,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.01177978515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:342ee4f9-2788-47c6-9677-befac6afea56>"
}
|
In the 21st century, sport is not just fun and a socially cohesive force but also a business; it has by now become an independent industry, and several countries possess developed sport markets. According to estimates, sport accounts for 4% of the EU's GDP. The relevance of our research lies in the fact that the economic side of sport is developing continuously, not least because ever larger sums now flow into sport. In Hungary, sport is mainly state-aided and has persistent financing problems, while the sport businesses of the more developed Western Europe are principally sponsored by the private sector. The government considers sport a strategic branch (HERCZEG et al, 2015) and manages it as such, because it also sees the potential for an international breakthrough in sport. Sport companies must also adopt business-based thinking, which requires strategic planning and operation (BECSKY, 2011). The research covers the economic treatment of players' rights. The task of accounting is to give a true and fair view of the property, income and financial situation of an undertaking. Information provided by accounting is essential both for management decision-making and for market operators. In Hungary, sports undertakings, like every managing entity, have to prepare their statements according to the Act C of 2000 on Accounting (AoA) (NAGY – BÁCSNÉ BÁBA, 2014). The purpose of this research is to examine how a domestic sports undertaking shows the value of its available players in the books, and how the incomes and expenditures incurred with the players are accounted for, based on the regulations of the Hungarian and international associations and the Union of European Football Associations (hereinafter: UEFA). So that business leaders can make quick and appropriate economic decisions, it is essential in this intensively changing world that an enterprise have a well-functioning accounting system based on up-to-date information. International Financial Reporting Standards (hereinafter: IFRS) are intended to provide comparability across borders. First, we deal with the accounting reporting systems, both the Hungarian and the international financial reporting standards, and, in relation to UEFA, we investigate intangible assets in depth while analysing the balance sheets. Then we examine the income statements from the viewpoint of player transfers. To what extent do the rules of a statement laid down by UEFA differ from those of a statement prepared according to the AoA? What is the difference in domestic and international relations? In this study, we search for the answers to the questions mentioned above.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9590045809745789,
"language": "en",
"url": "https://arts.eu/insights/article/digitalisation-of-the-workplace-new-opportunities-for-employees/",
"token_count": 1427,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.05322265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f02c9988-27d3-4801-b141-b1bcfc04ea86>"
}
|
Due to automation, employees no longer stand along the production line: instead, machines take care of repetitive line work. Digitalisation has made it possible for us to optimise and accelerate processes. History shows that technical progress is necessary to be able to keep up economically, increase productivity, and reduce prices for products and services while paying higher salaries.
Then, as now, progress also brought change. It is also normal for many people to fear this change, as no one can predict exactly what change will bring. Machines may replace people, established career profiles could disappear, and the unemployment rate could increase. However, technical progress cannot be stopped, and the current state of near-full employment and the massive aging of Germany's population mean that the economy must respond to labour shortages by replacing people with machines. In addition, employees' expectations have changed; as a result, work-life balance, home working and flexible working models are valued more highly than ever before.
In order to meet employees’ needs and face up to inevitable technological change, we need digitalisation and progress – so what can we expect the next few years at work to look like?
Current studies show that digital transformation will bring both winners and losers in the field of employment. Some jobs will disappear, others will grow, and in a few years, futuristic jobs that do not currently exist will be routine. Everyone, employers and employees alike, must adapt to digitalisation and the associated changes in working requirements. Employees will therefore have to adapt their skills to keep pace. In concrete terms this means that, as career profiles evolve, employees will have to develop new skills and meet new challenges, and employers will need to invest in training to ensure that their workforces are qualified for these changed jobs. However, studies show that most OECD (Organisation for Economic Co-operation and Development) countries significantly reduced their expenditure on training for employees between 1993 and 2015. Sweden and Germany are at the forefront of this decline: both invested more in employee training, as a percentage of GDP, in 1993 than they do today, making the contrast even more striking. Australia, Switzerland, and Denmark show precisely the opposite trend: these countries are investing more in training their employees than they did in 1993. This trend is likely to continue across all countries, as the new professional challenges arising from digital transformation will require employees to demonstrate greater qualifications than ever before.
In the future, routine and physically demanding activities will be performed by machines, while people will still fulfil the role of controllers. This development will not necessarily lead to a reduction in the number of jobs, but there will be a shift towards higher quality, more intellectually demanding jobs. The requirements on employees will change accordingly. Specialist knowledge, self-management and creativity will be in greater demand than ever, resulting in lifelong learning through continuous professional development. That development must be desired by employees, and supported and facilitated by employers. This will enable businesses to remain competitive and keep pace with the appreciably rapid pace of change.
Alongside learning and development for employees, employer branding and employee retention will continue to grow in importance within businesses. As the labour force shrinks, the German population continues to age, and continuous professional development becomes a core topic in everyday business, businesses will be increasingly conscious of the need to secure the loyalty of their most knowledgeable employees while also recruiting international labour, as employees' specialist knowledge is valuable and worth retaining. Moreover, it will not become easier to find and hire qualified workers with suitable skills on the labour market, meaning that many employers are already facing up to the need to catch up.
Robots instead of people in every role will never be enough to innovate and achieve economic success. Teamwork and brainstorming will continue to be essential in the future: no robot can replace teamwork, because complex problems in particular can be solved more efficiently by a team than by a machine. People in a team benefit from each other through their different perspectives and approaches. According to the World Economic Forum, the Fourth Industrial Revolution will bring progress and development that will completely change our ways of working and living. The World Economic Forum has listed the "Top Ten Skills" for 2020, which employees will need to deliver effective work, with creativity near the top of the list.
People will need to be creative to be able to take advantage of the wide variety of new technologies, new way of working and new products. Robots will be able to help us with physically demanding work or faster processes, but not with creativity, nor by coming up with new ideas. Moreover, emotional intelligence, decisiveness, and critical thinking will be important human qualities that we will particularly need in the future, and which machines will be unable to replace.
Politics, economics, and society will all need to undergo a transformation. Measures will need to be introduced to ensure that digital progress can continue, is used effectively, and maintains its competitiveness. The EFI Expert Commission on Research and Innovation (EFI) addressed the “sphere of digital transformation” in its 2017 report, in which it also advanced potential political solutions. In particular, small and medium enterprises (SMEs) should receive political support, while it should be made easier to implement digital technologies and business models. Similarly, digital education should become more important. As such, digital core competences should be taught to primary school children and teachers should receive continual training in technology. Apprenticeships and further and higher education courses should also involve more intensive delivery of IT skills, and this will also require further training for educators. Digital core competences are becoming increasingly relevant and will need to be integrated into people’s lives at an ever earlier stage, thereby ensuring that the next generation experiences digital transformation from an early age and is, therefore, as well-prepared as possible. Current employees must, as already described, be prepared for digitalisation via ongoing, continuous training.
The mobile internet and cloud technologies are already affecting the ways in which the majority of employees work. Some businesses have already introduced desk sharing and equipped their workers with laptops to meet the desires of both employees and employers for flexible working arrangements. Skype conferences are an everyday event, so that all colleagues can have a “round-table discussion” even if it spans national borders, or so that interviews can be held at short notice irrespective of geographic distance. Tradespeople complete their maintenance reports on digital tablets and send the documentation to their customers by e-mail. Digital transformation brings many benefits for the working environment and should not be stopped, as history has demonstrated that every transformation moves us a step further forward.
ARTS has kept up with the times and has recognised the benefits of digitalisation. Where necessary, we hold our interviews using Skype, all ARTS employees have an excellent internal network, and every employee is equipped with the latest technology. Apply now, join the team, and benefit from having ARTS as your employer.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9433750510215759,
"language": "en",
"url": "https://cfnc-online.org/nieer-special-report-how-will-the-covid-19-pandemic-impact-pre-k/",
"token_count": 285,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.036376953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:7d41163a-23bd-4d41-a771-cbfc8a1048a4>"
}
|
The COVID-19 pandemic is already forcing local governments to make budget cuts. Unfortunately, Pre-K programs experience budget cuts that are felt long after the recession or economic downturn is over. To look at some of the ways the COVID-19 pandemic could affect Pre-K programs in the U.S., the National Institute for Early Education Research (NIEER) looked back at its data from the Great Recession to see how early childhood programs were affected.
Here were some of their findings:
- The worst enrollment cuts occurred four years after the Great Recession began
- More than a decade hasn’t been long enough for 25 states to bring their per child spending rates for public preschool programs back to pre-Great Recession levels.
- The best way to prevent long-term problems is to avoid cuts to pre-K programs. States can do this by choosing to make high-quality preschool a public policy priority. Even modest, one-time COVID emergency federal funding for pre-K could prevent short- and long-term cuts.
(Source: Garver, K. (n.d.). Special Report: How Will The COVID-19 Pandemic Impact Pre-K? Retrieved from http://nieer.org/policy-issue/special-report-how-will-the-covid-19-pandemic-impact-pre-k)
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.95945143699646,
"language": "en",
"url": "https://en.reset.org/blog/how-disclosing-your-environmental-data-might-make-world-better-place-07012019",
"token_count": 720,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.267578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5fe44fe6-1880-49bc-bee2-916ca78e24f6>"
}
|
Data has become the world’s most valuable resource, the “oil” of the digital era. And while as individuals we're increasingly forced to take steps to secure and control access to our own private information, data transparency can actually help large companies and governments to join the fight against climate change. Data - its collection and analysis - enables some of society's most powerful players (entities that have the power and clout, but not always the will to take action when it comes to issues surrounding the environment!) to gain a better understanding of the risks posed by climate change, and identify opportunities for more sustainable performance. As Jamison Ervin and David Jensen pointed out an article in Medium, “It's time to recognize environmental data as a global public good.”
Before anything can change, we need information about the current status quo. CDP, formerly the Carbon Disclosure Project, realized that fact a good while back, and for the last seventeen years, the organization has been committed to developing what they call a "global disclosure system". CDP's aim is to transform capital markets by elaborating environmental reports and measuring companies', cities', states' and regions' environmental impacts - enabling them to make better-informed decisions on climate action. “Only by measuring and understanding their environmental impact was it possible for investors, companies and cities to take action to start building a truly sustainable economy,” the CDP website states.
How does it work?
CDP asks companies, cities, states and regions for data on their environmental performance. This year, for example, over 7000 companies have responded to their climate change, water, forests and supply chain questionnaire and more than 620 cities have disclosed environmental information. Then the organization works with the data to provide detailed analyses on critical environmental risks, opportunities and impacts. This information is then used by businesses, investors and policy makers to make better decisions.
CDP divides the data into three categories: corporate data, which can be purchased; investor data, which is only available to investor members; and the cities, states and regions data, which can be accessed for free through their open data portal.
Why would companies want to disclose their environmental information to CDP? A couple of weeks ago, at the beginning of June, a CDP report revealed that over 80 percent of the companies that have taken part in the research will have to deal with key negative climate impacts such as extreme and volatile weather patterns, rising global temperatures, and increased pricing of greenhouse gas emissions.
"Our analysis shows that there are a multitude of risks posed by climate change, including impaired assets, market changes and physical damages from climate impact, as well as tangible impacts to business bottom lines,"explained in the report Nicolette Bartlett, director of climate change at CDP. So ultimately, if these kind of large and powerful entities can see the affect that climate change will have on their profits, they might be encouraged to use their power do something about it - taking steps to address carbon emissions, deforestation and water security in ways that are beneficial to everyone. And that disclosure also means investors and customers are provided with comprehensive information about the companies' environmental credentials. Data transparency, therefore, can have added benefits throughout different stratas of society - as well as encouraging and enabling powerful entities to do their bit to tackle climate change.
In the era of data, we might just need a "digital ecosystem for the environment" as some UN experts have called it. And it seems that environmental disclosure could be an integral part of it.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.94325190782547,
"language": "en",
"url": "https://pace-cme.org/2014/05/21/most-european-countries-will-continue-to-see-increasing-obesity-rates/",
"token_count": 604,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.1767578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:07ce8e0a-bc27-4d93-8eec-97eefa13d623>"
}
|
Most European countries will continue to see increasing obesity rates
Statistical modelling predicts that rates of obesity and overweight will increase by 2030 in almost all European countries, but to different extents.
The study, from investigators who included the WHO Regional Office for Europe, was presented at the EuroPRevent congress in Amsterdam. It was a statistical modelling study which incorporated all available data on body mass index (BMI) and obesity/overweight trends in all 53 countries of the WHO European region. Such modelling, said the authors, "enables obesity trends to be forecast forward providing estimates of the dynamic epidemiology of the disease". Definitions were based on the WHO's standard cut-offs - healthy weight (BMI ≤24.99 kg/m²), overweight and obesity combined (BMI ≥25 kg/m²) and obesity (≥30 kg/m²).
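These cut-offs translate directly into code. The small helper below is our own illustration of the classification used in the study, not the study's model:

```python
def bmi_category(bmi: float) -> str:
    """Classify a BMI value (kg/m²) using the WHO cut-offs cited in the study."""
    if bmi >= 30:
        return "obese"          # obesity: BMI >= 30 kg/m²
    if bmi >= 25:
        return "overweight"     # overweight-and-obesity band starts at BMI >= 25 kg/m²
    return "healthy weight"     # BMI <= 24.99 kg/m²

# Example: 82 kg at 1.75 m gives a BMI of about 26.8 -> "overweight"
print(bmi_category(82 / 1.75 ** 2))
```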
In almost all countries the proportion of overweight and obesity in males was projected to increase between 2010 and 2030, reaching 75% in the UK, 80% in the Czech Republic, Spain and Poland, and 90% in Ireland, the highest level calculated. The lowest projected levels of overweight and obesity were found in Belgium (44%) and the Netherlands (47%). Similar trends in overweight and obesity were projected in women, with Ireland again showing the greatest proportion (84%).
Similarly, the projected proportions of male obesity were found high in Ireland (58%), Greece (40%), Czech Republic (38%) and UK (35%). The lowest male obesity prevalence was projected in Romania (10%).
Moreover, the projections show little evidence of any plateau in adult obesity rates in Europe. Dr. Laura Webber from the UK Health Forum in London, who presented the study, concluded: "Our study presents a worrying picture of rising obesity across Europe. Policies to reverse this trend are urgently needed." It should be noted that, considering the poor data availability in many countries, the results of this study may even be underestimates.
As a possible explanation for the variations in projected obesity levels between countries, the investigators note the possible effect of "economic positioning" and "type of market". "The UK and Ireland, where obesity prevalence is among the highest, possess unregulated liberal market economies similar to the US, where the collective actions of big multinational food companies to maximise profit encourage over-consumption," they write. "The Netherlands, Germany, Belgium, Sweden, Denmark, Finland and Austria possess more regulated market economies." Obesity is, however, a multi-factorial disease.
Commenting on the public health implications of the study, Dr Webber said: "Given the complexity of obesity, the United Nations has called for a whole-of-society approach to preventing obesity and related diseases. Policies that reduce obesity are necessary to avoid premature mortality and prevent economic strain on already overburdened health systems. The WHO has put in place strategies that aim to guide countries towards reducing obesity through the promotion of physical activity and healthy diets."
BTC ( BITCOIN )
Bitcoin is a crypto and digital payment system invented by an unknown programmer, or group of programmers, under the name Satoshi Nakamoto and released in 2009 as open-source software. Transactions are verified by network nodes and recorded in a distributed public ledger called the blockchain. Because the system operates without a central entity or single administrator, bitcoin is called the first decentralised digital currency. Bitcoin can be exchanged for other currencies, products and services. As of February 2015, more than 100,000 merchants and suppliers accepted Bitcoin as payment. In July 2016, DIGYCODE top-up codes went on sale in French tobacconists, making it possible to buy Bitcoin over the counter. According to a 2017 University of Cambridge study, there are 2.9 to 5.8 million unique users of crypto wallets, the majority of them using Bitcoin. (A simplified sketch of the verification step follows below.)
Issuance: 21,000,000 BTC
Official website: https://bitcoin.org/
White Paper: https://bitcoin.org/bitcoin.pdf
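To make the verification step concrete, here is a simplified sketch of Bitcoin's proof-of-work check: nodes hash a candidate block header twice with SHA-256 and accept it only if the result falls below the network's difficulty target. The header bytes and target below are placeholders, not real network values:

import hashlib

def meets_proof_of_work(block_header: bytes, target: int) -> bool:
    """Double SHA-256 the header and compare it with the difficulty target."""
    digest = hashlib.sha256(hashlib.sha256(block_header).digest()).digest()
    # Bitcoin interprets the hash as a little-endian integer for this comparison
    return int.from_bytes(digest, "little") < target

# Illustrative only: a maximally permissive target, so any hash passes
print(meets_proof_of_work(b"dummy header", 2**256))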
ETH ( ETHEREUM )
Ethereum is a public, open-source, blockchain-based distributed computing platform built around smart contracts (scripts) that facilitate online contractual agreements. Ethereum also provides a token called "ether", which can be transferred between accounts. "Gas", an internal transaction pricing mechanism, is used to mitigate spam and allocate resources on the network. Ether is listed under the ticker ETH and traded on platforms such as ZEBITEX. (A sketch of how Gas prices a transaction follows below.)
Official website: https://www.ethereum.org/
White Paper: https://github.com/ethereum/wiki/wiki/White-Paper
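As a rough illustration of how Gas prices transactions: the fee is simply the gas consumed multiplied by the gas price the sender offers. A minimal sketch - 21,000 is the well-known gas cost of a plain ETH transfer, while the 20 gwei price is an assumed example value:

def tx_fee_in_eth(gas_used: int, gas_price_gwei: float) -> float:
    """Transaction fee = gas used x gas price; 1 gwei = 1e-9 ETH."""
    return gas_used * gas_price_gwei * 1e-9

# A simple ETH transfer consumes 21,000 gas; at an assumed 20 gwei:
print(tx_fee_in_eth(21_000, 20))  # 0.00042 ETH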
USDT ( TETHER )
USDT is a crypto originally issued on the Bitcoin blockchain via the Omni Layer protocol; it now also exists on the Ethereum network as an ERC20 token and on the Tron network. Each USDT unit is backed by one US dollar held in the reserves of the company Tether Limited. USDT and the other Tether currencies were created to facilitate the transfer of national currencies and to give users a stable alternative to Bitcoin. USDT also offers an alternative to conventional methods of proving solvency by introducing a proof-of-reserves process: the amount of USDT in circulation can be easily verified on the Bitcoin blockchain via the tools provided on Omnichest.info, while the corresponding total amount of reserves held in USD is demonstrated through publication of the bank balance and periodic independent audits.
Official website: https://tether.to/
LTC ( LITECOIN )
Litecoin is a peer-to-peer, open-source crypto project published under the MIT/X11 license. The creation and transfer of coins is based on an open-source cryptographic protocol and is not managed by any central authority. Although inspired by, and in most respects technically almost identical to, Bitcoin (BTC), Litecoin has some technical improvements over Bitcoin and most other major cryptos, such as its adoption of Segregated Witness (SegWit). Litecoin also has very low transaction costs and verifies transactions approximately four times faster than Bitcoin.
Issuance: 84,000,000 LTC
Official website: https://litecoin.org/
White Paper: https://github.com/litecoin-project
BAT ( BASIC ATTENTION TOKEN )
BAT is a token for decentralised advertising exchanges. It compensates users for their attention in the browser while protecting their privacy. BAT connects advertisers, publishers and users, pricing ad placements by relevant user attention while eliminating the social and economic costs of existing advertising networks, such as fraud, privacy breaches and malicious advertising. BAT is a payment system that rewards and protects the user while providing better conversion for advertisers and higher returns for publishers. Its creators see BAT and the associated technologies as a future part of web standards, solving the important problem of monetising publishers' content while protecting user privacy through the Brave browser.
Issuance: 1,500,000,000 BAT
Official website: https://basicattentiontoken.org/
BNT ( BANCOR )
The Bancor protocol is a blockchain-based price-discovery and liquidity mechanism supporting multiple smart-contract platforms. It allows reserve tokens to be locked and Smart Tokens to be issued on the Bancor system, so that anyone can instantly buy or liquidate a Smart Token in exchange for one of its reserve tokens. BNT is the first Smart Token on the Bancor system, holding a single reserve in Ether. Other Smart Tokens, by using BNT as one of their reserves, connect to the BNT network. This establishes a network dynamic in which increased demand for any of the network's Smart Tokens increases demand for the common BNT, benefiting all other Smart Tokens that hold it in reserve. (The pricing formula behind instant conversion is sketched below.)
Issuance: 68,080,614 BNT
Official website: https://www.bancor.network/discover
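The "instant buy or liquidate" property comes from a pricing formula rather than an order book. A minimal sketch of the purchase-return calculation described in the Bancor whitepaper - variable names are ours, and real contracts use fixed-point integer arithmetic rather than floats:

def bancor_purchase_return(supply: float, reserve_balance: float,
                           reserve_ratio: float, deposit: float) -> float:
    """Smart Tokens issued for a deposit of reserve tokens.

    reserve_ratio is a constant weight in (0, 1]; with a ratio of 1 the
    token effectively trades 1:1 against its reserve.
    """
    return supply * ((1 + deposit / reserve_balance) ** reserve_ratio - 1)

# Example: 1,000,000 tokens outstanding, 250,000 in reserve, 50% ratio
print(bancor_purchase_return(1_000_000, 250_000, 0.5, 1_000))  # ~1998 tokens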
CVC ( CIVIC )
Civic is a decentralised identity ecosystem that enables secure, on-demand access to identity verification at lower cost thanks to the blockchain. Through a digital identity platform, users create their own virtual identity and store it, along with their personally identifiable information, on their device. This information is put through a verification process conducted by identity validators on the platform and then anchored to the blockchain, where service providers can access it with the appropriate authorisation from the user. CVC is an Ethereum-based ERC20 token: service providers seeking information about a user pay in CVC, and the smart-contract system distributes the funds to both the validator and the identity owner (the user). (A hypothetical sketch of that split follows below.)
Issuance: 1,000,000,000 CVC
Official website: https://www.civic.com/
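The escrow flow can be pictured as a simple split of a CVC payment between the identity validator and the identity owner. A hypothetical sketch - the function name and the 50/50 share are our illustration, not protocol constants; the real distribution is governed by Civic's smart contracts:

def settle_cvc_payment(amount_cvc: float, validator_share: float = 0.5):
    """Split a service provider's CVC payment between validator and user."""
    to_validator = amount_cvc * validator_share   # assumed example share
    to_identity_owner = amount_cvc - to_validator
    return to_validator, to_identity_owner

print(settle_cvc_payment(10.0))  # (5.0, 5.0)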
DAI ( DAI )
The Dai Stablecoin is a collateral-backed crypto whose value is stable against the US dollar, much like USDT, and its creators believe that stable digital assets such as Dai are essential to realising the full potential of blockchain technology. Maker is the smart-contract platform on Ethereum that backs and stabilises the value of Dai through a dynamic system of Collateralized Debt Positions (CDPs), autonomous feedback mechanisms and incentivised external actors. Once generated, Dai can be used in the same way as any other crypto: it can be freely sent to others, used as payment for goods and services, or held as a store of stable value. (A minimal sketch of the CDP collateralization check follows below.)
Official website: https://makerdao.com
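The stabilising role of CDPs is easiest to see as a collateralization check: a position stays open only while the locked collateral comfortably exceeds the Dai drawn against it. A minimal sketch - the 150% liquidation ratio matches the parameter historically used for single-collateral Dai, but treat all figures as illustrative:

def cdp_is_safe(eth_locked: float, eth_price_usd: float,
                dai_drawn: float, liquidation_ratio: float = 1.5) -> bool:
    """True while collateral value / debt stays above the liquidation ratio."""
    collateral_value_usd = eth_locked * eth_price_usd
    return collateral_value_usd / dai_drawn >= liquidation_ratio

# 10 ETH at an assumed $200 backing 1,000 Dai gives a ratio of 2.0 -> safe
print(cdp_is_safe(10, 200, 1_000))  # True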
DASH ( DASH )
Dash is a peer-to-peer, open-source crypto with a strong focus on the payments industry. Dash offers a form of money that is anonymous, portable, inexpensive and fast, and that can be spent securely online and in person with minimal transaction fees. Based on the Bitcoin project, Dash aims to become the world's most user-friendly and scalable payment system. In addition to all of Bitcoin's functionality, Dash includes a second-layer network of masternodes that provides InstantSend, PrivateSend and governance functions, creating a self-managed, self-funded network able to pay individuals and businesses for work that adds value to Dash. This decentralised system of governance and budgeting makes it one of the first successful decentralised autonomous organisations.
Issuance: 18,900,000 DASH
Official website: https://www.dash.org/
DOGE ( DOGECOIN )
Dogecoin is a crypto focused on real usefulness as a currency, with a Shiba Inu logo that nods to the popular Japanese dog breed. With fast validation times and very low fees, Dogecoin is well suited to micro-transactions and as a payment option for online shops. It has been adopted as such by online retailers and can easily be used for consumer-to-consumer money transfers.
It is often used for donations and rewards.
Official website: https://dogecoin.com/
ELF ( AELF )
Aelf is a decentralised and scalable cloud-computing network. To establish a blockchain infrastructure for varied business requirements, Aelf provides a highly efficient multi-chain parallel processing system with cross-chain communication and self-evolving governance. It offers three innovations: scalable nodes running on computer clusters, resource isolation for smart contracts via a "one chain, one smart contract" design, and token-holder voting. ELF tokens are used to pay resource costs in the system, such as smart-contract deployment, operation and system upgrades (transaction fees, cross-chain data transfer costs). They also allow the community to vote on important decisions, such as the election of nodes and the introduction of new system features.
Issuance: 880,000,000 ELF
Official website: https://aelf.io/
White Paper: https://aelf.io/gridcn/aelf_whitepaper_FR.pdf?v=1
ETHOS ( ETHOS )
Ethos aims to build a human-centred crypto services company that demystifies blockchain technology and removes decades-old barriers to entry for consumers and businesses. Its mission is to make the crypto market accessible and reliable for the average user, to accelerate adoption and to democratise ownership of cryptos and traditional financial assets. By enabling the average participant to buy cryptocurrency and other financial assets easily and securely, and providing an environment in which to learn and exchange with others, Ethos works to make the new economy easy, secure and accessible to all, synthesising many of its needs into a single, user-friendly ecosystem.
Issuance: 222,295,208 ETHOS
Official website: https://www.ethos.io/
White Paper: https://www.ethos.io/Ethos_Whitepaper.pdf
GNT ( GOLEM )
Golem is an open-source, decentralised supercomputer accessible to anyone. It combines the power of users' machines, from single PCs to entire data centres, and can compute a wide variety of tasks, from CGI rendering through machine learning to scientific computing; its limits are defined only by the creativity of the developer community. Golem creates a decentralised economy of shared computing power and gives software developers a flexible, reliable and inexpensive source of compute. Users and applications (requesters) rent the machine cycles of other users (providers); any user, from a single PC owner to a large data centre, can share resources via Golem and be paid in GNT (Golem Network Tokens) by requesters. (A toy sketch of this matching market follows below.) Golem uses an Ethereum-based transaction system to settle payments between providers, requesters and software developers. All computations take place in a sandbox, completely isolated from the host system. Software developers sit at the centre of the Golem ecosystem: the Application Registry and Transaction Framework lets anyone deploy, distribute and monetise applications on the Golem network.
Issuance: 1,000,000,000 GNT
Official website: https://golem.network/
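The requester-provider market can be sketched as a simple matching problem: a requester needing a given amount of compute greedily takes the cheapest provider offers until the demand is covered. All names and numbers below are illustrative, not Golem's actual matching logic:

def match_providers(cpu_hours_needed: float, offers: list) -> list:
    """Greedily select the cheapest offers; each offer is (gnt_per_hour, hours)."""
    selected, remaining = [], cpu_hours_needed
    for rate, hours in sorted(offers):        # cheapest rate first
        if remaining <= 0:
            break
        take = min(hours, remaining)
        selected.append((rate, take))
        remaining -= take
    return selected

offers = [(0.5, 4), (0.2, 3), (0.8, 10)]      # (GNT/hour, hours available)
print(match_providers(6, offers))             # [(0.2, 3), (0.5, 3)]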
HOT ( HOLO )
Holo is a framework for building fully distributed peer-to-peer applications. Holochain is not a blockchain: where a blockchain takes a data-centric approach, Holochain takes an agent-centric one. In a blockchain, every node maintains the same shared ledger; with Holochain, each agent manages its own data. Holochain then uses a distributed hash table (DHT), as BitTorrent and IPFS do, to share content and ensure that entries remain available even when some nodes are offline. Distributed validation keeps the data correct and the network secure, and a proof-of-service algorithm rewards nodes for serving applications to others and returning correct data. Together these attributes make Holo scalable without practical limit. (A miniature content-addressed store is sketched below.)
Issuance: 177,619,433,541 HOT
Official website: https://holochain.org/
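The content-addressing idea behind the DHT can be shown in miniature: entries are stored and retrieved by the hash of their content, so any node holding a copy can serve it and any reader can verify what it received. A single-process stand-in - a real DHT spreads the key space across many nodes:

import hashlib

class TinyDHT:
    """Single-node stand-in for a content-addressed distributed hash table."""
    def __init__(self):
        self.entries = {}

    def put(self, content: bytes) -> str:
        key = hashlib.sha256(content).hexdigest()   # address = hash of content
        self.entries[key] = content
        return key

    def get(self, key: str):
        content = self.entries.get(key)
        # A reader can verify integrity by re-hashing what it received
        assert content is None or hashlib.sha256(content).hexdigest() == key
        return content

dht = TinyDHT()
k = dht.put(b"hello holochain")
print(dht.get(k))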
IOST ( INTERNET OF SERVICES )
IOST is building an ultra-high-TPS blockchain infrastructure to meet the security and scalability needs of a decentralised economy. Led by a team of proven founders and backed by world-class investors, its mission is to provide the underlying architecture for the future of online services.
Issuance: 21,000,000,000 IOST
Official website: https://iost.io/
White Paper: https://github.com/iost-official/Documents
LINK ( CHAINLINK )
Chainlink (LINK) is a decentralised network that provides information (via so-called oracles) to smart contracts. Founded in 2017 by Sergey Nazarov and Steve Ellis, Chainlink aims to solve the problem of supplying off-chain information to smart contracts for their execution parameters. Smart contracts are designed to execute automatically when certain parameters are met, but when those parameters exist off-chain, the contract must rely on information sources (oracles) to supply the necessary data. Off-chain oracles tend to be centralised, depending on a third party to provide critical information reliably and on time. Chainlink breaks this dependency by delivering information to smart contracts through a network of decentralised oracles that work together to verify and transmit critical data to those contracts. (The classic way such networks aggregate many answers into one is sketched below.) The Chainlink network enables users who operate a data feed or an API to provide information to smart contracts in exchange for the LINK token.
Issuance: 1,000,000,000 LINK
Official website: https://chain.link/
White Paper: https://link.smartcontract.com/whitepaper
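Decentralised oracle networks typically reduce many independent answers to a single value in a way that tolerates outliers and bad actors; taking the median is the classic aggregation. A minimal sketch of the idea - illustrative only, not Chainlink's actual on-chain aggregation code:

def aggregate_oracle_answers(answers: list) -> float:
    """Median of independently reported values; robust to a minority of liars."""
    ordered = sorted(answers)
    n = len(ordered)
    mid = n // 2
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

# One oracle reporting a wild value barely moves the result
print(aggregate_oracle_answers([100.1, 99.8, 100.3, 100.0, 9999.0]))  # 100.1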
LOOM ( LOOM NETWORK )
Loom Network is a Layer 2 scaling solution for Ethereum: a network of Delegated Proof of Stake ("DPoS") sidechains, enabling highly scalable games and user-facing DApps while resting on Ethereum's security. Main features: Loom SDK - the fundamental building block of Loom Network, an SDK that lets developers quickly build their own blockchain without having to understand blockchain infrastructure, a "build your own blockchain" generator. Shared sidechains - a network of high-speed, interconnected sidechains, such as GameChain and SocialChain, which use Ethereum as the base layer. PlasmaChain - a Layer 2 hub that connects multiple Ethereum sidechains, enabling faster and cheaper transactions and giving developers a more efficient chain on which to deploy their DApps. Delegated Proof of Stake - Loom supports DPoS out of the box, enabling DApps to offer gasless transactions and sub-second confirmation times. Online games and social applications are the first two types of DApps Loom focuses on, but developers can build any type of DApp using the Loom SDK. Plasma security on Ethereum - Loom inherits the security of the Ethereum main network on its Layer 2 sidechains, using Plasma Cash relays to transfer assets between chains.
Issuance: 1,000,000,000 LOOM
Official website: https://loomx.io/
LRC ( LOOPRING )
Loopring is an open protocol for scalable, decentralised exchanges. Loopring 3.0 is its newest, fastest and most ambitious protocol: it can settle up to 1,400 transactions per second while guaranteeing the same level of security as the underlying Ethereum chain. This is made possible by a construct called zkRollup and a feature called On-Chain Data Availability (OCDA). If OCDA is disabled, Loopring's throughput rises to as much as 10,500 transactions per second, but security is reduced to that of the consortium maintaining the data. The average settlement cost of each transaction with Loopring 3.0 is as low as US$0.002, which covers the gas for Ethereum transactions and the cost of proof generation on cloud-computing platforms. (The arithmetic behind that figure is sketched below.) A DEX can further reduce settlement costs by using cheaper cloud servers and GPU-based proving.
Issuance: 1,375,076,040 LRC
Official website: https://loopring.org/
White Paper: https://github.com/Loopring/whitepaper
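The quoted ~US$0.002 per trade follows from amortising fixed layer-1 costs over a zkRollup batch: one proof and one on-chain settlement cover many trades at once. A back-of-the-envelope sketch - all dollar figures are assumed for illustration:

def cost_per_trade(batch_size: int, l1_gas_cost_usd: float,
                   proof_cost_usd: float) -> float:
    """Fixed batch costs divided across every trade settled in the batch."""
    return (l1_gas_cost_usd + proof_cost_usd) / batch_size

# e.g. $1.50 of Ethereum gas plus $0.50 of GPU proving across 1,000 trades
print(cost_per_trade(1_000, 1.50, 0.50))  # 0.002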
BTU ( BTU PROTOCOL )
BTU is a token designed as a universal digital reward. It is distributed directly by companies to automate a reward following a purchase or a business referral. BTU cannot be seized, never expires, and is freely transferable or exchangeable 24 hours a day, 7 days a week, all while respecting privacy. BTU is used by different brands in different countries - BTU is universal. For example, BTU is offered by many e-merchants in the Verso mobile application (https://get-verso.com). For businesses, BTU offers software that accelerates the adoption of blockchain and crypto assets. BTU and its associated technologies provide a reward programme that is simple to join, universal, and brings real value to users.
Issuance: 100,000,000 BTU
Official website: https://btu-protocol.com
White Paper: https://www.btu-protocol.com/pdf/whitepaper.pdf
UCO ( UNIRIS )
Uniris disrupts the way people transact by removing credit cards, passwords and IDs: with a forgery-proof identification system based on the finger's encrypted vein network and an upgradeable blockchain, you can securely access any network (payments, IoT, digital IDs, etc.) with the tip of your finger.
Issuance: 10,000,000,000 UCO
Official website: https://uniris.io/
White Paper: https://uniris.io/UNIRIS-White-Paper.pdf
Giant investment cuts Finland's carbon dioxide emissions by 7% – SSAB's fast-track investment is a real benchmark from the technology industry
In 2016, SSAB began investigating whether the company could revolutionise the world's most used material, steel, and its manufacture by bringing CO2 emissions from production down to zero. Today, SSAB is gradually transforming its processes: steel is produced by hydrogen reduction, which emits water instead of carbon dioxide - no emissions.
"We are tightening up our original goal from the original 2035. We have promised our customers that we will have fossil-carbon-free steel for the European and North-American markets in 2026. We are rebuilding our factories and finalizing everything by 2040", promises Mr. Harri Leppänen, Director of Environment of SSAB.
Competitors are interested in this technology
Metal processing is one of the most important branches of the technology industry, but it is also one of the largest CO2 emitters in Finland, so SSAB's giant investment is significant for Finland's climate action.
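The 7% headline figure can be sanity-checked with round numbers. A hedged back-of-the-envelope sketch - the production volume, emission factor and national total below are our assumptions, not figures from the article:

def steel_co2_share(steel_mt_per_year: float, t_co2_per_t_steel: float,
                    national_total_mt_co2: float) -> float:
    """Share of national CO2 emissions attributable to steelmaking."""
    steel_emissions_mt = steel_mt_per_year * t_co2_per_t_steel
    return steel_emissions_mt / national_total_mt_co2

# Assumed: ~2.5 Mt steel/yr, ~1.8 t CO2 per t steel (typical blast-furnace
# route), ~60 Mt CO2 national total -> roughly 7-8%
print(round(steel_co2_share(2.5, 1.8, 60) * 100, 1))  # 7.5 (%)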
A special feature of the technology industry in the field of sustainable development is the so-called fingerprint: the products and services offered by technology companies enable significant emission reductions in their customers' operations as well.
SSAB's fingerprint is enormous, as its carbon-free steel can eventually be used in other industries and the company can sell its technology globally.
Research, innovation and electrification are key elements
In line with the Finnish Government's programme, the technology industry is committed to a carbon-neutral Finland by 2035. The industry uses a low-carbon roadmap to determine which technologies and measures are needed to achieve this goal.
The most effective ways of reducing direct emissions in the industry are new innovations, such as SSAB's hydrogen reduction and the electrification of machinery and equipment. For example, machine tools and cranes used in mines and industry can be converted to electric power.
In addition, energy efficiency, waste heat utilization, circular economy and digitalisation offer significant opportunities for technology companies and their customers to reduce emissions quickly. However, these efforts will require significant investment, access to affordable clean electricity and targeted investment in R&D.
Thursday, Jun 17, 2021
8:45am – 12:15pm
The Association of Certified Fraud Examiners reports that fraud lurks in all businesses, including not-for-profit organizations. It often goes undetected for years, and when it is uncovered, management and the board may question why the auditor did not identify it. The auditor's responsibility in a financial statement audit is to assess risk and perform sufficient procedures to obtain reasonable assurance that the financial statements are free from material misstatement due to fraud or error. However, failure to perform an adequate fraud risk assessment and to report deficiencies in internal control, such as a lack of segregation of duties, can leave a firm vulnerable. This course will discuss the audit procedures that should be performed in accordance with AU-C 240, as recently amended; best practices in performing fraud risk assessment procedures; when and how to report control deficiencies noted in an audit; and the most frequent types of fraud found in small to mid-size entities, along with internal controls that could be implemented to help prevent and detect them. This course features case studies.
CPAs in either public or private practice with accounting, financial reporting, or attest responsibilities
Understand the drivers of fraud risk in a financial statement audit
Conduct procedures required by professional literature to assess the risk of fraud
Develop discussion points to review with management and those charged with governance
Identify the main types of fraud that occur in small to mid-size companies and develop internal controls to be responsive to those risks
Evaluate fraud case examples and identify how fraud occurred and how it could have been prevented or detected
Fraud landscape in the United States
Fraud risk procedures as updated by recently issued standards
Most likely fraud types found in small to mid-size entities
Internal controls to prevent and detect fraud
What to do when fraud or suspected fraud is identified
Case studies based on recent frauds