{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9547054767608643,
"language": "en",
"url": "https://www.americanprogress.org/issues/poverty/reports/2014/10/07/98452/harnessing-the-eitc-and-other-tax-credits-to-promote-financial-stability-and-economic-mobility/",
"token_count": 669,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.056640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b087d5d6-9a65-4bf7-8e55-a689b123f66d>"
}
|
The Earned Income Tax Credit, or EITC, is one of the nation’s largest and most effective anti-poverty tools. It is a federal tax credit for low- and moderate-income workers that encourages work, boosts family income, and offsets federal payroll and income taxes. In 2012, it helped more than 6.5 million Americans—including 3.3 million children—avoid poverty. The Child Tax Credit, or CTC, protected about 3 million people—1.6 million of them children—from poverty in the same year. A growing body of research finds that these credits are effective tools for boosting economic mobility: Children whose families receive the EITC and other income supports have higher rates of high school completion and increased adult earnings. In addition to mitigating economic hardship, these tax credits serve as a powerful source of economic stimulus. For example, the EITC generates some $1.50 to $2.00 in economic activity for every $1 that goes to working families.
Both the EITC and CTC have enjoyed wide bipartisan support throughout their history. Presidents from both political parties have taken action to strengthen the EITC since its enactment in 1975, and more recently, Republicans and Democrats alike have joined in praising the program for its effectiveness as an anti-poverty tool. However, while the EITC effectively boosts economic security among families headed by low-wage workers, it is not a substitute for a living wage. Efforts to strengthen the EITC and CTC must go hand in hand with minimum-wage policies to ensure that no one who works full time has to live in poverty.
Congress should act on several existing proposals to strengthen the EITC and CTC, such as making permanent the improvements enacted as part of the American Recovery and Reinvestment Act of 2009, or ARRA; enhancing the EITC for workers without qualifying children and lowering the minimum age for EITC eligibility, as recommended in the new Generation Progress report “A Ladder Up”; and making the CTC fully refundable and tying its value to inflation. In addition, this report offers a set of new policy solutions that harnesses the EITC as a tool for financial empowerment and upward economic mobility. These recommendations include:
- Strengthening the EITC as an asset-building tool for families who wish to use their tax refunds to build savings
- Creating an early-access provision that allows workers to access a small portion of their EITC ahead of tax time so they do not have to rely on predatory lending products and can take advantage of mobility-enhancing opportunities
- Increasing access to higher education and training through categorical eligibility for the maximum Pell Grant for EITC recipients and reforms to strengthen the American Opportunity Tax Credit
Building on existing proposals to strengthen the EITC, these reforms would enhance the credit’s effectiveness as a tool for promoting economic mobility.
Rebecca Vallas is the Associate Director of the Poverty to Prosperity Program at the Center for American Progress. Melissa Boteach is the Vice President of the Poverty to Prosperity Program at the Center and the Vice President of the Half in Ten Education Fund. Rachel West is a Senior Policy Analyst with the Poverty to Prosperity Program at the Center.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9154578447341919,
"language": "en",
"url": "http://consideringthegrid.com/as-administration-transitions-president-obama-and-executive-agencies-recap-achievements",
"token_count": 641,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.28125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1310ee69-5ae7-4b7d-bedc-45000c52ac36>"
}
|
On January 9, Science Magazine published an online editorial by President Obama entitled “The Irreversible Momentum of Clean Energy.” In it, President Obama addresses the decoupling of economic growth from energy sector emissions, private sector initiatives to reduce greenhouse gas emissions, power sector markets in which gas prices are currently—and are projected to remain—cheaper than coal, and global momentum to reduce emissions. President Obama opines that “the trend toward a cleaner power sector can be sustained regardless of near-term federal policies.”
This editorial follows last week’s release of the President’s letter to the American People which discusses his Administration’s achievements and presents a series of exit memoranda written by his Cabinet members, including the heads of EPA, DOE, the Department of the Interior, the Department of Commerce, and the White House Office of Science and Technology Policy. In those memos, the agency heads present a summary of the agencies’ major actions over the last eight years, “their vision for the country’s future, and the work that remains in order to achieve that vision.”
EPA Administrator Gina McCarthy’s exit memo discusses, among other things, steps that the agency has taken to regulate carbon dioxide and other air emissions from power plants—from issuing the agency’s greenhouse gas (GHG) endangerment finding in 2009 to promulgating the Mercury and Air Toxics Standards for power plants and the Clean Power Plan.
Energy Secretary Ernest Moniz’s exit memo highlights DOE’s investments in clean energy research and development, in modernizing the electric power grid, and in wind, large-scale solar, advanced nuclear power, and carbon capture and storage technologies. He also discusses the growth in clean energy sector jobs. His vision for the future includes doubling investment in clean energy research and development, continuing to diversify America’s energy supplies, and investing in modernizing America’s energy infrastructure.
Exit memos from the Department of the Interior, the Department of Commerce, and the White House Office of Science and Technology Policy also address the outgoing administration’s energy-related initiatives. Department of the Interior Secretary Sally Jewell’s exit memo speaks to the Department’s approval of 60 commercial-scale renewable energy projects on public lands and development of an offshore wind leasing and permitting program. Department of Commerce Secretary Penny Pritzker’s exit memo mentions Commerce’s community-focused Climate Resilience Toolkit, and discusses her belief that “[p]olicymakers can accelerate job growth by providing financial incentives for states to exceed clean energy goals, and by facilitating the creation of new projects, infrastructure, and industries to match regional energy needs and existing industrial ecosystems.” And Office of Science and Technology Policy Director John Holdren’s exit memo mentions the Office’s efforts to advance climate science and information, its $90 billion American Recovery and Reinvestment Act investment in low carbon energy, and its efforts to address climate change’s impacts on national security.
The 28 Cabinet exit memos can be found here.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9335100650787354,
"language": "en",
"url": "https://blog.ipleaders.in/difference-between-brand-name-and-company-name-and-its-respective-implications-on-the-protection-of-ipr/",
"token_count": 4209,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.024169921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:37d4cf43-cf8a-4966-a3e6-016eb41f6727>"
}
|
This article is written by Janhavi Dudam who is pursuing a Diploma in Intellectual Property, Media & Entertainment Laws from LawSikho.
Customers sometimes struggle to understand that a company name and a brand name are two different things with different meanings. As a prospective or existing business owner, in order to get your legal matters right, you have to know the major distinctions between the two. Let’s take a more detailed look, shall we?
What is a Brand name?
A brand name identifies a company’s products or services and is used for advertising and sales purposes. It requires no legal suffix. Take the example of Armani, one of the most expensive apparel brands, with lines such as Emporio Armani and Armani Exchange. Under this brand, they even produce perfumes, leather bags and belts, glasses, footwear, and other goods. Another example is the Dell company, which offers technology solutions and sells products such as laptops and desktops under its brand. While reading this, one question may come to your mind: how is a brand name different from a company name? Let’s dive into the discussion.
What is a Company name?
The name given to a company incorporated under the Companies Act is referred to as the trade name or name of the company. Simply put, it is the official name under which a person or a group of people carries out a business operation for profit. It must carry a legal suffix such as Private Limited, LLC, or Corp., depending on the type of business arrangement it actually operates under. For example, Dell Inc., Giorgio Armani, and Emporio Armani.
Difference between a Brand name and Company name
That’s really simple. A company name differentiates one company from other companies. A brand name distinguishes one company’s products from another company’s products. In the case of popular brands such as Sony, Nike, or Shell, the brand name and the company name can overlap when the company finds that one name suits the identity of all its products.
Checklist – When naming your Company
The naming process is not as easy as thinking of a name and attaching it to your company. Before applying for registration, you need to perform a company name check to make sure that your name is legally available. A company is like an artificial person, in whose name statutory filings such as income tax returns, annual filings, and other legal proceedings are made. Therefore, you will have to register it with the Registrar of Companies under the provisions of the Companies Act, 2013.
Rule 8 of the Companies (Incorporation) Rules, 2014 as amended by the Companies (Incorporation) Fifth Amendment Rules, 2019 framed by the Central Government under the Companies Act, 2013 lays down the naming rules for companies. It must be both distinctive and acceptable, as per the Act, for a name to be authorized for the incorporation of a new company.
Let’s look at all those factors and other things in-depth to keep in mind when choosing a name for your new company.
The proposed name should not be similar to another existing name
A proposed company name should not be the same as the name of an established company. The following tests are used to determine if a name is similar to that of an existing company. Let’s discuss a few of them-
Tests to determine the similar name
A plural form of any of the words that appear in an existing company’s name does not form a unique name.
Existing Company name- “Cena Private Limited Company”
Proposed name- “Cenas Private Limited Company”
Changes to type, letter case, spacing, or punctuation marks do not form a unique name.
Existing Company name- “Cena Private Limited Company”
Proposed name- “CENA Private Limited Company” (letter case) or
“CenaPvt. Ltd Company” (Spacing) or
“Cena-Private Limited Company” (Punctuation marks)
Use of the definite or indefinite article in one or both names does not form a unique name.
Existing Company name- “Cena Private Limited Company”
Proposed name- “A Cena Private Limited Company” (indefinite)
“The Cena Private Limited Company” (definite article)
If deliberately misspelled words are used in a proposed company name, it will be checked against the correctly spelled words.
Existing Company name- “Cena Private Limited Company”
Proposed name – “Cenaa Private Limited Company” (Purposely misspelled)
The use of different phonetic spellings or spelling variations does not form a unique name.
Existing Company name- “G. P. Industries Limited”
Proposed name – “G and P Industries Limited”
Or “Gee Pee Industries Limited” or “G n P Industries Limited” (Phonetic spellings or spelling variations)
Addition of an internet-related designation such as .com, .net, .edu, .gov, .org, or .in does not make a name unique.
Existing Company name – “G. P. Fashion Limited”
Proposed name – “G.P.Fashion.com Limited” or “G.P. Fashion Dot Com Limited”
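Taken together, the tests above amount to normalizing both names and comparing the results. Here is a toy sketch of that idea; the helper names and the crude folding rules are my own, and the Registrar's actual scrutiny is broader (the phonetic-spelling test, for one, is omitted):

```python
import re

# Tokens ignored when comparing names: legal suffixes, articles,
# and internet-related designations (.com, "dot com", etc.).
LEGAL_SUFFIXES = {"private", "limited", "company", "pvt", "ltd", "llp", "inc"}
ARTICLES = {"a", "an", "the"}
INTERNET_TOKENS = {"com", "net", "edu", "gov", "org", "in", "dot"}

def normalize(name):
    # Lowercasing and splitting on punctuation/whitespace covers the
    # letter case, spacing, and punctuation tests.
    tokens = re.split(r"[^a-z0-9]+", name.lower())
    kept = []
    for tok in tokens:
        if not tok or tok in LEGAL_SUFFIXES | ARTICLES | INTERNET_TOKENS:
            continue
        tok = tok.rstrip("s")                 # crude plural folding
        tok = re.sub(r"(.)\1+", r"\1", tok)   # "Cenaa" -> "Cena" (misspelling)
        kept.append(tok)
    return " ".join(kept)

def too_similar(proposed, existing):
    return normalize(proposed) == normalize(existing)

existing = "Cena Private Limited Company"
for proposed in ["Cenas Private Limited Company",       # plural
                 "CENA Private Limited Company",        # letter case
                 "The Cena Private Limited Company",    # definite article
                 "Cenaa Private Limited Company",       # misspelling
                 "Cena.com Private Limited Company"]:   # internet designation
    print(proposed, "->", too_similar(proposed, existing))  # all True
```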
To make a company’s online presence felt, a matching domain name is always useful. So, make sure that no other company shares a domain name close to the name of your company.
The proposed name should not be undesirable
A name is considered undesirable if it breaches one of the following requirements-
- Violates Emblems and Name Act – The proposed name should not contravene section 3 of the Emblems and Names (Prevention and Improper Use) Act 1950.
- Trademark violation: The proposed name should not conflict with a registered trademark, or with a trademark that is the subject of a pending registration application, unless the consent of the owner or applicant has been obtained.
- Includes derogatory words: The proposed name should contain no word or phrase that is offensive to any segment of society.
Checklist – When naming your Brand
Logos, which are visual representations of brands, are typically associated with brand names. They help consumers recognize and distinguish the parent company’s products from others. To make your brand unforgettable, you will need a well-designed logo.
By acquiring a trademark or service mark from an approved entity, which is generally a government registry, a brand name is secured from unauthorized use by others. In order to have it trademarked, you need to ensure that the brand name is not already registered.
The very first step in avoiding trademark conflicts is a systematic and thorough public search for trademarks (or brands, trade names, etc.) so that trademark-related disputes can be effectively avoided in the future. A simple Google search is also an extremely useful first pass at a public search.
Guide to conduct Online trademark search
An Online trademark search can be conducted on the Indian Trademark Registry database of the government https://ipindiaservices.gov.in/tmrpublicsearch/frmmain.aspx
- At the top of the list, click Wordmark as the search type.
- Enter the wordmark for which you would like to access the trademark database.
- You may compare the trademark database against the search query under three conditions: “Start with,” “Contains,” and “Match with” (illustrated in the sketch after this list).
- ‘Start with’ – This search shows the prefix part of the mark from the online records.
- ‘Contains’ – This search displays all records that, irrespective of their position, may have in them the mark for which the search is being carried out.
- ‘Match’- This search only shows similar matches.
- Enter the class that applies to your trademark. Trademarks are classified into 45 classes: classes 1 to 34 cover goods and classes 35 to 45 cover services, as per the 11th edition of the Nice Classification.
- Click search to begin the trademark search.
- The website will then generate a search report of all potentially conflicting marks.
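To make the three conditions concrete, here is a toy sketch of how they filter a list of wordmarks. The records are invented; the real portal queries the official registry database, and “Match with” is treated here as an exact comparison for simplicity.

```python
# Toy illustration of the "Start with" / "Contains" / "Match with" search
# conditions applied to a small, invented list of registry records.

RECORDS = [
    {"wordmark": "CENA",         "tm_class": 25},
    {"wordmark": "CENA FASHION", "tm_class": 25},
    {"wordmark": "LUCENA",       "tm_class": 25},
    {"wordmark": "CENA FOODS",   "tm_class": 30},
]

def search(query, condition, tm_class):
    q = query.upper()
    hits = []
    for rec in RECORDS:
        if rec["tm_class"] != tm_class:  # classes 1-34: goods; 35-45: services
            continue
        wm = rec["wordmark"]
        if condition == "start with" and wm.startswith(q):
            hits.append(wm)              # prefix match
        elif condition == "contains" and q in wm:
            hits.append(wm)              # match at any position
        elif condition == "match with" and wm == q:
            hits.append(wm)              # exact match (a simplification)
    return hits

print(search("CENA", "start with", 25))  # ['CENA', 'CENA FASHION']
print(search("CENA", "contains", 25))    # ['CENA', 'CENA FASHION', 'LUCENA']
print(search("CENA", "match with", 25))  # ['CENA']
```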
After doing deep research, you may have arrived at your desired name of the brand. To protect it from unlawful use and claim it as yours, you will have to register it as a trademark. You can file a registration application to the Controller General of Patents and Designs and Trademarks registry under the Trademarks Act of 1999. Once you obtain a trademark for your brand name, you can sue anyone who tries to use the same name in an unlawful manner.
It is always helpful to take legal advice from a trademark lawyer or an IP practice firm who can guide you through the process, as there are several legal implications involved. In many cases, the brand name may become more famous than the company that owns it. Both company naming and brand naming are therefore extremely important.
Congratulations! You have just succeeded in establishing an exciting and fast-growing fashion brand. But to grow the business and avoid legal disputes across all disciplines, including intellectual property, we need to look at the legal requirements involved in IPR to protect the brand.
Legal requirements involved in IPR
Appoint outside IP firm
The key starting point is to find an outside firm with a specialized intellectual property practice. This firm should understand the competitive IP landscape in which the company operates and have precise knowledge of the company’s IP needs and concerns. Look for firms experienced in the industry or sector associated with your business; the more experienced the firm is with your specific matter, the better.
Copyright and the Internet
The first step when using works copyrighted by others is to create a list of the copyrighted works the company needs. Meet with the company’s marketing team to decide which copyrighted works you need in your day-to-day business so that the appropriate permissions can be obtained. Then create a standard process for receiving permission (i.e. clearance) and properly attributing authorship. Start educating internal staff on when and why copyright notices are required.
Brand owners should be cautious of over-reliance on ‘fair use’ because the exception is narrow; just because a photo, article, or other piece of content is available on the Internet does not mean that it is free for anyone to use.
Brand owners will need to take action to protect their own copyrighted content from misuse. To avoid misuse, they should enter into licensing agreements for the company’s own copyrighted works and educate the team of the company about why they should pursue copyright protection for works they make.
If the company hires third parties to create copyrighted works for them, ensure that they are hired on a “work-for-hire” basis and have in place assignment agreements prepared or reviewed by the lawyer.
Protecting the Company’s brand
Creating a brand name and symbol that is immediately recognizable does not happen overnight. Trademarks acquire strength and value through use, which takes time and resources. Trademarks can be protected at the state or national level in any jurisdiction, and internationally through a single application under the Madrid Protocol. Brand owners need to pay attention to territorial limits and ensure that they use their trademark to get as much protection as possible in all areas where the company is doing business.
To prepare an application to register a trademark, brand owners should appoint trademark counsel with expertise in their industry, so that counsel can provide strategic guidance on which marks to file for and how to prioritize them, including which classes of goods or services to file in. In addition, counsel can offer practical guidance on creating an effective trademark plan to track the use of the company’s trademarks, or similar marks, by others.
Finally, brand owners should register a domain name as soon as a name is decided on, even before applying for a trademark, to ensure that it is available online.
Protecting fashion with design patents
In the fashion industry, looks and appearance matter. Design patents are all about protecting unique ornamental appearances, so before introducing new lines, it makes sense for the fashion industry to consider this type of IP. Please take note that US design patents have specific requirements that might vary from other countries’ design laws. Let’s discuss some key insights about US design patent laws.
Products with a shorter shelf life (around 3-6 years) benefit from the comparatively early grant of a design patent. A design patent can be obtained within an average of 21 months, unlike utility patent applications, which can take many years and undergo several rejections. At present, US design patent applications enjoy an estimated 85% success rate. If an Office Action is issued in a design application, the objections typically concern indefinite drawings, which can be corrected with replacement drawings.
The term of a design patent is 15 years from the date of grant, which should be long enough to cover a wide range of trendy consumer goods. Design patents cover goods such as handbags, shoes and footwear, jewelry, hair accessories, and home decorations/decorative accessories.
Protecting trade secrets
A trade secret is any information, which may include a formula, a recipe, a program, or customer lists, among many other things, that derives economic value from its confidentiality: the fact that others do not know the information makes it more valuable, provided one has made reasonable efforts to keep it secret.
Acquiring trade secret protection is quite different from obtaining a patent or a trademark. Most companies do not apply or register for trade secret protection; they obtain it by taking affirmative measures to keep the information completely confidential. Employee training and education are essential for acquiring and keeping trade secret protection.
Within a company, trade secret information should not be accessible to all, but should be limited on a ‘need to know’ basis. Any disclosure of information considered a trade secret to a third party destroys its confidentiality. To be safe, non-disclosure agreements with employees and third parties can be signed, and confidentiality clauses can be included in agreements, to avoid disclosure of the information; but that alone does not guarantee trade secret security. Therefore, a company should work carefully, internally and externally as necessary, to build and maintain fair and appropriate protections for its confidential information.
Recent landmark cases that have influenced the US fashion industry
Star Athletic, LLC v Varsity Brands, Inc. 137 S. Ct. 1002 (2017)
This was a landmark case that went before the Supreme Court in 2017. The case focused on the protection of cheerleading uniforms. In particular, it examined whether certain creative elements in the design of a cheerleader’s uniform, such as chevron stripes, could be protected under US copyright law. In other words, could these elements be physically or conceptually separated without taking away the purpose of the design, that is, to be a cheerleading uniform?
The Supreme Court explained the separability standard in its opinion, stating that in general terms, some creative elements of fabric, whether two-dimensional or three-dimensional, may be protected by copyright law. However, it did not discuss the protectability of the particular uniforms in question or the level of creativity inherent in them.
To decide if the cheerleading elements were sufficiently original to obtain protection, the case has to go back to the lower court. Although the practical effect of the decision on the US fashion industry is not yet clear, it does give designers some hope of being able to use copyright law to protect at least some creative aspects of their fabrics.
Today, social media platforms are used by so many people and so many brands to post content over which they do not actually own the rights. This is giving rise to a large number of cases of copyright infringement. Besides, when someone hijacks a trademark and registers it as a domain name in bad faith, it is obvious that cybersquatting and trademark squatting are not going away anytime soon.
One recently resulted in New Balance winning $1.5 m in damages against Fujian-based New Barlun, an athletic clothing and shoe manufacturer known for using a slanted N logo on their products, after a series of New Balance battles against a few native Chinese companies.
Two decades ago, Chinese companies started selling sneakers that were very similar to New Balance trainers, which were already famous. With the letter “N” on the sides, their silhouettes resembled famous “dad” shoes. Moreover, in addition to the product’s resemblance, the names of the companies sounded the same. The names were also similar when written down using Chinese characters: “New Boom”, “New Barlun”, “New Bunren”.
On April 16, 2020, the Shanghai Pudong New District People’s Court ruled for New Balance Trading (China) Co., Ltd. (a Chinese subsidiary of the U.S. Massachusetts-based New Balance Athletics, Inc.) against Niu Ba Lun (China) Co., Ltd. (referred to as New Barlun), awarding approximately 1.3 million USD to New Balance for unfair competition.
New Balance argued that the continued use of the New Barlun’s logo was unfair competition, which resulted in the loss of prestige and goodwill of New Balance.
New Barlun argued that its logo is protected under a trademark, and that it was therefore free to use it. (Interestingly, the current status of that trademark in the Chinese registry is marked as ‘invalidation announcement pending’.)
The Shanghai Court held that, through long-term and widespread use, the relevant public has come to clearly identify products decorated with ‘N’ letters on both sides of New Balance sneakers, and the logo has therefore become a source identifier.
Moreover, the Court decided that the concerned parties should follow the principle of good faith in dealing with disputes based on trademarks, decorations, and other signs, not only to protect prior rights and interests but also to avoid market confusion. The infringing logo is identical to the logo of New Balance, which causes confusion. Although New Barlun holds rights in a trademark, it still violates the principle of good faith due to its violation of previous rights and interests.
While it seems clear that both companies are competitors in the same industry, it must be emphasized that the similarity of a logo put in the same position on similar products creates confusion in the market. This causes the source of the products to be misidentified, which violates the principles of good faith and accepted market ethics, and thus constitutes unfair competition.
As a result, the Court issued an injunction to stop further unfair competition by ordering New Barlun to cease using the infringing “N” logo, and awarded New Balance $1.5 million in damages.
Conclusion- The two recent judgments above should give you some clarity about the kinds of IP disputes that can arise. Case law alone cannot be used to plan an IP strategy; however, to minimize the chances of IP disputes, it is best to discuss all options with appointed counsel. This means doing due diligence to ensure that the company does not breach someone else’s rights. In-house attorneys must also educate their staff on the costs of litigation, monetary and non-monetary.
As you go ahead and your brand becomes famous, if a competitor or infringer uses your IP, immediately initiating an IP infringement lawsuit may not be a wise decision. That is not to say that litigation is never the correct option, but it should take place only after deep consideration of the pros and cons of such action. Sometimes a licensing agreement, a cease-and-desist notice, a joint development agreement, open-source software licensing, or key domain name registration can be a cost-effective alternative to filing a lawsuit.
Students of LawSikho courses regularly produce writing assignments and work on practical exercises as part of their coursework, developing real-life practical skills.
LawSikho has created a telegram group for exchanging legal knowledge, referrals and various opportunities. You can click on this link and join:
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.970170259475708,
"language": "en",
"url": "https://danielgolliher.com/personal-finance/",
"token_count": 673,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1533203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3c3df172-cfae-4832-83a2-ef2b1ccb588d>"
}
|
Consider this page to be perpetually incomplete. Perhaps there is just an outline here waiting to be fleshed out. There’s always more to add, but it was last updated on: [8/7/2020]
For my purposes, personal finance means knowing what you spend and what you earn. The point of this knowledge is to arrange your spending and saving to live as freely and happily as you can manage.
- Financial illiteracy abounds.
- A budget is an accounting of income and expenditure. It is not just a list of expenses.
- The point of money is to let you live as you please. Budgets and financial literacy allow you to understand money.
- New York City isn’t expensive the way most people say it is. Bad money and lifestyle habits just make it more expensive than it needs to be.
- Expense is relative to personal preference. Generally speaking, there is no such thing as an absolutely “cheap city” or an “expensive” city. If you prize living space above all else, NYC will be expensive. If you prize social spheres, public spaces, and certain job markets, it is not.
- The price of debt is not merely a monthly payment, but the opportunity cost of that payment (see the sketch after this list).
- One’s absolute amount of money isn’t the thing. One’s approach to money in general is the thing.
- Linda Tirado’s book Hand to Mouth: Living in Bootstrap America is a great examination of what it’s like to be very poor in America, and, indeed, what being “not poor” truly means.
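To make the opportunity-cost point about debt concrete, here is a minimal sketch; the $300 payment, 5% return, and 4-year horizon are invented for illustration.

```python
def future_value_of_payments(monthly_payment, annual_return, years):
    """Future value of investing `monthly_payment` every month instead of
    sending it to a lender (ordinary annuity, monthly compounding)."""
    r = annual_return / 12          # monthly rate
    n = years * 12                  # number of monthly payments
    return monthly_payment * ((1 + r) ** n - 1) / r

# Hypothetical numbers: a $300/month debt payment over 4 years, versus
# investing that same $300/month at a 5% annual return.
paid_to_lender = 300 * 12 * 4                           # $14,400 out of pocket
could_have_had = future_value_of_payments(300, 0.05, 4)
print(f"cash sent to the lender: ${paid_to_lender:,.0f}")
print(f"value if invested:       ${could_have_had:,.0f}")  # about $15,900
```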
I was once on a date with a man; it was generally nice. We went for a run and then took a long walk to a coffee shop.
During our walk he told me that he had tens of thousands of dollars in student loans. It was like $40,000. Absent other context, this isn’t that big of a deal, even though it’s not great. I graduated with $20,000 in debt from college. Not ideal, but I paid it off in about four years; and those four years included some pretty lean times.
But then he told me that he lives alone in a Manhattan apartment.
And then he told me about a new phone he was going to buy, despite the perfectly fine phone he’d been texting me with.
And then, when we got to the coffee shop, he offered to buy my drink along with his. They were like $6 each.
There is no way that his job paid him enough to sustain the lifestyle he’d adopted. No way. He was living above his means, and sacrificing his independence and optionality in order to do so. And to what end? To appear successful enough? To live without roommates (even though living with other people is a good, regular thing to do)?
Final notes for now
Your economic station doesn’t dictate your moral standing. Poor doesn’t mean bad, rich doesn’t mean good.
In general, it seems like financial illiteracy is everywhere. Those with varying degrees of money are just shielded from their mistakes.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9472405314445496,
"language": "en",
"url": "https://peekerfinance.com/the-envelope-budgeting-system/",
"token_count": 1247,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.10400390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f6b6eebd-e5e2-4907-8a89-481e0b9198c6>"
}
|
The Envelope Budgeting System
How the budget envelope works
1- Identify your discretionary income
Before starting the envelope system, find out how much money is available to you after your bills are paid and cash for savings and investments is set aside.
2- Make a budget decision
Once you know how much money you have left, decide how to split it among various budget categories, based on how your monthly spending breaks down.
Some popular categories include eating out, food, household goods, clothes, donations, gas, and supplies.
Review your bank statements to get a sense of how much you spend in these areas.
3- Build and add cash to envelopes
Get an envelope for each category and write the category’s name on the front.
Then, after every paycheck, put the estimated amount of cash into each envelope.
4- Just spend cash
When the money in an envelope runs out, you have met your budget for that pay period and cannot spend more in that category until the next pay period.
5- Pay Off Debt or Save the Extra Money
If you have debt, use the cash left over in your envelopes to pay it down.
If you have no debt, put your extra money into a savings account such as UFB Direct.
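The steps above boil down to simple bookkeeping. Here is a minimal sketch in Python (the category names and dollar amounts are invented for illustration): fund each envelope on payday, spend only from an envelope, and refuse a purchase once the envelope is empty.

```python
# Minimal model of the envelope system: fund the envelopes each payday,
# spend only from an envelope, refuse a purchase once the envelope is empty.

class Envelopes:
    def __init__(self, budget):
        self.budget = budget                      # per-paycheck allocation
        self.cash = {cat: 0.0 for cat in budget}  # what each envelope holds

    def payday(self):
        # Step 3: put the budgeted amount of cash into each envelope.
        for cat, amount in self.budget.items():
            self.cash[cat] += amount

    def spend(self, cat, amount):
        # Step 4: cash only. An empty envelope means no more spending.
        if self.cash[cat] < amount:
            print(f"{cat}: out of cash until the next pay period")
            return False
        self.cash[cat] -= amount
        return True

    def leftover(self):
        # Step 5: sweep whatever remains toward debt or savings.
        return sum(self.cash.values())

env = Envelopes({"groceries": 400, "gas": 120, "clothing": 60})
env.payday()
env.spend("groceries", 90.0)   # fine: $310 left in that envelope
env.spend("clothing", 75.0)    # refused: the envelope holds only $60
print(f"left to sweep into debt or savings: ${env.leftover():.2f}")  # $490.00
```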
Benefits of the Envelope Budgeting System
– The envelope budgeting system works simply
If you pay for everything in cash and you run out of money, you cannot overspend. The envelope system has been around for a long time for good reason.
– It helps you build discipline
Everybody needs discipline to improve, whether in our spending habits, our work productivity, or our eating habits.
The more discipline you exercise, the easier it is to take accountability in the other areas of life you would like to improve.
– It doubles as an emergency fund
Rather than carry cash, many people carry plastic, which can be a problem if there is an emergency.
Although the money in the envelope system is not meant for emergencies, you can always use it for one.
For example, you may need to pay for a tow or a ride home.
– You have a tangible budget
The idea of money is much more palpable when you use cash rather than plastic.
You can easily overspend with credit cards.
When you use cash, you stay closer to your budget,
because you are reminded of it every time you reach into an envelope to spend.
– No overdraft charges
Have you ever been charged an overdraft fee by your bank?
If you ditch your debit card and use cash, you will overdraw far less and avoid that nonsensical fee.
– Fewer wasteful expenses
When I used an Excel table to create my budget, I would look back at the end of the month and be amazed at the cash wasted.
When using the envelope budgeting system, however, you think more carefully about every purchase.
When you see the cash gone, you are less likely to spend wastefully.
Indeed, individuals spend between 10% and 15% less when paying with cash.
– You won’t miss a payment
With the envelope budgeting system, you pay up front and don’t have to keep track of anything. If you use an electronic program to make your budget, on the other hand, purchases are easy to miss; hundreds of dollars can fall off an electronically kept budget every month because of regularly lost receipts.
Drawbacks of the Envelope Budgeting System
It is difficult to bring the entire family on board
Some people resist using cash.
They like the ease and simplicity of plastic.
Who could blame them?
However, for the envelope budgeting system to succeed, the entire family must commit to it fully; otherwise it will not work.
You have to go to the bank or ATM for cash
Many people really try to avoid going to the bank or ATM.
It is just one more errand in their busy lives.
But if you use the envelope budgeting system, you will clearly have to get cash from somewhere to fill your envelopes.
It can be confusing at first
For instance, suppose you shop at Target and buy $30 worth of clothing, $30 worth of food, and $30 worth of home decor.
Which envelope does the money come from? You will likely have to draw from three distinct envelopes. There is a learning curve as you learn to stick to your categories.
You forgo credit card rewards
Before I used the envelope budget system, I would earn up to $250 a year with the best cash-back credit cards.
With the envelope budget, however, you forgo credit card rewards because you use your card far less, or not at all.
Tips for the Envelope Budgeting System
1- Choose the categories you have the most difficulty with
As an example, suppose you and your spouse used to keep a clothing envelope,
but after approximately six months of never using any money from that envelope,
you decided that budgeting cash for that category was wasteful.
Instead, you put more cash toward household goods, and any new clothes come out of that category.
In short, you don’t need an envelope for a category in which you are not overspending.
2- Use a small accordion folder instead of envelopes
Accordion folders are recommended because instead of having to keep track of seven distinct envelopes, you have only one, and the folder is envelope-sized.
In short, it is ideal for holding cash, and it lasts longer than envelopes.
Source: Envelope system (Wikipedia)
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9384113550186157,
"language": "en",
"url": "https://searchinform.com/industries/business-services/trade-secret-protection/disclosure-of-trade-secrets/",
"token_count": 2231,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.45703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c70e600c-8e8a-4f9b-a4ce-7c144e58fd1c>"
}
|
Disclosure of trade secrets
Almost every enterprise has a trade secret. This data requires protection and access control. Let's figure out what a commercial secret is, how to protect it, and what responsibility a person bears for its disclosure.
A trade secret is a list of information that is confidential within an enterprise. Possession of this information allows the firm to increase its income and differentiate itself from similar companies in the market, which poses a potential danger of a competitor stealing classified information.
The need for trade secrets is due not only to an increase in income, but also to the following factors:
- avoidance of losses;
- increasing market share;
- possession of a unique production technology;
- any other benefit.
What can be classified as a trade secret?
The company independently determines the type of information that is a trade secret. An exception will be the list of data specified by law.
Trade secret information is any information of a technical, production, organizational, economic, or intellectual nature that plays an important role in creating the company's final product. If disclosure of the information to unauthorized persons could lead to losses for the company, it should be kept under the heading "commercial secret".
Purchasing data, investment allocation, new production technologies, contacts of business partners, innovative research - all this can be attributed to information that has commercial value.
Despite the full right of the company to independently determine the commercial secret statements, according to the law, there is a list of data that cannot be classified as secret:
- Information about licenses, grants, tenders.
- Data on the state of the external environment, food products, medicine and other factors that have a direct impact on citizens.
- The number of workers and information on wages. This also includes data on the timing of payment of wages, the company's debts to employees.
- Information about violation of the current legislation.
- Documents confirming entrepreneurial activity. The names of founders and members of the board of directors cannot be hidden.
- Inquiries on entering information into open registers. These documents cannot be hidden and must be submitted to the tax office as proof of income.
- Conditions of auctions, tenders, competitions, which are created by the enterprise, cannot be hidden.
- The size of the company's income and turnover.
- List of employees who can act on behalf of the CEO without the need to present a power of attorney.
Methods of disclosing commercial secrets
Disclosure of commercial secrets can only take place with the consent of the owner of this data. If the information is part of the enterprise, the head of the board of directors is considered to be the owner.
Selling production secrets to competing companies. As a rule, it is employees who are authorized to work with trade secrets who sell the information.
Lack of protection system. If the company does not protect information that is a trade secret in any way, any employee will be able to access the information and give it out to outsiders. Leaving important documents unattended on your desktop can result in leakage. With the advancement of modern technology, an attacker will be able to photograph documentation and send it to competitors in a matter of seconds.
Incompetent employees. Such people are often unaware of, or do not fully understand, the implications of divulging secrets. A talkative person can reveal a trade secret to relatives and friends. Even if the disclosure does not threaten the company, it is still considered illegal and is grounds for disciplinary action against the employee.
Data theft. Often, employees purposefully want to get a position in the company in order to have access to classified materials. Excessive employee turnover in a company is also a threat that must be addressed in order to maintain trade secrets.
Lack of staff motivation. When a company has low salaries, delayed payments, a disrespectful attitude toward employees, and no career prospects, an employee may steal a trade secret and pass it on to a competitor. Staff must be motivated to work conscientiously, both financially and through team training and a comfortable atmosphere.
Even if an employee left the company of his own free will or was fired, he is obliged to keep the trade secrets of his previous place of work. If the head of the company suspects the departed employee of disclosing trade secrets, he has every right to go to court to seek compensation.
Other, less common methods of theft include hacker attacks on servers that store trade secrets and illegal entry into company premises.
Responsibility for disclosure of commercial secrets
The law provides for penalties for disclosing commercial secrets. Depending on the type and degree of damage, liability can be: criminal; administrative; disciplinary; civil law.
In practice, citizens of the Russian Federation are rarely held liable under the laws that relate to the disclosure of commercial secrets. However, if necessary and all the evidence is available, it is very easy to prove the guilt of an attacker.
Article 183 of the Criminal Code regulates the issue of disclosing commercial secrets, namely, it lists the options for liability that apply to the perpetrator:
- Imprisonment for up to 2 years.
- Penalties up to 500,000 rubles.
- Correctional labor for up to 2 years.
- Forced deprivation of the right to occupy leadership positions or positions that involve working with classified data.
It should be noted that criminal punishment is used extremely rarely and is relevant only in cases where the disclosure of commercial secrets has led to injuries, death of other employees of the enterprise, disruption of technical capacities, as a result of which people have suffered.
The Code of Administrative Offenses, in Article 13.14, regulates liability for disclosing commercial secrets. Once the employee's guilt is proven, he is fined an amount that depends on his position in the company and the damage caused: for individuals, up to 1,000 rubles; for officials of the firm, up to 5,000 rubles.
This type of responsibility does not imply the establishment of an official police case. The fact of the leak is recorded by the company's security service, after which initial measures are taken to identify the attacker.
When an unscrupulous employee is identified, the head of the company has every right to apply the following types of disciplinary action: a fine, a remark, a reprimand, or dismissal.
The civil law does not have a special article that regulates the issue of commercial secrets. However, liability can be established in accordance with the basic rules of civil law, including full compensation for damages.
If it is proved that the employee received income from the disclosure of trade secrets, the plaintiff has the right to demand compensation for the loss and the amount that the employee himself received for disclosing classified information.
Only the head of the security service or the director of the enterprise can accuse an employee of a data breach. To submit an application to law enforcement agencies, two conditions must be met:
- The company must have a trade secret regime.
- The employee accused of stealing information had to sign an agreement with the employer to be held liable for disclosing trade secrets. In addition, it is also necessary to prove the fact of the leak, which occurred through the fault of a particular employee.
Methods of protection against disclosure of trade secrets
Enterprise information is considered confidential if it is protected from unauthorized access and disclosure. Only if a security system is in place is the leakage of commercial information classified as disclosure.
To protect valuable information from disclosure to third parties, an enterprise should develop a comprehensive data protection system against leakage. The main area of protection is access to information by unauthorized workers. At the remaining stages, control of technical leakage channels, security of premises should be taken into account. There are several practical ways to protect trade secrets.
- Secrecy stamping on documents that contain trade secrets, or on flash drives that store secret data.
A secrecy stamp is a property of a document that serves as official evidence of its protection. The stamp "trade secret" is placed in the upper right corner of the first page of the document; it can be a special seal of the company or of its security service. Next to the stamp are placed the signature of the person who assigned the degree of secrecy, the date the stamp was affixed, and the period for which the secrecy extends.
All secret documents must be filed in separate folders and kept only in protected archives or dedicated premises. Each folder is recorded in the security log. If an employee needs access to the data archive, he first contacts the archive guard. The guard records in the log (physical or electronic) the employee's name and the dates of issue and return of the document, and also checks whether the employee has the right to work with trade secret documents.
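As a minimal sketch of the issue-and-return journal this paragraph describes (the names, document ID, and access list below are invented; in practice this is the security service's physical or electronic register):

```python
from datetime import date

# Invented access list: who is cleared to work with trade secret documents.
CLEARED = {"I. Petrov", "A. Smirnova"}

log = []  # the security journal: one entry per document issue

def issue_document(doc_id, employee):
    # First check the employee's right to work with trade secret documents.
    if employee not in CLEARED:
        print(f"denied: {employee} is not on the access list")
        return False
    log.append({"doc": doc_id, "employee": employee,
                "issued": date.today(), "returned": None})
    return True

def return_document(doc_id, employee):
    for entry in log:
        if (entry["doc"] == doc_id and entry["employee"] == employee
                and entry["returned"] is None):
            entry["returned"] = date.today()
            return
    raise ValueError("no matching open issue record")

issue_document("TS-017", "I. Petrov")   # recorded in the journal
issue_document("TS-017", "B. Sidorov")  # denied: not on the access list
return_document("TS-017", "I. Petrov")  # return date recorded
```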
A similar stamping and distribution procedure applies to trade secret materials on flash drives, except that instead of a seal and a stamp, a special label is attached to the device indicating the degree of secrecy of the data stored on the carrier.
- Creation of a list of persons who may have access to trade secrets.
The head of the security service or the director of the company must independently determine which of the employees can have access to trade secrets in order to carry out their professional duties.
- Supplementing the company's internal regulations with a document "On commercial secrets".
The provisions on trade secrets should contain detailed instructions on working with secrets, provide other employees with detailed information about who can gain access to protected information, what responsibility employees bear for attempts to steal or divulge trade secrets. At the same time, the regulation must be drafted correctly, so as not to reveal any details of commercial secrets.
In the process of hiring a new employee, a clause on liability for disclosure or theft of trade secrets should be included in the employment contract.
Such a clause should be in the employment contracts even of those employees whose duties do not include working with commercial secrets. All employees of the enterprise must be aware of the possible disciplinary and administrative punishment for their actions.
- Taking organizational measures and creating an integrated protection system to block access to trade secrets from third parties.
Organizational protection includes actions for documentary organization of keeping secrets, creating regulations for working with commercial secrets and rules for differentiating access. Technical protection includes the design, installation and further operation of technical means that prevent theft of secret information. These can be access control and accounting systems; screening and noise reduction devices; regular scanning of premises for the presence of embedded devices; protection of architectural structures (sound insulation of walls, floors, ceilings, doorways and windows).
Before creating a protection system, it is necessary to calculate all the costs of its design and implementation. It is important to note that the amount of these costs should not exceed the potential loss from the disclosure of trade secrets. Otherwise, the creation and support of the system makes no sense and is unprofitable for the enterprise.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9620184898376465,
"language": "en",
"url": "https://sustainabilitymath.org/2021/04/05/hows-the-labor-market-for-college-grads/",
"token_count": 213,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0174560546875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:fb9c802a-111f-419f-8946-8be756ad4209>"
}
|
The Federal Reserve Bank of New York’s page The Labor Market for Recent College Graduates has a number of graphs related to employment for recent and not-so-recent grads. For example, the graph here shows the percent who are underemployed, defined as:
The underemployment rate is defined as the share of graduates working in jobs that typically do not require a college degree. A job is classified as a college job if 50 percent or more of the people working in that job indicate that at least a bachelor’s degree is necessary; otherwise, the job is classified as a non-college job. Rates are seasonally adjusted and smoothed with a three-month moving average. College graduates are those aged 22 to 65 with a bachelor’s degree or higher; recent college graduates are those aged 22 to 27 with a bachelor’s degree or higher.
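As a minimal sketch of the two mechanical pieces of that definition, the 50 percent college-job threshold and the three-month moving average (the monthly shares below are invented; the actual series is computed from survey microdata):

```python
# Toy version of the definition above: a job counts as a "college job" if at
# least 50% of the people working in it say a bachelor's degree is necessary;
# the monthly underemployment share is then smoothed with a 3-month average.

def is_college_job(share_saying_degree_needed):
    return share_saying_degree_needed >= 0.50

def three_month_average(monthly_rates):
    # Simple trailing 3-month moving average (no seasonal adjustment here).
    return [sum(monthly_rates[i - 2:i + 1]) / 3
            for i in range(2, len(monthly_rates))]

print(is_college_job(0.62))   # True  -> a college job
print(is_college_job(0.35))   # False -> a non-college job

# Invented monthly underemployment shares for recent graduates:
raw = [0.41, 0.43, 0.42, 0.44, 0.45]
print(three_month_average(raw))   # [0.42, 0.43, ~0.437]
```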
There are graphs for unemployment, underemployed job types, wages and a table of outcomes by major. In all cases the data can be downloaded.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9532457590103149,
"language": "en",
"url": "https://www.answers.com/Q/What_is_the_difference_between_an_extension_and_an_increase_in_supply",
"token_count": 658,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1416015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:61696dd3-cf20-4488-a4de-62a95a45a0e2>"
}
|
An extension in supply is an increase in the quantity supplied in response to a rise in the good's own price (a movement along the supply curve). An increase in supply is an increase in the quantity supplied at every price (a shift of the supply curve itself).
In AS/A2 examination economic theory, an increase in demand would normally refer to an increase in the quantity demanded at every price level (i.e. a shift in the "curve"). An extension of demand is an increase in the quantity demanded because the price has changed (usually because supply has shifted) - ie a movement along the demand curve. Sad but true!
The difference between individual supply curve and the market supply curve is tat individual supply curve is like a firm. To be able to get the market supply curve you have to have the individual supply curve.
A supply function expresses the quantity of a good that producers are willing and able to sell as a function of its price and other determinants, for example Qs = f(P).
It is so because there exists a positive relationship between price and supply, i.e. when price increases, quantity supplied also tends to increase.
Increase Supply means to have more of a specific supply on hand.
The relationship between these things is that supply and demand work together to determine the price of a good or service.
Availability versus demand
we have textbooks
One says individual and the other says market!
The major difference between the two is that mercantilism is based around the government and capitalism around the individual. Mercantilism depends on a trading market of exporting more than importing to increase the gold and silver of a country. Capitalism is based on supply and demand.
An increase in supply is not represented by a movement up the supply curve. A movement up the supply curve reflects an increase in the quantity supplied rather than an increase in supply, and it results from a rise in the price of the good.
The main difference between a Switched Mode Power Supply (SMPS) and an Uninterruptible Power Supply (UPS) is their function. An SMPS conditions the electricity supply for a building, house, etc. A UPS is the emergency backup power supply for vital computer-based systems.
One difference between air conditioning and refrigeration is the circulation systems. The point of supply for the gases they use is another difference.
Supply is the quantities of commodities in a producer willing and able to offer for sale for a particular period of time while supply curve is the use of graphical method to show the relationship between the price and the quantity supply.
Three examples of things that cause supply to increase are overproduction, inflation, and lack of demand. A lack of demand can eventually cause supply to build up.
Supply is the amount produced and demand is the amount that is wanted.
interpret what an increase in demand and an increase in supply mean.
Demand is the basis of supply's increase and decrease.
get more supply
No, because the real money supply would only increase if the price level does not increase, or increases at a slower pace than the nominal money supply. This is because the real money supply takes the current price level into account.
The main difference between chillers and cooling towers is that a cooling tower uses pumps to circulate a water supply. On the other hand, a chiller uses a fan to circulate the water supply.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9593153595924377,
"language": "en",
"url": "https://www.economist.com/finance-and-economics/2003/08/21/the-rising-tide-of-red-ink",
"token_count": 1535,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:23126742-82fa-40e6-8906-e352682fde6a>"
}
|
SUPPOSE that the governments of rich countries had announced in 2000 that they planned to go on a borrowing binge, and turn their combined balanced budget into a deficit of more than 4% of GDP by 2003. The bond markets would have freaked out. Yet that is what governments have done. The ratio of rich-country government debt to GDP is likely to reach a record 80% by the end of this year, almost twice as large as in 1980. Is this outbreak of public profligacy cause for concern?
Only three years ago, almost two-thirds of all OECD countries ran a budget surplus; most are now in deficit. Earlier this year the OECD estimated a combined budget deficit in rich countries of 3.6% of GDP in 2003. But slower-than-expected growth in many economies, along with additional easing of fiscal policy, means that the deficit could now reach 4-4.5%. That would still be less than the post-second-world-war peak of 5.2% in 1993. But emerging-market governments have also been borrowing more. In Asia outside Japan, budgets were close to balance in the early 1990s; now they add up to a combined deficit of 3% of GDP. As a result, global government borrowing may be approaching record levels.
America is the chief culprit. In 2000 it had a general-government budget surplus (including state and local governments) of 1.4% of GDP. This year the IMF reckons that its deficit could reach 6% of GDP. Some of that reflects the economic slowdown, which reduces tax revenues and increases benefits. But most of it is due to the biggest fiscal stimulus for decades—with tax cuts and higher spending equivalent to 5% of GDP over three years.
In contrast, the euro area's structural budget deficit has been more or less unchanged over the same period. The swing in its headline budget position, from broad balance in 2000 to a deficit approaching 3% of GDP this year, is the result of the region's sluggish growth. Automatic stabilisers (falls in tax revenues and rises in benefit payments) have been allowed to operate, but nothing more. Indeed, some governments—notably Germany and Italy, which are both in recession—have been forced to tighten fiscal policy. Despite this, Germany's deficit could reach 4% of GDP this year, breaching the 3% ceiling under Europe's stability pact for a second year. The European Commission warned Germany this week that it could face sanctions unless it reduces its deficit in 2004.
Unlike the euro area, Britain has seen a big easing of fiscal policy, turning a surplus of 3.9% of GDP in 2000 into a deficit of almost 3% of GDP this year. That is dwarfed by Japan's deficit of 8% of GDP, but Japan's rapidly mounting public-sector debt actually conceals a modest tightening in its structural budget balance, ie, after adjusting for the cycle, since 2000.
In its latest Economic Outlook, the OECD paints a grim picture of future fiscal positions. Assuming that annual GDP growth averages almost 3% and that underlying fiscal policies are broadly unchanged, the average budget deficit of OECD countries will still be 2.6% of GDP in 2008, and the ratio of public debt to GDP will rise to 86%. The underlying fiscal situation is even worse since these numbers ignore future pension liabilities as populations age. Some economists argue that governments should now be running budget surpluses to reduce debt and the need for future tax rises. The OECD concludes that rich economies have little scope for further fiscal stimulus besides the automatic stabilisers. In some cases, it says, future fiscal problems are so severe that governments need to start tightening now.
A recent analysis by UBS suggests that the OECD's concerns may be overdone. The main reason to worry about government borrowing is that it could push up long-term interest rates, crowd out private-sector investment and hamper growth. But with the exception of Japan, public-sector debt is hardly spiralling out of control. The OECD's forecasts suggest that in 2008 the debt-to-GDP ratios in America and the euro area will still be below their peaks in the 1990s. Moreover, while there is some economic slack, borrowing by firms remains weak and monetary policy is loose, there is scant risk that rising interest rates will crowd out the private sector. Bond yields have risen, but they are still much lower than a few years ago.
The main reason why total OECD government debt is rising so fast is Japan, where debt is already 156% of GDP and heading for nearly 200% by 2008. So should Japan's government immediately slash public spending and raise taxes? Not until its economy looks stronger—its ratio of debt to GDP has been soaring partly because its nominal GDP has been shrinking.
Japan's public-debt problem may be less severe than it looks. Around 60% of it is held by the Bank of Japan or other public-sector institutions. It is therefore not really “debt”: bonds purchased by the central bank rather than the private sector imply no net increase in public-sector debt service and hence no need for future tax increases. Ben Bernanke, a governor at America's Federal Reserve, has argued that, to kick-start Japan's economy, tax cuts should be financed directly by the Bank of Japan. If such a fiscal boost increased nominal GDP while debt in private hands was unchanged, this would reduce the debt ratio. Rising nominal GDP would also boost tax revenues and trim the deficit.
Has America's recent budgetary binge been imprudent? Bill Dudley, an economist at Goldman Sachs, argues that a big increase in America's budget deficit was inevitable after the bursting of its bubble. To understand why, one needs to focus on an accounting relationship: by definition, the sum of net private-sector saving (saving less investment) and public-sector saving (the budget balance) must be equal to a country's current-account balance.
In 2000 the private sector had a net financial deficit of 5% of GDP, while the public sector had a surplus of 2%. Together they were equal to the current-account deficit of 3% of GDP. The private-sector deficit was unsustainable and has shrunk to 1% of GDP. But thanks to an overvalued dollar and sluggish growth abroad, America's current-account deficit has widened to 5% of GDP. So to satisfy the accounting rule, all the adjustment in the private-sector deficit had to be accommodated by a rise in the budget deficit. Without tax cuts the economy would have been weaker (and the current-account deficit smaller). To allow the decline in the private-sector deficit, the budget deficit would still have had to widen sharply—through a deeper recession eroding tax revenues.
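The identity is easy to sanity-check in code. A minimal sketch using the article's round numbers, all expressed as percentages of GDP:

```python
# Sectoral balances: private balance + public balance = current-account balance.

def current_account(private_balance, public_balance):
    # By definition: (S - I) + (T - G) = CA
    return private_balance + public_balance

# 2000: private-sector deficit of 5% of GDP, public surplus of 2%.
print(current_account(-5.0, 2.0))  # -3.0, i.e. a current-account deficit of 3%

# Now: the private deficit has shrunk to 1% while the current-account
# deficit has widened to 5%. The identity pins down the budget balance:
ca, private = -5.0, -1.0
print(ca - private)  # -4.0, i.e. a budget deficit of roughly 4% of GDP
```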
A better criticism of the Bush administration's tax cuts is their composition. The equivalent stimulus could have been achieved at a much lower budgetary cost, both near and long term, by focusing tax cuts on lower-income households who are more likely to spend their gains.
The recent increase in government borrowing may not be cause to panic in the short term so long as private-sector demand is weak. Indeed, there is a strong case for Germany to loosen its fiscal policy. But the biggest test will come when economies recover. Continued heavy borrowing could then start to push up interest rates and harm private investment. That is when governments must act swiftly to tighten their belts. But not yet.
This article appeared in the Finance & economics section of the print edition under the headline "The rising tide of red ink"
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9276127815246582,
"language": "en",
"url": "https://www.gep.com/blog/mind/win-win-energy-efficient-motors-can-drive-savings-as-well-as-growth",
"token_count": 812,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.05078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a0ac7ddd-325e-49e6-92fd-811d183fc45b>"
}
|
Win-Win: Energy-Efficient Motors Can Drive Savings as Well as Growth
When sourcing motors, the untrained eye might be tempted to go for the one that costs the least. However, procurement pros know that the purchase cost constitutes less than 3% of a motor's total life-cycle cost, so a lower purchase price has only a nominal effect on the total cost of ownership. It is the total cost of energy that matters: running a motor accounts for between 70% and 90% of its life-cycle cost, driven by the price of power per kilowatt-hour.
Energy consumption contributes the largest share of the life-cycle cost of an electric motor. The average electric motor lasts 20 years, during which energy consumption accounts for approximately 90% of the total life-cycle cost. The efficiency of an electric motor is the ratio of the mechanical power delivered by the motor to the electric power supplied to it. Energy-efficient motors use less electricity, produce less heat and often last longer than standard motors.
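A rough life-cycle calculation shows why the purchase price barely matters. All of the inputs below are illustrative assumptions rather than vendor figures:

```python
# Rough life-cycle cost sketch for an industrial motor.
# All inputs are illustrative assumptions, not vendor data.

power_kw = 15.0            # motor shaft power
efficiency = 0.90          # mechanical power out / electrical power in
hours_per_year = 6000      # duty hours
price_per_kwh = 0.10       # electricity price, $/kWh
years = 20                 # typical service life cited above
purchase_cost = 1500.0     # assumed purchase price, $

electrical_kw = power_kw / efficiency              # power drawn from the grid
annual_energy_cost = electrical_kw * hours_per_year * price_per_kwh
lifetime_energy_cost = annual_energy_cost * years

total = purchase_cost + lifetime_energy_cost
print(f"energy share of life-cycle cost: {lifetime_energy_cost / total:.0%}")
print(f"purchase share: {purchase_cost / total:.0%}")
# With these assumptions energy is ~99% of the total; maintenance and
# downtime (ignored here) are what bring the often-cited figure nearer 90%.
```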
High Power Consumption and Pollution: True Costs of an Inefficient Motor
According to a recent study by the International Energy Agency (IEA), electric motors are responsible for 53% of global electricity use, or 10,500 TWh per year, and account for roughly 6,800 million metric tons of carbon dioxide emissions. Countries such as Australia, Brazil, Canada, China, India and Mexico, which together represent around 81% of the global electricity consumed by electric motors, are working to change their markets with regulations and policies that support the enhanced use of energy-efficient electric motors. A transition to energy-efficient motor systems would reduce the global electricity demand of electric motors by around 20% to 30% by 2030, depending on the adoption and implementation of energy-efficient practices and environmental policies globally.
Public health and safety are another factor driving the push towards more efficient motors. According to the World Health Organization (WHO), more than 6 million deaths occur globally because of air pollution, much of it the consequence of fossil-fuel-powered motors used in the industrial, automotive and other sectors. These rising concerns about energy consumption and pollution have led to increasing demand for energy-efficient motors.
Classification of Energy-Efficient Motors
The International Electrotechnical Commission (IEC) has contributed towards the development of an energy-efficient electric motor standard and has classified the motor into four levels of efficiency. These are IE1 (standard efficiency), IE2 (high efficiency), IE3 (premium efficiency) and IE4 (super premium efficiency). These IEC codes allow governments to specify efficiency levels for MEPS, the Minimum Energy Performance Standards that electric motors have to meet for legal use. The European Union has set motor MEPS levels at either IE3 or IE2 in combination with a variable frequency drive. The USA — which was the first country to set MEPS for motors — has a minimum required level of IE3 while the Asia-Pacific region and China are envisaging setting a MEPS level of IE3 as a voluntary standard.
Energy-efficient motors run cooler and are able to better withstand voltage variations compared to standard motors. Another factor to consider is the manufacturing techniques and material used in the making of the motors. For example, using copper instead of aluminum for the conductor bars and rings of motors results in higher motor efficiency and a significant reduction in resistance losses. Motors with copper rotors yield an overall loss reduction of 15% to 20% compared to aluminum. As a result, an energy-efficient unit has a longer life than a standard unit.
While energy-efficient motors come with a higher initial cost, the benefits in energy savings, greater output and lower maintenance costs more than offset it. Eventually, an energy-efficient motor will pay for itself.
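A simple payback sketch makes the same point; again, every number here is an assumption for illustration:

```python
# Payback period for choosing a premium-efficiency motor over a standard one.
# All numbers are illustrative assumptions.

standard_price, efficient_price = 1200.0, 1700.0   # purchase prices, $
std_eff, prem_eff = 0.88, 0.93                     # motor efficiencies
power_kw, hours, tariff = 15.0, 6000, 0.10         # load, duty, $/kWh

annual_cost = lambda eff: (power_kw / eff) * hours * tariff
savings_per_year = annual_cost(std_eff) - annual_cost(prem_eff)
payback_years = (efficient_price - standard_price) / savings_per_year
print(f"annual saving: ${savings_per_year:,.0f}; payback: {payback_years:.1f} years")
# Roughly $550/year saved, so the $500 premium pays back in under a year here.
```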
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9647392630577087,
"language": "en",
"url": "https://www.kapronasia.com/asia-payments-research-category/will-covid-19-mark-a-turning-point-for-cashless-payments-in-asia.html",
"token_count": 704,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3a3c9f49-c834-4efc-ad32-ddef2aa358f0>"
}
|
In February, around the peak of the virus outbreak in China, China's State Council told journalists that banks should remove cash potentially contaminated with the virus from circulation and sterilize it with heat or UV treatments. Once the treatment was complete, banks were asked to store the money for seven to 14 days before returning it to circulation. Cash taken from high-risk sites including markets and hospitals was sealed and specially treated. However, that cash was transferred for safekeeping to the People's Bank of China (PBOC) and not returned to circulation.
Prior to January 17, the PBOC reportedly arranged to allocate nearly 600 billion yuan ($86 billion) of new banknotes to the country. Fan Yifei, deputy governor of the PBOC, said in February, "After the outbreak, we paid great attention to the safety and health of the public’s use of cash."
As the virus has spread globally, other countries have been taking measures aimed at mitigating perceived infection risks from handling cash. Highlighting the need to prevent and control COVID-19's spread, the National Bank of Cambodia in late March urged consumers to use digital payments. ABA, one of Cambodia's largest lenders, asked consumers to use mobile payments instead of cash to pay bills and transfer funds. Japan's Nikkei Asian Review reported on March 26 that some shops in the Australian cities of Sydney and Melbourne are refusing cash payments - although they have no legal right to do so.
Given the virus's severity and the increase of social-distancing measures, it is understandable that the use of cash will come under closer scrutiny. Put simply, if the objective is to reduce the disease's spread through less contact, contactless payments may be better than cash. However, credit cards that rely on swiping or insertion into a card reader also involve physical contact.
David Hui, a professor of respiratory medicine at the Chinese University of Hong Kong, told Nikkei Asian Review in March that it is unclear if the virus can be transmitted via paper notes, as there is a lack of data proving any correlation.
It could be worthwhile to conduct relevant studies. Yet in the absence of any peer-reviewed research, it is impossible to say whether using cash increases the risk of spreading the coronavirus.
It is possible to quantify the costs of heavy reliance on cash though. They add up. In early April, EuroMoney noted that India spends US$210 billion, equivalent to 1.7% of its GDP, on producing, storing and distributing cash. Costs could be reduced significantly with the gradual phasing out of cash. The objective is not to eliminate cash entirely - to do so risks financial exclusion of some of the most vulnerable members of society - but replace it as the mainstream form of payments in the long run.
A less crucial, but nonetheless important aspect of the cash vs. cashless conversation is convenience for consumers. In Taiwan, for instance, many taxi drivers only accept cash. They say they do not want to pay the fees associated with digital payments. Ride-hailing giant Uber, on the other hand, only accepts cashless payments. The transparency and convenience of the Uber system have made the company's services popular with Taiwanese consumers, despite Uber's periodic clashes with regulators and incumbents. Over time, the largest taxi companies in Taiwan have moved to position themselves as cashless friendly in an effort to compete with Uber. For consumers who want greater choice in payments, it's been a win.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9484838843345642,
"language": "en",
"url": "https://www.osborneclarke.com/insights/what-is-urban-dynamics/",
"token_count": 560,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0147705078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6200ec22-d4a5-4fdd-a30e-4627365eea7f>"
}
|
First, it's good to understand that, prior to Covid-19, various sources predicted that two-thirds of the world's population would live in urban areas by 2050. 90% of urban growth will take place in Africa and Asia, with China and India expected to see the greatest urban growth between 2018 and 2050. China and India build cities in a way that Western European countries simply do not: it is part of their growth plan.
In Westernised economies, for example in the USA and Europe, there also continues to be massive urban migration. Rapid urbanisation is also taking place in smaller cities. It’s quite possible that this trend will be exacerbated by the ongoing impact of Covid-19 and climate change.
Urban dynamics are the changing elements that make up an urban environment: the opportunities and the threats. The people and their governance. The commercial impacts of human geography.
Why should businesses care about urbanisation?
Large scale shifts in population lead to both opportunities and challenges for businesses - economically, socially and environmentally. Enterprises that do not take into account the effects of urban dynamics on their business are unlikely to be the winners in the next economic cycle.
Much of the change brought about by urbanisation is good for business, such as the creation of larger markets for goods and services; an increase in the labour markets; increased wealth; closer proximity of businesses to their customers and services; better public services; an abundance of recreation and leisure facilities; social and cultural diversity; improved technology; improved infrastructure; reduction of transportation costs and the creation and dissemination of knowledge and ideas.
However, the potential for overcrowding brings with it a number of disadvantages including environmental impacts and pollution; scarcity of land and limitations of real estate; strains on infrastructure and natural resources; traffic congestion and mobility costs; issues with crime and personal safety; as well as wealth disparity and urban poverty.
Companies will prosper if they can devise strategies to deliver products and services which exploit the advantages whilst limiting the disadvantages. Understanding urban dynamics and flexing business models to adapt to urbanisation will provide you with a competitive edge.
Has Covid-19 affected the progress of urbanisation?
It's too soon to fully understand the long-term effects of Covid-19 on urbanisation. One thing that is certain is that uncertainty has increased as a result. However, a glance through the history books shows that cities are shaped by the effects of pandemics. A walk through the streets of the world's major cities shows how they have thrived despite the challenges posed by pandemics.
We believe that cities will continue to flourish in the medium term and that astute businesses will take advantage of the opportunities of urban dynamics while mitigating the downsides.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9573334455490112,
"language": "en",
"url": "https://www.wrn.com/2014/04/tax-freedom-day-in-wisconsin/",
"token_count": 415,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.267578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:764e07cd-3c13-44dc-b0fd-a8bce1a7ddcd>"
}
|
It’s Tax Freedom Day in Wisconsin, the day on which Badger State taxpayers have collectively earned enough income to pay off their total federal, state, and local tax bill. Wisconsin is the 37th state to reach Tax Freedom Day. According to the annual report from the nonpartisan Tax Foundation, national Tax Freedom Day falls on April 21, three days later than last year.
The states with the earliest Tax Freedom Days are Louisiana (Mar 30), Mississippi (Apr 2), and South Dakota (Apr 4). The latest dates fall in New Jersey (May 9), Connecticut (May 9), and New York (May 4).
The study’s key findings include:
- The national Tax Freedom Day is three days later than last year due mainly to the continuing economic recovery, which will boost federal tax revenue collected through the corporate, payroll, and individual income tax.
- Americans will spend more on taxes in 2014 than they will on food, clothing, and housing combined.
- Americans will spend 42 days working to pay off income taxes, 15 days for excise taxes, and 11 days for property taxes.
- Americans will pay $3 trillion in federal taxes and $1.5 trillion in state and local taxes, for a total bill of more than $4.5 trillion, or 30.2 percent of the nation’s income.
- If you include annual federal borrowing, which represents future taxes owed, Tax Freedom Day would occur on May 6, 15 days later.
Tax Freedom Day is a significant date for taxpayers and lawmakers because it represents how long Americans as a whole have to work in order to pay the nation’s tax burden. Tax Foundation Economist Kyle Pomerleau said Tax Freedom Day provides “a vivid representation of how much we pay for the goods and services provided by governments at all levels.”
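The arithmetic behind the date is straightforward, as a short sketch using the report's 30.2% figure shows:

```python
import datetime

# Tax Freedom Day: the share of the year needed to earn the nation's tax bill.
national_income_share = 0.302  # taxes as a share of national income (from report)

days = round(365 * national_income_share)  # about 110 days of work
tfd = datetime.date(2014, 1, 1) + datetime.timedelta(days=days)
print(days, tfd)  # 110 2014-04-21, matching the report's April 21 date
```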
Historically, the date for Tax Freedom Day has fluctuated significantly. The latest-ever nationwide Tax Freedom Day was May 1, 2000. In 1900, Tax Freedom Day came on January 22.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9452410340309143,
"language": "en",
"url": "https://coincentral.com/could-proof-of-stake-mend-bitcoins-energy-costs/",
"token_count": 1824,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2021484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b15e9a11-9440-4cd9-a5cb-e2e4c850c971>"
}
|
Could Proof of Stake Eliminate Bitcoin’s Energy Costs?
Proof of Stake: A Solution to Bitcoin’s Energy Problems?
Bitcoin has an energy problem. Thanks to the coin's proof of work distributed consensus algorithm, Bitcoin mining is creating a massive carbon footprint. Miners use an estimated 29.05 TWh of electricity annually. That's 0.13% of the world's annual energy consumption, more than the individual consumption of 159 countries, including nearly every country in Africa.
Coupled with the competitive nature of mining, Bitcoin’s exponential growth is largely to blame for this rampant energy consumption. Mainstream public attention and a boom in transaction volume has only exacerbated the problem, as the Bitcoin Energy Consumption Index estimates that mining power expenditures increased by 29.98% from October to November.
At this exponential rate, the cryptocurrency’s meteoric rise has it on pace to consume more energy than the whole of the US by 2019.
The Contributing Factors
In order to properly diagnose the root cause of this energy crisis, we have to dig into the relationship between Bitcoin’s network growth and its mining mechanics.
Under Bitcoin’s proof of work model, miners compete with each other to ensure a distributed consensus (the means by which Bitcoin circulates) on the blockchain. Miners commit their computing power to verify the transactions sent through the network.
To do so, the computers race to solve the cryptographic hashing puzzle that seals each block, bundling the verified transactions into blocks on the public ledger. The first miner to finish the current block receives a block reward in Bitcoin.
This competitive structure incentivizes miners to commit as much processing power to the blockchain as possible. The more powerful your mining rig, the faster you can solve the hashing puzzles, and the more likely you are to finish a block and receive its rewards.
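A toy version of the proof-of-work search illustrates the mechanics. It is greatly simplified; real Bitcoin mining double-hashes an 80-byte block header against a full 256-bit target, but the brute-force character is the same:

```python
import hashlib

def mine(block_data: str, difficulty: int):
    """Find a nonce whose SHA-256 hash of the block starts with
    `difficulty` hex zeros, a toy stand-in for Bitcoin's target check."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block with some transactions", difficulty=4)
print(nonce, digest)
# Each extra leading zero multiplies the expected work by 16, which is why
# hashing power (and electricity use) scales up as miners compete.
```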
Back in Bitcoin’s infancy, it used to be that you could reliably mine with a graphics card or a run-of-the-mill computer processor. But those days are long gone. As more miners jumped on the Bitcoin gravy train, more sophisticated mining software was developed to give miners an edge. This hardware arms race culminated in application-specific integrated circuit (ASIC) mining. In TLDR terms, ASIC miners are processors that are more efficient and powerful than CPUs or GPUs.
And they left the original mining procedures in the dust. Seriously, if you were trying to compete with ASIC mining rigs using your computer or graphics card, it'd be like trying to win the Monaco Grand Prix with a Vespa.
At this point, even a single ASIC isn't enough to compete with the big league mining pools. The biggest mining cooperatives rig up hundreds of ASICs to create massive processor pools. In order to stay competitive with other miners, these pools will add hardware to their rigs to increase overall hashing power (output).
You probably see where this is going. Mining rigs obviously require electricity, and the harder they have to work, the more power they consume. As such, proof of work’s competitive incentives invariably lead to an exponential increase in energy consumption.
And this doesn't even include difficulty increases. Every 2,016 blocks, Bitcoin undergoes a difficulty adjustment. This adjustment scales block difficulty to the network's total hashrate, so that no matter how much mining power joins, blocks continue to be produced at roughly the intended pace rather than being solved ever faster. What this means, though, is that the more miners there are on the network, the harder the hashing puzzles become after each adjustment. Mining rigs therefore have to work harder to stay competitive, consuming even more power.
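The retarget rule itself is simple enough to sketch. The version below is a simplification; the real client works on a 256-bit target rather than an abstract difficulty number:

```python
# Difficulty retarget, simplified. Bitcoin aims for one block every 600 s
# and adjusts every 2016 blocks; the real rule clamps changes to a factor of 4.

def retarget(old_difficulty: float, actual_seconds: float) -> float:
    expected = 2016 * 600                 # two weeks of 10-minute blocks
    ratio = expected / actual_seconds     # greater than 1 if blocks came too fast
    ratio = max(0.25, min(4.0, ratio))    # clamp, as Bitcoin does
    return old_difficulty * ratio

# If hashrate grows and the last 2016 blocks took only 12 days:
print(retarget(1.0, 12 * 24 * 3600))  # difficulty rises by about 16.7%
```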
Starting to get the picture? The more people buy into Bitcoin, the more miners will be attracted to the currency for its valuation. With more miners comes more energy consumption to fuel competition, and with a growing network, each difficulty adjustment will only exacerbate energy consumption by making miners work harder.
Now that we’ve gotten that out of the way, let’s turn this problem on its head and look at a potential solution.
The Case for Proof of Stake
Proof of stake is an alternate algorithm for reaching a blockchain’s distributed consensus. It came onto the scene in 2012, with Peercoin, NXT, and BlackCoin as its primary early adopters.
No miners exist under the proof of stake model. Instead, they are replaced with validators (or forgers) who are in charge of validating transactions. Typically, validators stake a certain amount of a proof of stake currency in that blockchain’s core wallet.
The network then selects validators to construct the next block. The selection mechanism varies by algorithm: it may be pseudo-random, or weighted by a combination of variables such as the size of the stake and the amount of time it has been staked.
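One common selection rule, picking the next forger pseudo-randomly in proportion to stake, can be sketched as follows. This is a generic illustration, not any particular chain's algorithm:

```python
import hashlib
import bisect

# Stake-weighted pseudo-random validator selection: each staked token is one
# "lottery ticket", and a seed derived from the chain picks the winner.

stakes = {"alice": 500, "bob": 300, "carol": 200}  # illustrative deposits

def select_forger(stakes: dict, seed: str) -> str:
    names = list(stakes)
    cumulative, total = [], 0
    for name in names:
        total += stakes[name]
        cumulative.append(total)
    # Deterministic randomness: hash a seed (e.g. last block hash plus height).
    draw = int(hashlib.sha256(seed.encode()).hexdigest(), 16) % total
    return names[bisect.bisect_right(cumulative, draw)]

print(select_forger(stakes, seed="prev_block_hash|height=1001"))
# alice, with half the stake, wins about half of the slots over many rounds.
```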
It’s important to note that proof of stake offers no block rewards, only transaction fees, so theoretically, the model doesn’t engender the same competitive impulse as the proof of work system. While you might receive more frequent selections and greater transaction fees the more you have staked, you aren’t trying to beat anyone to the punch like you would be with Bitcoin.
With proof of stake, you only need enough energy to power a blockchain’s core software. No need to waste energy on an ASIC and a cryptographic hashing program. To return to the racing analogy, it’s akin to being awarded a prize for starting your car instead of using it to race. You wait in line at the starting gate for your participation trophy, and you don’t have to worry about wasting the extra gas to complete the race faster than your fellow competitors.
In a nutshell, proof of stake significantly cuts back on energy use. Not only does it employ a less energy-intensive program, but validators don't have to up the ante against each other to remain viable like miners do under a proof of work consensus. They don't receive block rewards, but they also don't face the outrageous energy costs that miners confront. Weighing proof of stake's transaction fees against its modest operating costs, the returns come out comparable to proof of work's rewards net of its costs, especially for those who can't maintain expensive mining rigs.
The Proof is in the Puddin’
Back in May, Vitalik Buterin unveiled plans to transition the Ethereum blockchain to a proof of stake algorithm called Casper. Since Ethereum is the second largest cryptoasset, this development is a huge endorsement for the proof of stake system.
Proof of stake may very well be the future for blockchain. Ethereum's change indicates as much: Vitalik Buterin sees value in the mechanism precisely because its strengths address Bitcoin's weaknesses.
Bitcoin’s energy crisis is one of the first truly substantial trials facing the cryptocurrency as it marches towards public prominence. Pitfalls and obstacles such as these are to be expected in such a nascent technology, but it’s the responsibility of the community at large to adapt to these tribulations. There’s no reason to think that addressing proof of work’s shortcomings should compromise our belief in Satoshi Nakamoto’s creation–quite the contrary. If we want to see Bitcoin succeed, we must remain vigilant in our criticisms and proactive with our solutions, because as it currently stands, Bitcoin is on track to becoming unsustainable in the near future.
Perhaps proof of stake could avert Bitcoin from self-sabotage. If Ethereum’s algorithm change means anything, it should be a clear signal to the cryptocommunity that proof of work can not persist in its current state.
The question is, will the market adapt?
ABOUT THE AUTHOR
Colin is a freelance writer and crypto-enthusiast based in Nashville, TN. When he’s not speculating crypto futures, he’s probably letting his hair down and/or heading to a music festival–because stereotypes exist for a reason.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9696547985076904,
"language": "en",
"url": "https://exclusivepapers.com/essays/economics/strategic-planning-in-a-recession.php",
"token_count": 2450,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1845703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:899fdf44-7a70-45d2-890a-52735bd3ef37>"
}
|
Every business, whether run by an individual, a company or a larger organization, wants to prosper in its undertakings. Businesses the world over are constantly trying to escape and survive the risks posed by changes in the global economy, and one of the most important tools for doing so is strategic planning. This is the process by which an organization defines its direction and plans of action and decides how to allocate its resources so that it can compete effectively and make the most of whatever conditions prevail in the world.
The resources a business can commit to its plans are its capital and its people. Because organizations are many and diverse, so are the strategies they use: each is shaped by an organization's structure and suited to a particular time, which means strategies must change as the business world changes (Allison and Kaye, 2005). A strategy specifies what the organization does, for whom it does it, and how it excels in delivering its services. It also encompasses the company's mission and objectives, analysis of present and future situations, formulation of the plan of action, and finally implementation and control of the tools used to realize those intentions.
Strategic planning has been the pillar of many businesses worldwide; without it, a business runs a serious risk of collapse. It provides both the course of action and the destination for any business plan, whether the business is just starting out or is a going concern. Strategies are normally considered long term, as their effects are felt over an extended period. And every business must expect to face recession once in a while (Allison and Kaye, 2005).
Recession refers to a period in which activity declines across the economy for a year or more. It is a contraction in the business cycle reflected in industrial production, employment, individual and household income, and wholesale and retail trade. Many aspects of the business environment fall, including household incomes, employment, the quality of employment, investment spending, and the utilization of companies' capacities. Other indicators rise, such as the rates of bankruptcy and unemployment. These movements are normally reflected in a country's gross domestic product (GDP). Economic recession is among the outcomes that business plans and organizations most fear, as it affects businesses large and small and often leaves consumers unable to access particular goods and services (Goodstein et al., 2003).
Economic recession has hit many economies, including those of the United States of America and Japan, and has thereby affected the economies of other nations that depend directly or indirectly on them. A number of factors are seen as causes of economic recession. These include widespread spending outrunning the availability of goods to meet it, as well as internal factors such as taxation and the supply of money in various nations (Geroski and Gregg, 1997).
Before taking any action, economies take steps to determine whether there is a recession in the market. This is normally the work of the Business Cycle Dating Committee at the National Bureau of Economic Research (NBER), which provides an established way of judging whether an economic recession is under way. It does so by assessing business activity in the economy through indicators such as production, employment, wholesale and retail sales, and real income.
The committee treats a recession as running from the point at which business activity peaks to the point at which it reaches its trough. This is the normal rhythm of business: no economy rises indefinitely without reaching a peak from which production starts to drop. After some time comes the next rise, called the expansionary period. Because recession is bound to happen at some point, many economies have developed ways to manage it, so that businesses do not suffer total losses but are able to survive and eventually recover.
At the moment, various strategies are in use in different countries to counter the problem of recession. In the United States of America, for instance, citizens are urged to keep spending in order to sustain demand. Other plans include reforming pension schemes, public-sector unions and other institutions that draw on government resources, so as to help manage spending by the government and the sectors that depend on it. Most mainstream economists see inadequate aggregate demand as the cause of recession, and policymakers are the main determinants of the measures taken against it (Goodstein et al., 2003). Monetarists, for instance, favor expansionary monetary policy, while others advocate increased government spending to restore economic growth.
Organizational planning entails the educational ideas and strategies that organizations use to manage change in their beliefs, values and structures, so that they can withstand both regular and unexpected shifts in the economy. A company or institution has to adopt a number of plans to guide it when recession occurs. One such plan is the use of behavioral science: focusing on behaviors within the company, such as those of its workers, and assessing their effect on the organization's environment and the wider world. An economy likewise has to gauge itself against the other economies in the market so that, when recession occurs, it is in a position to manage it and emerge within a short period of time (Geroski and Gregg, 1997).
System improvement is another measure a business can take to deal with recession. The various functions within an organization have to be well organized and managed to secure the organization's economic future. This includes using modern technological equipment, involving workers in modern management practices, and embracing a culture of hard work and flexibility, so that in the event of a recession the organization can adjust and recover quickly. The organization also has to make appropriate and timely contingency plans that can be activated in an emergency and still yield the expected results (Sanderson and Cushman, 1997).
Self-analysis is another way an organization can counter the problem of recession. The organization has to analyze its strategies, goals, objectives and marketing plans, consider the possible outcomes of any strategy it adopts, and evaluate its recent actions against its future plans and against changes in the world market. This also entails comparing the organization with other organizations in the field. Such analysis enables it to deal with a recession as soon as one is seen to be approaching.
Firms are also restructuring and reorganizing their management activities in order to curb the problem of recession, and a number of strategies have been adopted. National economic programs have been implemented by countries such as Ireland, and other states that have felt the burden of recession need to follow suit. One problem that countries seek to control is inflation: a sustained rise in the prices of basic goods and services over a period of time, which erodes the purchasing power of money. To manage the problem of recession, nations need to stabilize inflation and its rate. Doing so enables the market to adjust swiftly in a downturn and cuts the risk of a liquidity trap, which prevents monetary policy from stabilizing the economy. Inflation rates therefore have to be kept low and stable (Sanderson and Cushman, 1997).
Another strategy concerns taxation, both of a nation's citizens and of one nation by another. Many countries are entering into arrangements that reduce the burdens arising from taxation. Taxes levied on producers or consumers are one of the channels through which a nation's money is drained, so easing the tax burden provides an advantage in managing recession.
Promoting foreign investment is another way to manage recession. Many economies are going global in most of their undertakings, a strategy that has helped many nations grow. By doing business globally and investing in other countries, an economy is better insulated against the risk of a domestic recession. Moreover, an institution can upgrade the skills of its labor force in order to achieve greater production (Allison and Kaye, 2005).
In planning, one needs to consider the strengths and weaknesses of the institution, which helps in making improvements and adjustments that yield good fortunes for it. The first step is to look at the institution's objectives and goals, whether long term or short term, and compare them with the efforts being put into the system and the results obtained. Strengths are identified from strategies that have been implemented and have proved to yield results, in or out of season; weaknesses can be deduced from failures that have hit the institution as a result of implementing particular strategies. An organization's strategies are competent if they enable it to survive, with consistently high productivity, the challenges and hurdles that business firms and other organizations face (Allison and Kaye, 2005).
A mission statement is a short, formal statement that spells out the purpose and aim of the organization, its stakeholders, its responsibilities, and the products and services it offers. A planned change, by contrast, refers to a procedural methodology implemented to bring about a particular change in an organization. Unlike a mission statement, it is not intended to last for a long time; it runs from establishing the need for a change to evaluating the change once it has been made. Planned change normally arises from environmental pressures that might pose risks to the business, whereas unplanned change happens under unforeseen or unanticipated influences (Talbot, 2003).
In conclusion, any business is subject to recession, as activity rises from one trough to a peak and normally falls before it rises again. It is therefore the job of business managers and planners to ensure that the operations and productivity of the business are sustained through the cycle (Geroski and Gregg, 1997). This can only be achieved through strategic planning built on an effective mission statement and on managing both planned and unplanned change.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.957696795463562,
"language": "en",
"url": "https://fetch.ai/the-future-of-consensus-proof-of-stake-with-unpermissioned-delegation/",
"token_count": 2907,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.011962890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3ba96ede-20a4-4621-8ca2-c00bd7e6334b>"
}
|
The Future of Consensus: Proof-of-Stake with Unpermissioned Delegation
Aug 8, 2019
The Benefits of Commitment
Consensus is the process of forming an agreement between many parties. Achieving consensus is therefore amongst the oldest problems that humanity has faced, preceding even the emergence of complex societies. Algorithms for achieving consensus also lie at the heart of designs for blockchains; revolutionary technologies that have existed for barely a decade. The issue of ensuring agreement between multiple computers in a network has some important parallels with achieving consensus among groups of people. The most prominent of these is the need for the participants in both types of consensus to make some form of commitment.
In human groups attempting to reach a consensus, the commitment typically involves all parties agreeing, ahead of time, to abide by the outcome of the decision-making process. In the blockchain scenario, the computers participating in consensus (generically known as nodes) can only make commitments using the passing of electronic messages. As a result, these must take a different form from the human setting. The key technological breakthrough that enabled the world’s first successful blockchain, Bitcoin, was to use a simple algorithm to enable nodes to demonstrate a commitment of resources. More importantly still, the inventor of Bitcoin introduced a scheme where nodes (also known as miners) are paid for each commitment that they make.
In the Bitcoin consensus protocol, the commitment of resources is known as Proof-of-Work (PoW). This is an elegant design that enables the amount of computational effort that has been expended in producing the proof (which is a compact 256-bit number) to be verified by other nodes very cheaply. In Bitcoin, a PoW contest takes place to establish which miner can produce the proof most quickly, and with it receive payment for producing a block. The stochastic nature of the algorithm also solves the problem of how to allocate the production of blocks to different nodes in the peer-to-peer network at different times. Unfortunately, the success of Bitcoin and the properties of its PoW consensus has led to centralization and unacceptable energy consumption.
From an economic perspective, the commitment only needs to involve some expense for the node, and does not necessarily require consumption of energy or any other external resource. A much more energy-efficient, decentralized and secure consensus can be achieved by using a resource that is intrinsic to the ledger. This is the rationale for Proof-of-Stake (PoS) consensus where the commitment typically takes the form of provably locking tokens for a specific period of time.
Proof-of-Stake: A Solution to Bitcoin’s Energy Problem
Since its invention, many PoS variants have been proposed. While these have generally represented an improvement over PoW, many difficulties remain unresolved. One problem with most forms of PoS is that people with the largest coin holdings have the greatest control of the consensus. If these “whales” also earn excessive rewards for maintaining the network, then this leads to a self-reinforcing spiral towards ever greater centralization.
Another issue with PoS is that it is exclusive, and leads to smaller token holders being left unable to participate in the consensus. Nonetheless, holders of small deposits are often willing to stake their tokens in exchange for rewards, and this desire is typically fulfilled by staking pools. The pools accept deposits from many different token holders and aggregate the funds into a single account that they then use to become a validator¹. In exchange for the temporary “loan” of the tokens, the operators pay rewards for the contributions.
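The economics of such a pool amount to pro-rata accounting. A minimal sketch, in which the 5% operator fee is an assumed figure rather than a quote from any real pool:

```python
# Pro-rata reward split for a staking pool. The 5% operator fee is an
# illustrative assumption, not a figure from any real pool.

deposits = {"dana": 120.0, "eli": 60.0, "fay": 20.0}  # tokens lent to the pool
epoch_reward = 10.0                                    # pool's staking reward
operator_fee = 0.05

distributable = epoch_reward * (1 - operator_fee)
total = sum(deposits.values())
payouts = {who: distributable * amt / total for who, amt in deposits.items()}
print(payouts)  # {'dana': 5.7, 'eli': 2.85, 'fay': 0.95}
```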
The disadvantage of stake pools is that the small token holdings are often transferred directly to staking pool operators, and this introduces a reliance on “trusted” third parties. This is contrary to the objective of decentralization and leaves the small token holders exposed to the stake pool operators being hacked or fraudulent. The cryptoeconomic security is also harmed, as token holders with an interest in the blockchain are not able to participate. An attempt to address these limitations was made with a scheme known as delegated Proof-of-Stake dPoS.
Delegated Proof-of-Stake: A False Hope
Perhaps the best known dPoS chain is EOS², whose consensus was initially marketed as a way of increasing transaction throughput. The small number of nodes in EOS (21 at any one time) allowed greater throughput to be achieved than earlier chains as the system is more centralized. However, more recent consensus designs can support high throughput without resorting to restrictions on the number of nodes. It was also proposed that the dPoS consensus offsets the technical centralization of the protocol while addressing some of the defects of conventional PoS consensus schemes.
There are many variants of dPoS, but the general principle is that the governing foundation selects a group of entities that it considers to be eligible for becoming validators. This typically requires the candidate to be known publicly and to fulfil other obligations specified by the chain’s governance. The candidates then canvass other stakeholders and offer incentives for them to delegate their tokens towards them. The candidates that receive the most delegated stake are then elected as validators. This process has superficial similarities with stake pools but several important differences. Most importantly, the returns for being a validator are guaranteed and delegated stake is not subject to any financial risk.
Since individual users have little influence over the election of validators and no “skin-in-the-game”, their choice of candidate will be based primarily on the size of the incentive that they are offered. This looks very much like institutional bribery and it is unsurprising that dPoS chains are prone to manipulation and specifically the formation of cartels. Once established, these are difficult to displace, as stake delegated to members of the cartel is pooled to the exclusion of everyone else. The cartel can further exploit the semi-permissioned process for becoming a validator to create further barriers to new entrants. A remarkable example of this can be found on the Lisk blockchain, where a cartel openly advertizes that it controls a majority of the nodes on the network. Worse still, the cartel also offers incentives to users to discourage them from voting for its competitors.
Resolving the Blockchain Cost Trilemma
With these factors in mind, we set out to design a consensus scheme that overcame the limitations of PoS, and that genuinely delivered on the promise of dPoS consensus but without its major drawbacks. In doing so, we first discuss two unrealised advantages of dPoS that, when implemented correctly in the PoS-uD consensus, can bring great benefits to the platform in terms of cost, efficiency and security. The two key features are a restriction on the number of nodes that operate the network and an economic design that provides an opportunity for smaller investors to delegate stake to validators securely and without promoting corruption.
At first glance, restricting the number of nodes might appear to be a backward step since protocols such as Algorand are designed to support huge numbers of validators that can even run on desktop computers with intermittent internet access. While an impressive achievement from a cryptographic and distributed computing point-of-view, the economic inefficiency of the Algorand protocol will cause it to be uncompetitive with PoS-uD for enterprise applications. To explain why this is the case, we introduce the cost trilemma for blockchains and how it informed the design of PoS-uD.
The cost trilemma reflects the trade-off between three properties that are all desirable but are in conflict with each other. In the trilemma, cost refers to the total operating cost of the blockchain, which should ideally be kept as low as possible. Another property is security, which reflects how expensive it is for an attacker to own 51% of the stake and take control of the consensus. And finally, decentralization, which refers to the ledger’s degree of replication.
The total operating cost reflects all aspects of running the ledger, and is recovered from users of the network in two ways. The first way is direct charges in the form of transaction fees. The second is indirectly through the issuance of new tokens, also known as inflation³. If the issuance is set at a high rate, such as the 5% or more charged historically on the Bitcoin, Ethereum or EOS networks, it means that users’ assets are depreciating at approximately the same rate. For a network designed for the economy-of-things and extremely high transaction throughput it is essential that the operating costs be kept low.
On the other side of the cost trilemma are the properties of security and decentralization, which are both funded by the operating cost. Decentralization is short-hand for the cost of operating the physical infrastructure that maintains the network. At one extreme, a small number of nodes is cheap to operate but centralized, and at the other, massive replication of the ledger by low-powered machines, as proposed by Algorand, will be prohibitively expensive. Technical innovations such as effective sharding can be used to achieve greater decentralization at lower cost, which is one of the reasons that third-generation designs, such as Fetch.AI, are likely to displace older protocols.
Beyond the cost of purchasing and running the compute resources for a node, validators are also subject to the opportunity cost of locking their tokens as stake. Opportunity cost is a quite abstract concept, but can be thought of as the cost of not being able to use the staked tokens for other purposes such as active trading. What matters from a consensus point-of-view is that the opportunity cost increases proportionally to the amount of tokens that are staked. An individual can therefore decide how much to bid to become a validator based on whether they expect a profit after accounting for operating and opportunity costs.
The consequence of the economic factors is that the quantity of staked tokens, and therefore the security of the protocol, depends directly on the size of the charges that have been levied on users after subtracting the cost of physically operating the network. This relationship inspired Fetch.AI to develop a consensus that made it possible to choose the optimal trade-off between the different aspects of the cost trilemma, which is one of the major benefits of PoS-uD. The alternative used by many other projects such as Ethereum and Algorand leads to coupling of security and operational costs, and are likely to lead to much higher fees for users as a result.
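The trade-off can be made concrete with a back-of-the-envelope calculation. Every figure below is an illustrative assumption:

```python
# Back-of-the-envelope cost trilemma. All inputs are illustrative assumptions.

fee_revenue = 2_000_000   # annual transaction fees paid by users, $
issuance = 8_000_000      # annual value of newly issued tokens, $
node_cost = 50_000        # yearly cost of running one validator node, $
validators = 100          # degree of replication ("decentralization")

operating_cost = fee_revenue + issuance    # what users pay in total
infrastructure = node_cost * validators    # spent on physical nodes
security_budget = operating_cost - infrastructure
print(f"infrastructure: ${infrastructure:,}; security budget: ${security_budget:,}")
# Doubling the validator count doubles infrastructure spend, leaving less of
# the fixed operating cost to reward (and thereby attract) staked tokens.
```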
Secure Stake Delegation
The other important design criterion in the development of PoS-uD was to enable stake to be delegated more securely and efficiently than in “standard” PoS but without the potential for abuse that is associated with dPoS. Implemented properly, stake delegation has several benefits for the blockchain. The main benefit is that it can greatly decrease the barriers to participating in the consensus, which increases both the decentralization and security. The decentralization is improved by increasing the number and diversity of participants in the consensus while greater security is achieved by the increasing the quantity of staked tokens.
The PoS-uD staking design is referred to as being “unpermissioned” since becoming a validator does not require any kind of approval by the Fetch.AI foundation, and instead relies on competition in the market. Another key component of the design is that the delegation of stake to another party does entail some financial risk. In particular, the delegated stake could be lost if the validator launches an attack on the protocol. The presence of risk means that users are likely to demand that validators have a public identity, operate their nodes with transparency and efficiency, and offer good returns for staking tokens. Collectively, these different elements contribute to establishing a reputation for validators.
The validator’s dependence on reputation is beneficial for the overall security and function of the platform. This arises from what economists call dynamic incentives, which mean that most of the benefit that a validator can expect to earn from operating a node will be realised in the future. Dynamic incentives increase the system’s security, as the benefits of good behaviour extend beyond the reward that is immediately available. Another advantage of this approach is that it establishes validators as parties who act in the interests of users in the governance of the chain.
Bringing it All Together with PoS-uD
The final step of the development of the PoS-uD consensus was to design a market mechanism that fulfilled all of these objectives. In doing so, we wished to ensure that the costs of becoming a validator are constant to ensure that the risks and rewards are uniform for all participants in the consensus. The auction mechanism that is used in our staking program has been designed to fulfil these requirements.
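One standard way to make validator costs uniform is a uniform-price sealed-bid auction, in which every winner pays the same clearing price. The sketch below illustrates that general idea only; it should not be read as the exact production mechanism:

```python
# Generic uniform-price sealed-bid auction for N validator slots.
# Every winner stakes the same clearing price, so costs (and therefore risks
# and rewards) are uniform. This is an illustrative design sketch, not a
# description of Fetch.AI's production mechanism.

def run_auction(bids: dict, slots: int):
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [name for name, _ in ranked[:slots]]
    # Clearing price: highest losing bid (or lowest winning bid if none lose).
    clearing = ranked[slots][1] if len(ranked) > slots else ranked[-1][1]
    return winners, clearing

bids = {"v1": 900, "v2": 850, "v3": 800, "v4": 700, "v5": 650}
winners, price = run_auction(bids, slots=3)
print(winners, price)  # ['v1', 'v2', 'v3'] 700, so all three stake 700
```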
The auctioning of validator rights also enables the bidders to be certain of the rewards that are available, which enables them to offer a defined interest rate for the delegation of stake by retail investors. This staking model is complementary to our minimal agency scheme for operating the blockchain once the validators have been selected, and can also be adapted to accommodate developments of the underlying ledger technology. These features combined with a means for navigating the cost trilemma make the Fetch.AI platform the ideal economic basis for decentralized applications such as exchanges, prediction markets and the machine-to-machine economy.
While most blockchain projects are focussed on improving the underlying technology, the cause of the unsatisfying user experience on many existing platforms is often flawed economics. The Fetch.AI ledger is being designed from the bottom-up with sound economic principles combined with technical innovations. These developments are aimed at driving user adoption in the short term and supporting our ambitions for the agent economy in the future.
1. Validators are nodes that take part in the consensus protocol equivalently to miners in PoW chains.
2. dPoS consensus has proven to be quite popular, and is used by Lisk, Tron, Neo, Steem, Bitshares and Tezos to name a few well-known examples.
3. The technically correct term for new token issuance is seignorage.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9459789395332336,
"language": "en",
"url": "https://larryferlazzo.edublogs.org/2008/08/22/the-best-sites-for-students-to-create-budgets/",
"token_count": 541,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.07568359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5a320243-170d-4e2e-8d24-e042f973546a>"
}
|
This short “The Best…” list is sort of an addendum to The Best Sites For Learning Economics & Practical Money Skills. Even though there are some great financial literacy sites on that list, there really isn’t anything there that students can use to create a budget for themselves — either reflective of their present income and expenses or what they anticipate for the future.
There are tons of online budgeting tools, but most, I think, are not particularly accessible to English Language Learners. As with all of my “The Best…” lists, though, I will only include online applications that I think will be accessible to ELL’s (and are free to use).
Here are my choices for The Best Sites For Students To Create Budgets:
Career Zone California has revised their exceptional online student budget calculator.
(There’s now a site that will provide you with a localized budget of what you need to live in any city or town in the United States. It’s called The Living Wage Calculator, and has been developed by people at Pennsylvania State University.)
Numbeo shows the cost-of-living in just about every country in the world, and many cities in the United States.
Pear Budget is good for students who don’t live in California. It, too, has a step-by-step guide. However, it doesn’t have the information needed for students to realistically develop their budget — they would have to research the specifics elsewhere. But the site is very clear what budget categories students would need to use, and it’s very clear how to input the information. You can use the site without saving the information for free, and then you can get a free thirty-day trial before you have to start paying for it. But you can just have students complete it and print it out without doing any sign-up at all.
Living On A Budget is a good interactive that’s accessible to English Language Learners. It’s one of many resources on a site called “The Mint.”
Finally, Practical Money Skills For Life has a variety of simple and reasonably accessible calculators for a variety of financial issues.
Planwise is a new free tool that seems to me just about the most ambitious web tool out there for budgeting. It may be a little too complicated for English Language Learners, but it’s worth a look.
As usual, feedback and additional suggestions are always welcome.
If you’ve found this post useful, you might want to consider subscribing to this blog for free.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9378727674484253,
"language": "en",
"url": "https://www.eda.admin.ch/aboutswitzerland/en/home/wirtschaft/uebersicht/export.html",
"token_count": 268,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1708984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:0baa6a64-a58f-4fe5-b17d-197110d7185f>"
}
|
Poor in commodities but rich in a highly qualified workforce, Switzerland maintains intensive trade relations with the rest of the world.
Switzerland's economy is highly dependent on foreign trade. In 2012, total exports (goods and services) amounted to CHF 285.8 billion, while total imports amounted to CHF 220.8 billion. Switzerland regularly runs a trade surplus; in 2012 it came to CHF 65 billion.
The service sector (banks, insurance, tourism) accounts for a significant share of Switzerland's foreign trade: 29% of all exports and 20% of all imports.
Trade in goods
In 2012 Switzerland exported CHF 200.6 billion worth of goods. 57% of this amount involved exports to EU countries. Germany is Switzerland's main trading partner, with a 20% share of exported goods. Switzerland's most important export goods are chemical and pharmaceutical products (CHF 79 billion), watches (CHF 44 billion), and machinery (CHF 33.3 billion).
Of the goods worth CHF 176.8 billion imported in 2012, 75% came from EU countries. Germany alone delivered 31% of imported goods. The largest shares of imported goods were accounted for by the chemical and pharmaceutical industry (CHF 39.4 billion), the machine industry (CHF 29.4 billion), and the watch industry (CHF 19 billion).
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.931315004825592,
"language": "en",
"url": "https://www.geeksforgeeks.org/top-10-cloud-computing-research-topics-in-2020/?ref=rp",
"token_count": 1513,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0299072265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f0473208-dbf0-424f-9022-234b60a5c754>"
}
|
Cloud computing has suddenly seen a spike in employment opportunities around the globe, with tech giants like Amazon, Google, and Microsoft hiring people for their cloud infrastructure. Before the onset of cloud computing, companies and businesses had to set up their own data centers and allocate resources and IT professionals, thereby increasing costs. The rapid development of the cloud has led to more flexibility, cost-cutting, and scalability.
The cloud computing market is at an all-time high, with a current market size of USD 371.4 billion, and is expected to grow to USD 832.1 billion by 2025! It's quickly evolving and gradually realizing its business value, attracting more and more researchers, scholars, computer scientists, and practitioners. Cloud computing is not a single topic but a composition of various techniques which together constitute the cloud. Below are the 10 most demanded research topics in the field of cloud computing:
1. Big Data
Big data refers to the large amounts of data produced by various programs in a very short duration of time. It is quite cumbersome to store such voluminous data in company-run data centers, and gaining insights from it is a tedious, time-consuming task, so the cloud is the best option. All the data can be pushed onto the cloud without the need for physical storage devices that must be managed and secured. Also, some popular public clouds provide comprehensive big data platforms to turn data into actionable insights.
2. DevOps
DevOps is an amalgamation of two terms, Development and Operations. It has led to Continuous Delivery, Integration, and Deployment, thereby reducing the boundaries between the development team and the operations team. Heavy applications and software need elaborate and complex tech stacks that demand extensive labor to develop and configure, work that cloud computing can largely eliminate. It offers a wide range of tools and technologies to build, test, and deploy applications within minutes and a single click. They can be customized as per client requirements and discarded when not in use, making the process seamless and cost-efficient for development teams.
3. Cloud Cryptography
Data in the cloud needs to be protected and secured from foreign attacks and breaches. To accomplish this, cryptography in the cloud is a widely used technique to secure data present in the cloud. It allows users and clients to easily and reliably access shared cloud services, since all the data is secured using either encryption techniques or the concept of the private key. It can make the plain text unreadable and limits the view of the data being transferred. The best cloud cryptographic security techniques are the ones that provide security without compromising the speed of data transfer or delaying the exchange of sensitive data.
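As a concrete illustration of the encryption side, one common pattern is client-side symmetric encryption before data ever reaches the provider. The sketch below uses Python's `cryptography` package and is a simplified example, not a production key-management scheme:

```python
# Minimal sketch of client-side encryption before cloud upload.
# Real deployments add key management (e.g. a KMS), key rotation and
# integrity auditing; this only shows the core idea.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, kept in a key-management service
cipher = Fernet(key)

plaintext = b"quarterly sales figures"
ciphertext = cipher.encrypt(plaintext)   # what the cloud provider stores
restored = cipher.decrypt(ciphertext)    # only key holders can read the data

assert restored == plaintext
```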
4. Cloud Load Balancing
It refers to splitting and distributing incoming load across servers from various sources. It permits companies and organizations to govern and supervise workload or application demands by redistributing, reallocating, and administering resources between different computers, networks, or servers. Cloud load balancing encompasses holding the circulation of traffic and demands that exist over the Internet. This reduces the problem of sudden outages, improves overall performance, lowers the chance of server crashes, and also provides an advanced level of security. Cloud-based server farms can accomplish more precise scalability and accessibility using the server load balancing mechanism, so workload demands can be easily distributed and controlled.
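To make the idea concrete, the toy dispatcher below implements plain round robin, the simplest balancing strategy; real cloud load balancers layer health checks, weights, and session affinity on top of this basic loop:

```python
# Toy round-robin load balancer: spreads incoming requests evenly
# across a pool of servers. Purely illustrative.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._pool = cycle(servers)   # endlessly iterate over the pool

    def route(self, request):
        server = next(self._pool)     # pick the next server in turn
        return server, request

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
for i in range(5):
    server, _ = lb.route(f"request-{i}")
    print(server)   # 10.0.0.1, 10.0.0.2, 10.0.0.3, 10.0.0.1, 10.0.0.2
```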
5. Mobile Cloud Computing
It is a mixture of cloud computing, mobile computing, and wireless networks that provides services such as seamless and abundant computational resources to mobile users, network operators, and cloud computing professionals. The handheld device is the console, and all the processing and data storage take place outside the physical mobile device. Some advantages of mobile cloud computing are that there is no need for costly hardware, battery life is longer, data storage capacity and processing power are extended, synchronization of data is improved, and availability is high due to "store in one place, accessible from anywhere". The integration and security aspects are taken care of by the backend, which enables support for an abundance of access methods.
6. Green Cloud Computing
A major challenge in the cloud is energy utilization, and hence the development of energy-efficient, environmentally friendly cloud computing solutions. Data centers that include servers, cables, air conditioners, networks, etc. in large numbers consume a lot of power and release enormous quantities of carbon dioxide into the atmosphere. Green Cloud Computing focuses on making virtual data centers and servers more environmentally friendly and energy-efficient. Cloud resources often consume so much power and energy that they contribute to energy shortages and affect the global climate. Green cloud computing provides solutions to make such resources more energy-efficient and to reduce operational costs. This pivots on power management, virtualization of servers and data centers, recycling vast e-waste, and environmental sustainability.
7. Edge Computing
It is an advancement and a much more efficient form of cloud computing, built on the idea that data is processed nearer to its source. Edge computing means that computation is carried out at the edge of the network itself rather than on a centrally managed platform or in data warehouses. Edge computing distributes various data processing techniques and mechanisms across different locations, so data is delivered to the nearest node and processed at the edge. This also increases the security of the data, since it stays closer to the source, and reduces response time and latency without affecting productivity.
8. Containerization
Containerization in cloud computing is a procedure for achieving operating-system-level virtualization. It packages a program together with its dependencies so the user can run it on remote resources. Containers serve as building blocks that aid operational effectiveness, version control, developer productivity, and environmental stability. The infrastructure is upgraded since containers provide additional, granular control over resources. The usage of containers in online services assists storage with cloud computing data security, elasticity, and availability. Containers provide certain advantages such as a steady runtime environment, the ability to run virtually anywhere, and low overhead compared to virtual machines.
9. Cloud Deployment Model
There are four main cloud deployment models, namely public cloud, private cloud, hybrid cloud, and community cloud. Each deployment model is defined by the location of the infrastructure. The public cloud allows systems and services to be easily accessible to the general public; it can also be less reliable since it is open to everyone, e.g. email. A private cloud allows systems and services to be accessible only inside an organization, with no access for outsiders; it offers better security due to its access restrictions. A hybrid cloud is a mixture of private and public clouds, with critical activities performed using the private cloud and non-critical activities using the public cloud. A community cloud allows systems and services to be accessible to a group of organizations.
10. Cloud Security
Since the number of companies and organizations using cloud computing is increasing at a rapid rate, the security of the cloud is a major concern. Cloud computing security detects and addresses every physical and logical security issue across the varied service models of software, platform, and infrastructure. It addresses these services collectively, though they are delivered in units, that is, via the public, private, or hybrid delivery model. Security in the cloud protects data from leakage, theft, calamity, and deletion. With the help of tokenization, virtual private networks, and firewalls, data can be secured.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9364948868751526,
"language": "en",
"url": "https://www.producer.com/news/are-food-companies-influencing-fertilizer-use/",
"token_count": 907,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2307672e-9717-4009-b610-6d054a01a46a>"
}
|
Retailers are catering to consumer backlash on agricultural inputs, and growers are being told to produce more with less
Many people in agriculture, including many farmers, remain skeptical about sustainable farming.
Many say the phrase is meaningless, or call it a fad that will amount to nothing.
Brian Arnall isn’t one of those people. In October, Arnall, an Oklahoma State University soil scientist, spent two hours communicating with a representative of Walmart, answering questions about wheat production in Oklahoma.
“They (Walmart) want the ins and outs. Inputs in, inputs out. (Nutrient) losses here and losses there,” said Arnall, who spoke at the Canola Discovery Forum, an agronomy conference held late October in Winnipeg.
Walmart is interested in how crops are produced because the company is committed to sustainable sourcing of food.
As part of that commitment, Walmart has partnered with organizations like the Environmental Defense Fund, a massive environmental group, to “produce more food with fewer resources.”
“Working with supplier companies … we (Walmart) will provide increasing visibility over the next 10 years to agricultural yields, greenhouse gas emissions, and water usage, and drive adoption of best practices in sustainable agriculture,” Walmart says on its website.
When the largest grocer in the world makes such statements, it's a sign that the farming practice of chasing the highest possible crop yields at all costs is over, said Arnall, who specializes in precision nutrient management at OSU.
“The mindset of maximizing yield without regards (for environmental or social consequences) is not going to work.
“We cannot maintain that thought process as an agricultural community going into the future. Not because of economics, not because of crop production, because of politics.”
Mario Tenuta, University of Manitoba soil scientist, agreed crop production will need to change. But the philosophical shift away from maximum yield is just underway.
“It will rather be the most economical yield considering price of fertilizers, and impact on environment and ecology will be the goal.”
Two major forces will push farmers to think differently about fertilizer and yield: government regulators and end users, Arnall said.
U.S. states like California and Florida have introduced restrictions on fertilizer use, because the public will no longer accept cropland nutrients flowing into lakes, rivers or oceans.
In Canada, the federal government plans to introduce a carbon tax, which is expected to drive up the cost of nitrogen and penalize farmers who use excessive amounts of fertilizer.
Regulators will likely impose change, but Arnall thinks corporations like Walmart and General Mills will ultimately have more influence.
“They’re all chasing right now for that sustainability model,” he said. “Companies are using that (sustainability) to sell…. One guy (a corporate rep) actually said we want good stories.”
Grocery chains and food companies may crave narratives around sustainable ag, but they also want metrics.
“How much nitrogen per ton of wheat produced?” Arnall said, recalling conversations with sustainability reps. “How much phosphorus was applied versus how much was removed?”
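The metrics the sustainability representatives are asking about are simple ratios. A back-of-the-envelope sketch, with entirely hypothetical field numbers, shows the arithmetic:

```python
# Hypothetical field record; all numbers are invented for illustration.
n_applied_kg = 9_000    # nitrogen applied across the field
wheat_tonnes = 300      # wheat harvested
p_applied_kg = 2_000    # phosphorus applied
p_removed_kg = 1_700    # phosphorus removed in the harvested grain

n_per_tonne = n_applied_kg / wheat_tonnes   # kg of N per tonne of wheat
p_balance = p_applied_kg - p_removed_kg     # surplus P left behind

print(f"{n_per_tonne:.1f} kg N per tonne of wheat")  # 30.0
print(f"{p_balance} kg P surplus")                   # 300
```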
Adopting a mindset of ‘more with less’, or more yield with the same amount of fertilizer, could be challenging for many Canadian producers.
Over the last decade or so, rates of nitrogen applied to wheat, canola and corn have increased.
“This is because genetic and agronomic management improvements result in capitalizing on greater N additions to produce more yield,” Tenuta said.
Soil scientists and agronomists across North America believe the 4R nutrient stewardship program will help end the ‘more is better’ philosophy.
The 4Rs stand for:
- right source
- right rate
- right time
- right place
Arnall said this sort of nutrient stewardship is inevitable because buyers of grain and the food industry will make it happen.
The only question is whether they will use a carrot or a stick.
“Are they going to incentivize… or will there be dockages applied?” he said. “I have a feeling they’re going to first try the carrot before the stick… I’m hoping the carrot comes first.”
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9749054312705994,
"language": "en",
"url": "https://www.rnz.co.nz/international/pacific-news/311539/poverty-in-marshalls-'worse'-than-figures-suggest",
"token_count": 169,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.30078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:465bde81-3145-4b8b-877c-7df355cff613>"
}
|
A domestic poverty line calculation for the Marshall Islands shows that up to 37 percent of the population lives below a basic needs income level.
The level was estimated by the Asian Development Bank based on data from the 2011 national census.
The basic needs income was defined as being $US 14.50 per person per week in urban areas, and $US13.60 in rural areas.
Sixty percent of people in rural areas and 28 percent in urban areas live below that level.
But our correspondent in Majuro, Giff Johnson, said the figures might understate the problem.
"It's very likely that the poverty situation could very well be worse, I think we'd have to see updated figures to really know but there has not been much in the way of significant development or expansion of employment in the five years since the national census was conducted."
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9548330903053284,
"language": "en",
"url": "http://www.oknativeimpact.com/education-impact/",
"token_count": 534,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.3046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2cff42fc-572b-4520-8d91-e9c90a8737d1>"
}
|
Tribes paid over $1.5 billion in exclusivity fees, of which over $1.3 billion has been earmarked for public education.
In 2017, Oklahoma tribes spent $80.5 million for tribal education programs, scholarships and donations to Oklahoma education institutions.
When combined with the exclusivity fees, Oklahoma tribes delivered $198 million for Oklahoma education in 2017.
Oklahoma's economic future depends on a well-educated and trained workforce. Oklahoma's Tribes understand that investing in education helps drive economic growth and quality of life for Indians and non-Indians alike. Millions in annual tribal investments in public and higher education, including colleges and vocational training, provide lasting benefits for students, schools, and families now and for generations to come.
Exclusivity fees from gaming revenue are established in compacts between individual tribes and the state of Oklahoma. More than $1.3 billion has been paid to the state of Oklahoma since 2006. Exclusivity fees reached nearly $134 million in fiscal year 2017 and almost $139 million in 2018.
Other direct funds from tribal governments and businesses that flow into local school districts across the state come from tribally issued Motor Vehicle Tag sales, annual donations, scholarships and other financial support. This direct support topped $80.5 million in 2017. Tribal business and job creation, often in rural Oklahoma communities, also contribute to the health of the local economy and property taxes which fund local schools.
Oklahoma’s K-12 schools significantly benefit from federal dollars for public education based on Oklahoma’s large population of Native American students. Under the Johnson-O’Malley Act, enacted in the 1930s, federal funds are appropriated to states and distributed to local school districts and individual schools based on the size of that school’s Native American student population. This additional funding directly impacts all students by increasing school budgets for everything from teacher salaries to textbooks and technology, facilities or transportation.
Tribes also invest in higher education through donations and partnerships with Oklahoma’s universities and other educational institutions, but more commonly, through direct assistance and scholarships to students. Tribal higher education financial support ranges from annual scholarships, tuition assistance and housing assistance.
Other direct tribal support for students and families of all ages includes robust social services and assistance programs. From school clothing, school supplies, transportation and after school care, the assistance programs are vital for successful, educated children and families, especially in under-served rural Oklahoma communities.
Oklahoma Tribes have a rich history of investing in education. Today’s continued investment is an investment in all of Oklahoma for future generations.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9687144756317139,
"language": "en",
"url": "https://602a5fde2490f.site123.me/fundraising-for-small-businesses",
"token_count": 754,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.09033203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5ff619fa-7755-4f5e-a85c-6a03672f4608>"
}
|
Today, nearly every business, whether service-based or sales-based, has taken advantage of the ability to raise funds through the Internet by creating websites for a variety of purposes, such as a business fundraiser, a charitable organization, or a student fundraiser. But for some businesses, the goal is not to raise money so they can be better corporate citizens, but to raise money so they can make even more money. Such businesses may be able to obtain grants from various state, federal, and private entities to achieve their business goals.
One type of business grantor is a government agency, such as the United States Small Business Administration (SBA). The SBA generally provides loans and other financial resources to small businesses that are in need of start-up funding, capital financing, expansion, and management assistance. While this funding is primarily intended for businesses with significant start-up costs, there are also funds available for businesses that are not necessarily small businesses, such as franchisees, contractors, consultants, and others.
Another type of business grantor is a private grant-maker, which provides funds to non-profit organizations for a variety of purposes. These funds can be used for business marketing and promotional activities, advertising campaigns, student activities, community development, and the like. Sometimes, these organizations will award grants based upon need rather than on the ability of the business to profit from a particular activity. For example, if a business needs help getting started but cannot come up with the capital to do so, it will not be awarded funding as a business fundraiser but might be eligible for financial assistance based upon its community development efforts.
Private foundations and special interest groups have also begun to provide funds for business fundraising activities. One of the most popular among these is the YWCA, which has been around for over a century and was founded to help women advance in the area of public and private service. The YWCA operates many different programs, including one that gives women in recovery the chance to raise money for their living expenses. Other groups may provide funds for various projects, ranging from assisting women's groups in starting their own business to helping women reach their goals in other areas, such as politics and the art world. In many cases, these organizations require an annual contribution to their cause; however, they do reserve the right to solicit funds in any way that they see fit.
The third type of business organization that provides funding is the Catholic Family Charities. Unlike many other business grants, the Catholic Family Charities does not award monies directly to businesses. Instead, it conducts many services, such as providing assistance for victims of natural disasters and social services for those who are in need of housing, food, and clothing. Because the Catholic Family Charities receives funds from tax-dollars, it may be more willing to work with businesses that are members of the organization.
Businesses should be careful when choosing among these three types of fundraising strategies. Each one has its advantages and disadvantages, so it is important to evaluate each carefully before deciding which is right for a particular business. When properly planned and executed, fundraising can be an effective means of creating additional income for small businesses. The key is to find a fundraising partner that can handle all of your business's needs while providing a good return on your investment. Contact a qualified business-grant provider today to learn more about how you can maximize your business's fundraising efforts.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9550049901008606,
"language": "en",
"url": "https://asiatimes.com/2018/11/the-threat-of-silent-inflation/",
"token_count": 1166,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.369140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9308a147-9f65-4338-a693-94789e993347>"
}
|
In many countries, inflation has become so low and stable in recent decades that it appears to have faded into the woodwork. Whereas galloping inflation was once widely viewed as the No 1 economic problem, today most people – at least in the developed countries – hardly ever talk about it or even pay attention to it. But “silent inflation” still has subtle effects on our judgment, and it may still lead to some consequential mistakes.
Since New Zealand’s central bank set the first example in 1989, monetary authorities around the world have increasingly pursued a policy of setting inflation targets (or target ranges) that are substantially above zero. That is, policymakers plan to have inflation, but steady inflation. What used to be a dirty word is now announced publicly, and moderation is enforced.
Central Bank News tabulates these targets for 68 countries. The European Central Bank targets annual inflation in 2018 at “below, but close to, 2%.” In Canada, Japan, South Korea, Sweden, the United Kingdom and the United States, the 2018 inflation target is 2%. China and Mexico target 3% annual price growth. In India and Russia, the target rate is 4%. It is 5% in Ukraine and Vietnam, and 6% in Azerbaijan and Pakistan.
Some countries have had double-digit inflation targets. Egypt has set a target of 13%, plus or minus 3 percentage points, for this year. But most countries have set their 2018 inflation targets at between 2% and 6%.
It is worth translating these annual inflation targets to longer-term inflation, assuming that the target is not changed in coming years. Inflation of 2% per year implies 22% inflation over a decade, or 81% inflation over 30 years. That will make numbers measured in currency look a lot bigger over time, even if nothing real is changing.
It is a lot worse if one considers a 6% inflation rate. At that pace, prices would rise 79% in 10 years and almost sixfold in 30 years.
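Those long-horizon figures follow directly from compound growth and are easy to verify; the quick check below reproduces the numbers quoted above (any differences are rounding only):

```python
# Compound-growth check for the inflation figures quoted above.
def cumulative_inflation(annual_rate, years):
    return (1 + annual_rate) ** years - 1

for rate in (0.02, 0.06):
    for years in (10, 30):
        print(f"{rate:.0%} a year for {years} years -> "
              f"prices up {cumulative_inflation(rate, years):.0%}")
# 2% for 10 years -> 22%;  2% for 30 years -> 81%
# 6% for 10 years -> 79%;  6% for 30 years -> 474% (almost sixfold)
```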
Such policies cause a sort of magnification of the present in the minds of most people. Suppose you ask someone who has been living in the same house for 30 years what he or she paid for it. The purchase price will probably look ridiculously small. If one is not careful to remember the effects of inflation on all prices, it might seem that we are living in a magnificently successful new era. With silent inflation, it can be easy to forget that the truth is much less dramatic.
At the same time, in an age of Internet rumors and fake news, the world today can look a little unmoored from history. That might create a sense of real risk.
Inflation targeting has other effects, too, which seem to be more on the minds of central bankers.
In his influential 1998 book Inflation Targeting, Ben Bernanke and his co-authors advised policymakers to announce a target inflation rate because it “communicates the central bank’s intentions,” which would “reduce uncertainty.” The announced rate should be substantially positive, they wrote, because if officials tried to get it close to zero, any mistake could result in deflation, which “might endanger the financial system and precipitate an economic contraction.”
As Federal Reserve chairman from 2006 to 2014, Bernanke formally introduced inflation targeting in the United States in 2012, setting the annual rate at 2%, where it has remained ever since.
But reducing uncertainty about prices by keeping the inflation target at 2% or more might actually increase a sense of uncertainty about real things like home values or investments. While it is right to worry about massive deflation, the historical relationship between deflation and recession is not all that strong.
In a 2004 paper, economists Andrew Atkeson and Patrick Kehoe concluded that most of the evidence of a relationship comes from just one case: the Great Depression of the 1930s.
The news media’s tendency to fixate on new records serves their short-term interest in creating the impression that something really important has happened that justifies readers’ or viewers’ attention. But sometimes there is a bit of fakery in the record, especially when the record is described in nominal terms and we have steady inflation. As a result, the emphasis on records can encourage a disrespect for history and nurture a sort of disoriented feeling that we live in exceptionally uncertain times.
For example, sometimes the stock market has set a new record, whether up or down, which is nothing more than the result of inflation. On February 5 of this year, the Dow Jones Industrial Average fell 4.6%, far below the record 22.6% decline on October 19, 1987. But media reports chose to point out that the February 5 drop was the biggest-ever one-day decline in absolute terms (1,175 points on the DJIA).
Presenting a drop this way is misleading, and might encourage some panic selling. The amplitude of stock-market point swings invariably grows with general inflation in all prices.
The money illusion even bleeds into impressions of the “strength” of the economy, as if a high level of GDP growth or a bull market are indicators of the health of something called the economy. GDP growth numbers are conventionally reported in real (inflation-adjusted) terms, and unemployment numbers are unit-free. But reporting of just about every other major economic indicator is generally not corrected for inflation.
An inflation target of a few percentage points may seem to promote stability, and perhaps it really does. But we need to consider the possibility that it may lead to subtle misperceptions that have the opposite effect on the stability of our judgments.
Copyright: Project Syndicate, 2018.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9349161386489868,
"language": "en",
"url": "https://bankingallinfo.com/what-are-the-reasons-against-foreign-trade/",
"token_count": 677,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.357421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:bdb3b034-6cf3-40f1-83be-7295eed38708>"
}
|
What are the reasons in favour of foreign trade?
The reasons in favour of foreign trade are given below:
The following points explain the need and importance of foreign trade to a nation.
- Division of labor and specialization
Foreign trade leads to division of labor and specialization at the world level. Some countries have abundant natural resources; they should export raw materials and import finished goods from countries which are advanced in skilled manpower. This benefits all the countries involved and thereby leads to division of labor and specialization.
- Optimum allocation and utilization of resources
Due to specialization, unproductive lines can be eliminated and wastage of resources avoided. In other words, resources are channelized for the production of only those goods which would give highest returns. Thus there is rational allocation and utilization of resources at the international level due to foreign trade.
- Equality of prices
Prices can be stabilized by foreign trade. It helps to keep the demand and supply position stable, which in turn stabilizes the prices, making allowances for transport and other marketing expenses.
- Availability of multiple choices
Foreign trade helps in providing a better choice to the consumers. It helps in making available new varieties to consumers all over the world.
- Ensures quality and standard goods
Foreign trade is highly competitive. To maintain and increase the demand for goods, the exporting countries have to keep up the quality of goods. Thus quality and standardized goods are produced.
- Raises standard of living of the people
Imports can raise the standard of living of the people, because they provide a choice of new and better varieties of goods and services. By consuming new and better varieties of goods, people can improve their standard of living.
- Generate employment opportunities
Foreign trade helps in generating employment opportunities by increasing the mobility of labour and resources. It generates direct employment in the trade sector and indirect employment in other sectors of the economy, such as industry and services (insurance, banking, transport, communication).
- Facilitate economic development
Imports facilitate economic development of a nation. This is because with the import of capital goods and technology, a country can generate growth in all sectors of the economy, i.e. agriculture, industry and service sector.
- Assistance during natural calamities
During natural calamities such as earthquakes, floods, famines, etc., the affected countries face the problem of shortage of essential goods. Foreign trade enables a country to import food grains and medicines from other countries to help the affected people.
- Maintains balance of payments position
Every country has to maintain its balance of payments position. Since imports result in an outflow of foreign exchange, a country also exports in order to secure an inflow of foreign exchange.
- Brings reputation and helps earn goodwill
A country which is involved in exports earns goodwill in the international market. For example, Japan has earned a lot of goodwill in foreign markets due to its exports of quality electronic goods.
- Promotes World Peace
Foreign trade brings countries closer. It facilitates the transfer of technology and other assistance from developed countries to developing countries, and the economic relations arising out of trade agreements create a friendly atmosphere for avoiding wars and conflicts. It thus promotes world peace, as trading countries try to maintain friendly relations among themselves.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9361380338668823,
"language": "en",
"url": "https://efface.eu/quantitative-analysis-environmental-crime",
"token_count": 368,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.4140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:38c8a99b-e843-491e-adfd-43a6f5e28460>"
}
|
The EFFACE project undertook a three-phase analysis of the quantitative and monetary impacts of different types of environmental crime. The first phase sought to evaluate the availability of data for ten identified types of environmental crime. The report, "Understanding the damages of environmental crime: Review of the availability of data", evaluates the availability of data on:
- Waste: landfills and dumping
- Illegal waste shipment from Europe
- Pollution incidents
- Protected Areas
- Illegal trade in chemicals
Data was found to be highly variable, and significant gaps existed for specific kinds of environmental crime. For example, there was little to no data on environmental crimes in protected areas, nor was there sufficient data for illegal trade in chemicals. The lack of sufficient data made it difficult to provide a robust estimate of the overall impacts of environmental crime. However, a second phase of analysis selected five specific areas of environmental crime, where sufficient data existed, for further in-depth analysis.
The five areas of environmental crime analysed in depth were:
- Arson events
- Illegal wildlife trade in rhino and elephant
- Marine pollution
- Illegal WEEE shipments from the EU to China
- Illegal wildlife trade in Horsfieldii Tortoise
The research found that data can be useful for understanding the impacts of environmental crime. For some areas of environmental crime, gaps in data can be overcome by linking together data from different sources. For example, in the report on illegal wildlife trade in rhinos and elephants, population data in combination with poaching data was used to gauge the rate of extinction of elephant and rhino. Although the analysis was economic, the reports found many qualitative impacts of environmental crime that were not easily quantified in monetary terms but that did have important effects on economic development, public health, political institutions and the environment.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9456581473350525,
"language": "en",
"url": "https://gradeup.co/ugc-net-study-notes-on-wage-theories-i",
"token_count": 1402,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.361328125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:845ec150-99ca-4a62-8e9e-365216672c8b>"
}
|
- Wages are monetary payments to workers for performing work. Wages are paid on an hourly, daily or weekly basis and play a key role in boosting workers' morale, raising their living standard and motivating them to improve productivity.
- The factors that affect wages are the demand and supply of labour, the employer's ability to pay, trade unions, the cost of living, current wage rates, job requirements, and state regulations.
Different theories have been propounded for explaining the nature of wages. Let’s understand some of the important wage theories.
1- Subsistence Theory:
- In 1817, the subsistence theory was given by David Ricardo. This theory sees labour as part of the population and says that wages tend toward the level that gives each worker sufficient food, clothing and shelter for survival.
- The subsistence theory was propounded on the basis of the assumption that workers or labours are just like a commodity which is bought and sold in the market. As per this theory, the subsistence level determines the wages of the workers.
- If wages rise above the subsistence level, the population of labourers will increase and, as a result, the supply of labour will rise. Consequently, wages will be driven back down.
- On the other hand, if wages given to workers fall below the subsistence level then there will be a decrease in the supply of labour due to a fall in the population. Consequently, wages will increase.
- The subsistence theory is also known as ‘Iron Law of Wages’.
2- Wage Fund Theory:
- This theory was given by John Stuart Mill in the mid-nineteenth century. The wage fund theory was propounded on the assumption that workers are paid out of a pre-determined fund of wealth. This fund is formed from the savings of the previous year's operations of the organisation.
- The wage level or wage rate is determined by the amount of wage fund and the total number of workers.
- According to this theory, if the wage fund is large, the wages paid to the workers will also be more. Also, if the number of workers is reduced then the wage rate will increase.
- The wage fund theory is considered rigid, as it holds that bargaining power or trade unions cannot raise the general wage level; even if they try to do so, this will only discourage the accumulation of capital.
- The wage fund theory is criticized because it tells how to determine the wage rate but does not describe the sources of the wage fund. Another drawback is that it does not mention any method of estimating the wage fund.
3- Surplus Value Theory of Wages:
- This theory was propounded by Karl Marx. According to his theory, labour is treated like a commodity that can be purchased by paying a 'subsistence price'.
- As per this theory, the surplus between the labour cost and product price should be given to the labour.
- Marx suggests that the displacement of labour is dysfunctional to the system and it will eventually destroy capitalism.
4- Residual Claimant Theory:
- This theory was given by Francis A. Walker. He considered wages as a residue which is nothing but a mere portion of total revenue left after deducting other expenses like rent, interest, taxes and profits.
- This theory is criticized because in practice it is the entrepreneur, not the worker, who is the residual claimant. It also ignores the influence of labour unions on wage determination.
5- Marginal Productivity Theory:
- This theory was given by John Bates Clark. As per this theory, wages of the workers are determined on the basis of the level of contribution made by the marginal worker.
- The marginal productivity theory assumes that there is a certain quantity of workers that seeks employment.
- The firm hires up to the point where the wage rate equals the addition that the marginal worker makes to total production; this determines the marginal unit of workers employed. The theory assumes that production is carried out under conditions of diminishing returns to labour.
- The shortcoming of the marginal productivity theory is that it fails to explain the differences in wages.
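A stylized numerical illustration may help; the output figures below are invented purely to show the hiring rule (the firm keeps hiring until the marginal product falls to the going wage):

```python
# Stylized example of the marginal productivity rule. Output values
# are invented for illustration only.
total_output = [0, 100, 190, 270, 340, 400, 450]   # value of output by headcount

marginal_product = [total_output[n] - total_output[n - 1]
                    for n in range(1, len(total_output))]
# -> [100, 90, 80, 70, 60, 50]: diminishing returns to labour

wage = 70
hired = sum(1 for mp in marginal_product if mp >= wage)
print(hired)  # 4: the 4th worker adds exactly 70 (= wage); the 5th adds only 60
```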
6- Bargaining Theory of Wages:
- This theory was given by an American economist, John Davidson.
- According to the bargaining theory of wages, the workers and the employers negotiate to determine the wages and the hours of work.
- As per this theory, the upper and lower limit of the wage rate is fixed and the actual wage rates depend on the bargaining power of both the employer and the worker.
- The upper limit is the rate above which the employer will abstain from hiring a certain group of workers, whereas the lower limit is the rate below which workers refuse to work.
7- Institutional wage theory:
- This theory says that the wage level is determined on empirical and quantitative grounds, and that it is important to make region-cum-industry comparisons.
- It is an inter-disciplinary approach to compensation that includes such considerations as the influence of collective bargaining, wage experience and so on.
- The theory suggests that one must analyse compensation on a dynamic, continually changing basis.
8- Supply and demand theory:
- This theory was given by Alfred Marshall. According to him, the demand and supply of labour play a very important role in determining the wages of the labours.
- According to this theory, the demand price of the worker is determined by the marginal productivity of a single/individual worker. The supply of labour means the number of workers searching for employment for earning wages. The demand for labour refers to the number of workers needed by the organisation.
- The supply of labour will rise with the rise in the number of working hours and an increase in the wage rate. The demand for labour depends upon the productivity of labour, technology, product demand and the cost of capital inputs.
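With simple linear schedules (coefficients invented for the example), the equilibrium wage where supply meets demand can be solved directly:

```python
# Toy linear labour market; coefficients are invented for illustration.
def labour_demand(w):       # firms demand fewer workers as the wage rises
    return 1_000 - 20 * w

def labour_supply(w):       # more workers seek jobs as the wage rises
    return 100 + 25 * w

# Equilibrium: 1000 - 20w = 100 + 25w  =>  45w = 900  =>  w = 20
w_star = 900 / 45
print(w_star, labour_demand(w_star))   # 20.0 and 600 workers employed
```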
9- Investment theory:
- This theory was given by M. Gitelman. As per this theory, the compensation of the worker is determined by the rate of return on the employee's investments, such as education, training and development programmes, and experience.
- Generally, the wider the labour market is, the higher the wages.
10- National income theory:
- This theory was propounded by John Maynard Keynes and is also known as the Full Employment Wage Theory.
- According to the national income theory, full employment is a function of the national income of the country.
- National income is equal to the total of consumption plus private or public investment.
- If the national income falls below the level that commands full employment, then it is the responsibility of the federal government to manipulate any one or all of the three variables (consumption, private investment and public investment) so as to increase national income and return to full employment.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9573529362678528,
"language": "en",
"url": "https://thediplomat.com/2019/01/south-koreas-hydrogen-economy-ambitions/",
"token_count": 1613,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1279296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a589f63f-5830-45a5-a1a9-569c85451567>"
}
|
In a speech earlier this month, South Korean President Moon Jae-in laid out a vision for South Korea to develop the technology and infrastructure needed for a hydrogen-based economy.
To transition the Korean economy to hydrogen the Moon administration put forward a roadmap centered on three elements – increasing the production and use of hydrogen vehicles, expanding the production of fuel cells, and building a system for the production and distribution of hydrogen.
A key component of the plan rests on the development of South Korea as a leader in hydrogen fuel cell electric vehicles (FCEVs). At the moment, very few FCEVs are sold worldwide and they trail plug-in electric vehicles as an alternative to existing combustion engines. Through the first 11 months of 2018, plug-in electric vehicles were on pace to sell 2 million units and account for about 2 percent of automobiles sold worldwide. In contrast, by the end of 2017, only about 6,500 FCEVs had been sold worldwide, with over half of those sales in California. In 2018, sales of FCEVs only accounted for 2,300 units in the United States, with 1,700 being the Toyota Mirai.
Despite being the first company to commercially produce an FCEV, South Korean firm Hyundai has only produced a little less than 2,000 to date. Under Moon’s plan, production of FCEVs in South Korea would double to 4,000 units this year and rise to over 80,000 units by 2022. The goal is to grow the market domestically and abroad to reduce per unit costs, with the expectation that the price of an FCEV will drop to around $27,000 once annual production reaches 100,000 units.
Related to its FCEV production goals, the Moon administration’s roadmap also calls for South Korea to provide subsides for the introduction of FCEV taxis, as well as to put 2,000 public buses and 820 police buses powered by hydrogen on the road.
One obstacle to creating a new market for FCEVs, as with plug-in electric vehicles, is the development of an infrastructure of fueling stations. At the moment, there are fewer than 40 in the United States, primarily in California. In Europe, most fueling stations are in Germany, where the joint venture H2 Mobility Deutschland currently has 52 refueling stations and expects to grow that number to 100 by the end of the year. In South Korea, there are currently only 15 refueling stations for FCEVs, though the government is looking to add 71 this year. In contrast, there are nearly 25,000 charging stations for electric vehicles across the United States and Canada.
To grow the number of refueling stations in South Korea, the government plans to loosen regulations by creating a regulatory sandbox that will allow domestic producers to experiment with new technologies without the concern of being burdened by regulations.
Outside of the transportation sector, the government aims to use fuel cells for household and commercial power generation. Under the current roadmap South Korea would aim to produce 15 gigawatts of power through fuel cells for industrial use by 2040, with 8 gigawatts for domestic industrial use, or 7 percent of its power generation. Another 2.1 gigawatts is expected for household use.
To move the hydrogen needed for its plans, South Korea is considering the construction of a pipeline to transport hydrogen around the country.
Hydrogen has a potentially unique appeal to the Moon administration as an alternative power source. In his remarks, Moon noted that South Korea is dependent on imports for 95 percent of its energy. While South Korea largely imports petroleum from the Middle East, it is also dependent upon imports for LNG, coal, and the nuclear fuel used to run its nuclear power plants.
Hydrogen, in contrast, is relatively abundant in nature and is a zero emission fuel. While fossil fuels are generally used to extract hydrogen from water, excess wind and solar power can be used to separate the hydrogen from the oxygen in water through electrolysis. The development of an efficient domestic hydrogen production and distribution system would allow South Korea to reduce its dependence on energy supplies from abroad.
Another advantage of the shift to hydrogen is improved air quality. In recent years, air pollution has become an increasing problem as Korea has experienced multiple days of dangerously high levels of fine dust, also known as PM2.5, and its air quality is now the worst in the OECD.
While China is one source of fine dust, a significant portion of the air pollution is produced domestically, with automobile emissions being one of the main sources. The switch to hydrogen vehicles would reduce air pollution, even if fossil fuels continued to be used to separate the hydrogen needed to power their fuel cells. If the Moon administration’s target for FCEVs by 2030 is met, it estimates the switch would reduce South Korea’s fine dust pollution by 10 percent annually.
One potential avenue for using hydrogen that the Moon plan does not currently cover is burning hydrogen to generate power. Existing coal and gas plants can be converted to burn either 100 percent hydrogen or a 30 percent hydrogen, 70 percent natural gas mixture that would reduce carbon emissions by 10 percent. Mitsubishi Hitachi Power Systems is already working to convert one gas turbine in the Netherlands by 2024. Converting existing South Korean power plants to hydrogen could be an important part of any future shift towards a hydrogen economy.
There have also been no reports indicating that the Moon administration intends to tie its renewable energy goals to the use of wind and solar power to run electrolysis to produce hydrogen. This would further reduce South Korea's dependence on carbon-based fuels and improve its energy security.
Domestically, the plan has backing from Hyundai and the city of Ulsan. Prior to Moon's remarks, Hyundai had already set a goal of producing 500,000 FCEVs annually by 2030, and Ulsan had set the goal of becoming a leading center for the hydrogen economy. By 2030 the city plans to have 40 percent of its buses run on hydrogen fuel cells (the first started this year), add 60 hydrogen refilling stations, and have 15 percent of personal vehicles run on hydrogen.
However, South Korea’s goal to become a leader in the hydrogen economy will face competition. As previously noted, the Toyota Mirai is the leading FCEV and Japan already has a more extensive hydrogen refilling network with 97 stations and it is set to grow to 160 by 2020. Japan is also looking to make strides in fuel cell buses and has set a goal of 800,000 FCEVs by 2030. Germany is looking to lead European efforts and has set a goal of 1.8 million units by 2030. While it trails other countries, China is pushing into the market as well. It provided $12.4 billion in subsidies last year and has set a target of 1 million vehicles by 2030. Firms in the United States are also pursuing the technology and California’s fuel emission standards could encourage production.
In the end, South Korea may be its own obstacle to success. If it is to succeed in transforming its economy and becoming a global leader in hydrogen, it will need policy continuity beyond the Moon administration — something that hasn’t happened in the past. In the late 1990s, Korea also began supporting research in hydrogen power, but when the Lee Myung-bak administration took over in 2008 the emphasis shifted to the promotion of nuclear power and support for the development of hydrogen declined. Now the current administration is looking to phase out nuclear power. There is increasing support for hydrogen in the National Assembly, where six bills are pending, which may help steady policy, but to succeed this time the policy shifts of the past will need to be avoided.
Troy Stangarone is currently a Posco Visiting Fellow at the East-West Center. He is on leave from the Korea Economic Institute where he is the Senior Director for Congressional Affairs and Trade.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9407406449317932,
"language": "en",
"url": "https://uintacrypto.substack.com/p/what-does-1-trillion-mean-to-you",
"token_count": 842,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.455078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d674c5c2-0e5d-416f-9b3a-0e67e24cdb17>"
}
|
A few days ago, our kind and generous government agreed to print another $1.9 Trillion for the next stimulus package. This is the third stimulus package since March 27, 2020, bringing the total stimulus to $5 Trillion.
Do you adequately comprehend how much $5 Trillion actually is? I promise you don’t! I’ve been trying to comprehend it for 12 months and still don’t think I really do.
First off, the government collects about $3.5 Trillion in taxes every year. If there were no real consequences to printing trillions of dollars, why don't they stop requiring us to pay taxes and just keep the money printer rolling?
They will never stop making us pay taxes, because there really ARE consequences to continually printing money. And you start to realize the consequences when you start to understand the magnitude of $5 Trillion dollars. Let me see if I can help.
In the image above, you can see the difference between stacking $1 million versus $1 billion versus $1 trillion in hundred-dollar bills (the heights only work out for $100 bills; in one-dollar bills each stack would be a hundred times taller). The comparison is 3.3 ft versus half a mile versus 631 miles. That's basically the distance between Salt Lake City and Phoenix!
In terms of time, 1 million seconds is about 11 days. 1 billion seconds is about 31.7 years. And 1 trillion seconds is about 31,700 years. That's right, over 31 THOUSAND years!
Think of it this way: how long would it take you to count to 3,600 if each number took 1 second? It would take you 1 hour. Following that math, counting to 1 million would take 278 hours, which is about 11 days. And 1 trillion is 1 million million. So to get to 1 trillion, it would be 11 days x 1 million, which is 11 million days, which is more than 30,000 years!
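You can verify these conversions yourself in a few lines (the exact figures depend on whether you use 365 or 365.25-day years):

```python
# Verify the seconds-to-time conversions quoted above.
SECONDS_PER_DAY = 86_400
DAYS_PER_YEAR = 365.25            # average length, counting leap years

for n in (10**6, 10**9, 10**12):
    days = n / SECONDS_PER_DAY
    years = days / DAYS_PER_YEAR
    print(f"{n:,} s = {days:,.1f} days = {years:,.2f} years")
# 1,000,000 s = 11.6 days = 0.03 years
# 1,000,000,000 s = 11,574.1 days = 31.69 years
# 1,000,000,000,000 s = 11,574,074.1 days = 31,688.09 years
```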
If you want a fun visualization, check out the video below:
The next thing to do is put this into perspective with Bitcoin. Bitcoin's market cap is about $1 Trillion, so the U.S. government basically replicated the market cap of Bitcoin 5 times in 1 year.
What’s more important to realize is that the amount of Bitcoin is fixed at 21 million coins. Check out the images below that demonstrate the amount of Bitcoin and the amount of US Dollars created over time.
You will notice the US Dollar is being created with no end in sight, whereas the amount of Bitcoin created is diminishing as the years go on. The amount is fixed. The US Dollar is becoming less scarce at an exponential rate, while Bitcoin is increasing in scarcity at an exponential rate. It's a simple lesson in scarcity.
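Bitcoin's hard cap is not a promise; it falls straight out of the issuance schedule: the block subsidy starts at 50 BTC and halves every 210,000 blocks. A quick sketch of the arithmetic:

```python
# Bitcoin's fixed supply emerges from its halving schedule: the block
# subsidy starts at 50 BTC and halves every 210,000 blocks.
subsidy = 50.0
blocks_per_era = 210_000
total = 0.0

while subsidy >= 1e-8:            # issuance stops below one satoshi
    total += subsidy * blocks_per_era
    subsidy /= 2

print(f"{total:,.0f} BTC")        # ~21,000,000: the hard cap
```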
I hope all of these images and thought exercises help you realize what is going on with our money. What are you doing to protect your purchasing power?
Short term price analysis for Bitcoin remains tricky. The best thing to do is to continue dollar cost averaging. I’m predicting the price to remain choppy through April. I expect things to accelerate for all cryptocurrencies starting in May and lasting through July.
Crypto Accelerator Course Update
For those considering my Crypto Accelerator Course, I wanted to let you know that I will be increasing the price in the next couple weeks, so if you want to lock in the $750 price, be sure to do it before the end of March.
Also, I've changed the format a bit. I've made the course available on-demand, so you can go through it at your own pace and on your own time. In addition, I will be holding weekly 2-hour Q&A sessions to address any questions. Plus, you will be invited to a private Slack group to network and learn from others who have gone through the course. Please let me know if you have any questions; you can always schedule 15 minutes on my calendar.
Have a great week and don’t forget to HODL!
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9779025316238403,
"language": "en",
"url": "https://www.amazonsale.in/stock-trading-the-history-of-the-nyse",
"token_count": 1283,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.158203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b173af7a-c56b-461e-af76-506e53b40571>"
}
|
THE HISTORY OF THE NYSE
The NYSE, or the New York Stock Exchange, is a US stock exchange located on Wall Street in New York City. The NYSE has a market cap that exceeds US$16 trillion, which makes it the world's largest stock exchange, with an average of US$169 billion in daily trading value as of 2013.
As of 2014, the NYSE, which is also called "the Big Board," listed more than 1,900 companies, of which around 1,500 were American.
The New York Stock Exchange is owned by Intercontinental Exchange and regulated by the Securities and Exchange Commission.
The New York Stock Exchange was founded on 17 May 1792, when 24 stockbrokers signed the Buttonwood Agreement on Wall Street in New York City.
In what has become the stuff of legend, the 24 brokers met under a buttonwood tree and created a centralized exchange for the rapidly growing securities market in America.
The agreement eliminated the need for auctioneers, who were used very frequently for tobacco, wheat, and other common commodities, and fixed a commission rate.
The NYSE made the Tontine Coffee House its headquarters and initially focused on government bonds.
Twenty-five years later, on March 8, 1817, the organization became official with the creation of the New York Stock & Exchange Board, which was later renamed the NYSE.
During the early 1800s, the New York Stock Exchange started expanding beyond bank stocks and government bonds.
As it happened, New York surpassed Philadelphia to become the new financial capital of America.
Advances in telegraphic communication enabled the buying and selling of stocks by telegraph, which created new ease in trading and was a step toward expansion and modernization.
Membership in the NYSE increased monumentally and soon became more exclusive.
There was excitement in the air: gold had been discovered in California, and by the start of the Civil War, securities and commodities were being traded on the NYSE.
There were many experiments with the location of the exchange, but it finally settled at its present address, 11 Wall Street, in 1865.
The Neo-Classical building became a National Historic Landmark in 1978.
A new era began in 1878, when telephones were installed, giving investors direct access to brokers on the exchange floor.
As activity increased, the number of members was capped at 1,060 seats, and new entrants were required to purchase seats from retiring members.
The New York Stock Exchange struggled amid international turmoil from the later part of the 1800s through the end of World War I. Then, in late October 1929, the stock market crashed, setting off a decline that erased almost 90 percent of share prices.
This crash led to heavy regulation by the American government, and the NYSE subsequently registered with the United States Securities and Exchange Commission.
A second crash came on 19 October 1987, when the Dow Jones Industrial Average dropped 508 points, the biggest crash since 1929.
Today the technology used at the NYSE includes cell phones and super-fast computers, which have enabled high-speed transactions and revolutionized the stock markets forever.
New York Stock Exchange Trading
When a company registers with the New York Stock Exchange, typically to raise capital, its shares become available for public trading. Traders who want to invest in the stock market can buy and sell those stocks online through exchange-member brokerages.
Trading on the trading floor takes place through floor brokers and Designated Market Makers.
The NYSE assigns a Designated Market Maker to each stock to provide liquidity; it is the only exchange that requires this assignment.
Opening and closing bells are rung at the start and finish of each trading day. The NYSE operates Monday through Friday, from 9:30 a.m. to 4:00 p.m. ET.
It has been a tradition of sorts since 1870 to invite market participants such as CEOs, celebrities, and other eminent personalities to ring the bell.
Almost all trading is automated, with the exception of some high-priced stocks, which makes the NYSE the leading hybrid market.
Manual trades take about nine seconds, while electronic trades execute within a second. Trading runs in a continuous auction format.
At present, investors only need to find a brokerage that is an NYSE member; through it, they can buy and sell stocks and other products at the quotes the NYSE provides to the brokerage.
PRODUCTS PROVIDED BY NYSE
The NYSE holds five regulated markets, including the New York Stock Exchange, NYSE Arca, NYSE MKT, and NYSE Amex Options.
Medium-sized and large companies are listed on the NYSE, while smaller companies are listed on NYSE MKT.
Asset classes traded on the NYSE include equities, options, exchange-traded funds (NYSE Arca), and bonds (NYSE Bonds).
Several major market indices track NYSE-listed companies, including the S&P 500, the NYSE Composite, the NYSE US 100 Index, the Dow Jones Industrial Average, and others.
COMPANIES LISTED ON NYSE
Raising US$55 billion in 2013, the NYSE is at present the world's largest IPO venue.
Companies listed on the NYSE are identified by ticker symbols; for example, Ford Motor Company trades under the ticker F.
Around 20 percent of NYSE listings are financials, including trusts, insurance, and others.
Other major industries covered by the NYSE include technology and telecommunications, consumer goods and services, healthcare, and oil and gas.
Major corporations listed on the NYSE include Ford Motor Co, Bank of America, General Electric Co, Twitter Inc., and many more.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9542568922042847,
"language": "en",
"url": "https://www.asisonline.org/security-management-magazine/articles/2014/10/rethinking-recovery/",
"token_count": 1310,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.031494140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8bad5f65-0691-4bcb-967e-4eefb6601cf8>"
}
|
ON CHRISTMAS NIGHT, 1802, fire ripped through the city of Portsmouth, New Hampshire, destroying a seaport that was a crucial outlet for commerce in newly founded America. A few weeks later, Congress implemented the first act of federal disaster relief in American history. “What did the federal government do? It did what it would do for the next 200-plus years—it wrote a check,” says Brad Kieserman, acting assistant administrator for recovery at FEMA’s Office of Response and Recovery (ORR). “We’ve been doing one version or another of that ever since. We haven’t been particularly innovative.”
But now, innovation is sorely needed. Officials say that the current model of relying on federal largesse for relief and long-term recovery is not sustainable. According to a recent study by the Center for American Progress, the government spent $136 billion from fiscal year 2011 to fiscal year 2013 on disaster relief.
“The federal government can’t fix it all. The federal government is not an endless pot of money,” says Daniel Craig, chairman of the Disaster Recovery Contractors Association.
Craig, Kieserman, and other disaster recovery experts came together recently to discuss ideas for a new model of response and relief at “Expert Voices–Future Innovations for Long-Term Disaster Recovery,” held at the National Press Club in Washington, D.C. The event was moderated by Admiral Thad Allen, a former commandant of the U.S. Coast Guard who is now senior vice president at Booz Allen Hamilton.
To bring more innovation to disaster relief, experts agree that a new paradigm must be established so that the roots of a strong recovery are planted before the event occurs. “My message to you is: Recovery doesn’t start after the disaster. It doesn’t start in the response phase. It starts well before the response phase,” said Joseph Nimmich, associate administrator at ORR.
Such a new paradigm would use predictive data on weather patterns to anticipate storm cycles and their likely effects. “We need to take that data and look at where the projections from science are and what the weather changes are going to be, and then we need to have our community and city planners plan ahead of time,” Nimmich advised.
“So we are not arguing at the time of the event, but we know—these houses can’t be built back up. And we are telling the people that, when a flood occurs, you’re going to be bought out,” says Nimmich. “We’re planning that buy-out long before the event ever occurs,” he said.
In addition, government can be more proactive in signing contracts with private companies for services that will be needed after a disaster, such as temporary housing. This type of up-front investment and preparation makes it easier for the government to control costs, and it is ultimately cheaper than a massive post-disaster aid package, such as the $60 billion Congress approved in the wake of Hurricane Sandy, experts say.
And from the private sector side, such advance agreements are often welcomed because they allow a company to ensure that they will have clients during challenging times. “When no one else is buying the service because the community is devastated, I can guarantee that the government, or the utility, or someone else, will,” Kieserman said.
Contracts and relationships with nonprofit groups can also be forged before a disaster occurs. These connections can be important, experts say, because nongovernmental organizations (NGOs) offer a range of resources that are sometimes underused.
One of the primary strengths that NGOs bring to the table is flexibility—they can tailor service delivery based on circumstances on the ground, and they don’t require a presidential declaration to act, according to Jeff Jellets, territorial director of emergency disaster services for the Salvation Army. Moreover, NGOs are grassroots organizations, arising from the communities they serve. “They know the unique characteristics of those communities,” Jellets said. NGOs offer a variety of services, including health care, meals, shelter, reconstruction, and other resources.
However, those looking to better leverage NGOs in disaster relief efforts need also be mindful of a few challenges, Jellets added. The wide range of services they offer can make managing and coordinating them difficult. NGOs can also be dependent on fundraising.
Pre-disaster preparation is critical because it is often difficult to make progress on long-term measures after a crisis, according to Glenn Cannon, director of the Pennsylvania Emergency Management Agency. "It's hard to get a community to think about long-term planning implications…or how to build resilient flood-resistant structures when they still are reeling from the impacts of that catastrophic disaster event," Cannon said.
Allen, who directed the federal response to Hurricane Katrina in 2005, agreed. He emphasized the importance of short-term efforts in the aftermath of a disaster that focus on helping private sector businesses get up and running again. “When you have a devastating event, you have loss of continuity of government...but you also lose the continuity of society. And that includes all the economic transactions that take place that actually drive the revenue base and let the city recover. So it’s really essential that you open the Walmart, the Home Depot, the Lowe’s—even the Waffle House becomes very important,” Allen said.
In the end, experts say that what is needed for a more innovative disaster response system is a change in mindset. The idea of greater focus on strategic predisaster preparation must gain currency in the minds of more Americans. Similarly, more people—such as those living in areas where rising coastal waters make some waterfront housing unsustainable—must move away from the idea that the object of a relief program is always restoration.
“At every level, we need to change our thinking, if we are going to move forward in the future,” Nimmich said. “At the personal level, when you go through a disaster, the expectation has become somebody will help you get back to exactly the way it was before. That’s not what a disaster is. A disaster is a life-changing moment.”
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9324303865432739,
"language": "en",
"url": "https://www.infodev.org/crowdfunding",
"token_count": 363,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.052734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1a09f0cd-d99d-4415-94b0-75dedfab8892>"
}
|
Crowdfunding—the practice of raising funds from multiple individuals via the web—first emerged in an organized form in the low-investment environment of 2008, and has quickly grown into a multi-billion dollar industry projected to reach $5 billion this year, channeling funding to hundreds of thousands of ventures globally.
Crowdfunding combines the traditional practice of raising funds from friends, family and community for projects or business launches, with the power of the Internet, mobile technology, and social networks to drive donations and investment. It essentially democratizes financing, putting the decision to fund new ventures in the hands of the communities and customers who would benefit the most.
The revolutionary power of crowdfunding also extends to the realm of international development, the report suggests. Preliminary modeling estimates that the crowdfunding market in developing countries could reach up to $96 billion a year over the next 25 years, provided current regulatory, infrastructure, and cultural challenges are addressed.
In a foreword to the report, AOL co-founder Steve Case highlights crowdfunding’s potential in enabling the “Rise of the Rest,” and calls for further study of appropriate regulation and investor protections.
Organizations such as the World Bank, governments, venture funds, and NGOs are watching crowdfunding closely to see whether it has the potential to solve the “last mile funding problem” faced by many start-up companies.
To assist them in making the most of crowdfunding, the report provides practical guidance via a self-assessment tool, a set of policy recommendations, and suggestions for the World Bank and infoDev to pursue the topic further.
The report also includes a special case study section on climate and clean energy technology addressing the applicability and opportunity of crowdfunding to infoDev’s Climate Innovation Centers.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9532073140144348,
"language": "en",
"url": "https://www.nibusinessinfo.co.uk/content/difference-between-quotation-and-estimate",
"token_count": 414,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.10888671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ca160787-0943-40f7-a236-1ed8f84e9ef1>"
}
|
Price lists, estimates, quotations and tenders
Difference between a quotation and an estimate
Some businesses simply cannot give standard prices for goods and services. This may be because the skills, time and materials required for each job vary depending on different customers' needs.
This situation is common in trades such as building work or producing custom products, where no two jobs are exactly the same. When it's not possible to work from a standard price list, you have to give a quotation or an estimate instead.
The main difference between a quotation and an estimate is that:
- a quotation is an agreed fixed price
- an estimate is an approximate price that may change
What is a price quotation?
A quotation is a fixed price offer that can't be changed once accepted by the customer. You must adhere to the quotation price even if you carry out more work than you expected. If you think this is likely to happen, it makes more sense to give an estimate. You can also specify in the quotation precisely what it covers, and situations that will lead to additional charges.
What is a price estimate?
An estimate is an educated guess at what a job may cost. It isn't binding. To account for possible unforeseen developments, you should provide several estimates based on various circumstances, including the worst-case scenario. This will prevent your customer from being surprised by the costs.
How to give a price quotation or estimate
To work out a quote or an estimate, you need to know your fixed and variable costs. These include the cost per hour of manual labour and the cost of the materials you'll need. You can then calculate your quote or estimate based on what you think the job will involve, as in the sketch below.
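As a rough illustration of that arithmetic, here is a small sketch; the labour rate, markup, and contingency figures are placeholders, not recommendations:

```python
def price_job(labour_hours, hourly_rate, materials_cost, markup=0.20):
    """A fixed quotation: total costs plus an agreed margin."""
    return (labour_hours * hourly_rate + materials_cost) * (1 + markup)

def estimate_range(labour_hours, hourly_rate, materials_cost,
                   markup=0.20, contingency=0.30):
    """An estimate: a likely price plus a worst-case figure."""
    base = price_job(labour_hours, hourly_rate, materials_cost, markup)
    return base, base * (1 + contingency)

quote = price_job(16, 35.0, 420.0)           # fixed price, binding once accepted
low, high = estimate_range(16, 35.0, 420.0)  # approximate, may change
print(f"Quotation: £{quote:,.2f}")
print(f"Estimate: £{low:,.2f} to £{high:,.2f}")
```

Giving the customer both ends of the estimate range mirrors the advice above about covering the worst-case scenario.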
You should provide all your quotes and estimates in writing, including a detailed breakdown. This will help to avoid any disputes about what work is included in your overall price. Be sure to state clearly whether it is a quotation or an estimate.
You could also set an expiry date, after which the quote or estimate will no longer be valid.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9303615689277649,
"language": "en",
"url": "http://getfreeessays.com/drivers-and-modes-of-entry-for-multinational-enterprises-economics-essay/",
"token_count": 1971,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0595703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:eac2e454-252d-46e4-aa78-ae6d0429adcd>"
}
|
According to Needle (2010), "A Multinational firm is one that operates and is managed from bases in a number of countries. Most Multinationals are large firms with diverse interest coordinated by a centrally planned strategy." The world economy has become more integrated, with many products being sold multinationally and firms operating in more than one country. For retailers who seek to expand into new markets, multinational plans remain an important stream of strategy (Hynes 2010). The reasons driving multinational enterprises to expand in numbers and operations are discussed below, along with the ways in which a firm can become multinational.
DRIVERS OF MULTINATIONAL ENTERPRISES:
Theoretically, the most comprehensive account of the factors necessary for "going multinational" was given by Dunning (1993). The three factors he identified under the "eclectic paradigm" are:
Ownership Factors: Dunning argues that in order to be successful in an overseas market, the multinational firm must hold some advantages like technology, brand image, high amount of finance, superior distribution channel and better organization and management than local firms.
Location Factors: There may be a number of reasons that operate in the host country like no or less import tariff, cheap labour, low business rates, availability of specific resource, etc., which acts as reason for a firm to operate overseas.
Internalization Factors: These relate to the extent of ownership, control, and risk a firm is willing to hold while going multinational. Across entry modes ranging from exports to a wholly owned subsidiary, the MNE chooses internalization where the market does not function properly or does not exist, since that would drive the transaction costs of the external route very high.
In general, the reasons for going multinational can be internal and external factors of the firm.
POTENTIAL FOR MNE

Internal (firm-specific) factors: technological know-how, product innovation, financial capital, brand image.

External factors: market factors, economic factors, competitive factors, environmental factors.
According to Eicher, Mutti and Turnovsky (2009), the internal or firm-specific advantages are:
Technological Know-How: A firm with special expertise in technological know-how needs new markets in which to exploit it. Such expertise can include a new process for producing a product, or upgraded production methods with minor changes to the process.
Product Innovation: Heavy investment in R&D that has resulted in product innovation can be a reason to enter new markets. To recover the R&D investment made in developing the new product, firms go international and spread their fixed costs.
Access to Financial Capital: For a firm with heavy financial capital that cannot be deployed domestically, it becomes imperative to go multinational and tap the potential of international markets.
Brand Image: A company's brand image and niche strategy can be used to exploit competitive advantage by narrowing the market segment within a clearly defined market sector. For example, fashion, aerated-drink, and cigarette companies use their brand recognition to enter foreign markets.
According to John et al. (1997) and Rugman and Collinson (2006), the external factors that drive multinational enterprises can be sub-divided into market, economic, competitive, and environmental factors.
Market factors include the emergence of customer groups with homogeneous needs, the emergence of new trading blocs such as the Asian Tigers, China, and Russia, the creation of global brands, global standardization and simplification of products, internationalized supply chains, protection against the cyclical problems of national economies, the smallness of the home market, a growing world market for goods and services, and higher prices on the international market.
Economic drivers include worldwide economies of scale in manufacturing and distribution, worldwide sourcing, significant differences in country costs, rising product development (R&D) costs, and the risks and uncertainties of the domestic business cycle. Firms go multinational to diminish the negative effects of the home country and tap the advantages of other countries.
Competitive factors include competitive interdependence among countries, increased foreign competition, the global moves of competitors, and the opportunity to pre-empt a competitor's global move. Following a "follow the competitor" strategy, many MNEs set up operations in response to competitors' moves.
Environmental factors pushing MNEs to grow in numbers include falling transportation costs, an increasing pace of innovation, technological change, improving communications, government policies, and the removal of international trade barriers, both tariff and non-tariff.
MODES OF ENTRY:
The decision criteria for mode of entry are market size and growth, risk, Government regulations, competitive environment, local infrastructure, company objectives, internal resources, flexibility, assets and capabilities and need for control (Katsioloudes and Hadjidakis 2007).
Hynes (2010) developed a model of three approaches for firms going multinational. The three approaches are:
Stage Approach: Under the stage approach, a firm goes multinational gradually, starting with markets close to the domestic market; the primary mode of entry is exports.
Network Approach: The network approach assumes that successful multinational business depends on developing business network relationships, consisting of formal and informal relationships and alliances with local or other firms at any stage of the value chain. The network approach can accommodate scarce financial and non-financial resources.
The Born Global Approach: The born global approach assumes the business has a global orientation from the outset.
The methods or modes of foreign market entry (Needle 2010; Moosa 2002; Holt and Wigginton 2002) are
Exporting: The simplest form of foreign market entry is exporting, where cost and risk are low. Exporting is done through foreign distributors, agents, and marketing subsidiaries. However, exporting firms may face problems in the form of tariffs and high transportation costs, which can increase the price of the product. In exporting, the production of goods or services remains in the home country. Exporting is the first stage in multinational growth, though exporting from the firm's home base may not be a good idea if lower-cost manufacturing locations can be found abroad. Initially, an enterprise with little international exposure may prefer to export under low-risk brokerage arrangements.
All other modes of entry have foreign production sources.
Licensing: Licensing may involve the supply of technological know-how or the use of a trademark or patent for a fee. This mode of entry creates a chance to generate revenue from markets that are otherwise not accessible. It requires only a little investment and is relatively low risk. Licensing is an effective option for companies that lack the managerial capabilities they would need to operate effectively abroad. The biggest disadvantage of licensing is the lack of control.
Franchising: Under franchising, firms enter foreign markets through a contractual agreement. Companies with strong brand names like McDonald's and KFC move into international markets by allowing foreign franchisees to sell their products. The firm provides technical and marketing assistance for an initial fee and corresponding royalties. Managing international franchises is a challenge worldwide, as franchising has become one of the fastest-growing economic sectors in nearly all developing and developed nations. Again, franchising involves little investment and risk, though it carries the risk of poor quality control. Small businesses are particularly suited to franchising.
Off-shore Outsourcing: Under off-shore outsourcing, in order to achieve cost-effectiveness in the production of goods and services, many firms contract out some activities to firms in foreign countries. Cost-effectiveness can be achieved because of lower labour costs and the cheap availability of raw materials. However, moving operations abroad creates employment in the host country while leaving unemployment at home. Lower-quality products, lower standards, and the transfer of technological know-how to the host country are some of the risks involved.
Joint Venture: A joint venture can be formed either with a host-country firm or government institution, or with another company that is foreign to the host country. Alliances can be formed flexibly, where one firm provides technical expertise or raises finance while the other provides strategic assets, local knowledge of the bureaucracy, and familiarity with local laws and regulations. It is a form of FDI, with joint ventures occurring with suppliers or on R&D projects under differing ownership and control arrangements. Shared ownership can lead to conflicts and battles for control between the firms.
Wholly-Owned Subsidiary: This mode of entry occurs through the merger with or acquisition of a firm, or by establishing greenfield operations in the host country. Generally, a weaker firm in the host country is acquired. Under greenfield operations, investment occurs when the firm establishes new production, distribution, or other facilities in the host country. This form of FDI involves more risk than other modes because it is a costly method of entering a foreign market in terms of capital investment. Firms doing this must bear the full capital cost and risks of setting up overseas operations.
[Figure: modes of entry ranked by the amount of ownership, risk, control, and profit sharing involved, from licensing and franchising at the low end up to M&A and greenfield operations at the high end.]
When a company decides on its multinational entry mode, choices and compromises will have to be made between the desired and necessary levels of control, capital investment, and expected profitability (Katsioloudes and Hadjidakis 2007). A clear understanding of the factors that motivate the firm to go multinational, together with choosing the best mode of entry after weighing these criteria, will help the firm succeed in its multinational operations.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8842779397964478,
"language": "en",
"url": "https://accounting-123.com/variance-analysis/",
"token_count": 622,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.068359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:58dfd305-d10f-454f-9f91-fc7ebc677c75>"
}
|
What is a variance analysis?
Variance analysis is a managerial accounting technique: it refers to the investigation of differences between actual and planned behavior.
Variance analysis typically involves isolating the different causes of variation in income and expenses over a given period relative to budgeted standards.
Types of variances
These are the main types of variances in use; a worked sketch of two of them follows the list:
- Purchase price variance. The actual price paid for the materials used in production minus the standard cost, multiplied by the number of units used.
- Labor rate variance. The actual price paid for the direct labor used in the production process, minus the standard cost, multiplied by the number of units used.
- Variable overhead spending variance. Subtract the standard variable overhead cost per unit from the actual cost incurred and multiply the result by the total unit quantity of output.
- Fixed overhead spending variance. The total amount of fixed overhead cost minus its standard cost for the period.
- Selling price variance. The actual selling price, minus the standard price, multiplied by the number of units sold.
- Material yield variance. The total standard quantity of materials that are supposed to be used is subtracted from the actual level of use and then multiplied by the standard cost per unit.
- Labor efficiency variance. Subtract the standard quantity of labor used from the actual amount and multiply it by the standard labor rate per hour.
- Variable overhead efficiency variance. Subtract the budgeted unit of activity on which the variable overhead is charged from the actual units of activity, multiplied by the standard variable overhead cost per unit.
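As a concrete illustration of two of these formulas, here is a minimal sketch with invented figures. Note the sign convention: a positive result on a cost variance is unfavorable, while a positive result on a revenue variance is favorable.

```python
def labor_rate_variance(actual_rate, standard_rate, actual_hours):
    # (actual price paid - standard cost) x number of units (hours) used
    return (actual_rate - standard_rate) * actual_hours

def selling_price_variance(actual_price, standard_price, units_sold):
    # (actual selling price - standard price) x number of units sold
    return (actual_price - standard_price) * units_sold

print(labor_rate_variance(22.50, 20.00, 1_000))    # 2500.0  -> unfavorable cost
print(selling_price_variance(9.75, 10.00, 4_000))  # -1000.0 -> unfavorable revenue
```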
Basis of calculation
A variance analysis highlights the causes of variation in income and expenses during a period by comparing actual results to the budget.
In order to make variances meaningful, the concept of a "flexed budget" is used when the variances are calculated. The flexed budget acts as a bridge between the original budget and the actual results.
A flexed budget re-states the original budget at the actual activity level of the business. The sales volume variance accounts for the difference between the budgeted profit and the profit in the flexed budget. All remaining variances are calculated as the difference between the actual results and the flexed budget.
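A small worked example of that bridge, using invented numbers:

```python
# Budget assumptions (illustrative only)
price, unit_variable_cost, fixed_cost = 50.0, 30.0, 10_000.0
budgeted_units, actual_units = 1_000, 1_200
actual_profit = 12_500.0  # taken from the books

def budget_profit(units):
    # Budgeted contribution margin at a given volume, less fixed costs
    return units * (price - unit_variable_cost) - fixed_cost

original_budget = budget_profit(budgeted_units)  # 10,000
flexed_budget = budget_profit(actual_units)      # 14,000 (re-stated at actual volume)

sales_volume_variance = flexed_budget - original_budget  # +4,000 favorable
other_variances = actual_profit - flexed_budget          # -1,500 unfavorable
```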
Functions and importance
Variance analysis is an important part of an organization's information system.
Functions of the variance analysis include the following:
Planning, standards and benchmarks
Standards and budgetary targets have to be set in advance, against which the organization's performance can be compared in order to calculate variances. This encourages forward thinking and a proactive approach to setting performance benchmarks.
A variance analysis facilitates "management by exception": deviations from standards that affect the financial performance of an organization are highlighted. If a variance analysis is not performed on a regular basis, these exceptions may go undetected, which can delay necessary management action.
The variance analysis facilitates performance measurement and control at the level of responsibility centers.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9553186893463135,
"language": "en",
"url": "https://education.nsw.gov.au/public-schools/schools-funding/resource-allocation-model.html",
"token_count": 109,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.08154296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:49c662a3-867b-4627-805d-4cb7e00ba273>"
}
|
Resource Allocation Model
The Resource Allocation Model (RAM) was developed to ensure a fair, efficient and transparent allocation of the state public education budget for every school.
Every school receives a School Budget Allocation Report (SBAR) in October that shows the full school funding allocation for the following year, including staffing and operational costs. The report assists schools to develop a budget and allocate funding to deliver on the strategic directions in their Strategic Improvement Plan (SIP). It includes each school's allocations for the seven loadings of the RAM.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.94675612449646,
"language": "en",
"url": "https://leapsummit.com/transitioning-to-renewable-energy-could-create-30-million-jobs/",
"token_count": 1014,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.04248046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:042192d0-3b77-4453-be95-33ee6a190315>"
}
|
Ten years after the publication of the first plan for powering the world with wind, water, and solar, researchers offer an updated vision of the steps that 143 countries around the world can take to attain 100% clean, renewable energy by the year 2050. The new roadmaps, published in the journal One Earth, follow up on previous work that formed the basis for the energy portion of the U.S. Green New Deal and other state, city, and business commitments to 100% clean, renewable energy around the globe, and use the latest energy data available in each country to offer more precise guidance on how to reach those commitments.
How China is leading the renewable energy revolution
At the start of 2017, China announced that it would invest $360 billion in renewable energy by 2020 and scrap plans to build 85 coal-fired power plants. Chinese authorities reported that the country was already exceeding official targets for energy efficiency, carbon intensity, and the share of clean energy sources. And just last month, China’s energy regulator, the National Energy Administration, rolled out new measures to reduce the country’s dependence on coal.
These are just the latest indicators that China is at the center of a global energy transformation, which is being driven by technological change and the falling cost of renewables. But China is not just investing in renewables and phasing out coal. It also accounts for a growing share of global energy demand, meaning that its economy’s continuing shift toward service- and consumption-led growth will reshape the resource sector worldwide.
At the same time, various other factors are reducing global resource consumption, including increased energy efficiency in residential, industrial, and commercial buildings, and lower demand for energy in transportation, owing to the proliferation of autonomous vehicles and ride sharing.
Despite these hurdles, technological innovation should help Chinese producers realize productivity gains and deliver savings to consumers. According to MGI, by 2035, changes in the supply and demand for major commodities could result in total cost savings of $900 billion to $1.6 trillion worldwide.
The scale of these savings will depend not only on how quickly new technology is adopted, but also on how policymakers and companies adapt to their new environment. But, above all, it will depend on China.
German town Wolfhagen is lighting the way with its renewable energy model
With 100% of its electricity coming from renewable sources (and more to spare), the German town of Wolfhagen is particularly demonstrative of what can be achieved when municipalities adopt innovative approaches to the ownership and governance of key infrastructure. Significant lessons can be drawn from Wolfhagen’s hybrid model of ownership, which can – and must – be applied to sectors beyond energy production.
Back in 2005, the local authority decided to take back the power. In what became the first step toward fulfilling Wolfhagen's plan to become fully self-sufficient in renewable energy, the city government decided not to renew the private company's licensing agreement, instead putting a public company, Stadtwerke Wolfhagen, in charge. Following a 2008 decision that all household electricity would be provided from local renewable resources by 2015, the town committed to building a solar power park and a wind farm.
Wolfhagen demonstrates that innovative approaches to the ownership and governance of utilities can not only unlock further cooperative capital investment, but also create new forms of democratic engagement in their governance. It’s precisely the creation of these democratic spaces that can enable citizens to move beyond individualistic efforts to “reduce their carbon footprint”, and instead place them at the core of innovation in delivering a just transition to a sustainable and democratic economy.
Californian renewable energy model
On a recent record-setting day, the California Independent System Operator (CISO) got 67.2% of its energy from renewables, not including hydropower or rooftop solar arrays. Adding hydropower facilities into the mix, the total was 80.7%. Sunny days with plenty of wind, along with full reservoirs and growing numbers of solar facilities, were the principal factors in breaking the record. The CISO controls 80% of the state's power grid.
While California is certainly leading the nation, other states and cities are following suit. Atlanta will run on 100% renewables by 2035, and Chicago will power all city buildings with renewables by 2025. The Las Vegas government has them both beaten, as it’s already 100% powered by renewables, and Nevada itself has a goal of 80% renewables by 2040. Massachusetts will be 100% renewables-powered by 2035, followed by Hawaii in 2045.
If you’d rather hear about goals that have already been achieved, New York State has increased its solar use by 800%. Block Island in Rhode Island has just switched entirely to wind power, shutting down a diesel plant. In fact, experts say that the eastern United States could get 13% of its energy from renewables by 2025, and we’ve already experienced days of more than 50% wind power running the entire country.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9552488327026367,
"language": "en",
"url": "https://thefinancialexpress.com.bd/views/tapping-on-trade-investment-nexus",
"token_count": 1748,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.07861328125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:248e2329-7d7c-4613-aba3-2aa62f893651>"
}
|
Bangladesh and India have long bonds in culture and history. Despite such bonds and neighbourly proximity, economic cooperation between the two countries has remained far below potential. A number of studies have shown that bilateral trade and investment offer immense opportunities for accelerating growth and reducing poverty in both the countries. These studies suggest that India could become a major player for accelerating the growth of intra-industry trade and uplifting foreign direct investment (FDI) inflow to Bangladesh. Also, for India, Bangladesh could become an additional source of trade as well as a critical destination for investment thus addressing many concerns relating to the economic isolation of its backward Eastern and North-Eastern states. Furthermore, better connectivity between Bangladesh and India through multi-modal transport and transit facilities will further enhance the strength of the economic relations between these two countries.
Although it experiences annual volatility, the overall trade between Bangladesh and India has increased over time, and the balance of trade has remained heavily in favour of India. Total exports from Bangladesh to India increased from US$ 50.2 million in 2001-02 to US$ 527.2 million in 2014-15 (which was only 0.1 per cent of India's total import). The share of Bangladesh's exports to India in the country's overall export increased from 0.3 per cent to around 1.5 per cent during the same period. On the other hand, India's exports to Bangladesh increased from about US$ 1019 million in 2001-02 to US$ 5.8 billion in 2014-15 (around 2.0 per cent of India's total export). At present, India is the second largest import source for Bangladesh. In 2014-15, the share of Bangladesh's import from India was around 16 per cent of the country's total import from the world.
Looking at the product details, we find that in recent years Bangladesh's exports to India (Figure 1) have been dominated by readymade garments (RMG) (HS code 6) and jute products (HS code 5). Bangladesh also exports products like textile articles, edible fruit and nuts, salt, fish, inorganic chemicals, mineral fuels, and raw hides and skins. In contrast, large parts of Bangladesh's imports from India have been raw materials and capital machinery (HS codes 5 and 8) (Figure 2), which are used in Bangladesh's export-oriented and domestic industries. At the product level, Bangladesh's imports from India over the last decade were chiefly cotton, vehicles and parts and accessories, machinery, cereals, man-made staple fibres, iron and steel, electrical machinery, organic chemicals, tanning or dyeing extracts, and plastics.
Though exports from Bangladesh were expected to increase significantly after the Indian government offered Bangladesh duty-free access for all products except 25 alcoholic and beverage items in November 2012, exports did not increase much after 2012. A number of challenges are responsible for this weak export response, relating to Bangladesh's limited export capacity, the lack of diversification in its export basket, and various non-tariff measures (NTMs) and procedural obstacles (POs) stemming from inadequate infrastructure and a lack of support facilities both at home and in the Indian market.
It is noteworthy that readymade garments (RMG) have become the major item in Bangladesh's exports to India on account of the duty-free market access granted by India. In 2009-10, the share of RMG was more than 28 per cent of Bangladesh's total exports to India, rising to 34.3 per cent by 2014-15. However, studies have shown that there are many products in which Bangladesh has large export capacities but actual exports to India are either very low or zero. For example, Figure 3 shows that for products in the HS categories 02, 16, 24, 41, 46, 64, 65 and 67, Bangladesh has either the full or a significant partial capacity to meet India's import demand, yet actual exports to India are zero. A similar observation also holds for Indian exports to Bangladesh. Therefore, there is enormous scope for raising bilateral trade between the two countries. There is a need to explore carefully how different NTMs, POs and a lack of trade facilitation affect these prospects, and necessary measures should be taken to improve the scenario. In order to address trade infrastructure problems at the border, the Government of India has lately taken initiatives to set up Integrated Check Posts (ICPs) at major entry points on the land border between Bangladesh and India. Two such ICPs have been launched recently, and they are expected to boost bilateral trade.
Bangladesh and India have to tap the trade-investment nexus to improve their bilateral economic cooperation. The horizontal and vertical integration of Indian and Bangladeshi industries could help improve economies of scale, especially for Bangladesh, and help Indian firms gain from the use of cheaper labour in Bangladesh. However, in terms of sources of FDI inflow into Bangladesh, the US, the UK, and South Korea top the list of countries, and FDI from India is still very low.
Lately, there have been a number of initiatives between the governments of Bangladesh and India to improve the investment situation. The Bangladesh Power Development Board and India's National Thermal Power Corporation signed a memorandum of understanding in 2010 to set up two coal-fired power plants, each with a capacity of 1,320MW, with the partnership shared equally between them. Furthermore, Bangladesh has recently offered India the opportunity to establish two Special Economic Zones (SEZs) for Indian companies. The launch of these SEZs is expected to substantially increase Indian FDI into Bangladesh.
In 2015, the Prime Ministers of India and Bangladesh launched an international internet gateway in Agartala and the supply of 100MW of power to Bangladesh from Tripura. India is already supplying 500 MW of power to Bangladesh, and the supply of another 500 MW was announced during the Indian Prime Minister's visit to Bangladesh in 2015. On the other hand, the bandwidth connection came as Bharat Sanchar Nigam Limited (BSNL) and Bangladesh Submarine Cable Company Limited (BSCCL) signed an agreement for the leasing of international bandwidth for Internet at Akhaura. As a result, Agartala has become the third station connected to a submarine cable for Internet bandwidth, after Chennai and Mumbai. The export of internet bandwidth from Bangladesh to India will enable reliable and fast Internet connectivity for the people of Tripura as well as other parts of India's northeastern region.
It is expected that the latest shipping arrangement between Bangladesh and India would make faster movement of goods between these two countries. Currently, such shipments are routed via Colombo or Singapore. Also, it takes around 20 days for a shipment by land. However, the direct shipping is expected to reduce the time to around seven days, as there is no longer a need for transshipment at Colombo. The service will play a vital role in decongesting the border points and bringing down the cost and transit time involved. This improved arrangement of connectivity would bring better efficiency and thus provide the best competitive freight rates to the advantage of the industries.
The aforementioned analyses point to heightened political commitment between the governments of Bangladesh and India to improve bilateral economic cooperation through different initiatives. Such initiatives need to be materialised at the earliest. For Bangladesh, making the most of these initiatives will involve a number of challenges. The country needs to significantly improve its business environment to attract FDI, as the World Bank's latest ease of doing business ranking shows Bangladesh dropped two places to 174 out of 189 countries due to stalled regulatory reforms.
Finally, besides above-mentioned economic issues, there are some more bilateral issues between Bangladesh and India, which need to be resolved for enriching mutual trust and confidence for greater economic cooperation. For example, border killing is an issue that strains India-Bangladesh relations as the victims are often ordinary people of Bangladesh living in border areas. This needs to stop, for which a political decision at the highest level is necessary. Also, the water-sharing issue between India and Bangladesh is yet to be solved properly, which undermines a lot of the developmental prospects. However, it can be hoped that these issues will be solved with the heightened commitment among political elites of the two countries for a deeper economic cooperation.
Dr. Selim Raihan is Professor, Department of Economics, University of Dhaka, Bangladesh, and Executive Director, South Asian Network on Economic Modeling (SANEM). [email protected]
Dr. Farazi Binti Ferdous, Research Fellow, SANEM.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9551641345024109,
"language": "en",
"url": "https://wicked-solutions.blog/2019/02/19/how-blockchain-technology-can-improve-americas-infrastructure/",
"token_count": 1153,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1240234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e2bfcaf2-ec9c-4a17-8472-f898e8907db8>"
}
|
This piece first appeared in Forbes.
America’s infrastructure is often described as crumbling, broken-down, and out-of-date by politicians from both sides. While the reality isn’t so dire, there are obvious infrastructure issues throughout the country, such as New York’s subway, D.C.’s metro, and almost any road in Michigan.
So even though the country’s infrastructure is not as bad as some suggest, we should still be on the lookout for better ways to provide it. And help may come from an unlikely place: the blockchain technology underlying cryptocurrencies.
Quality infrastructure is important for economic growth. The innovations in transportation technology over the last century—trucks, highways, planes, shipping containers—drastically reduced transportation costs for physical goods and people which increased productivity and well-being. The advent of the internet and broadband technology did the same thing for information and ideas. Continued economic growth means routinely reallocating resources to their highest-valued use as new opportunities arise, and this process is hindered when roads, bridges, airports, subways, and broadband are in poor shape.
While many complain that U.S. infrastructure funding is inadequate, when adjusted for inflation, spending on transportation and water infrastructure has actually been fairly constant since 2000 after rising in the 1980s and 90s. The figure below from the Congressional Budget Office shows this.
[Figure: infrastructure spending over time. Source: Congressional Budget Office]
Additionally, infrastructure spending by the federal government as a percentage of all federal spending was 2.5% in 2017, which is about what it was in the mid-1980s. It’s true that over the last 15 years a larger portion of spending has gone to operation and maintenance rather than capital outlays, but that’s not surprising in a mature economy like the United States. In general, and as the next figure shows, there hasn’t been a big drop in inflation-adjusted infrastructure spending by government.
[Figure: spending by category. Source: Congressional Budget Office]
But even without a big drop in spending, problems can arise. Just because money is spent doesn’t mean we get something useful for it. Spending is useful when it’s allocated to the most beneficial projects and when it’s closely monitored to limit waste. There’s plenty of evidence that infrastructure costs are significantly higher in America than other places and that these higher costs are due to outdated government regulations, the bidding process, and poor oversight.
While using current spending more efficiently is important, that doesn’t preclude putting additional money to good use, especially if it’s raised properly. Congestion taxes and charging for parking, for example, can raise money for infrastructure while also reducing traffic and the economic losses it causes, which have been estimated at $160 billion per year.
Another potential funding source is crowdfunding facilitated by blockchain technology. Blockchain is essentially a publicly distributed ledger that keeps track of transactions and ownership of assets, which could be digital currencies, patents, or physical objects like rare art or buildings. Blockchain technology also allows ownership of physical assets to be broken up into small parts and makes it easy to keep track of all the owners.
Blockchain technology has the potential to open up all sorts of investments to the average person. For example, a new toll bridge could be funded completely or in part by individuals who then get a portion of the tolls commensurate with their investment. Today, these types of public-private partnerships, or PPPs, are not available to the average person. Highways, convention centers, stadiums, parking garages, rail projects, and other infrastructure with a potential revenue stream could be funded similarly.
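To make the payout mechanics concrete, here is a toy sketch of the pro-rata logic such an arrangement implies. In a real system this would live in a smart contract on a blockchain; the names and amounts here are hypothetical, not from the article:

```python
# A fractional-ownership ledger: who invested how much in the toll bridge.
investments = {"alice": 5_000.0, "bob": 15_000.0, "carol": 30_000.0}

def distribute_tolls(toll_revenue, ledger):
    """Split a period's toll revenue in proportion to each owner's stake."""
    total_invested = sum(ledger.values())
    return {owner: toll_revenue * stake / total_invested
            for owner, stake in ledger.items()}

payouts = distribute_tolls(1_000.0, investments)
print(payouts)  # {'alice': 100.0, 'bob': 300.0, 'carol': 600.0}
```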
Importantly, more local funders mean more people with a stake in the progress of the project, and this means more accountability for those in charge. Additionally, local governments and construction companies could use blockchain technology to keep track of materials, permits, and contracts. Andrew Lindsey, a market strategist for the Alpha Corporation, says that today it's hard to know exactly who holds what and when on large infrastructure projects. He believes blockchain technology can fix this:
"One of the easiest ways for a claim to come up is over who held what when and who handed what over when. When you have a very clear snapshot of all of that information, you can see what would happen to claims… Right now, it's done off a spreadsheet that says "X is going to arrive here within this timeframe with this level of error." But if you have an immediate, encrypted, and immutable ledger of the flow of items, you can identify exactly what is going to be where, when and you can reflect that in schedules, cost estimates, and, more broadly, phase engineering."
And since the ledger would be available for all to see, the companies and officials responsible for the project wouldn’t be able to kick the blame for delays and cost-overruns back and forth while the public struggles to sort fact from fiction. Instead, those responsible for problems could be identified and held accountable.
Infrastructure is important for economic growth, but America’s process for funding and building it is subpar. Blockchain won’t solve all the problems, but it has the potential to make a big difference.
Dr. Adam Millsap is the Assistant Director of the L. Charles Hilton Jr. Center for the Study of Economic Prosperity and Individual Opportunity at Florida State University and a Senior Affiliated Scholar at the Mercatus Center at George Mason University.
The feature image is from Wired.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9518724083900452,
"language": "en",
"url": "https://www.greekshares.com/investing-education/gambler-s-fallacy-in-investing",
"token_count": 398,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1318359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:fb451fe4-ed72-4ce0-813b-332bd6cb2b0a>"
}
|
When it comes to probability, a lack of understanding can lead to incorrect assumptions and predictions about the onset of events.
One of these incorrect assumptions is called the Gambler's Fallacy.
In the Gambler's Fallacy, an individual erroneously believes that the onsets of certain random events are less likely to happen following an event or a series of events.
This line of thinking is incorrect because past events do not change the probability that certain events will occur in the future.
For example, consider a series of 5 coin flips that have all landed with the "heads" side up.
Given such a streak, someone might predict that the next coin flip is more likely to land with the "tails" side up.
This line of thinking represents an inaccurate understanding of probability because the likelihood of a fair coin turning up heads is always 50%.
Each coin flip is an independent event, which means that any and all previous flips have no bearing on future flips.
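If that seems counterintuitive, a quick simulation can make it concrete. This is just an illustrative sketch: it flips a fair coin a million times and checks how often heads follows a run of 5 heads.

```python
import random

def heads_rate_after_streak(flips=1_000_000, streak=5):
    """Estimate P(heads) on the flip immediately following `streak` heads in a row."""
    heads_after = opportunities = run = 0
    for _ in range(flips):
        heads = random.random() < 0.5
        if run >= streak:        # the previous `streak` flips were all heads
            opportunities += 1
            heads_after += heads
        run = run + 1 if heads else 0
    return heads_after / opportunities

print(heads_rate_after_streak())  # ~0.5: the streak has no effect
```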
Under certain circumstances, it's easy for investors to fall prey to the Gambler's Fallacy.
For example, some investors believe that they should liquidate a position after it has gone up in a series of subsequent trading sessions because they don't believe that the position is likely to continue going up.
On the other hand, other investors might hold on to a stock that has fallen in multiple sessions because they view further declines as "improbable".
Just because a stock has gone up on 5 consecutive trading sessions does not mean that it is less likely to go up on during the next session.
In independent events, the odds of any specific outcome occurring on the next trial remain the same regardless of what preceded it.
In the stock markets the same logic applies:
Buying a stock because you believe that the prolonged trend is likely to reverse at any second is irrational.
Investors should instead base their decisions on fundamental and/or technical analysis before determining what will happen to a trend!
What Is Asset-Based Finance?
Asset-based finance is a specialized method of providing companies with working capital and term loans that use accounts receivable, inventory, machinery, equipment, or real estate as collateral. It is essentially any loan to a company that is secured by one of the company's assets.
Asset-based funding is often used to pay for expenses when there are gaps in a company's cash flows, but it can also be used for startup company financing, refinancing existing loans, financing growth, mergers and acquisitions, and for management buy-outs (MBOs) and buy-ins (MBIs).
Asset-based finance may also be called asset-based lending or commercial finance.
- Asset-based financing is a way for companies to use property, inventory, or accounts receivable as collateral to obtain a loan.
- Asset-based finance is a field solely used by businesses, not by individuals seeking personal loans.
- These types of loans may be more flexible than traditional commercial loans; however, the downside of this type of arrangement includes high financing costs.
- Other names for the asset-based finance industry are commercial finance and asset-based lending.
- Asset-based loan financing may be used by companies that need short-term working capital to keep day-to-day operations, like payroll, for example, up and running.
Understanding Asset-Based Finance
An example of asset-based finance would be purchase order financing; this may be attractive to a company that has stretched its credit limits with vendors and has reached its lending capacity at the bank. The inability to finance raw materials to fill all orders would leave a company operating under capacity and could put the company at risk for closure.
Under a purchase order financing arrangement, the asset-based lender finances the purchase of the raw material from the company's supplier. The lender typically pays the supplier directly. After the orders are filled, the company would invoice its customer for the balance due. The accounts receivable set up at this time would typically be paid directly from the customer to the asset-based lender.
After the lender receives payment, it first recovers the funds it advanced, deducts the financing cost and fees, and remits the balance to the company. The disadvantage of this type of financing is the interest typically charged, which can be as high as prime plus 10%. Even so, these loans carry lower interest rates than unsecured loans because the collateral allows the lender to recoup losses if the borrower defaults.
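As a rough illustration of this settlement, the sketch below walks through hypothetical numbers – the invoice amount, advance, rate, and fees are assumptions for illustration, not figures from this article:

```python
# Hypothetical purchase order financing settlement (all figures illustrative)
invoice_amount = 100_000.00      # receivable the customer pays to the lender
advance_to_supplier = 60_000.00  # raw-material purchase funded by the lender
annual_rate = 0.135              # e.g., an assumed prime of 3.5% plus 10%
days_outstanding = 90            # time until the customer pays the invoice
fees = 1_500.00                  # lender's transaction fees

# Financing cost accrues on the advanced funds while they are outstanding
financing_cost = advance_to_supplier * annual_rate * days_outstanding / 365
remittance = invoice_amount - advance_to_supplier - financing_cost - fees

print(f"Financing cost:      ${financing_cost:,.2f}")  # ≈ $1,997
print(f"Remitted to company: ${remittance:,.2f}")      # ≈ $36,503
```

The high rate is what makes the arrangement expensive, but the collateralized receivable is also why the lender can offer it at all.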
Asset-based loans are agreements that secure the loan via collateral, like equipment or property owned by the borrower. Asset-based lending may be a line of credit or a cash-funded loan, but either way, the loan money is secured by some sort of collateral from the borrower's business or properties, such as inventory or accounts receivable.
The most frequent users of asset-based borrowing are small and mid-sized companies that are stable and that have physical assets of value. However, larger corporations do use asset-based loans from time to time, usually to cover short-term cash needs.
Asset-based finance lenders tend to favor liquid collateral that can be easily turned to cash if a default on the loan occurs. Physical assets, like machinery, property, or even inventory, may be less desirable for lenders. When it comes to providing an asset-based loan, lenders prefer companies with not only strong assets but also well-balanced accounts.
Cleaner air, quieter streets, and more people biking and walking outdoors – the pandemic has created a bit of a silver lining for communities willing to capitalize. Throughout this article, we share eight benefits of investing in non-motorized infrastructure, including economic development, public health, climate resilience and more.
COVID-19 has confirmed what many planners and engineers have been advocating for years: walking and biking are critical modes of transportation and recreation that deserve more attention as well as funding. The pandemic has tragically left schools, businesses and many modes of transportation either closed or significantly modified. However, it has also brought to light the strategic value of non-motorized infrastructure.
Understandably, trail use is up more than 200% compared to last year. We will eventually reach a post-pandemic way of thinking and living, meaning some things will return to “normal,” but many habits – like the rise of outdoor recreation and trail usage – are likely to remain long term.
The benefits of investing in bicycle and pedestrian infrastructure cannot be denied as communities seek to attract residents, build their tax bases, spark their economies despite limited budgets and an unknown future, ensure transportation equity for all, promote public health and address climate change. If properly planned for and executed, here are eight benefits of investing in non-motorized infrastructure within your community.
Bicycle paths and complete sidewalks are comparatively less expensive than building new roadway infrastructure. While still a large investment, their narrower widths mean a much smaller price tag per linear foot.
In addition to lower project costs, such efforts can also improve safety and comfort for people walking, biking and driving. Specifically, markings like those shown below can be added at relatively low cost to existing streets to encourage drivers to travel at slower speeds by narrowing their lanes. This can also make them more aware of bicyclists. The addition of the buffer between modes also makes the street more comfortable for people biking.
For example, the City of Farmington in New Mexico has made building new bicycle and pedestrian infrastructure a priority. According to Traffic Engineering Administrator Isaac BlueEyes, before each road paving, overlay or improvement project, the City checks right-of-way and pavement widths to see if there might be room for bicycle lanes. The City often finds room for bike lanes because many of their streets are oversized for the existing traffic.
In addition to making it safer for bicyclists and pedestrians, officials have found that narrower driving lanes result in less speeding and lane swerving as well as fewer traffic accidents. A recent project to remove existing striping and restripe a critical connecting street cost just $4 per foot (or $8,000 for 2,000 feet of road).
The City of Minneapolis has found similar opportunities to repurpose auto lanes for people walking and biking; including temporary conditions through their pandemic response (snapshot shown in the image below), and also permanently with projects such as the historic 10th Avenue SE Bridge rehabilitation. The new bridge deck will reduce the number of auto lanes from four to two while widening the bicycle lanes and sidewalks. A curb is also being installed between the bicycle lanes and auto lanes to further increase comfort and safety for the 1,500+ people who bike or walk across this bridge over the Mississippi River each day.
The 2018 Benchmarking Report on Bicycling and Walking (published every two years) reports that bicyclist and pedestrian fatalities "may be reduced through proactive infrastructure, policy, education and other community investments in bicycling and walking." For example, the Journal of Transport and Health concluded that a growing number of bicyclists in a city correlates to safer streets for all users. Ultimately, the more bicycle and pedestrian infrastructure you have, and the more strategically it's planned for and placed, the safer and more active your community will be.
Many baby boomers, Gen X, millennials and Gen Z are bringing new ways of thinking to their housing searches – such as being willing to rent rather than buy, recognizing that smaller cities are modernizing and offering similar benefits as large metro areas, and desiring recreational opportunities as opposed to wide streets, cramped parking garages and endless traffic. Simply stated, more people regardless of generation are looking to live and work in areas that support their hunger for outdoor recreation opportunities and desire to commute by bike or foot. This creates economic development opportunity for your community.
Trails, outdoor recreation areas and nature-interaction opportunities were growing in demand prior to the pandemic – as people seek ways to remain active while having fun. They are growing even more so in the midst of it. As a result, the most strategic communities are seeking ways to create opportunities for walking and biking for daily travel as well as recreational running, hiking, camping, fishing, kayaking and more. These communities realize one way to become more inviting to people of all generations is by becoming a destination they not only want to live in, but a place they want to visit.
Investing in bicycle and pedestrian trails that connect to recreation and retail spaces, bike repair stations, bike lanes, wide and pedestrian-friendly sidewalks, as well as walkable communities, to name just a few of the opportunities, can help you create a community that draws people in – therefore drawing in new businesses, events, development and a growing tax base.
In smaller towns and more rural areas, there's a good chance off-road bicycle and hiking trails already exist. Though, they may not be officially sanctioned trails. Whether these are unimproved roads, old rail corridors, logging roads, social trails, irrigation canals or simply volunteer trails, sometimes all you need to get started is the ability to map and connect these informal systems. They often tell you where people already want to be, providing built-in demand. Of course, be careful that the existing network is on public land, and/or make sure easements have been granted permission by the private land owner.
Trails 2000 is one example of how communities can uncover potential trail opportunities and then educate the public on how they can be maintained and used. Trails 2000 is a City of Durango, Colorado trail advocacy group founded in 1989. The non-profit exceeded its goal of building 200 miles of trails by the year 2000 (hence the name), and today maintains over 300 local miles of trail. The non-profit's mission is to plan, build and maintain the local trail network; educate trail users; and encourage connectivity on roads, paths and trails.
According to Trails 2000 Executive Director Mary Monroe Brown, "connectivity" is a critical part of Durango's trail infrastructure. Connectivity refers to how hard-surface trails and bicycle paths intersect with soft-surface trails to create a vibrant trail community. Because of the group’s work, Durango is frequently listed as one of the best walking, hiking and bicycling destinations in the country. Educating and engaging the public and local stakeholders, uncovering potential areas for trail, and partnering with local groups can have a dramatic impact on your ability to create community-wide connectivity.
The residual economic effects of drawing in tourism become obvious the more your community invests in outdoor recreation and bicycle and pedestrian infrastructure. Hotels, supermarkets, restaurants, shops, local events and more are just a few of the businesses that benefit when out of town guests are attracted. In kind, your collective community and economy can benefit.
For example, Summit County in Colorado and the Colorado towns of Breckenridge, Frisco, Dillon, Silverton and Keystone Resort are well-known winter recreation destinations. But what might not be as well-known is the fact that these communities began investing heavily in bicycling infrastructure in the 1980s. Today, they have a connected system of 51 miles of bicycle paths and are widely regarded as some of the top destinations for bicycling and hiking in the U.S. As a result, these areas have grown from solely winter-based economies to year-round destinations.
Simply stated, not everyone drives. There are people who don’t have access to a vehicle or the ability to drive one, and many smaller cities lack quality public transit to serve these communities. Convenient access to reliable transportation is essential for the livelihood and well-being of the community. It's particularly important for underrepresented populations, such as people walking in low-income communities, people of color and older adults. These communities typically rely more heavily on public transportation and non-motorized forms of travel, and are disproportionately represented in the number of people killed while walking, according to Dangerous by Design 2019, Smart Growth America's three-year report on pedestrian safety.
Thus, it's incumbent upon transportation professionals and policy makers to develop well-planned, safely executed and comfortable facilities for all users. This includes people who require assistive devices such as wheelchairs, pedestrians of all ages and abilities, bicyclists, strollers and scooters. Robust multimodal transportation options can be a great equalizer and life-saving, providing low-cost and accessible options for commuting to work, getting an education, grocery shopping, accessing healthcare and other basic yet instrumental activities.
Related Content: How to Address 8 Common Challenges of Complete Streets Design
The Centers for Disease Control and Prevention reports that physical inactivity is a significant contributor to the steady rise in obesity, diabetes, heart disease, stroke and many other chronic health conditions of concern throughout the U.S. Runners and bicyclists are a community; people gather in formal and informal clubs to run and ride across cities, mountains and parks. Many people enjoy these endeavors, and many are active to maintain or improve their health. Others take part because they are steadfast in their desire to reduce their carbon footprint and improve our environment.
When cities or geographic areas are recognized as hotbeds for this type of activity, people travel to these areas for the experience – and they often come back, sometimes to live.
Thousands of communities hold annual bike races and walks to support important charity causes. The same can be said for races like marathons as well as 5K and 10K events, which often take place on paved bicycle infrastructure. These types of events garner community support and are excellent for rallying people around important causes. This benefits our quality of life and the greater good! When we promote active transportation by implementing quality trails, sidewalks, bicycle facilities and open spaces, we support these goals and causes.
Bicycle and pedestrian infrastructure supports everyone in a community, whether we need or choose to bike or walk to get to our jobs, schools, stores and services. We all want to feel safe and comfortable. We want to feel connected and have access to the communities we live in. We also want to find ways to be active while having fun, whether individually or with our families. Creating a framework that supports non-motorized travel is an investment we can all benefit from.
Communities can see upticks in economic prosperity as well as public health and safety. As we continue on in this new era and continue to see innovative approaches to how we live, work and play, bicycle and pedestrian infrastructure will continue to rise in both popularity and importance. Now is the time to act!
Nancy Dosdall, AICP, LEED Green Associate, is a senior planner and project manager with 35+ years of experience in land use planning and entitlement. Nancy is proficient in public engagement and finding common ground amongst stakeholders to develop community-supported plans, enjoys the outdoors and is a proponent of non-motorized infrastructure that is accessible to all. Contact Nancy
Heather Kienitz, PE*, is a senior multimodal traffic engineer and SEH principal with 20+ years of experience leading a variety of transportation projects. She is dedicated to developing highly inclusive, context-sensitive solutions for built environments, and is particularly experienced with the retrofit and reconstruction of bicycle and pedestrian facilities and enhancements. Contact Heather
- May 22, 2020
The Young and the Jobless
This research paper will be focusing on the employment rate among the youth in America during the period after the financial crisis. The age group considered as the youth in this paper are those aged between 16 – 24 years. The importance of conducting this research is significant since it will assist in analyzing the aftermath of the recession and its effects on the labor market. This will, in turn, assist all involved stakeholders, like the government, to understand the youth employment rate and their contribution to the economy. Consequently, it will help the stakeholders to make clear, well-informed decisions based on the findings. There is a disproportion in the share of the labor market among the different age groups. The unemployment rate among the youth is higher after the economic recession, and this paper addresses this problem. This paper presents a data commentary about the findings and statistics behind youth employment in America relative to the labor force and the population as presented by Casselman and Walker in "The Young and the Jobless" (2013).
Casselman and Walker (2013) note that Americans under the age of 25 make up the biggest portion of the unemployed relative to their share of the population. According to the journal's data, 16 – 19-year-olds make up only 3.8% of the labor force in the country while those between 20 – 24 years have a 10% share in the labor force. This is a disproportional share relative to the total population of those between 16 – 24 years. The share of the unemployed is 11.2% and 16.4% for 16 – 19 years and 20 – 24 years respectively. This brings the total share to almost 30%, a comparatively high proportion. Another claim in the article is that the change in average weekly earnings from 2007 to 2012 among the different age groups is unfavorable to the youth. It shows that they are progressively losing ground when it comes to weekly earnings. In the data, those between 16 – 19 years experienced a -4.6% change in average weekly earnings while those between 20 – 24 years had a -6.9% change. The data demonstrates a high rate of decline in weekly earnings among the youth. In comparison, this change was distributed among the other age groups as follows:
Age group: % change in weekly earnings
- 25 – 34: -0.7%
- 35 – 44: +0.8%
- 45 – 54: +0.4%
- 55 – 64: +0.9%
Apart from those between 25 – 34 years, who experienced a slight drop, the rest of the age groups increased their weekly earnings after the recession at the expense of the youth. This data is calculated and adjusted for inflation. According to Casselman and Walker (2013), many of the youths are staying in school in a bid to escape the harsh reality in the job market. This claim is further supported by Casey (2012), who claims that the rate of those who are enrolled in school is increasing while the same cannot be said for those who are out of school and working. Statistics also show that the rate of those who are neither in school nor working is on an upward trend. Those who are in employment are not having it easy either. Most are working part-time, and the percentage of those in full-time jobs is on a downward trend. It is also claimed that those working are doing so for fewer hours, subsequently earning less money than in the period before the economic recession.
This data implies that the youth have been hardest hit by the economic recession, which only favored the older, more experienced and wealthier workforce. There is need to re-evaluate the employment policies in the country in favor of the youth, as they are more energetic and productive, in addition to forming the bigger composition of those willing and able to work. This will reduce the unemployment rate to more manageable standards.
Annie E. Casey Foundation. Youth and Work: Restoring Teen and Young Adult Connections to Opportunity. 2012.
Casselman, Ben, and Marcus Walker. "Wanted: Jobs for the New 'Lost' Generation." Wall Street Journal, September 13, 2013.
In recent years, there has been a marked increase in studies and reports that seek to attach a dollar value to “ecosystem services” provided by protected natural areas. Because traditional “markets” do not exist for these services, researchers must use indirect means of assessing values. These techniques include calculating costs such as the downstream damage that would occur without intact riparian buffers, or the water treatment necessary to replace the natural waste assimilation of wetlands.
The primary ecosystem services accounted for in these studies include:
Water Supply – Land cover such as forests and wetlands and their underlying soils help ensure that rainwater is stored and released gradually rather than being allowed to immediately flow downstream as runoff. This natural system provides for the continuous recharge of streams, reservoirs, and aquifers, providing a fresh and clean water supply.
Water Quality – Forests and wetlands provide a natural protective buffer between human activities and water supplies. This buffer prevents pathogens, excess nutrients, metals, and sediments from negatively impacting water supplies and marine resources.
Disturbance Prevention – Many natural landscapes can provide a buffer from disturbance events. For example, coastal vegetation can reduce the damage of wave action and storm surges, and wetlands and floodplains can help reduce the impact of floods by trapping and containing storm water.
Air Quality – Trees offer the ability to remove significant amounts of air pollution and consequently improve environmental quality and human health. In particular, trees have been found to remove significant amounts of nitrogen dioxide, sulfur dioxide, carbon monoxide, ozone, and particulate matter.
Carbon Sequestration – Trees and other vegetation mitigate the impacts of climate change by sequestering and storing atmospheric carbon from carbon dioxide.
The results of such studies are stunning. For example, a 2010 report on “The Economic Value of Protected Open Spaces in Southeastern Pennsylvania,” determined that approximately 200,000 acres of protected open space in that region (including 100,000 acres of protected private lands) contributed an estimated $132.5 million in annual cost savings and economic benefits through the provision of six ecosystem services: water supply ($50.2m), water quality ($10.9m), flood mitigation ($37.5m), wildlife habitat ($16.9m), air pollution removal ($15.1m), and carbon sequestration ($1.9m).
A 2011 report done for the Piedmont Environmental Council in Virginia estimated that its more than 700,000 acres of private lands under conservation easement in that state provide $259 million in annual cost savings in 5 out of 6 of these same categories (the Virginia study did not include air pollution removal). The Virginia study reviewed over 100 articles and policy papers to produce per-acre value benefits. Some of the per-acre values used in the Virginia study were, for water supply benefits, $20/acre/year for forestland and $485/acre/year for wetlands (including forested wetlands); and, for water quality, $238/acre/year for forestland and $1,278/acre/year for wetlands.
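As a sketch of how such per-acre figures roll up into an aggregate estimate, the snippet below applies the two cited Virginia values to a hypothetical acreage split (the 600,000-acre forest / 100,000-acre wetland split is an assumption for illustration, not a figure from the study):

```python
# $/acre/year values cited from the Virginia study; acreage is hypothetical
PER_ACRE_VALUE = {
    ("forest", "water supply"): 20,
    ("wetland", "water supply"): 485,
    ("forest", "water quality"): 238,
    ("wetland", "water quality"): 1_278,
}
acres = {"forest": 600_000, "wetland": 100_000}

# Annual value of just these two services across the hypothetical acreage
total = sum(value * acres[cover]
            for (cover, service), value in PER_ACRE_VALUE.items())
print(f"Estimated annual value of two services: ${total:,}")  # $331,100,000
```

Even with only two of the six service categories counted, the total runs to hundreds of millions of dollars per year.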
The numbers provided above do not even include other economic benefits of protected lands, such as recreational value, forest and farm product value, and the effect that they have on the value of nearby residential properties and real estate taxes, all of which have also been extensively studied. The hope is that all these economic studies will continue to add to the public's understanding of the real financial benefit of protecting natural areas.
Market Efficiency - Definition
Market efficiency is a metric used to measure the degree to which market prices incorporate all relevant, available information. If markets are efficient, all information is already reflected in prices, leaving no opportunity for those buying and selling securities to earn excess profits. This link between information and profit opportunities is what makes investment managers take an interest in market efficiency.
A Little more on What is Market Efficiency
Market efficiency is a financial concept that measures the market's ability to incorporate information, which in turn determines the opportunities available to buyers and sellers. In an efficient market, transactions can be executed without necessarily increasing transaction costs, because the market is assumed to be large and liquid. For this reason, it is essential that investors have access to information about costs: transaction costs should be low compared to the expected profits of an investment strategy. Even so, it is not possible to continuously outperform the market, especially over short periods in which stock prices are difficult to predict.
The Origin of Market Efficiency
Market efficiency can be traced back to 1970, when the economist Eugene Fama developed a theory known as the Efficient Market Hypothesis (EMH). The theory stated that:
- It is impossible for an investor to outperform the market
- There should be no market anomalies as they will be arbitraged away immediately.
Note that investors who agree with this theory usually purchase index funds that track overall market performance, and tend to be proponents of passive portfolio management.
How Market Efficiency Works
Some investors believe that, at times, stocks can be priced below what they are worth. Those who manage to value such stocks successfully make great profits by buying them when the market price is below fundamental value and selling once it rises well above that value. However, other investors, including some active traders, are not convinced that markets can be beaten in this way. Such investors assert that, as long as there are no opportunities to earn profits that outdo the market, there is no incentive to become an active trader. They also note that the fees charged by active managers are hard to justify when transaction costs in an efficient market are low.
Forms of Market Efficiency
Market efficiency exists in three forms. They are as explained below:
- The weak form: This degree of market efficiency assumes that future prices are in no way influenced by past price movements. In other words, past price trends have no effect on how future prices will move. As a result, any rule telling investors to buy or sell stock based on past prices is invalid.
- The semi-strong form: This form of market efficiency assumes that stock prices adjust quickly to absorb the most recent public information, ensuring that investors cannot beat the market by trading on that information.
- The strong form: This form of market efficiency assumes that both public and private information is reflected in market prices, incorporating the other two forms (the weak form and the semi-strong form). Because both private and public information is already reflected in stock prices, no investor's profit will exceed that of the average investor, even with access to inside (private) information.
How Markets Become Efficient
For a market to become efficient, investors must perceive it as inefficient and believe it is possible to beat it. Only then does the market become efficient. Ironically, the investment strategies intended to exploit inefficiencies are the very ones that help maintain market efficiency.
Arguments Against Market Efficiency
Like any other theory, the efficient market hypothesis has its critics. Those who argue against the EMH point out that some investors have beaten the market: by focusing their investment strategies on undervalued stocks, they were able to make billions of dollars. The argument also points to portfolio managers with better track records than their peers, and to investment houses whose research analysis is more renowned than others'. For these reasons, some conclude that performance cannot be random when there are individuals who consistently profit and beat the market. However, following the laws of probability, others hold that in a market with many investors, some are bound to outperform while others underperform. On this view, investors who beat the market do so not because they have superior skill, but because they happen to be lucky. Nonetheless, behavioral finance studies have revealed that there are some biases in how stock prices are set, including confirmation bias, overconfidence, and loss aversion.
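The probability argument is easy to check with a short simulation – a sketch assuming the extreme case where beating the market in any given year is a pure 50/50 coin flip, which is of course a simplification:

```python
import random

random.seed(1)
managers, years = 10_000, 10

# Count managers who "beat the market" every single year by luck alone
lucky = sum(
    all(random.random() < 0.5 for _ in range(years))
    for _ in range(managers)
)
print(f"{lucky} of {managers:,} managers beat the market {years} years running")
# Expected count: 10,000 * 0.5**10 ≈ 10, so stellar records arise by chance
```

Even with no skill anywhere in the population, a handful of apparently outstanding track records emerges, which is exactly the point EMH defenders make.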
We could be well-advised to take note of this word. We may be hearing a lot more of it …
Actually, in truth, we’ve already used it a few times before in this blog but perhaps now might be a good time to have a closer look at what it is and what it might mean?
It’s always dodgy making claims like this but the term technocapitalism was probably effectively coined by Professor Luis Suarez-Villa in Technocapitalism: A Critical Perspective on Technological Innovation and Corporatism (2009) and developed further in Globalization and Technocapitalism: The Political Economy of Corporate Power and Technological Domination (2012). [Yes, it may have been used before this, but it gets very hard to track these things down accurately.]
The essential principle of technocapitalism (taken largely from http://www.technocapitalism.com/) is that it’s “an evolution of market capitalism that is rooted in technological invention and innovation. It can be considered an emerging era, now in its early stage, which is supported by such intangibles as creativity and new knowledge.”
If you want it in simple terms, we might say that, in a technocapitalist future, data will be more important than physical stuff and the whole thing will be driven by increasing automation, meaning people will be creative rather than (conventionally: physically) productive.
There are several key threads to technocapitalism (taking the positive view for now), including: ‘intangibles’, ‘creativity’, ‘new knowledge’, ‘new technologies’ and the ‘new economic activities’ or ‘opportunities’ resulting from all of these. More precisely:
- “Intangibles are at the core of technocapitalism. Creativity and knowledge are to technocapitalism what tangible raw materials, factory labour and capital were to industrial capitalism. During industrial capitalism, tangible resources acquired the greatest value, as factory production, repetitive labour and massive output ruled the day. In the emerging technocapitalist era, however, those material resources are becoming secondary in importance. Intangibles are therefore vital for technocapitalism. Creativity and knowledge are the most valuable resources of this emerging new era. They, for example, already account for as much as three-quarters of the value of most products and services in existence, and that proportion is bound to increase over time. In contrast, the material resources that were most valuable for industrial capitalism are losing value relative to those intangibles in most every product or service.
- “New economic activities are emerging that are representative of technocapitalism. Biotechnology, nanotechnology, bioinformatics, software design, genomics, synthetic bioengineering, molecular computing and biorobotics, for example, are likely to be hallmarks of the twenty-first century, as electronics and aerospace were of the twentieth. This new ecology of activities and sectors is more reliant on creativity and knowledge than any of the old industries of industrial capitalism. The organizations that are typical of these new activities are research-intensive and highly dependent on new discoveries for their survival. Continuous or systematized invention and innovation are very much a part of their reality, and are vital for survival. Unlike the factories of industrial capitalism, where production was paramount and research was often a marginal endeavour, the organizations and firms typical of technocapitalism are, first and foremost, oriented toward research and discovery.”
However, in passing, it’s also noted that:
- “The old industries typical of industrial capitalism are also feeling the impact of this emerging era. The sort of continuous invention and innovation found in activities typical of technocapitalism are spreading to old sectors, such as automobile manufacturing, apparel and the mechanical industries. Service activities, including some of the most mundane ones, are also being affected. It seems that no economic sector or activity can be considered immune from the dynamics of the new order and its emphasis on novelty.”
This focus on ‘creativity’, ‘intangibles’ and ‘knowledge’ implies a couple of things to begin with; strangely enough, neither of which is entirely new:
- In future, (intangible) ‘data’, and the ‘knowledge’ extracted from it, will be more valuable than (tangible) raw materials and the products they make. Well, yes, we know this. Up to a point. Because the shift from an offline life to an online one is obvious; the ratio of software billionaires to hardware ones is increasing; ‘knowledge is power’, etc. The first key question is whether there’s a limit to this progression?
- The employment landscape will change: as increasing automation takes over more and more of the mundane jobs, we’ll be freed up to be more creative and do (it’s presumed) more interesting things. And, again, we know this. Up to a point. Because this is the pattern that we’ve seen before throughout various technological stages of our history. Our key question, now, is whether this cyclic pattern can be maintained?
Because, almost immediately, Professor Suarez-Villa makes two key observations:
- “A major question that remains to be answered is whether the new technologies symbolic of technocapitalism will be mostly controlled by oligopolistic corporations. The pricing power of oligopolies may prevent those new technologies from having the impact on human wellbeing that they would otherwise have. Oligopolies that take over the new sectors may also erect substantial barriers to entry that prevent new and highly innovative firms from developing. Most new firms that are created may thus end up being taken over by powerful oligopolies in control of their sectors, thus stifling invention and innovation along with new enterprise creation. At the same time, the pricing power that oligopolies typically command may place the new technologies out of the reach of many of the people that could benefit from them.
- “To prevent this from happening, many of the new technologies symbolic of technocapitalism may need to be considered a public resource. Specific laws may have to be created to prevent oligopolies from being formed, or from taking over the new sectors and technologies. Such laws might also make it possible for small, innovative firms to be created, and to allow them to survive as independent enterprises. A lack of oligopolistic pricing power may make the new technologies symbolic of technocapitalism more affordable, allowing them to reach the people who may benefit most from their introduction.”
And this is critical because, bluntly, whilst the first point is partially true, there’s very little chance of the second. Although people get mentioned in the context of a ‘public resource’, there’s actually very little thought of people at all in any of this, except to briefly note that many may not benefit from any of it. Other than that, the only real concern appears to be whether small business can survive with big business. The capitalist blinkers are bolted on as firmly as they ever were. There’s the same old assumption that enough healthy competition is good for everyone (or at least, most). But let’s think about this …
There’s a certain arrogant ‘connection by necessity’ about the standard capitalist model, which is simply translated under technocapitalism. Starting from the axiom that humans have to compete with each other (because there’s apparently no credible alternative social model), there has to be an objective (function) to measure success and failure. This creates an order in terms of this function, with an elite at the top. Moreover, this objective can’t be abstract (like amateur sport, for example) because then few would bother so it has to be of value in terms of the framework in which it sits. Money satisfies both requirements admirably; so, as a result, the framework is driven by profit. Things happen if someone makes money from it, but not otherwise. This is already true and has been for hundreds of years. For example, we can build anything we like (to provide entertainment, etc.) provided ordinary people can be persuaded to pay money back to the elite for whatever it is to make it worth their while. At the same time, however, we don’t build or repair schools and hospitals as much as we should, for example, because no-one profits from it – the elite pay for their own. [OK yes, there has to be a ‘show’ of providing public services – even within a capitalist system; otherwise, people would see through the whole thing much easier; but this never really manages to deal with the problems properly. Look what gets cut first in a crisis. Usually, it’s enough to be seen to be trying.]
Technocapitalism doesn’t extend conventional capitalism at all; it simply replaces the bricks with data, but leaves all the other assumptions intact. Under technocapitalism, a few will still profit while the majority suffer. It’s new ‘raw-material’, data, could be used for as a global force for good but it won’t be because it will sit within the same old framework. We could be using data to give people better lives; instead, we’ll be using it to sell them more stuff to cycle the money back to the elite. The principle will remain the same: things will happen if the elite can profit from it, but not otherwise.
In fact, the things that worry people the most (taken from a range of sources: http://www.bustle.com/articles/70778-what-do-americans-worry-about-the-most-gallup-poll-reveals-the-nations-top-concerns, https://www.opendemocracy.net/transformation/carmen-rios/seven-everyday-things-poor-people-worry-about-that-rich-people-never-do, http://www.inc.com/fiscal-times/10-surprising-reasons-why-the-wealthy-are-stressed-out.html, etc.) appear to be:
- Economic survival (food, clothing, shelter, etc.)
- Violence (wars, terrorism, etc.)
- Environment (damage, overpopulation, etc.)
- Security (safety, privacy, etc.)
- Health (personal, etc.)
So, forget business, for a moment. How does technocapitalism look from these perspectives?
To start with the economics, this is perhaps the area in which it’s the most obvious that nothing will change for the better. Technocapitalism won’t help redistribute wealth because, although the commodity may change, the competitive framework remains intact. The economy (and therefore society, because any form of capitalism makes these synonymous) will remain a squat triangle with the elite at the top and most at the wide bottom. BUT, it could make things worse …
Aside from data being the new currency of technocapitalism, the other key driver is automation. We’re not looking as far ahead as the singularity here; just to a future in which increasing roles are undertaken by computers, robots and other machines. Since the first industrial revolution, the assumed model has been that, as machinery takes over from humans, people are freed up to do more interesting things – which is nicely in line with the ‘creativity’ aspect of technocapitalism. Eventually, these newer jobs also get automated, people find even more creative things to do and the cycle repeats for ever. …
Except it can’t. Nothing repeats for ever and the end of this cycle is in plain view. The AI/robot/automated future is one in which humans will be outperformed by machines at everything. Not just the old jobs but the new jobs too – including the ‘creative’ ones. (If you don’t think a computer can write decent poetry, then go and sit on the bench with those who are already there: who thought that computers couldn’t produce art, diagnose illnesses, design and repair themselves, play chess, etc.) In a profit-driven framework, no employer is going to use an expensive human for anything when a machine can perform the same task, cheaper, faster, safer, more reliably and with fewer legalities. The surplus majority can’t ALL be the new ‘innovators’ – the numbers don’t work! Human unemployment will increase massively across the globe. What then happens to these people isn’t a technological question; it’s a political, socio-economic one. Let’s face it: if nothing changes, it doesn’t look good: the economic triangle gets wider at the bottom. Where will that lead?
Then there’s war and terrorism; and these are probably logically inseparable if we adopt the ‘one person’s terrorist is another’s freedom-fighter’ axiom. We seem to go through phases of increasing and reducing violence across the world so it’s hard to say if the increase we’re seeing at the moment is cyclic or longer-term. However, there are some facts that can’t be denied: the new data age (including social media, for example) makes it easier than ever before to spread messages of hate; mass movements can be formed almost overnight (for good or bad – but who’s to say which is which?); the increase in cyber-terrorism appears to be adding to conventional terrorism instead of replacing it. Also, it could be argued (although probably out of the scope of this piece) that much of the current ‘extremist’ unrest has more to do with economic circumstance rather than interpretation of scripture, for example – but that our global interconnectivity assists with this. There’s no evidence of new technology doing much to address any of these areas but plenty of ways it could make it worse.
Environmental concerns also continue to grow. The world is overpopulated and (in terms of the previous point, may be about become even more so). We’re not keeping pace at all with our consumption of resources and, already, the damage we’ve done may be irreparable. What might technocapitalism’s influence be here?
Well, it could help. The combination of big data and the Internet of Things could lead to clever use of energy, greater cooperation between people and the machines they use, and better ways of generating the energy in the first place. But will it?
Once again, we have to look at the evidence we have, which is the current system; and that doesn’t make for good reading. Remember the essence of capitalism is that things only happen for profit; and, sadly, no-one will profit from saving the planet. Quite the reverse, in fact: environmental restrictions reduce profit and will be contested and avoided (by legal means and otherwise) wherever possible. Does technocapitalism’s commodity shift from tangibles to intangibles change that? It’s hard to see how.
Security and privacy are another matter and we need to make an important distinction here before we go any further. There are two types of very rich people: ‘celebrities’ and those not wanting celebrity. The former thrive on exposure and often seem worse off than the rest of us when it comes to privacy – but that’s a choice. The latter (the multinationals, arms-dealers, etc.) manage to stay effectively hidden if that’s what they want. There’s no obvious reason why we should expect that to change in future but what about the rest of us?
Well, this is an area which we’ve discussed before – many times – so we won’t repeat old material. But, essentially, our data is very valuable. Not just to us, but to others – in different ways. What we regard as private has value to institutions as information with which to control or economically exploit us. And, ultimately, that data itself could be sold on as the commodity; and not just to other institutions – to ordinary people too (‘Shazam for People‘). To precis it all: some serious advice for the technocapitalist onslaught would be that, if you have skeletons in the cupboard, you might want to start getting them out now – voluntarily – because they won’t be in there for long!
Finally, the new data age really should help us live healthier and happier lives. Will it? Well, currently, a drug that will keep someone alive can be withheld if it ‘costs too much’, which essentially means the company that developed it won’t make a big enough profit if it costs less. Will the data that might help us be treated any differently under technocapitalism? Unlikely. That data – ultimately – will be used to benefit business and corporations: we’ll only get to use it by buying it. [Again, making allowance for the sleight-of-hand that is the public sector.] And if, considering all of the previous points, the elite decide there are simply too many of us, well …
Putting all this together, it’s more than possible to conclude (although other political opinions are available, of course) that the human race will do well to even survive a decade or so of technocapitalism, let alone benefit from it!
We’re going to have to continue this: there’s a lot to discuss here. But, let’s start by not swallowing the elite’s marketing bullshit, shall we?
This research jointly written by Arup and BAM explores ways Circular Business Models (CBMs) would provide added benefits throughout the value chain in construction.
By highlighting the value proposition to all stakeholders, it is intended that more companies will see the benefit of a built environment based on the circular economy.
Circular Business Models cannot be achieved without intervention: in today’s economy there are numerous examples where it is currently perceived as more cost-effective and convenient to dispose of resources after their first use rather than re-use them. By taking a systemic view across the whole life cycle of assets, using new technologies and applying advanced design approaches, additional value could be created. This value will demonstrate an economic business case for adopting CBMs, as well as providing wider global benefits from a financial, social and environmental standpoint.
Funders, owners and occupiers will be fundamental to driving a ‘circular built environment’, by choosing to adopt alternative development strategies, ownerships structures and operations models. However architects, designers, engineers, suppliers, contractors and facilities managers will have a crucial role in creating circular solutions to facilitate a move to CBMs.
The content of this report is applicable to the built environment as a whole, but many of the examples cited focus on commercial building developments. The general principles are the same for other project types, i.e. infrastructure/PPP, although the use periods, ownership models and supply chain interactions differ accordingly.
What effects the cost and payback of a solar system?
Not all solar panels, inverters and installers are the same. Differences in the purchase price of solar systems generally stem from cost differences in components. Quality components and manufacturing simply cost more. Solar equipment, like any other product, follows the universal consumer principle: "You get what you pay for".
The cost of a solar power system is dependent on a number of factors, including:
a. The quality and type of silicon cells, glass, framing, backing sheet, connectors, busbars, diodes and other components in solar panels. There are cheap options and quality options and while these differences are not visibly observable on a new, shiny panel after 5, 10, 25 years there are marked differences.
b. Design, testing and quality assurance used in manufacturing
c. Exchange rates
d. Inverter quality and its cost.
e. Installation costs. Extra costs are involved on a complicated roof, two story properties, running internal conduit & cabling (not ugly surface mounting), balance of system costs like railing, isolators and fuses come in cheap and quality options.
f. Federal and state rebates (if any) available towards the cost of the system. STC rebates are programmed to decline by 6.7% a year.
The financial returns from a solar system depend on these factors:
How long the system lasts. This is a pivotal factor. Cheaper systems, for example, may pay for themselves in 4 years but fail in 8, necessitating another purchase (with lower rebates). Conversely, a more expensive system may take an extra year to pay itself off, but then last another 20 years, giving the household tens of thousands in net power savings (a simple comparison appears after this list).
Performance of the system. P-type silicon cells are cheaper than N-type but have three significant performance disadvantages. Firstly, the annual degradation of P-type is double that of N-type. Secondly, N-type cells perform better at high temperatures, and lastly, N-type cells deliver more power in low-light conditions. The type of silicon cell could result in a 20% difference in performance over the life of a system. The more power a system delivers, the higher the financial savings.
Service & maintenance costs. Solar panels are only as good as their weakest link. One bad panel in 10 can massively reduce solar generation or stop it completely. All panels eventually fail, but ideally this starts happening after 25 years, because if it happens after 5 years it is a very expensive exercise in ongoing technician costs, panel replacement and lost generation to keep the solar system operating.
Power retailer buy and sell power tariffs. In all states and territories installing a reliable solar power system will deliver positive financial returns. However, in states with both high power prices and high export feed-in-tariffs (notably South Australia) solar power is a particularly good investment.
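To see how heavily system lifetime and degradation weigh on the payback comparison described in the first factor above, here is a minimal sketch – the prices, annual savings, lifetimes, and degradation rates are all hypothetical:

```python
# Hypothetical comparison: a cheap system that fails early versus a quality
# system that costs more but lasts longer and degrades more slowly
def net_savings(price, first_year_saving, lifetime_years, annual_degradation):
    """Lifetime power-bill savings minus the purchase price."""
    total = sum(first_year_saving * (1 - annual_degradation) ** year
                for year in range(lifetime_years))
    return total - price

cheap = net_savings(price=4_000, first_year_saving=1_000,
                    lifetime_years=8, annual_degradation=0.010)
quality = net_savings(price=7_000, first_year_saving=1_000,
                      lifetime_years=25, annual_degradation=0.005)

print(f"Cheap system net savings:   ${cheap:,.0f}")    # ≈ $3,700, then replace
print(f"Quality system net savings: ${quality:,.0f}")  # ≈ $16,600 over its life
```

On these assumptions the quality system takes longer to pay for itself but returns several times more over its life, which is why longevity is the pivotal factor.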
Monopoly is a game which has been around for decades now but is still one of the most popular board games in the world. Now published by Hasbro, there are literally hundreds of variations of the Monopoly board game, which include the likes of Glasgow Monopoly, Coventry Monopoly, Newcastle Monopoly, Manchester Monopoly and an array of other city-themed games, as well as cartoon-character and sports themes. It seems that the demand for Monopoly games continues to build even today and the company has major plans to bring out further versions as and when applicable in the future.
There is no doubt that it is the mixture of intelligence and skill needed to play the game, together with the family-orientated theme of the original Monopoly game, which has made it so popular for such a long period. There are even Monopoly games out there which allow you to customise each square of the board and effectively create your very own game which you can play with family and friends.
Monopoly is a popular game that involves a lot of strategies and a little bit of luck. A lot of adults love to play Monopoly, as a family game and at large gatherings. But this board game also proves beneficial to kids as well. This article will talk about two key reasons why parents should get their children involved in this board game.
#1: Monopoly Introduces Children To Grown-Up Terms That They Will Use In Later Life
There are two key concepts that kids are exposed to that will become crucial in later life: rent and mortgage. Whenever a player lands on someone else's property, they have to pay a specified amount. This becomes crucial as children grow up because they learn that to stay somewhere else, they must pay a monthly amount. They also learn that every place does not charge the same rent and that properties in more affluent neighborhoods cost more.
The children are also exposed to a mortgage for the first time. While the mortgage in Monopoly is different from the standard mortgage, children are at least introduced to the concept that they will receive money from the bank in consideration for their property. It’s a major growth experience for children.
#2: Monopoly Will Teach Children Strategy And Management
Monopoly is a game of strategy and management on many levels. Whether it involves buying the right set of properties or watching how much money you have, there is more than luck involved in winning. For example, as children play Monopoly over a period, they may learn that focusing on a set of properties near a corner is a better strategy than trying to get Park Place and Boardwalk. They may also learn that focusing on the more expensive properties may not be the best strategy for winning because it drains their money and they may not get a return on their investment.
Whatever the case may be, children will gain a better understanding of what approach they need to win the game, as well as of keeping track of how much money they have. This is good preparation for later in life, when they can put those strategy and management skills to use managing their own money and making crucial financial decisions.
I was poking around YouTube the other day and found a video with some tips on how to win at Monopoly. Do you do some of these things? If so, you might be the winner!
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9254880547523499,
"language": "en",
"url": "https://digital.hbs.edu/platform-rctom/submission/reducing-deforestation-impacts-in-the-agricultural-supply-chain-at-bunge-is-algae-oil-the-answer/",
"token_count": 1565,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.07470703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:7fcad9bb-6e0b-4557-adb6-80fddb4cd942>"
}
|
Bunge is a food production company focused on supplying commodities, such as grains (corn, wheat, rice, etc.) and oilseeds (soybeans, palm oil, etc.), to the world economy. Bunge's operations are integrated across agricultural supply chains, as it sources directly from farmers and suppliers, manages a complex physical transport and storage flow, processes and refines products, and then distributes them to foodservice companies.1 With a footprint in more than 40 countries, Bunge is a leading world exporter, operating 32 port terminals worldwide and 1,600 ocean voyages per year.2
Climate change should be top of mind for Bunge given that direct agricultural production contributes to at least 15% of global greenhouse emissions. Factoring in the wide-scale deforestation caused by agricultural expansion, that figure rises to 30%, given the loss of carbon-sequestering trees.3 Climate change in turn threatens food production because temperature shifts, extreme weather events, and heightened seasonal variability can all impact agricultural productivity.3 Thus, given Bunge’s positioning in the macro agricultural supply chain, Bunge is both susceptible to, and a potential contributor to, global climate change. As such, Bunge faces the threat of 1) shortened production supply from the farmers it works with directly 2) price changes in commodity markets due to changes in global supply (even if Bunge’s supply is stable) 3) operational disruptions to its transportation and logistics network due to extreme weather events and 4) reputational risk associated with being a climate change contributor.
Operating as a "full value chain" provider increases Bunge's risk exposure, but also positions Bunge well to advance traceability of its products, ensuring they are sourced, transported, and processed sustainably. To that end, Bunge is pursuing various initiatives to build a more resilient, sustainable supply chain. First, in 2015, it made a commitment to zero deforestation, aiming to have full compliance by 2020-2025.4 Given that soy and palm oil are two of the most detrimental commodities as it relates to deforestation,3 Bunge has focused its efforts on improving the supply chain of these products. Bunge aims to motivate its entire network of farmers and suppliers. For example, Bunge is seeking suppliers that do not disrupt High Carbon Stock (HCS) forests, as these are critical to reducing global GHG emissions. Bunge has put in place a robust system of recording and controls for its raw material inputs to validate and manage its supplier network.5 As of Q4 2016, Bunge was able to trace the origination of 87% of its palm oil production, with areas in Asia as the key region outstanding. Improving traceability in Asia is Bunge's long-term goal to achieving its zero-deforestation target.6
As a short-term objective, Bunge seems to be diversifying its product mix to include a land-light line of business. Through a joint venture with TerraVia, Bunge is cultivating algae oil as an alternative to its existing oilseed products. Bunge and TerraVia have developed two AlgaWise products, an ultra omega-9 fatty acid algae oil and an algae butter, which are expected to launch in retail in 2018.7 The environmental ramifications of producing high volumes of algae oil remain to be seen; however, its cultivation should be a more sustainable offering relative to other oil alternatives. Bunge and TerraVia are in the early days of product development, but the commitment to pursuing alternatives that lessen their dependency on land cultivation is promising.
To pursue its zero-deforestation target, Bunge should invest more heavily in technology in the near term to ensure that its suppliers are following climate-focused protocols. As a point of reference, Cargill recently announced that it will be leveraging blockchain in its supply chain of its Honeysuckle White turkey product. Through a data tagging system on every turkey in its supply chain, Cargill will be able to better monitor product sourcing with digital tracking and analytics.8 Bunge could better digitize its palm oil and soy operations with similar systems.
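To make the idea concrete, here is a hypothetical sketch (not Cargill's or Bunge's actual system) of how a hash-chained digital tagging log for shipments might work; the commodity names and fields are made up for illustration:

```python
# Hypothetical provenance log: each shipment record is chained to the
# previous record's hash, so later tampering breaks the chain.
import hashlib
import json

def add_record(ledger: list, record: dict) -> None:
    """Append a shipment record, chaining it to the prior entry's hash."""
    record["prev_hash"] = ledger[-1]["hash"] if ledger else None
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    ledger.append(record)

ledger = []
add_record(ledger, {"commodity": "palm oil", "volume_tonnes": 500,
                    "origin": "verified non-HCS plantation"})
add_record(ledger, {"commodity": "palm oil", "volume_tonnes": 500,
                    "origin": "port terminal", "carrier": "vessel-042"})
# Editing an earlier record changes its hash and invalidates every
# later record, which is what makes the provenance trail auditable.
```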
In the medium to long term, rather than focusing on slowing deforestation, Bunge should help to reverse it through reforestation practices and agroforestry. By investing in agricultural products that are derived via thriving forest rather than cleared land, Bunge can grow its operations while also mitigating climate impacts. Sugar palm is an example of a product that relies upon agroforestry, in which vegetation such as trees and shrubs are integrated into crop and farming systems.3 The challenge is that Bunge's current product portfolio does not lend itself to forest-minded agricultural practices; ultimately, palm oil and soy are detrimental to the land and expose Bunge to intractable climate risk. Bunge needs to support the development of alternatives to these products that can satisfy global demand and should invest more heavily in R&D to that end.
What is Bunge's role in reducing the use of palm oil across the supply chain? What will happen if it reduces supply?
Word Count: 797
- Bunge Limited, “Our Businesses,” https://www.bunge.com/our-businesses, accessed November 2017.
- “Investor Day,” Bunge Limited 2016 Investor Day Presentation, December 13, 2016, on Bunge Limited website, https://bunge-ltd-micro.prod-use1.investis.com/~/media/Files/B/Bunge-Ltd-Micro/event-calendar/2016-investor-day-presentation.pdf, accessed November 2017.
- Sonia Vermeulen, Bruce Campbell, John Ingram. "Climate Change and Food Systems," Annual Review of Environment and Resources, No. 37, 2012, pp. 195–222.
- “Non-Deforestation Policy: Grain and Oilseeds Update” Non-Deforestation Progress Report, September 2017, on Bunge Limited website, http://www.bunge.com/sites/default/files/non-deforestation_progressreport_sep2017.pdf, accessed November 2017.
- Bunge Limited, “2016 Global Sustainability Report,” https://www.bunge.com/sustainability2016/index.html, accessed November 2017.
- “Global Palm Oil Sourcing Update,” March 2017, on Bunge Limited website, https://www.bunge.com/sites/default/files/bunge_palm_oil_update_-_033117.pdf, accessed November 2017.
- Elaine Watson, “TerraVia algae butter to launch in early 2018, could be a ‘blockbuster,’ predicts CEO,” FoodNavigator-usa.com, May 3, 2017, https://www.foodnavigator-usa.com/Article/2017/05/04/TerraVia-algae-butter-to-launch-in-early-2018#, accessed November 2017.
- "Honeysuckle White® brand leads the way in food transparency, delivering a farm-to-table Thanksgiving featuring first-ever traceable turkeys," Cargill press release, October 25, 2017, on Cargill website, https://www.cargill.com/2017/honeysuckle-white-brand-leads-the-way-in-food-transparency, accessed November 2017.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9588412642478943,
"language": "en",
"url": "https://eyeonhousing.org/2011/12/household-balance-sheets-deteriorate-during-the-third-quarter/",
"token_count": 834,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.11962890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:54326215-a922-448c-95fb-41c2052c471e>"
}
|
NAHB has been tracking two key economic variables that are critical for a robust and sustainable rebound in housing and the economy as a whole: the ratio of household net worth to disposable income (NW/DPI) and the personal savings rate.
The NW/DPI ratio can be thought of as a measure of the health of household balance sheets. It tells us how much household wealth exists relative to available income. Over the last 25 years, this measure has averaged about 5.17 (i.e., households, in aggregate, typically possessed total wealth equal to their current disposable income multiplied by 5.17). In late 2006, this ratio peaked at a value of 6.45. It then fell to 4.68 at the beginning of 2009, as housing price declines and stock market declines took a toll on household wealth.
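As a quick illustration of the calculation (the aggregates below are hypothetical stand-ins, not official Flow of Funds figures), the ratio is simply net worth divided by disposable income:

```python
# Illustrative NW/DPI computation with made-up aggregates.
LONG_RUN_AVERAGE = 5.17  # 25-year average cited above

def nw_dpi_ratio(net_worth: float, disposable_income: float) -> float:
    return net_worth / disposable_income

# e.g., hypothetical net worth of $56.1T against DPI of $11.6T
ratio = nw_dpi_ratio(56.1e12, 11.6e12)
print(f"NW/DPI = {ratio:.2f} vs long-run average of {LONG_RUN_AVERAGE}")
# A reading below the average signals weaker-than-normal balance
# sheets, which historically pushes the savings rate up.
```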
The personal savings rate tends to be negatively correlated with NW/DPI, rising as household balance sheets deteriorate and falling as they improve. In turn, the personal savings rate has an important effect on macroeconomic growth, with a rising savings rate holding back short-run growth due to declining household consumption.
The graph above plots the current value of NW/DPI (red line) and the 25-year average (1982-2007) (green line). The blue line charts the personal savings rate. Household net worth data are from the Federal Reserve’s Flow of Funds data and the savings rate and disposable income data come from the Bureau of Economic Analysis National Income Product Accounts.
Household balance sheet repair, meaning household deleveraging as families pay down debts and build up savings, in general continues as seen by the gradual upward trend of the NW/DPI since early 2009. However, there have been ups and downs in this process, which is crucial for successfully emerging out of a balance sheet recession. We are currently experiencing an extended down period given recent stock market declines. As of the end of the third quarter of 2011, NW/DPI currently stands at 4.86 (plotted on the right axis).
The most significant reason for this weakening in balance sheets was the decline in the stock market during the third quarter of 2011. For example, the S&P 500 was down almost 16% for the quarter. As a result, total holdings of household financial assets declined by almost $2.7 trillion or more than 5%. It is useful to note that homeowner equity, which has been approximately constant through 2011 due to relative stabilization in home prices, was not a significant cause of the weakening of balance sheets.
This stumble in household wealth occurred just after, under the revised data, household balance sheets reached their historical average in the first quarter of the year. Consequently, the savings rate fell significantly, from 5% to 3.8% in the third quarter (despite some predictions from more pessimistic observers that the rate would increase in 2012 to 8% or higher). This decline certainly boosted consumer spending. In fact, personal interest payments (non-mortgage interest payments on debt) rose in third quarter for the first time since late 2007, according to the NIPA data.
However, because household balance sheet conditions declined over the last two quarters, we expect the savings rate to increase somewhat during the last quarter of 2011. This negative effect should be offset somewhat by recent recoveries in stock prices, with the S&P 500 up 10% over the last two months. If balance sheets continue to improve at the post-2009 trend, then they will return to the historical average by the middle of 2012. This forecast suggests that the savings rate will remain between 3% and 5% for the first part of 2012.
Finally, data from the third quarter Flow of Funds show that total home mortgage debt outstanding continues to decline. Since the first quarter of 2008, mortgage debt has declined 6.9% or $730 billion. However, the value of the owner-occupied housing stock has fallen by 20% over the same period. As a result, the share of equity relative to the total value of the owner-occupied housing stock has remained around 40% over the last three years.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9493198990821838,
"language": "en",
"url": "https://investingintruth.com/how-to-free-up-100-in-your-budget/",
"token_count": 1645,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0322265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:29731745-9d62-4188-8cee-f096b5b40e17>"
}
|
Who doesn’t want an extra $100 each month? It all starts with tracking our expenses and figuring out where our money is going. Once we’ve done that, we can make decisions on what to cut out of our normal spending in order to free up $100 per month. That money can then be used to pay off debt quicker, build an emergency fund or increase your giving. The goal is to stop spending money on less important things and start using it for the important things you can’t afford now.
The following recording is from “Mornings with Kelli and Steve” on Moody Radio Indiana (97.9 FM). For more information on Moody Radio, go to moodyradio.org/indiana.
Q: When you coach people on how to find $100 per month in their budget, where do you start?
I always have to remind the individual and myself of the real goal. We're trying to make our budget a more positive experience and create more enjoyment out of our spending by getting rid of expenses that give us little pleasure and replacing them with things we want but can't afford. When we get that right, it changes the perspective of the process from negative to positive.
We shouldn’t look at what others spend their money on to determine what we should be spending. Some people get great enjoyment from things that others would list as least important. That’s why it’s an individual choice and each person needs to decide for themselves where to make cuts.
Q: What are the most common optional expenses that people consider trimming in an effort to find an extra $100 per month in their budget?
Food expenses come in two forms, eating out and buying groceries to cook at home.
- Eating out – I always like to say, "Eat out because you planned to, not because you failed to plan." For some of us, eating out is important because it is time spent with family or building relationships with friends. If someone's budget allows for that, I'm not going to tell them to give it up.
The problem comes when we eat out because we didn’t think ahead and plan for our day. How many times do we buy a lunch because we didn’t prepare to take it the night before? What about when we work a long day and we’re all tired when 5:00 rolls around and we don’t know what we’re going to do for dinner? Instead of trying to figure it out, we decide to go out and eat instead. In those cases, we’re spending extra money because we failed to plan for a cheaper alternative ahead of time when we had the energy to do it.
- Groceries – If we’re spending too much on groceries, it’s often a result of a lack of planning as well. Every time we go into the store, we’re likely to buy something that wasn’t on our list. That’s why we need to plan our meals in advance and try to buy groceries for a week at a time if we can. If you need to make a grocery run to pick up an item, stick to the meal plan and just buy what you need for those meals.
If you still need ways to cut down on grocery spending, you may want to analyze what you spend on each meal and try to put more of the cheap meals in your week and sprinkle in the more expensive ones less frequently. For a boost in the beginning, you can also try to plan an entire week of meals using only food you already have on your shelves.
We spend a lot of money in the name of convenience these days. Some examples relate back to our first point.
- Prepared foods – It’s so much easier to go to the store and buy food that is already prepared for us to cook instead of buying it in its raw state. Things that come to mind are carrots that are already peeled and cut up for you or salads that are already put together and ready to eat, cheese that is cubed up or garlic bread that is already sliced and buttered. All of these conveniences come with an added cost that increases our grocery budget.
- Box services – Box subscription services are super convenient because they send everything to your door and it’s all ready to go, but there is an added cost to that convenience. In fact, the box meal services are running into the problem that they’re teaching people how to cook and hurting their own business model because their customers can go to the store and buy the ingredients cheaper in the future.
Know what you’re paying for convenience and make sure it’s worth it to you. Some things will be and some won’t.
Gifts and Celebrations
Too often we equate the amount we spend on a gift or experience with the amount of love we’re showing. In reality, time is one of the best and most loving gifts you can give and it doesn’t require our money. Here are a few suggestions to cut back on gifts:
- Focus more on Jesus at Christmas and cut back on the number of gifts. Add experiences and time together instead.
- Celebrate the person on birthdays, rather than defaulting to an expensive meal or party experience.
- Relational holidays like Valentine’s Day, Mother’s and Father’s Day, etc. don’t always have to circle around an expensive meal, even though our society likes to make us think they do. Remember who you’re celebrating and put some creative time into planning something special on a limited budget. It can be fun and special while not breaking us financially.
Storage units
Self-storage is a profitable business, but not a profitable service when we use it long-term. If you've been paying storage fees for a year or more, it's time to do something else with those items.
It’s not uncommon for me to see people paying more in storage fees than the items inside are worth.
Unused memberships and entertainment subscriptions
If you’re paying for things you’re not using, you need to trim those expenses. Some possibilities are:
- Gym membership you’re not using
- Multiple services for entertainment when just your favorite one will do
Look at these monthly expenses and drop the least important one or two. You probably won’t even miss it.
Cell phone plans
Monthly cell phone bills have become an increasingly large portion of the average person’s budget. We all should look at those bills and see if there’s a way to get the service we need cheaper.
- Look for some of the lesser known service providers that piggy-back on the big cell provider networks.
- Try to change your cell data usage habits. Some people need cell data for their work, but others could easily transition to using wifi data instead and save a fortune. I have a plan that charges me nothing for calls and texts and only charges for the amount of cell data I use. Since I’m normally close to wifi, I use very little data and my cellular bill last month was $2.35.
We all get worn down and feel like we deserve a nice, relaxing vacation, but often end up spending more than we should on those trips.
- One of the most common ways that we overspend goes back to that idea of convenience. All-inclusive resorts or vacation packages that have everything planned out for us sound easy, but they typically come with a mark-up in costs as well.
Brad Graber, CFP® has been working with clients on personal financial planning and investment issues since 1996. He invests his time mentoring and educating individuals on ways to be better stewards of the resources God has entrusted to them.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9733572006225586,
"language": "en",
"url": "https://www.healthikids.org/blog/whats-next-for-the-farm-bill",
"token_count": 990,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.39453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:21bd4100-9a37-466e-a08d-9a212a769a36>"
}
|
May 31, 2018
You might have been hearing about the Farm Bill through social media or in the news. Maybe something about “food stamps” or “SNAP,” and a vote in Congress. There’s talk of major changes and millions of people losing access to food. So what is going on? And what is the Farm Bill anyways?
The history:
The Farm Bill has its roots in the Great Depression of the 1930s. During the Great Depression, many people were struggling to afford even the most basic necessities, like food. That meant long lines at soup kitchens and other charities. Meanwhile, farmers were growing more crops than anyone could buy, and prices for crops became so low that farms were facing ruin. The government responded by creating programs that supported farm incomes and helped consumers get the food they needed.
After the economy recovered in the 1940s, the food stamp program was ended. Later, in the late 1960s, the general public became more and more aware of the severe poverty that persisted in America. The U.S. was a wealthy nation and a world power, but some of its citizens were still unable to afford food. Congress acted by creating the modern Farm Bill, a combination of food stamps (known today as Supplemental Nutrition Assistance Program or SNAP) and financial supports for agriculture.
Since the 1960s, Congress has passed a new Farm Bill every 4 to 6 years. It’s a complicated collection of conservation and environmental programs, farmers markets and rural development grants, support for major crops like corn and cotton, SNAP, and more. The fact that it affects such a wide range of stakeholders usually forces Congress to compromise and pass a bipartisan bill.
What’s happening now:
This year, House Republicans in charge of writing the Farm Bill did not come up with a bipartisan version. Their bill would make major changes to the SNAP program: it would cover fewer people by lowering the income cap for recipients, make it harder for states to enroll eligible people, and reduce SNAP benefits for people who need their state's help paying utility bills.
Work requirements for SNAP:
The biggest change would come in the form of new rules for work requirements. SNAP already has work requirements that apply to able-bodied adults who are 18-49 years old and don’t have kids. If these adults are not working 20 hours a week, or participating in a qualified employment-training or volunteering program, they can lose access to SNAP. The House Farm Bill would expand the requirements to include adults with children over 6 years old and people up to 59 years old. It would also make changes that would kick people out of SNAP much more quickly for not meeting the requirements.
Shifting responsibility to States:
These proposals were met with a lot of criticism. Many critics point out that millions of Americans would lose some or all of the help they get through SNAP. Others have said that States won’t be able to offer enough training programs or afford to track every SNAP recipient on a monthly basis. There are questions about just how helpful the training programs, such as resume writing classes, really are for people in SNAP. While most people in SNAP who can work do have jobs, many unemployed SNAP recipients are already trained and actively seeking employment.
Impact on School and Summer Meals:
Under the current Farm Bill, many States use a system called “categorical eligibility” to make enrolling in SNAP easier for people who have already been approved for other income-based programs. The House proposal would make changes to categorical eligibility that would cause hundreds of thousands of kids to lose access to free breakfast and lunch at school and summer meals. The House also adopted an amendment to the Farm Bill that would force the Department of Agriculture to reevaluate nutrition standards for school food. Child nutrition advocates worked hard to pass improved school food standards in 2010, and many fear that this amendment is an attempt to get unhealthy food back into schools.
With these criticisms from moderate Republicans and Democrats, plus objections from conservatives who wanted more spending cuts, the bill did not pass the House. What’s next for the Farm Bill is not clear. The House Republicans could try to pass their bill again. In the Senate, Democrats and Republicans are working together on a bipartisan Farm Bill. Congress might also decide they cannot get a new Farm Bill this year, and choose to wait until after November’s elections.
Whatever happens could have a big impact on the way we grow and purchase food in the United States. For details on the Farm Bill and what it might mean in New York and around the country, you can visit the websites of Hunger Solutions and National Sustainable Agriculture Coalition. Healthi Kids will continue to monitor the developments, because we know how many kids in Rochester depend on SNAP and other programs funded through the Farm Bill.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9839339852333069,
"language": "en",
"url": "https://www.infinitevortexoflight.com/jfk",
"token_count": 206,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.47265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5fd6cb11-63c0-4836-b661-d1eaf1fd8e7a>"
}
|
The U.S. was in a recession when Kennedy took office. He carried out various measures to boost the economy under his own executive anti-recession acceleration program. Among other things, the most significant tax reforms since the New Deal were carried out, including a new investment tax credit.
President John F. Kennedy established the Peace Corps on March 1, 1961. It was a program through which American volunteers would help underdeveloped nations in areas such as education, farming, health care, and construction.
JFK averted nuclear war through his negotiations with Soviet leader Nikita Khrushchev during the Cuban Missile Crisis. The Soviet Union agreed to dismantle its missile installations in Cuba. It was secretly agreed that the US would in turn remove its nuclear missiles from Turkey, though this was not declared publicly.
Kennedy supported racial integration and civil rights through his speeches. On March 6, 1961, he signed an executive order which required government contractors to take affirmative action to ensure all employees are treated equally irrespective of their race, creed, color, or national origin.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8667354583740234,
"language": "en",
"url": "https://www.morewords.com/word/cash",
"token_count": 158,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.029541015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:15f8ae6a-6aca-4fdd-9b9a-d7448ebf359e>"
}
|
Definitions of cash
The word cash uses 4 letters: a, c, h, s
Meanings of cash
n. - A place where money is kept, or where it is deposited and paid out; a money box.
n. - Ready money; especially, coin or specie; but also applied to bank notes, drafts, bonds, or any paper easily convertible into money.
n. - Immediate or prompt payment in current funds; as, to sell goods for cash; to make a reduction in price for cash.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9557556509971619,
"language": "en",
"url": "https://www.wowessays.com/free-samples/example-of-healthcare-utilization-and-finance-case-study/",
"token_count": 649,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.053955078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:37f45d38-1403-4e98-8b06-28f82e4027d1>"
}
|
Medicare is available in various categories depending on the patient's needs. For instance, in the United States, Medicare Part A is a type of hospital insurance that covers inpatient care during stays in hospitals, hospice, home health care, and skilled nursing facilities, as in Mrs. Zwick's case. Most people do not pay Part A premiums because the coverage is funded through Medicare taxes paid while working, but one can still purchase the premium if one is more than sixty-five years of age, possesses Part B, and is a citizen or meets the residency requirements of the state. The second qualifying situation is being below sixty-five years of age with premium-free Part A coverage that ended for reasons related to returning to work (McLean, 2003).
The second category of Medicare insurance is Part B, which only covers services that are medically necessary, such as outpatient care, doctors' services, health services at home, durable medical equipment, and various preventive services, such as the prevention and detection of illnesses like the flu (Moody's Investors Service, 1995).
Medicare Part D, on the other hand, is a special federal program meant to reduce and subsidize the cost of prescription drugs in the United States for Medicare beneficiaries. Enacted on January 1, 2006 as part of the Medicare Modernization Act, it is available to individuals enrolled under Part A or B and covers prescription drugs.
Payments that hospitals and physicians receive for services given to patients covered by Medicare programs are known as Medicare reimbursement. The money is usually directed to the billing provider, though Medicare does not necessarily cover the whole amount. This means that Mrs. Zwick's hospital-acquired infection would cost her and her daughter more money (Culyer, 2000).
Davis would also benefit from the health program referred to as COBRA (Culyer, 2000). COBRA allows some employees to continue their health insurance coverage even after leaving employment, including benefits such as disability insurance and emergency room treatment, among others. This would be convenient for Davis since he falls into the category that can receive the service.
If Davis were a citizen of Great Britain, he would have access to better health care, since medication is covered for every citizen courtesy of taxation. In Japan, health care services are provided, including prenatal care, screening examinations, and treatment of infectious diseases, but patients pay thirty percent of the total cost while the government covers the remaining seventy percent (Scheiber, 1997).
Culyer, A. (2000). Handbook of Health Economics. London: Elsevier.
McLean, R. (2003). Financial Management in Health Care Organizations. Delmar Series in Health Services Administration, 2nd ed. New York: Cengage Learning.
Moody's Investors Service (1995). Health Care Utilization and Financial Statistics. New York: Moody's Investors Service.
Scheiber, G. (1997). Innovations in Health Care Financing: Proceedings of a World Bank Conference. Washington: World Bank Publications.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8737619519233704,
"language": "en",
"url": "https://coursecatalog.nwtc.edu/courses/10-090-303+060781",
"token_count": 241,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.059326171875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:31a7b580-f64c-4c74-adbf-05300f679f0b>"
}
|
Northeast Wisconsin Technical College
10-090-303 060781 Agribusiness Economics
Outline of Instruction
10-090-303 AGRIBUSINESS ECONOMICS ... Basic knowledge of macroeconomics and microeconomics will be taught, along with proficiency in developing basic cash flows, cost-benefit analysis, enterprise analysis, budget development, profit and loss statements, and cost of production. Students should be able to develop adequate information for a farm manager to analyze business proposals.
Prior Learning Assessments
Differentiate how agricultural commodities are priced at the local point of sale.
Relate how global markets influence local farm commodity prices (macroeconomics and microeconomics).
Associate the effects on-farm practices (good and bad) have on product quality and value.
Calculate costs of production for various farm products.
Compare previously budgeted expenses and incomes to actuals.
Record daily activities as appropriate on accepted formats.
Compare various options to resolve a pending problem area.
Accept responsibility to use all farm resources carefully and as efficiently as possible.
Organize daily tasks to be completed in a timely manner.
Review sustainable principles being applied in farming industry.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9629209637641907,
"language": "en",
"url": "https://duttonlaw.ca/minimum-wage-in-quebec/",
"token_count": 774,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.18359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:0e8bdff5-e42d-4ef9-b838-fa96c2e778b3>"
}
|
Section 2 of the Regulation Respecting Labour Standards sets out the rules about minimum wage in Quebec.
On May 1, 2020, Quebec’s minimum wage rates went up. The new general minimum wage for 2020 and into 2021 in Quebec is $13.10 per hour. This is an increase of 60 cents per hour from the old minimum wage, $12.50, which was in place until April 30, 2020.
The following chart shows the new minimum wage rules in Quebec beginning on May 1, 2020:
| Minimum wage type | Minimum wage rate after May 1, 2020 |
| --- | --- |
| General minimum wage | $13.10 per hour |
| Student minimum wage | There is no special student minimum wage; students are paid the general rate. However, students employed by a social non-profit and students in regulated professional training programs are exempt from the minimum wage in Quebec. |
| Employees who receive tips | $10.45 per hour |
What about Domestic Workers?
Domestic workers like nannies must be paid the general minimum wage ($13.10). There is no lesser wage in Quebec for domestic workers.
However, if the employer provides room and/or board (meals) to the domestic worker (or any kind of worker for that matter), they can deduct set amounts for room and board in the calculation of the employee’s pay:
Set amounts if providing room only
- $28.53 maximum per week for “rooms”, or
- $51.33 maximum per week for “dwellings”
- “Room” means a room in a dwelling unit that has a bed and a chest of drawers for each employee who is accommodated and that allows access to a toilet and a shower or bath;
- “Dwelling” means a dwelling unit that has at least 1 room and allows access to at least a washer and dryer as well as a kitchen with a refrigerator, a stove and a microwave oven.
Set amounts if providing meals only
Minimum Wage and Commission Employees
If an employee works on-site and has regular hours, and he or she is paid entirely or partly on commissions, the employer must pay the employee at least the minimum wage for each hour the employee has worked.
For example, if an employee has earned no commissions one week, he is still entitled to the minimum wage for all hours worked. If a commissioned employee’s wage is below minimum wage for the hours they worked, the employer must top up their payment so that it’s equal to minimum wage. If, on the other hand, an employee has earned over $524 (40 x $13.10), for example, in a 40-hour workweek, the employer will not need to make up any difference.
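A minimal sketch of this top-up rule, using the May 1, 2020 general rate (the hours and commission figures are illustrative):

```python
# Top-up calculation for a commissioned employee working on-site
# with regular hours, per the rule described above.
GENERAL_MINIMUM_WAGE = 13.10  # $/hour as of May 1, 2020

def required_pay(hours_worked: float, commissions_earned: float) -> float:
    """Commissions for the period, topped up to the minimum-wage floor."""
    floor = hours_worked * GENERAL_MINIMUM_WAGE
    return max(commissions_earned, floor)

print(required_pay(40, 450.00))  # 524.0 -> employer adds a $74 top-up
print(required_pay(40, 600.00))  # 600.0 -> no top-up needed
```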
However, if an employee is paid entirely on commission and (1) they work offsite and (2) their hours are not controlled by the employer, then they are exempt from minimum wage in Quebec. For example, a salesperson who makes sales calls from home at his or her own leisure and only receives a commission on the sales he or she makes would be exempt from the minimum wage.
Employees who Receive Tips
Employees who receive tips as a major part of their income are entitled to a lesser minimum wage ($10.45 instead of $13.10). For example, an employee who works in one of these industries and usually receives tips from customers will be entitled to the lesser minimum wage:
- restaurants, except for fast food establishments
- food delivery
- bars and clubs
Future Minimum Wage in Quebec
There is no further minimum wage hike planned for 2021 at this time in Quebec.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9590404629707336,
"language": "en",
"url": "https://theamericangenius.com/finance/cryptocurrency-works-basic-vocabulary-concepts/",
"token_count": 3034,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.032958984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5214eccd-f862-467d-a872-bb367e869985>"
}
|
One of the most exciting things to arise out of new technology is the idea of better ways to optimize and improve concepts that we already find in the real world. None of us should be surprised when that includes currency.
With cryptocurrencies such as Bitcoin, Ethereum, Ripple, Litecoin, Dash, NEM, Ethereum Classic, Monero, and Zcash (to name a few), it may be hard for the average consumer not just to keep up, but to know what's going on in this revolution in our modern-day economy. Knowing how crypto works makes you a better consumer, as well as an investor in your future. Let's get started with the basics.
What is a cryptocurrency?
To ask what cryptocurrency is, one should also contemplate what modern day paper or coin currency is. At its most basic, all currencies share this core trait: you can exchange a unit (or units) which has predetermined value for either goods or services. Whether it’s dollars, Yen, the gold standard, or Dogecoin, all of these currencies allow you to complete basic transactions.
Where cryptocurrency is different, is how these transactions are completed and how cryptocurrencies are processed.
How does crypto differ from common currencies?
Cryptocurrency allows you to send money directly peer-to-peer (p2p) electronically instead of operating through third-party systems like banks or governments.
The technology that makes this happen is called Blockchain. Blockchain technology is the primary difference between the dollars in your wallet and the virtual currencies in your crypto wallet. The Litecoin School of Crypto uses a great analogy to explain how blockchains work:
“In its simplest form, blockchain is data. It’s a list of recorded information called “blocks” strung together in a chain. Think of blocks as folders stuffed with information i.e. how much Litecoin was sent, who sent it, and who received it. The great thing about blockchains is that it’s public and anyone in the world can see it.”
How does a normal crypto transaction work?
Here's an example using the fictional cryptocurrency, bitquarters: Karen owes Jamal 10 bitquarters for her movie ticket, so she's going to pay him back. Karen first requests the transaction through her digital wallet. Because of the nature of cryptocurrency, she can't send bitquarters she doesn't have (there is no "overdrawn" account status in crypto, unlike modern banks), so it's a good thing she just got paid!
When Karen initiates the transaction, she uses her private key to virtually "sign" it (the reason cryptocurrency is called that is because of encryption, after all). The requested transaction is sent via peer-to-peer (p2p) sharing to a network of computers called nodes, which validate Karen's key and verify the transaction.
After the transaction is verified, it is added to the blockchain, the virtual ledger that all bitquarters users have access to. Once that is finished, in only a matter of seconds, Jamal is paid!
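To make the flow concrete, here is a toy sketch of the bitquarters example. Real cryptocurrencies sign transactions with public-key cryptography such as ECDSA; the hash-based "signature" below is a stand-in so the sketch stays standard-library Python, and all names and balances are made up:

```python
# Toy transaction flow: request, "sign", verify funds, append to ledger.
import hashlib
import json

def toy_sign(private_key: str, payload: dict) -> str:
    """Stand-in for a real digital signature."""
    message = json.dumps(payload, sort_keys=True)
    return hashlib.sha256((private_key + message).encode()).hexdigest()

def send(ledger, balances, sender, recipient, amount, private_key):
    if balances.get(sender, 0) < amount:
        raise ValueError("insufficient funds: crypto has no overdrafts")
    payload = {"from": sender, "to": recipient, "amount": amount}
    tx = {"payload": payload, "signature": toy_sign(private_key, payload)}
    # In a real network, nodes would verify the signature here before
    # the transaction is added to the public ledger.
    ledger.append(tx)
    balances[sender] -= amount
    balances[recipient] = balances.get(recipient, 0) + amount

ledger, balances = [], {"Karen": 25}
send(ledger, balances, "Karen", "Jamal", 10, "karen-private-key")
print(balances)  # {'Karen': 15, 'Jamal': 10}
```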
What is this cryptocurrency “mining” thing I’ve been hearing so much about?
Mining is a vital part of the cryptocurrency transaction. Miners are the only participants in the crypto process who can confirm transactions. Their job is to take a transaction, verify that it is legitimate, and spread it p2p across the network.
To make it part of the public ledger (the blockchain), every node has to add it to its database. Because mining consumes a computer's processing power and electricity, miners are rewarded with small amounts of cryptocurrency per transaction (much like how you pay a fee to pull money from an ATM). However, to prevent fraudulent transactions, a computer must solve a cryptographic puzzle in order to add a block to the blockchain.
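Here is a minimal sketch of that puzzle, assuming a simplified proof-of-work scheme in which miners search for a nonce that makes the block hash start with a run of zeros (a toy difficulty; real networks use vastly harder targets):

```python
# Toy proof-of-work: increment a nonce until the block hash has the
# required number of leading zeros.
import hashlib
import json

def mine(block: dict, difficulty: int = 4) -> dict:
    prefix = "0" * difficulty
    nonce = 0
    while True:
        block["nonce"] = nonce
        digest = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        if digest.startswith(prefix):
            block["hash"] = digest  # recorded once the puzzle is solved
            return block
        nonce += 1

candidate = {"transactions": [{"from": "Karen", "to": "Jamal", "amount": 10}],
             "prev_hash": "0000a1b2c3d4"}
mined = mine(candidate)
print(mined["nonce"], mined["hash"][:12])
```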
What are other important crypto terms I need to know?
Address: the only piece of information that needs to be used for a transaction, similar to a user name or email address. Each transaction uses a different address.
Block: a unit of data in the blockchain that holds and validates transactions. A blockchain is where all blocks of transactions reside.
Double spend: the action of trying to spend cryptocurrency to two different recipients simultaneously. Mining as well as the blockchain prevent malicious actions such as this from taking place.
Cryptocurrency is held up by some as the currency of the future, while many others think that, due to over-speculation, it will be an investment bubble with irrevocable consequences for brick-and-mortar institutions. Regardless of any market forecaster's perspective on cryptocurrency, the technology is here to stay, and knowing the basic vocabulary can help you understand where things are going.
Don't be intimidated by all of the language around this concept; if you choose to dive into the crypto waters, you'll learn as you go along. If you invest in stocks, you already know a specific set of concepts and vocabulary. Crypto functions differently, but it is just another financial mechanism; both can be overwhelming, and learning the parts relevant to your goals is all that matters.
PS: If you're more of a visual person, there's a short video circulating that explains Bitcoin well and applies to crypto in general.
This story was first published in February 2018.
Will China’s new digital currency really compete with the US Dollar?
(BUSINESS FINANCE) It isn’t the first time that China has tried to compete with the dollar, but the release of a digital currency has lead some economists to raise red flags.
For decades the US has been the world standard for foreign trade. As of 2019, 88% of all trades were being backed by that almighty dollar, making it the backbone of the world economy. However, China may be sneaking in something new for digital currency.
In the last few months, over 100k people were “airdropped” cold hard digital currency. This currency came from People’s Bank of China (PBOC), who has created a digital manifestation of the Chinese yuan. This is planned to run concurrently with its paper and coin playmates. Upon initial inspection, they resemble the same structure as Bitcoin and Ethereum. But there’s a major difference here: The Chinese government is the one fronting the money.
The suspected plan behind this is that the government plans to tightly control the value of the digital yuan, which they are known to do with the paper one as well. This would create a unique item within the world of cryptocurrency. Personally, I don’t think that any of this is going to go anywhere soon. Too many people still need hard currency but it does open up a unique aspect of currency that has only just started since debit and credit cards. It gives the government the ability to spy on its cryptocurrency users. Being able to monitor transaction flows can reveal things like tax evasion and spending habits. There is even the possibility of experimenting with expiring cash.
But how does this affect the US? There’s a method that has been used by Americans since WWII called dollar weaponization. The exchange domination allows the US government to monitor how the dollars move across the border. Along with that monitoring they are actually able to freeze people out of global financial products as well. It’s a phenomenal amount of power to hold.
The concern for economists is that the price fixing capabilities of this new currency as well as its backer being an entire countries government could affect everything about the global financial system. Only time will tell how true that turns out to be.
There are a number of possibilities that could come up honestly and they could fall flat on their face unless they put their entire monetary worth behind it. Only time will tell but some economists are already calling for DigiDollars from the American government. Another step into the future.
A tiger shows its stripes: The growth of Tiger Global and their investments
(BUSINESS FINANCE) Tiger Global has been acquiring a load of tech companies – let’s talk about who they have and how they’ve been so successful.
In 2003, Tiger Global was founded by Chase Coleman, who began his career at Tiger Management (brilliant name choice). In the ensuing years the investing firm expanded to include private equity and venture investing. Today it's hitting the charts at $65B, with its employees (numbering ~100) being the firm's biggest shareholders.
Earlier this month, Tiger Global raised one of the largest pots of VC money ever recorded, coming in at $6.7B. This follows a string of notable events and investments.
- Roblox: A sandbox gaming startup; Tiger Global owned 10% when it went public in March, with the value hitting ~$38B+
- Stripe: A fintech firm; Tiger Global leaped onto this investment when Stripe announced a $600m funding round at a $95B valuation of the company.
- M&A wins: In 2020, 3 portfolio companies (Postmates, Kustomer, & Credit Karma) of Tiger Global were acquired in billion-dollar deals.
The tactics that Tiger Global stands by are well documented in a few different places. One of the biggest is speed: the deals that fly across their tables are completed in just 3 days, far outpacing other firms. When you are an investment firm, hours are the difference between success and failure. To keep up this pace, they take a pre-emptive approach to startups, doing thorough research and throwing money at people before they even start looking for it. Knowledge is power, and this lets them get their foot in the door faster than anybody else.
Resources and a monstrous war chest are two of the other factors they stake their claim to fame on. Their numerous portfolio companies have high-priced consultants thrown at them for advice on a regular basis, and these consultants just add to the companies' success. Where does this money come from? The stakeholders. The mountainous pile of money this firm keeps on hand is matched by very few in the world; Scrooge McDuck would be hard pressed to keep up with these guys.
They also keep to long-term holdings as an approach. Unlike traditional VCs, Tiger Global operates public market hedge funds, which provide price stability for startups since the firm doesn't have to distribute funds after an IPO.
In the first quarter of 2021, Tiger Global closed 60 deals, keeping with its hit-the-ground-sprinting approach. It already has bids on a number of different companies as well (ByteDance, Discord, Hopin, & Coinbase), at least one of which reaches a value in the tens of billions. This company is set to be one of the fastest growing groups on the globe. Who knows where it will stop? Let's wait and see, or join. Whatever hits your fancy.
India bans cryptocurrency prior to releasing their own
(BUSINESS FINANCE) India is potentially planning to ban cryptocurrency — and instead, they’re planning to introduce their own version of it for purchase.
Owning mainstream cryptocurrency these days is a bit like owning a pair of Crocs: potentially lucrative (especially if you're Post Malone), but mostly just weird. A recent report shows that India is planning on adding "illegal" to that list, possibly ahead of launching its own cryptocurrency in place of the banned ones.
The proposed law would also fine anyone found trading—or even simply owning—banned cryptocurrencies in India. Mining and transferring ownership of cryptocurrency would similarly warrant punitive measures.
CNBC notes that this law would be “one of the world’s strictest policies against cryptocurrencies” to date. While some countries have imposed strict laws regarding things like mining and trading cryptocurrency, India would be the first country to make owning it illegal.
Some talk of jail time—including sentences of up to 10 years—for cryptocurrency owners and users was floated by Indian lawmakers back in 2019, but there is no explicit indication that those terms would be present in this rendition of the bill.
To be fair to the lawmakers involved here, the bill wouldn’t be as cut-and-dry as “has bitcoin, gets fined.” According to the CNBC report, people who own cryptocurrency would be able to “liquidate” their earnings for up to six months preceding the bill going into effect. This would theoretically allow investors to hold onto their portfolios for a bit longer before having to cash out.
But that leniency might not matter anyway. It doesn’t take a genius to see that this move could do two dramatic things to the cryptocurrency market: Add yet another niche option for investors, and destabilize every other pre-existing cryptocurrency option—or, at least, make them less stable than they already were.
In fact, the simple introduction and threat of this bill could be enough for the cryptocurrency market to take a nosedive—something that can’t be discounted as a factor in making this decision. Current reports put Indian-owned bitcoin values at roughly $1.4 billion, though, so it’s clear that the bill hasn’t had a deleterious effect at this point.
The fact that India’s central bank has plans to introduce a government-sponsored cryptocurrency of their own cannot be separated from this bill, either. While the official government position is that blockchain is to be trusted while existing cryptocurrencies are eschewed and dismissed as “Ponzi schemes”, it’s clear that at least part of this bill is motivated by a desire to thin out the competition.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9599025845527649,
"language": "en",
"url": "https://wol.iza.org/articles/how-important-is-career-information-and-advice/long",
"token_count": 4331,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.166015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:857825af-36cd-488f-82ac-4c09e57af1e1>"
}
|
The quantity and quality of educational investment matter for labor market outcomes such as earnings and employment. Yet, not everyone knows this, and navigating the education system can be extremely complex both for students and their parents. A growing economic literature has begun to test whether interventions designed to improve information about the costs and benefits of education and application processes have an effect on students’ behavior. So far, findings have been mixed, although the positive findings arising from some very carefully targeted interventions give cause for hope.
Information interventions can influence educational investment decisions if the information provided is pertinent to the target group and provided at the right time.
Well-designed information interventions can be low cost relative to other interventions such as tuition subsidies that are intended to increase educational participation.
Some information interventions have been shown to be effective if coupled with personal assistance or mentoring.
Many information interventions have no effect on student behavior, even though they have been carefully targeted and well designed.
Information interventions are unsuccessful if students face significant other constraints, such as high competition for particular education programs, or if they are unable to adjust their aspirations to match what they can realistically achieve.
Providing information too late in the education process may not allow sufficient time for students to make the necessary prior investments.
Author's main message
Evidence suggests that interventions designed to improve knowledge about the costs and benefits of educational investments, and how to navigate application processes, can influence students’ knowledge, expectations, and behavior. However, for these to be effective in the short term, they must be carefully designed and targeted to groups for whom the demand for information is high and can be readily acted upon. A level of personalization is required in how the information package is designed and delivered. Policymakers should view successful information interventions as low cost, but not simple.
It is well established that the amount and type of educational investment has a strong causal relationship with labor market prospects (employment and earnings). Despite this, many young people drop out of education early on in their lives. Others appear to make suboptimal choices—for example, attending higher education institutions that have lower performance indicators and higher costs associated with them than other (more selective) institutions for which they would be qualified. While there are many possible contributory factors, one question is the extent to which lack of information is a constraining factor, especially for students from disadvantaged backgrounds. Now that big administrative data sets are becoming more accessible (e.g. in the US and the UK), it is (in principle) feasible to devise strategies based on information delivery that might improve the situation.
Furthermore, outside of the fixed cost of setting up these strategies, the ongoing costs are small relative to other potential policies. For example, programs involving tuition subsidies or one-to-one career guidance are much more expensive. But should one expect simple information interventions to have an effect on behavior? Under what circumstances do “information treatments” work or not work? These are some of the questions upon which the economic literature is beginning to shed light with a recent swathe of studies about the effects of information and/or personal guidance on post-compulsory enrollment decisions and other educational outcomes.
Discussion of pros and cons
There are several recent studies that use randomized control trials (RCTs) to test whether the provision of information about higher education influences applications, enrollment, and other measures of educational attainment. In RCTs, people or institutions are randomly assigned to receive a given intervention. Interventions vary in terms of the content, target group, and institutional context. The content of the information intervention might be characterized in the following ways: (a) information about the relative labor market benefits of different educational options; (b) information about financial aid; (c) information about labor market benefits and financial aid; (d) more specific semi-tailored information about admissions processes, and the relative merits of different institutions/programs in terms of inputs, future prospects, and costs. The target groups vary from young people or their parents some time before they make post-compulsory decisions, to people right at the margin of making decisions. In addition, it is relevant to determine if the intervention aims to influence the decision to participate in post-compulsory education at all, or the nature of that participation, such as what courses or institutions they should apply for.
The institutional context varies considerably across countries. For example, in the US, a central concern is the extremely complex process for applying to higher education and seeking financial aid. Here, the interventions' aims are often to present relevant information in a simple way—which saves people from having to do extensive research themselves. In other countries, such as Finland and the Netherlands, the application process is very straightforward; thus, interventions are not designed to de-mystify a complex system, but rather to provide people with information that they may not already have. The same is true of studies in developing countries.
Because of all these differences between studies and the fact that they are mostly very recent (meaning researchers cannot yet say much about longer-term outcomes), it is difficult to come to general conclusions about the effects of information provision. However, it is still possible to make some useful observations by reviewing these papers.
Information on its own is not always sufficient to change behavior
Figure 1 shows a summary of recent studies that use RCTs to test whether information (on its own) influences educational investment behavior. In common with studies that look only at student knowledge and expectations as outcomes, almost all show a positive impact on students’ knowledge and beliefs. However, out of the ten RCT studies, only half show an impact on application/enrollment decisions and/or educational attainment—and three out of the five showing a positive impact are from developing countries.
It may be easier to influence educational decisions in developing countries through the provision of information; lack of information is a serious constraint for more people than in developed countries. Two studies that examine the link between educational achievement and future outcomes in developing countries investigate the Dominican Republic and Madagascar; they both find that people are very misinformed about this link. Furthermore, there is a huge amount of early drop-out of education in these countries. The studies find that when people are provided with information about the link between education and earnings, it has a positive effect on educational investment decisions. In Madagascar, providing information about the average earnings at each level of education as well as the implied gain led to both improved school attendance and higher average test scores. In the Dominican Republic, providing information about the relationship between education and earnings reduced drop-out from school in subsequent years, although only amongst the “least poor” students.
By contrast, in developed countries, findings are less clear on whether low-income students know the link between educational achievement and future outcomes. In these countries, simply providing information about the link between education and labor market outcomes has not yet been found to have much impact on actual behavior, even though it does change people’s attitudes toward educational investment decisions. The studies that look at the behavioral impact of providing information on the relationship between education and earnings come from the US and Finland. In the US, the experiment involved sending regular text messages to sixth and seventh grade students about the relationship between human capital and earnings (e.g. about the relationship between years of education and earnings). This was found to increase students’ knowledge about these issues and their self-reported effort at school, but it had no impact on educational attainment or attendance. In Finland, the experiment involved a 45-minute information session delivered by student guidance counselors where students in the final year of high school were given information on earnings distributions by education level and field of study (among other things). This was shown to affect students’ beliefs, but had little impact on application behavior and no effect on enrollment.
Other studies (from the US, the Netherlands, and Chile) have focused specifically on giving students information about financial aid opportunities. Of these, the US study gave the most customized information to families. The focus was on families with a low to moderate income with at least one member between the ages of 17 and 30 who did not have a university degree. Participants’ tax returns were used to calculate individualized financial aid eligibility to attend higher education institutions. They were given a written description of their aid eligibility and a list of the tuitions of four nearby colleges. However, this was found to have no impact on financial aid applications or college enrollment. Only when individuals were also given assistance with the financial aid applications was there an impact on college enrollment—and the effect was substantial. This involved completing and submitting the application form for financial aid on behalf of the family.
The fact that providing assistance has such a large effect is probably a reflection of how complicated the financial aid system is in the US, which is not necessarily true of other countries. For example, the study in the Netherlands claims that the loan application process is simple. Yet, take-up of loans is low, despite a generous system. The authors investigate whether those already enrolled in higher education might be influenced to take out student loans if they have better knowledge about loan conditions. While they find an effect on knowledge, they find no effect from providing information on the actual take-up of loans.
Two studies in developed countries that have found a positive impact of information interventions on educational investment are from the US and France. The US study uses administrative data to target high school seniors who are both very high-achieving and have low family income. Students were posted an information package that was “semi-customized” for their circumstances (e.g. regarding income and location) and included a fee waiver for making college applications. The information package also included a guide on application strategies, a list of where similar students applied, and a comparison of institutions based on graduation rates, resources, and costs. The findings from this study are that students receiving this information apply to more institutions and enroll in better-performing institutions (with a better match between their academic ability and the average intake of the institution). Yet, another US study, which has adopted a similar approach to students who are on the verge of not applying to any college at all, does not find any impact and interprets this as a reflection of its different target group (who are much lower-achieving).
The French study focuses on students who are very low-achieving. In contrast to the US study, it aims at making people more realistic about future plans. It focuses on decisions made at the end of middle school (where students are aged 15–16) and targets the parents of young people who the school head teacher has identified as the most low-achieving and at risk of dropping out. The background is that young people in this group often have unrealistically high expectations about where they can apply (in a competitive system). In addition, they have a high probability of repeating grades and eventually dropping out of education. The intervention aims to encourage the parents of these students to consider two-year vocational programs on their list of possibilities for the following year. The intervention consisted of two group meetings with the head teacher; it involved the preparation of guidelines by district experts explaining how to inform and counsel families about the complex tracking system and the application and allocation mechanism. Parents were shown a DVD of students explaining how they performed in vocational education, even though they had failed in middle school. The guidelines suggested that each family’s expectations should be discussed in light of the student’s actual performance and that families should be helped to adjust their expectations to match those performances. In the end, this intervention increased the proportion of students who enrolled in two-year vocational programs (rather than repeating grades) and they remained there the following year (i.e. they did not drop out).
Whereas the US study focuses on raising student expectations that are too low, the French study does the opposite, targeting those whose expectations are unrealistically high, leading to ill-considered actions and premature drop-out from any further education. It is thus important to note that information experiments can be just as well applied to downgrading overly optimistic expectations as upgrading overly pessimistic expectations. What both studies have in common is that students make better educational choices as a result of the information interventions. In particular, the US study shows that students attend more selective colleges and perform just as well in terms of grades and persistence as they would have done in a less selective college (which is a positive outcome bearing in mind that they will have a more academically able peer group within more highly selective colleges). The French study shows lower drop-out rates one and two years after the intervention and that the change in behavior is between attending a two-year vocational program rather than repeating grades and/or dropping out.
Both of the above interventions manifestly led to the recipients making better choices because they were designed to carefully meet the needs of a well-defined target group. This is an important characteristic of successful information interventions. Providing information that is too general (or worse, inappropriate) for a target population would serve no useful purpose.
Why information experiments work in some contexts but not in others
The US and French studies, which both find positive effects on behavior, are similar to each other in the sense that the treatment group is very well targeted; they are people who are on the verge of making a decision about educational choice and appear to have no other impediment to making that choice (i.e. they have made appropriate preparations for the choices being considered). The treatments deliver exactly what the target groups need and want at the right time. In some cases, where the information intervention has not influenced behavior, there have been other obstacles. For example, in the US study where young students received regular information by text message on the relationship between education and labor market outcomes, it was argued that the students did not know how to improve their performance. In other words, they wanted to improve their school work and self-reported effort did increase—but they simply did not know how to translate their higher ambitions into higher test scores. In the Finnish study, where high school students were given information about the earnings return to different courses of study, there was some change in application behavior (among those students who were surprised by the information) but no change in enrollment—potentially because the system is very competitive for high-return educational routes. It may also be too late to influence effort (via information) the year before decisions need to be made. Another caveat is that both of these studies were looking at the short-term effects of providing information and not what the impact might be over many years.
Both the US and French studies finding positive effects on behavior had relatively high participation rates among the target group: about 40% in the US study and 50% in the French study. In contrast, the study that provided semi-customized information to US students on the verge of considering whether to apply to college at all attracted much less interest among the target population, with a participation rate of only 14%. Low participation rates are a general problem. Low participation rates in an experiment mean that researchers have to be especially cautious about extrapolating effects beyond the group that was selected to be involved in the experiment. For instance, the literature on financial literacy emphasizes the time cost of investing in information processing (even if it is provided for free). Thus, one might expect non-participants to have higher discount rates (i.e. to be more “present-orientated”). It is therefore not clear whether information provision should be expected to have a smaller or larger impact on non-participants compared to participants.
As discussed above, the target group and institutional context will influence the effects of any information treatment. For example, in a developing country, where most people dramatically under-estimate returns to schooling, and clearly under-invest in education, it might be easier to find a high marginal effect of providing information. In developed countries, on the other hand, many studies find that people do not under-estimate returns to schooling and that they invest in longer periods of education. While information interventions might motivate people in these countries to more closely consider the quality and type of their education investments, it might be hard to do this over the short term, because these decisions are made over time (often starting at fairly young ages) and students may need to compete for a fixed number of slots in popular programs, where entry is influenced by many years of effort and achievement.
Does personal assistance matter?
While some studies find that information on its own is not enough to influence behavior, they do find that if information is accompanied with assistance then it is sufficient to influence education investment decisions. The two most relevant studies are both from the US. One study, discussed above, shows that if financial aid applications are submitted on behalf of families, there is a big effect on college enrollment. Information provision is inherent to this intervention but does not drive the impact on college enrollment per se (which is instead interpreted to reflect the role of simplification and assistance to families for making financial aid applications).
The other US study finds that customized information is not sufficient to influence would-be college students to apply to any college. However, the researchers suggest that a “boots on the ground” intervention is very effective. The main part of this type of intervention is mentoring by university students, who guide participants through the whole college application process (including aid application), which takes three to four weeks. The authors find that this does have a significant impact on college enrollment. Although it is a more expensive intervention than those based on providing information only, the authors argue that this is still cost-effective and compares favorably to interventions that directly subsidize students to attend college. There are several other studies that support this view by showing that mentoring can be cost-effective for encouraging college entry.
Limitations and gaps
There are too few comparable studies of successful intervention strategies to tell us whether they might be successfully implemented elsewhere. Furthermore, interventions are generally short term (i.e. expected to influence outcomes within one or two years); thus, they do not answer the question as to whether information interventions might be effective if provided early enough and if they are sustained over time (e.g. as part of the regular school curriculum).
In addition, some information interventions might be effective only if they are combined with some other form of personal assistance, but the form and intensity of the assistance is likely to vary by context. This is not something that can be easily generalized and extrapolated across different institutional environments and target groups. All of this highlights the importance of testing interventions as well as proper piloting of approaches even before they are formally tested.
Summary and policy advice
The provision of tailored information might help young people to make better-informed decisions about their educational investment and researchers and policymakers can benefit from the growing body of literature on this topic. Most of the relevant studies suggest that information on the costs and benefits of educational choices and the application process for attending university and/or applying for financial aid does impact knowledge and expectations for those participating in the studies.
There are, however, fewer studies that focus on the impact of these interventions on actual behavior (such as enrollment). While studies in developing countries did find impacts on behavior, most studies in developed countries have found no impact of information without some other form of assistance. On the other hand, there is evidence to show that when supplemented with mentoring or practical help, the provision of information can lead to a behavioral response in terms of educational investment. The form of practical help does not necessarily need to be very intense or expensive for the intervention to be effective. But often, some form of personal assistance is necessary. For example, in the French study, school leaders were actively involved in identifying appropriate groups of students and encouraging parents to attend meetings.
However, the US study shows that, in some contexts, “face-to-face” contact is not required at all and that information can be provided entirely based on administrative data (albeit in a very sophisticated way—to ensure that the right information package gets to the right target group at the right time). A big advantage is that this information can be delivered to students that are geographically dispersed and who attend schools that are not regularly the target of various forms of outreach activity by colleges.
Both the French and US studies are low cost relative to other interventions designed to encourage educational investment. Undoubtedly, they have a high ratio of benefits to cost. Both suggest that it is very important that information is provided by a trusted source. Although they are largely “information only” interventions (particularly the US example), there is some degree of personalization in both approaches. Policymakers should take note that although information interventions are not costly (relative to many other policies), making them effective is not a simple matter.
The author wishes to thank an anonymous referee and the IZA World of Labor editors for many helpful suggestions on earlier drafts.
The IZA World of Labor project is committed to the IZA Guiding Principles of Research Integrity. The author declares to have observed these principles.
© Sandra McNally
In this post, Maitreesh Ghatak discusses how randomised controlled trials – the use of which was pioneered by this year’s economics Nobel Laureates, Banerjee, Duflo, and Kremer – have been successfully applied in real life with programmes and interventions that directly impact the poor. He contends that they can provide a much-needed corrective to the top-down approach of centralised policymaking.
Some of the standard questions that come up in the field of development economics, which is concerned with determinants of poverty and policies to alleviate it, are as follows: Does microcredit alleviate poverty? Are policies of financial inclusion effective in helping the poor who are self-employed to save, invest, and raise their incomes? Did the Mahatma Gandhi National Rural Employment Guarantee Act (MNREGA)1 raise wages by providing an alternative source of employment for rural labourers in India? Is it availability of textbooks or mid-day meals or better health and sanitation that can improve the educational attainment of children from poor families in rural areas?
The main challenge in answering these questions is establishing a connection between cause and effect. If the questions sound straightforward and the approach sounds simple, then you have not encountered the word that is to academic seminars what spells are to Harry Potter and his friends: ‘identification’. The moment this comes up, a hushed silence descends on the room and the speaker launches an intense defence of how their analysis establishes a robust path from cause to effect. The problem is that in the real world everything changes at the same time and so it is hard to identify what is a cause and what is an effect. For example, if the poor save less, is it because low incomes cause low savings or is it the case that low savings cause low incomes? Similarly, it is hard to figure out the effect of one cause from that of another: maybe expansion of bank branches does facilitate saving but simply establishing a correlation between the two is not sufficient as some third factor (such as rising wages in the region) could be driving both, creating a spurious correlation. Theory can justify all these lines of arguments but cannot tell whether we should focus on policies that will raise income and therefore boost savings, or whether we should prioritise policies that will enhance savings opportunities for the poor and thereby eventually raise income.
The strength of randomised controlled trials (RCTs)
Standard empirical methods try to find some externally driven change in the environment that changes one factor, and then follows the line of causality. To continue with the example of savings, the Government of India has enthusiastically pushed the Jan Dhan Yojana2 in the last few years and one could try to see if those who were brought under this scheme were able to save more than those who were not. The trouble is that the government may have chosen to prioritise some areas over others for a reason (for example, they were poorer) and so this is not a clean comparison. Similarly, those who chose to have an account may be thriftier and so we cannot use their behaviour to judge how the average person would react to such a scheme. Finally, even if we see a positive effect on the savings and incomes of those who signed up for these accounts, it could well be that something else was going on that drove both trends – maybe wages were rising due to a rise in export demand, or due to a government programme of road construction.
This is where the strength of randomised controlled trials (RCTs) lies. ‘RCTs’ was a technical phrase heard only within the confines of academic and policy worlds until 14 October 2019, when the Nobel Prize in economics was awarded to Abhijit Banerjee, Esther Duflo, and Michael Kremer for pioneering the use of RCTs in development economics. Following experimental trials in medicine, RCTs use a key insight that can be traced back to The Design of Experiments (1935) by Ronald Fisher, an eminent British statistician and geneticist: you select two groups that are similar, randomly select one to receive the treatment (a drug, or a policy) being tested, and then compare the outcome of this group (called the ‘treatment’ group) with that of the other group (called the ‘control’ group). If the difference is statistically significant, it is attributed to the treatment. The very design of the study eliminates the standard problems mentioned above.
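To see the mechanics in miniature, here is an illustrative sketch (not from the article; the sample size, effect size, and noise levels are all assumed) of how an RCT estimate is formed: randomly assign units to treatment or control, then compare mean outcomes.

```python
# Illustrative RCT sketch: random assignment, then a difference-in-means estimate.
# All numbers are hypothetical; a real study would use actual outcome data.
import math
import random
import statistics

random.seed(42)

N = 1_000               # assumed number of study participants
TRUE_EFFECT = 5.0       # assumed true treatment effect (unknown in practice)

baseline = [random.gauss(100, 15) for _ in range(N)]    # e.g., a well-being index
assignment = [random.random() < 0.5 for _ in range(N)]  # coin-flip assignment

treatment = [y + TRUE_EFFECT + random.gauss(0, 5)
             for y, treated in zip(baseline, assignment) if treated]
control = [y + random.gauss(0, 5)
           for y, treated in zip(baseline, assignment) if not treated]

# Difference in means with a Welch-style standard error;
# |t| greater than roughly 1.96 indicates significance at the 5% level.
diff = statistics.mean(treatment) - statistics.mean(control)
se = math.sqrt(statistics.variance(treatment) / len(treatment)
               + statistics.variance(control) / len(control))
print(f"Estimated effect: {diff:.2f}, t-statistic: {diff / se:.2f}")
```

Because assignment is random, the two groups are similar on average in everything except the treatment, which is what allows the difference in means to be read as a causal effect.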
The key innovation here is not coming up with the idea of randomisation – but applying it in real life with programmes and interventions that directly affect the lives of the poor. From testing drugs to placing government programmes as well as those carried out by NGOs (non-governmental organisations) on a randomised basis across villages, households, and organisations, takes quite a leap of imagination.
Using this method in economics has altered our views about what policies work and what do not. Take the example of microfinance, which serves more than 100 million people, mostly women, belonging to the poorer sections of society worldwide. Muhammad Yunus of Bangladesh is viewed as the leader of the microfinance movement for singlehandedly creating the most famous and successful microfinance institution (MFI) of the modern era, the Grameen Bank of Bangladesh. In 2006, Yunus and the Grameen Bank jointly won the Nobel Peace Prize for their contribution to reducing world poverty.
But, is microfinance effective in reducing poverty? If we merely compared those that have access to microfinance and those that do not, we would not get a satisfactory answer for the reasons mentioned above. Banerjee and Duflo, together with their colleagues, studied the impact of access to microfinance on the creation and profitability of small business as well as various measures of standard of living by working with Spandana, an MFI. They randomly selected half of around 100 slums of Hyderabad where a new branch was opened (the ‘treatment’ group), while in the remaining half of the slums no branch was opened (the ‘control’ group).
Before the programme was carried out, the control and treatment slums looked very similar in terms of population, average debt outstanding, businesses per capita, per capita expenditure, and literacy. What about the effect of the programme on the treatment slums? Small business investment and profits of pre-existing businesses increased, but consumption did not significantly rise. Durable goods expenditure increased, which suggests that loans were mostly used to purchase these. The study found no significant changes in health, education, or women’s empowerment. This research and a set of other studies in different countries have changed our views about the role of microfinance in alleviating poverty. While access to small loans is undoubtedly useful for expanding existing businesses and funding consumer durable goods, and may also help recipients to tide over temporary gaps between income flows and consumption needs, it is no longer seen as a magic bullet for solving the problem of poverty.
Where do RCTs fit into the broad scope of the field of development economics?
Development economics is concerned with a much broader set of issues than evaluating specific programmes relating to health, education, or credit, where RCTs have been most frequently applied.
A central concern has been the process of structural transformation of an economy – how the population moves from agriculture to industry and services – and accordingly, how the sectoral composition of national income changes. This process involves not just a movement of resources (land, labour, and capital) but also a process of institutional change – from informal personalised transactions to more formal contractual arrangements and markets, and associated changes in social norms. These are the kind of issues that Simon Kuznets and Arthur Lewis, two earlier recipients of the Nobel Prize, dealt with.
RCTs, however, can mostly be applied to study problems at the micro-level where the implementation of an individual programme – whether it is by the government or a private organisation (like a MFI or an NGO) – can be done in a randomised way that allows for a statistically satisfactory evaluation of the programme’s impact, as outlined earlier. Clearly, as with any other tool of analysis, RCTs cannot be applied to every question of interest within the field. And, as with any new method that attracts young researchers and research funding, there are grounds to worry that this will push out important research that uses other methods, including theory and empirical work that does not use RCTs. By their very nature, RCTs cannot be applied to broad macro-level issues or the more long-run aspects of development and institutional change.
However, one should note that a new generation of RCTs has emerged that goes beyond evaluating programmes, and suggests that the frontier of their applicability can be pushed forward in creative ways. For example, a major focus of research in development economics has been to understand the contractual terms that prevail in land, labour, and credit markets in developing countries. A number of recent research papers have applied the tools of RCTs to vary terms of credit or tenancy, and have overcome some of the limitations of earlier work. Take the case of tenancy. My own work with Abhijit Banerjee and Paul Gertler showed how Operation Barga, a tenancy reform programme carried out in West Bengal in the late 1970s and early 1980s, changed tenancy arrangements and improved agricultural productivity. However, despite our best efforts, given the data we could not rule out the role of other policies that were carried out at the same time such as empowering the panchayats. In a recent RCT carried out in Uganda, the research team collaborated with the Bangladeshi NGO BRAC (Building Resources Across Communities) to induce randomised variation in real-life tenancy contracts. As part of their operations, BRAC leased plots of land to women from low socioeconomic levels who were interested in becoming farmers, effectively acting as the landlord. In the experiment, some tenants received a higher crop share (75%) and some a lower crop share (50%). The study, which was carried out by a group of researchers that included two of my former Ph.D. students from the London School of Economics (LSE), Konrad Burchardi and Selim Gulesci, found that tenants with higher output shares used more inputs, cultivated riskier crops, and produced 60% more output relative to those in the control group. While these effects are reassuringly similar to those that we had found earlier, the nature of the new evidence ensures that the new study is not subject to the methodological limitations ours had to face.
Criticisms of RCTs from inside and outside the world of academic research
The main ‘inside’ criticisms of RCTs – from within the world of academic research (for example, by recent Nobel Laureate Angus Deaton) – are as follows.
First, while RCTs overcome some problems of evaluating individual programmes, the typically small sample size of these studies implies that the conclusions cannot be generalised to the whole population or extended to other environments. Moreover, there is the possibility that these studies may also be partly picking up the sheer effect of being observed by the researchers and the surveyors, which creates a bit of an artificial environment and therefore may give a biased picture of how the programme will work out when it is not being surveyed (the so-called ‘Hawthorne effect’).
Second, if some programme works well, we do not know if there is another programme that would have worked better.
Third, if a policy worked well, it is hard to infer the exact mechanism by which it worked – for example, does microfinance work by making credit more available or is it something that empowers women, or both?
There is some validity to each of these criticisms. However, every method has some limitations and to find a way forward one has to either come up with a better method or improve the existing method. Another promising direction is to harness the synergy of different methods – for example, it may be worth exploring how RCTs can be combined with other tools of economics, such as theory and simulation. Theory is good at coming up with alternative narratives that connect cause and effect, but it is not very good at determining what may be going on in a given environment. This is exactly as in medical science – theory gives us a first hunch as to what has happened while empirics are diagnostic tests which may confirm or disprove or modify the original hunch. A recent research trajectory that combines theoretical models with RCT evidence to carry out policy simulations that estimate the effect of hypothetical alternative policies tells us what else could work even better, as well as what the likely effect will be in a different environment.
Then there are ‘outside’ criticisms of RCTs.
Some wonder why academic economists should do policy evaluation. Should that not be left to policymakers? After all, as economists, we know the value of comparative advantage and specialisation. As much as science and engineering are different fields, should research not be separate from policy work, whether it is formulation of policy or its evaluation?
There is also some concern that, because RCTs require lots of funding, the missions of certain donor agencies and philanthropic organisations may distort the direction of research – as much as the profit motive of pharmaceutical companies can influence the agenda of medical research.
Then there are ethical considerations regarding experimenting on human subjects. These range from depriving those in the control group of a beneficial programme, to manipulating the behaviour of individuals in the treatment group, which raises questions of transparency and informed consent.
Another criticism is that, since policymaking happens in a political framework, to take a purely technocratic view about evidence-based policy and incremental improvements may be misguided at best, and at worst, the equivalent of putting band-aid on a serious injury.
Once again, there is some validity to each of these criticisms. But they provide a partial picture. Policymaking may be too important to be left to policymakers only. After all, we have seen too many instances of policy formulation that oversimplify problems and take a centralised one-size-fits-all approach. In the Indian context, some of the major policy shifts, such as demonetisation or goods and services tax (GST) implementation, or making the Aadhaar3 card mandatory, were done without any grounding in evidence or without first testing the waters. Yes, there are ethical considerations regarding the design of experiments, as well as the need for accountability regarding how well the research agenda fits the development priorities of a country. But that points to the need for developing a legal and ethical framework that governs research, not to abandon a particular method. It is also true that the kind of programmes that are studied offer incremental improvements but it is not the case that stopping doing these would unleash more major initiatives, whether on the part of the government or by other actors, including the people themselves.
To me one of the most significant legacies of the RCT research agenda is to put the importance of evidence at the centre of the table in the context of policy. Knowledge consists of knowing both what we know and what we don’t know. The demands of rigorous evidence make us acutely aware of the boundary between the two. Another welcome aspect of this research agenda is its emphasis on a bottom-up rather than top-down approach towards policymaking. The same policy may not work equally well everywhere or for everyone in the same place. Only evidence can help improve the effectiveness of policies by making them better suited to the specific needs of an area or a group of people. This can provide a much-needed corrective to the top-down, one-size-fits-all approach that, sadly, is a feature of centralised policymaking, whether in contemporary India or in the failed model of central planning.
A version of this article first appeared in the Open Magazine: https://openthemagazine.com/columns/lies-behind-abhijit-banerjees-nobel/
- MNREGA guarantees 100 days of wage-employment in a year to a rural household whose adult members are willing to do unskilled manual work at state-level statutory minimum wages.
- Jan Dhan Yojana is the Indian government’s financial inclusion scheme. It envisages universal access to banking facilities with at least one basic banking account for every household, financial literacy, access to credit, insurance, and pension facility.
- Aadhaar or Unique Identification number (UID) is a 12-digit individual identification number issued by the Unique Identification Authority of India (UIDAI) on behalf of the Government of India. It captures the biometric identity – 10 fingerprints, iris and photograph – of every resident, and is meant to serve as a proof of identity and address anywhere in India.
Hubert Ruß Publishes Standard Work on the Medieval Coinage of Würzburg
Posted on 3/24/2020
Numismatist Dr. Hubert Ruß has produced a new standard work cataloging the medieval coins of Würzburg (approx. 900-1495). He provides a detailed monetary history of this period, based on a detailed catalog of 699 coin types. His book is an extensive study of the coinage of one of the most important ecclesiastical authorities in southern Germany. It is also an essential reference work for anybody that is working with or researching the coinage of Würzburg.
The Diocese of Würzburg
Bonifatius is said to have founded the diocese of Würzburg around the year 741/2. He chose a strategically favorable place for the seat of this bishopric located on the river Main. At a time when there weren’t any usable roads in Central Europe, the Main connected wealthy Bohemia to the hubs of trade and crafts located on the Rhine. This meant that Würzburg gained a very high level of political and economic significance. And, for that reason, coins were being minted there from as far back as the Early Middle Ages: the first coins were issued in the name of King Louis the Child (899-911). The first written reference to the minting privilege of the Würzburg bishops dates back to 1030. At that time, the bishops had already been using the mint for a long time, together with the kings and emperors of the Holy Roman Empire. This practice ended with Henry IV, Holy Roman Emperor. Since Bishop Erlung (1105-1121), coins in Würzburg were minted almost exclusively by bishops.
New research findings
Hubert Ruß’s standard work presents a monetary history of the Bishopric, based on state-of-the-art research. It contains a wide range of new insights, which are greatly significant for numismatics:
- A range of unpublished coin types and varieties
- A range of previously unknown mints, mint masters, and denominations
- New datings and new attributions of a range of coin types to bishops or rulers
- Record of mint masters and other mint officials mentioned in archival sources
- Two of the mint masters among the Catholic bishops were Jewish
- Record of all European hoards with Würzburg coins, up to 2018
- Evaluation of the archival sources for the Würzburg coin standard, weight and fineness
An easy-to-use reference work
Hubert Ruß has produced a type catalog, describing many never-before published coin types for the first time. Every coin type is cataloged in full with all varieties; the text also provides locations of coins at coin cabinets, auction catalogues, and numismatic literature. Thanks to the multiple concordances, monogram concordance, and extensive register, the catalog and texts are quick and easy to explore, even for impatient users.
The book is priced at 95 euros and is available to buy from:
Künker Numismatik AG
Tel.: +49 89 5527849 0
or from Fritz Rudolf Künker
Nobbenburger Str. 4a
Tel.: +49 541 96 202 0
What is the immediate effect of the deal?
It grants a $400bn increase in the government's debt ceiling to stave off the threat of default, with an additional $500bn increase available from February to be effective on the president's authority. Further increases of $1.2tn-$1.5tn will become available if a balanced budget amendment is considered by Congress by the end of the year.
When do the spending cuts kick in?
From the start of the government's 2012 fiscal year on 1 October, savings of $21bn are to be made through limits on spending. After that, caps on federal spending are to save an estimated $917bn over 10 years by slowing the speed of increases. Further cuts are to be identified by a bipartisan "super committee".
What's the 'super committee'?
A 12-member congressional panel to meet in November and come up with a plan to reduce deficits by $1.2tn to $1.5tn over 10 years. It can consider tax or revenue increases. If it fails to produce a plan acceptable to Congress, the deal triggers steep, automatic cuts in spending of a similar size.
How does this trigger work?
The trigger is designed to encourage the committee to produce meaningful ways to cut the deficit. If it fails, across-the-board spending cuts in discretionary spending starting from fiscal year 2013 will be set in motion, with half coming from defence – a painful prospect for many Republicans. Medicaid and social security payments are protected, with limited cuts to Medicare programmes – painful for Democrats.
What is the 'balanced budget amendment'?
By the end of 2011 Congress must consider adding an amendment to the US constitution that the federal budget be balanced. If Congress approves the amendment the debt ceiling can be raised on the president's authority by $1.5tn. But if, as is more likely, it is not approved, the debt ceiling can be raised by just $1.2tn.
What does this balanced budget amendment mean?
It would give constitutional force to a rule requiring that the budget could not exceed revenues or exceed 18% of US national income unless approved by a super-majority in both houses of Congress.
Will it help balance the budget?
No. Experts say that the amendment as proposed has significant practical flaws that make it unenforceable. Further, economists argue that it is based on questionable assumptions.
What is Obama's next move?
A battle is looming over the extension of the Bush-era tax cuts. Allowing the tax cuts to lapse as scheduled in 2012 would produce around $3tn in additional revenue over the following decade. If Democrats and Obama have the stomach for a fight, they could turn the table on Republicans and block any extension.
Is the plan set in stone?
No. Congress can reopen and amend some or all parts of the deal at any time. Depending on the outcome of the 2012 presidential election, and the state of the economy in 2013, it seems all but certain the deal will be revised and possibly discarded entirely.
What is Activity Based Budgeting?
Activity-Based Budgeting is a budgeting process where the firm first identifies, analyzes, and researches the activities that determine the cost of the company and, after that, prepares the budget based on the results.
The formula is represented as follows:

Activity Cost Driver Rate = Total Cost Pool / Cost Driver

The budgeted cost of an activity is then this rate multiplied by the expected activity level.
Examples of Activity-Based Budgeting
Washington Inc. has decided to switch from a traditional budgeting system to activity-based budgeting. Based on the below information, you are required to compute the budgeted cost based on those drivers.
|Activity|Last Year Actual Cost|Cost Driver|Last Year Activity Level|
|---|---|---|---|
|Machine Set up|400,000|Number of Machine Setups|700|
|Inspection|280,000|Inspection Hours|15,500|
The company has shifted from traditional budgeting to activity-based budgeting, and we can see that two activities drive its costs: machine setups and inspections.
Using the ABC formula: Cost Pool total / Cost driver, we can calculate the overhead cost
We have: Machine setup rate = Machine setup cost / Number of machine setups, and Inspection rate = Inspection cost / Inspection hours.
Calculation of Machine Set-Up Cost Per Unit
- = 400000 / 700
- = 571.43 per machine setup
Calculation of Inspection Cost Per Unit
- = 280000 / 15500
- = 18.06 per hour inspection cost
Hence, in ABB, cost is determined at the activity level rather than by an ad-hoc rate, as in the traditional method where only inflation is accounted for.
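As a check on the arithmetic, here is a small Python sketch (illustrative only; it simply restates the figures from the Washington Inc. example above) of the rate calculation:

```python
# Activity rate = cost pool total / cost driver volume (Washington Inc. figures above).
cost_pools = {
    "Machine setup": {"cost": 400_000, "driver_volume": 700},     # setups
    "Inspection":    {"cost": 280_000, "driver_volume": 15_500},  # inspection hours
}

for activity, pool in cost_pools.items():
    rate = pool["cost"] / pool["driver_volume"]
    print(f"{activity}: {rate:.2f} per driver unit")
# Machine setup: 571.43 per driver unit
# Inspection: 18.06 per driver unit
```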
Vista Inc. was losing auctions because its costs were higher than its peers’. Management therefore decided to budget the cost of new orders using activity-based budgeting.
|Activity|Last Year Cost|Cost Driver|Last Year Activity|Expected Activity|
|---|---|---|---|---|
|Purchasing|900,000|Number of Purchase Orders|41|59|
|Production Steps|1,518,750|Number of Setups|25|30|
|Machine Maintenance|2,250,000|Machine Hours|2,500|2,600|
The expected activity for the next order is given; based on it, you are required to estimate the total cost that can be quoted as a bid.
In this example, we are given the actual costs and cost drivers from the last order. Using the formula below, we can compute the cost per driver unit incurred on the last order; assuming those rates stay the same, we can then estimate the cost of the new order.
Using the Activity Based Budgeting formula: Cost Pool total / Cost driver
Below are the calculations for each activity, based on the last order:
- Purchasing: 900,000 / 41 = 21,951.22 per purchase order
- Production Steps: 1,518,750 / 25 = 60,750 per setup
- Machine Maintenance: 2,250,000 / 2,500 = 900 per machine hour
The total budgeted cost for the new order will be:
- Purchasing: 21,951.22 × 59 = 1,295,122
- Production Steps: 60,750 × 30 = 1,822,500
- Machine Maintenance: 900 × 2,600 = 2,340,000
- Total budgeted cost = 5,457,622
This estimate reflects the true cost of the order, rather than the rough figure a traditional mark-up over last year’s cost would give.
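The same rate-times-volume logic drives the Vista Inc. bid; below is a minimal sketch (figures taken from the example above) that reproduces the budgeted total:

```python
# Budgeted cost = sum over activities of (last-order rate x expected activity).
activities = [
    # (name, last-year cost, last-year driver volume, expected driver volume)
    ("Purchasing",            900_000,    41,    59),
    ("Production Steps",    1_518_750,    25,    30),
    ("Machine Maintenance", 2_250_000, 2_500, 2_600),
]

total = 0.0
for name, cost, last_volume, expected_volume in activities:
    rate = cost / last_volume            # cost per driver unit on the last order
    budget = rate * expected_volume      # expected cost for the new order
    total += budget
    print(f"{name}: rate {rate:,.2f}, budget {budget:,.2f}")

print(f"Total budgeted bid cost: {total:,.2f}")  # about 5,457,622
```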
- The budgeting process is easier to control with activity-based budgeting (ABB) than with traditional budgeting.
- Expense and revenue planning occurs at a more accurate level, providing meaningful detail for estimated and future financial projections.
- Last but not least, by implementing ABB the company can better align its annual budget with the firm’s overall goals.
- It helps improve the business process by identifying unnecessary activities that add cost, since the method requires extensive research into each activity.
- The main disadvantage of ABB is that it is more expensive to implement and run than traditional budgeting.
- Furthermore, technical detail must be maintained in order to capture costs at the activity level.
- The process also relies on many assumptions, which consume management time and can occasionally lead to inaccurate costing, misstating the cost of the product.
- It also requires a deep understanding of the process.
At its simplest, activity-based budgeting follows three stages:
- Identify the activities through detailed research, along with their cost drivers; this requires proper knowledge of the process.
- Forecast the number of units to be produced in the next period (or the activity required by a new order), and at this stage compute the overhead per driver unit.
- In the final stage, multiply each cost driver rate by the expected activity of the new order or the new production units; the sum gives the total estimated or budgeted cost.
- Before all of the above, one needs to determine whether management and the company can spare the required time and cost.
- Does the company have the resources, software, and workforce required to capture activity data daily?
- A cost-benefit analysis should be carried out before implementation, as management must be confident that the benefits will outweigh the costs.
- Can operational managers be recruited at reasonable remuneration?
The traditional way of budgeting was to take the overhead cost of the last period, adjust it for inflation, and compute the total cost for the new order. This ignored activity-level costs, so an activity that played no part in a given order could still be charged to it.
Hence, by implementing activity-based budgeting, management can identify the activities that are actually involved in the production process, price the product accordingly, save cost, and thereby increase the firm’s revenue.
Renewable energy and energy efficiency are often said to be the “twin pillars” of sustainable energy policy. Both resources must be developed in order to stabilize and reduce carbon dioxide emissions. There are numerous energy policies on a worldwide scale relating to energy exploration, production, and consumption, involving actors ranging from commodities firms to automobile makers to wind and solar producers and industry associations. Recent focuses of energy economics include the following issues: climate change and climate policy, sustainability, energy markets and economic growth, the economics of energy infrastructure, energy and environmental law and policy, and global warming, together with the various challenges associated with accelerating the diffusion of renewable energy technologies in developing countries. Most agricultural facilities in the developed world are mechanized as a result of rural electrification. Rural electrification has created significant productivity gains, but it also uses a great deal of energy. For this and other reasons (such as transport costs), in a low-carbon society rural areas would need reliable supplies of renewably produced electricity.
By: Kate Hoy, Washington PTAC Business Analyst
The buzzword of the year! Cybersecurity! What is it and what does it mean to you, the small business owner?
The formal definition of cyber security is “…the body of technologies, processes and practices designed to protect networks, computers, programs and data from attack, damage or unauthorized access.” Webster’s defines cybersecurity as “measures taken to protect a computer or computer system (as on the Internet) against unauthorized access or attack.” So when you develop a cyber security plan, you are reassuring those outside your organization that their information is safe and secure when they interact with you via the internet.
Examples of who might be interested in your cyber security plan include suppliers, customers, government agencies, banks and other third parties.
Standards are being created and pushed out on a daily basis in an effort to provide small business owners a simple roadmap to creating their security plan. The reality is your plan will ultimately be based on your particular situation. When thinking of your cyber security plan, it might help to think of what you did when you developed a security plan for your brick and mortar location. You assessed the situation, identified weaknesses, put proper safeguards in place, and arranged a method of monitoring the situation over time.
This will be a similar process for developing your cyber security plan. You will look at your record keeping processes, make sure your funds are safe and secure, and develop a plan to follow when security is breached.
Many large customers as well as government agencies will require you have a plan in place prior to doing business with them. Government agencies will require you have a written plan in place by the end of 2017 in order to sell to them. These requirements have made cyber security the hot topic of the day as well as a high priority for many small business owners.
Start your plan by assessing your situation and documenting your procedures. This will go a long way in making those that do business with you via the internet feel safer and more secure about continuing their business relationship!
Questions on the government’s requirements? Contact your local PTAC office or visit www.washingtonptac.org
"The particular scholar will recall Richard Whately for having coined the term “catallactics” to describe the study of exchanges. Economists such as Francis Edgeworth, Ludwig von Mises, F.A. Hayek, and James Buchanan have claimed “catallactics” as a proper constraint on the field of study for the political economist. In Easy Lessons Whately popularizes the fundamental principles of economics for children in grade school. Though written almost 200 years ago the lessons on money, exchange, commerce, coin, value, wages, rich and poor (income distribution), capital, taxes, letting and hiring (factors of production), and interference with men’s dealings with one another (constitutions and regulations) demonstrate Whately’s proto-marginalist mastery of the economic perspective. I first assigned this to students in my History of Economic Thought course who then heartily recommended it for Principles of Microeconomics classes."
Capital Budgeting Process Definition: The Capital Budgeting is one of the crucial decisions of the financial management that relates to the selection of investments and course of actions that will yield returns in the future over the lifetime of the project. Capital Budgeting – Procedure & Decision Process. Capital budgeting is the process by which the financial manager decides whether to invest in specific capital projects or assets. Capital investments can commit companies to major courses of action. To many of us, the annual operating and capital budget development process is viewed with trepidation and confusion. When the value of an investment is lower and is approved by the lower level of management, then for getting speedy actions, they are generally covered with the blanket appropriations. Capital budgets evaluate long-term capital projects such as the addition of equipment or the relocation of a plant. Defining the corporate strategy, which is based on the organization’s SWOT analysis, i.e., analysis of its strength, weakness, opportunity, and threat, and also seeking suggestions from the organization’s employees by discussing the strategies and objectives with them. Opportunity costs account for the money that the company will lose by implementing the project under analysis. It involves the decision to invest the current funds for addition, disposition, modification or replacement of fixed assets. Capital budgeting is a series of steps that businesses follow to weigh the merits of a proposed capital investment. But they are really just plans: one for the immediate future and one for the long term. The capital budgeting process includes identifying and then evaluating capital projects for the company. Opportunity cost is crucial in the capital budgeting process as it becomes important to determine the true initial investment cost of a particular alternative chosen. Sanjay Borad is the founder & CEO of eFinanceManagement. Capital budgeting, as we know, is a decision making process. All the cash flows of the project should be based on the opportunity costs. It involves the following six steps: Identifying Potential Investment Opportunities: The company has various options for capital employment on a long-term basis. He is passionate about keeping and making things simple and easy. The right decisions made by the process of capital budgeting will help the manager and the company to maximize the shareholder value which is the primary goal of any business. Since it involves buying expensive assets for long-term use, capital budgeting decisions may have a role to play in the future success of the company. Construction of a new plant or a … The organization’s capital budgeting committee is required to identify the expected sales in the near future. Capital budgeting requires detailed financial analysis, including estimating the rate of return for a capital project. Custom applications created by Nagarro assist our clients perform mission critical tasks, including high volume transaction processing, performance analytics, clearing settlement, financial reporting, capital budgeting, corporate finance valuations and data management. But if the investment outlay is of higher value, then it will become part of the capital budget after taking the necessary approvals. Almost all the corporate decisions that impact future earnings of the company can be studied using this framework. To what extent the assumptions were realistic. There is certainly a great deal to know about this issue. 
Capital budgeting is the process of deciding whether to commit resources to a particular long-term project whose benefits are expected to be realized over a period of time, which is normally longer than one year. The Capital Budgeting process is the process of planning which is used to evaluate the potential investments or expenditures whose amount is significant. The budgeting process for most large companies usually begins four to six months before the start of the financial year, while some may take an entire fiscal yearFiscal Year (FY)A fiscal year (FY) is a 12 month or 52 week period of time used by governments and businesses for accounting purposes to formulate annual financial reports. Capital Budgeting is the process of making investment decision in capital expenditure. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement. Please contact me at. Capital budgeting is a company’s formal process used for evaluating potential expenditures or investments that are significant in amount. What is Capital Budgeting? Capital Budgeting Process. Decision making is the third step. CFA Institute Does Not Endorse, Promote, Or Warrant The Accuracy Or Quality Of WallStreetMojo. Notify me of follow-up comments by email. Review of performance is the last step in the capital budgeting. Sorry, your blog cannot share posts by email. It is a cost-benefit exercise which seeks to produce end results and benefits which are greater than the costs of the capital budgeting efforts. So the proposals from all the departments will be submitted, and the same will be seen by various authorized persons in the organization to check whether the proposals given are according to the various requirements. Share it in comments below. This is the reason the capital budgeting process is an invaluable part of any company. The capital budgeting process is the process of identifying and evaluating capital projects, that is, projects where the cash How to the firm will be received over a period longer than a year. The first step is to explore the available investment opportunities. The motive behind these appropriations is to analyze the investment performance during its implementation. A single project can easily harm or enable the company to a large extent. These techniques assist in the determination of the anticipated return from a given project. Capital budgeting is a financial planning process that businesses use to determine the worth of long-term investments of an organization. Capital budgeting, and investment appraisal, is the planning process used to determine whether an organization's long term investments such as new machinery, replacement of machinery, new plants, new products, and research development projects are worth the funding of cash through the firm's capitalization structure (debt, equity or retained earnings). Capital budgeting is perhaps the most important decision for a financial manager. The capital project lasts for longer time, usually more than one year. Capital budgeting refers to the decision-making process that companies follow with regard to which capital-intensive projects they should pursue. In business, a capital expenditure is a large use of cash for an item or project that a company expects will add value to the business in the future. All the cash flows from the project should be analyzed on an after-tax basis. 
Capital budgeting describes the process which companies use to make decisions on capital projects, i.e., projects with a lifespan of one year or more. Although it doesn't consider profits that come in once the initial costs are paid back, the decision process might not need this component of the analysis. Capital budgeting is the process of determining which long-term capital investments a company will make in order to profit in the long-term. In the stage of decision making the executives will have to decide which investment is needed to be done from the investment opportunities available keeping in mind the sanctioning power available to them. The real estate company identified two lands where they can build their project. The first step is to identify the need or opportunity. There are several challenges that can be faced by the management personnel while implementing the projects as it can be time-consuming. Save my name, email, and website in this browser for the next time I comment. For instance, the managers at the lower level of management like work managers, plant superintendent, etc. This process the decision regarding the sources of finance and then calculating the return that can be earned from the investment done. The three most common approaches to … Post was not sent - check your email addresses! The correct time to make this comparison is when the operations get stabilized. Capital Budgeting Process. All the capital budgeting decisions are based on the. Capital Budgeting Process for various Categories of Projects: Evaluation and Selection of Capital Projects, Click to share on WhatsApp (Opens in new window), Click to share on LinkedIn (Opens in new window), Click to share on Facebook (Opens in new window), Click to share on Twitter (Opens in new window), Click to share on Pinterest (Opens in new window), Click to share on Skype (Opens in new window), Click to share on Tumblr (Opens in new window), Click to share on Telegram (Opens in new window), Click to share on Reddit (Opens in new window), Click to share on Pocket (Opens in new window), Click to email this to a friend (Opens in new window). The timing of the receipt of the cash flows is important. Thus, the manager has to choose a project that gives a rate of return more than the cost financing such a project. After the step of the decision making the next step is the classification of the investment outlays into the higher value and the smaller value investment. You may learn more about Corporate Finance from the following articles –, Copyright © 2020. Decision making is the third step. fas-ag.de Zudem unterstützen wir Sie beim Aufbau eines klassisch en Kapitalbudgetierungsprozesses mi t dem gewünschten Detaillierungsgrad und schulen Ihre Mitarbeiter in den Grundzügen der Bewertung. What’s your view on this? There are points which are needed to be taken care of before starting the search for the best investment opportunities. may have the power to sanction the investment up to the limit of $10,000 beyond that the permission of the board of directors or the senior management is required. Capital budgeting is a multi-step process businesses use to determine how worthwhile a project or investment will be. Whether the hopes of the sponsors of the project are fulfilled. Businesses create separate budgets for the acquisition of current assets and long-term assets. 
In this report, we analyze and synthesize these surveys in a four-stage framework of the capital budgeting process: identification, development, … Capital Budgeting Process. In this, the management is required to compare the actual results with that of the projected results. Lastly, the decision taken is to be implemented, and performance is to be reviewed timely. For the implementation at the reasonable cost and expeditiously, the following things could be helpful: For prompt processing, the committee of capital budgeting must ensure that management has adequately done the homework on the preliminary studies and the compendious formulation of the project before its implementation. Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube. For instance, before choosing the investment to be made in the company involved in the gold mining, firstly, the underlying commodity’s future direction is needed to be determined; whether the analysts believe that there are more chances of price getting declined or the chances of price rise is much higher than its declination. Capital projects are the ones where the cash flows are received by the company over long periods of time which exceeds a year. Capital expenditure budgeting is the process of establishing a financial plan for purchases of long-term business assets. This video explains about capital budgeting in less than 2 minutes. Capital budgeting is the process by which investors determine the value of a potential investment project. However, the evaluation and selection of capital projects are also affected by the following categories: Conclusion: Capital budgeting process is an amalgamation of very complex decisions and their assessments. After that, they make the identification of the investment opportunities keeping in mind the sales target set up by them. Before reaching the committee of the capital budgeting process, these proposals are seen by various authorized persons in the organization to check whether the proposals given are according to the requirements and then the classification of the investment is done based on the different categories such as expansion, replacement, welfare investment, etc. Capital budgeting is the process a business undertakes to evaluate potential major projects or investments. It may be a period such as October 1, 2009 – September 30, 2010. to complete. Capital budgeting is the process that companies use for decision making on capital project. Almost all the corporate decisions that impact future earnings of the company can be studied using this framework. According to Binder and Chaput (2012), capital budgeting is a delicate process and, therefore, should be practiced in accordance with proven techniques. De fi ne the capital budgeting process, explain the administrative steps of the process, and categorize the capital projects that can be evaluated. Any corporate decisions with an impact on future earnings can be examined using this framework. I… It starts with the identification of different investment opportunities. These decisions have the power to impact the future success of the company. Use of this feed is for personal non-commercial use only. Hence, an analyst needs to understand all the steps involved as well as the basic principles of the capital budgeting process.1,2. 
The capital budgeting process has the following four steps: eval(ez_write_tag([[300,250],'efinancemanagement_com-medrectangle-3','ezslot_3',116,'0','0']));Capital budgeting projects are categorized as follows: The capital budgeting process is based on the following five principles: eval(ez_write_tag([[580,400],'efinancemanagement_com-medrectangle-4','ezslot_4',117,'0','0']));All the capital projects are thoroughly analyzed on the basis of their cash flows forecast. process of deciding which long-term projects the firm should undertake Every capital budgeting method has a set of decision rules. Here we provide the top 6 steps in the Capital Budgeting along with the examples of each. It helps in determining the company’s investment in the long term fixed assets such as investment in the addition or replacement of the plant & machinery, new equipment, Research & development, etc. For instance, the managers at the lower level of management like work managers, plant superintendent, etc. After the completion of all the above steps, the investment proposal under consideration is implemented, i.e., put into a concrete project. A Fiscal Year (FY) does not necessarily follow the calendar year. As per the. A company must devise some method to deal with the uncertainty of the future. After the identification of the investment opportunities, the second process in capital budgeting is to gather investment proposals. CFA® And Chartered Financial Analyst® Are Registered Trademarks Owned By CFA Institute.Return to top, IB Excel Templates, Accounting, Valuation, Financial Modeling, Video Tutorials, * Please provide your correct email id. I really like all of the points you have made. As the project is usually large and has important impact on the long term success of the business, it is crucial for the business to make the right decision. We additionally support the establishment of a classical capital budgeting process with desired level of detail and train your staff in the principles of valuation. Purchases of current assets only affect a single operating year, while purchases of long-term assets affect multiple years. Capital projects are the ones where the cash flows are received by the company over long periods of time which exceeds a year.
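Most of the evaluation techniques referred to above reduce to discounting the project's forecast after-tax cash flows and checking when the outlay is recovered. As a minimal, hedged illustration - the cash flows and the 10% discount rate below are invented assumptions, not figures from this article - a Python sketch of net present value and simple payback might look like this:

```python
# Minimal sketch: NPV and simple payback for a candidate capital project.
# All figures are illustrative assumptions, not data from the article.

def npv(rate, cash_flows):
    """Discount a series of cash flows; cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Index of the year by whose end cumulative cash flow turns non-negative; None if never."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None

flows = [-100_000, 30_000, 40_000, 45_000, 35_000]  # initial outlay + 4 years of after-tax inflows

print(f"NPV at 10%: {npv(0.10, flows):,.0f}")    # ~18,045; accept if positive (return beats financing cost)
print(f"Payback: {payback_period(flows)} years")  # 3; ignores cash flows after the payback point
```

The accept-if-NPV-is-positive rule is one instance of the "set of decision rules" each method carries; payback, as noted above, ignores everything after the recovery point.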
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9161080718040466,
"language": "en",
"url": "http://www.finance-lib.com/financial-term-total-revenue.html",
"token_count": 993,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.03369140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:47883b65-ce5a-481b-b622-484ac21982c3>"
}
|
Definition of Total revenue
Total sales and other revenue for the period shown. Known as "turnover" in the UK.
Related terms:
- An analytical technique for studying the relationships between fixed cost, variable cost, and profits. A breakeven chart graphically depicts the nature of breakeven analysis. The breakeven point represents the volume of sales at which total costs equal total revenues (that is, profits equal zero).
- The level of activity, in units or dollars, at which total revenues equal total costs.
- The point at which total costs equal total revenue, i.e. where there is neither a profit nor a loss.
- EBITDA divided by total sales or total revenue.
- Total revenue, less the cost of sales returns, allowances, and discounts.
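The breakeven relationships in the entries above are simple arithmetic: breakeven volume is fixed cost divided by the contribution per unit (price minus variable cost). As a hedged illustration - the prices and costs below are invented for the example, not drawn from this glossary - a Python sketch might look like this:

```python
# Illustrative breakeven calculation: the level of activity at which
# total revenue equals total cost (profit is zero). Numbers are made up.

def breakeven_units(fixed_costs, price_per_unit, variable_cost_per_unit):
    contribution_per_unit = price_per_unit - variable_cost_per_unit
    if contribution_per_unit <= 0:
        raise ValueError("Price must exceed variable cost for a breakeven point to exist")
    return fixed_costs / contribution_per_unit

units = breakeven_units(fixed_costs=50_000, price_per_unit=25.0, variable_cost_per_unit=15.0)
print(f"Breakeven volume: {units:,.0f} units")    # 5,000 units
print(f"Breakeven revenue: {units * 25.0:,.0f}")  # 125,000 in sales dollars
```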
Further related terms (definitions are given as they appear in the source; several are truncated there):
- Mutual Funds: A measure of trading activity during the previous year, expressed as a percentage of
- Revenue recognized on a nonexistent sale or service transaction.
- The revenue resulting from an additional contemplated sale.
- Bond issued by local government agencies on behalf of corporations.
- Refers to all federal tax laws as a group.
- A federal agency empowered by Congress to interpret and enforce tax-related laws.
- The amount sold after customers' returns, sales discounts, and other allowances are taken away from
- Revenue recognized for a confirmed sale or service transaction in a period.
- The percentage return or profit that management made on each dollar of assets. The formula is:
- Services are readily convertible into known amounts of cash or claims to cash.
- A revenue transaction where goods and services are exchanged for cash or
- Return on total assets: The ratio of earnings available to common stockholders to total assets.
- Return on Total Assets Ratio: A measure of the percentage return earned on the value of the
- Income earned from the sale of goods and services.
- Amounts earned by the company from the sale of merchandise or services; often used interchangeably with the term sales.
- An inflow of cash, accounts receivable, or barter from a customer in exchange
- A bond issued by a municipality to finance either a project or an enterprise where the issuer
- A responsibility center for which a manager is accountable only for the generation of revenues and has no control over setting selling prices, or budgeting or incurring costs.
- Operating expenses that vary in proportion to
- A fund accounting for all revenues from an enterprise financed by a municipal revenue bond.
- The act of recording revenue in the financial statements. Revenue should
- Sales Revenue: Revenue recognized from the sales of products as opposed to the provision of services.
- Revenue recognized from the provision of services as opposed to the sale of goods.
- Total asset turnover: The ratio of net sales to total assets.
- Total Asset Turnover Ratio: A measure of the utilization of all of a company's assets to
- Total contribution margin: see contribution margin.
- Total cost to account for: the sum of the costs in beginning
- Total debt to equity ratio: A capitalization ratio comparing current liabilities plus long-term debt to
- Total Debt to Total Assets Ratio: See debt ratio.
- Total dollar return: The dollar return on a nondollar investment, which includes the sum of any
- Total expected value (for a project): the sum of the individual cash flows in a probability distribution multiplied by their related probabilities.
- Total overhead variance: the difference between total actual overhead and total applied overhead; it is the amount of underapplied or overapplied overhead.
- Total quality management (TQM): a structural system for creating organization-wide participation in planning and implementing a continuous improvement process that exceeds
- In performance measurement, the actual rate of return realized over some evaluation period. In
- Total units to account for: the sum of the beginning inventory
- The difference between total actual cost incurred
- An agreement between countries whereby an employee only has to pay Social Security taxes to the country in which he or she is working.
- Money that has been paid by customers for work yet to be done or goods yet to be provided.
- A payment from a customer that cannot yet be recognized as earned.
- Asset turnover ratio: A broad-gauge ratio computed by dividing annual
- An intermediate measure of profit equal to sales revenue
- The difference between selling price and
- The company's total earnings, reflecting revenues adjusted for costs of doing business,
- Overhead generally refers to indirect, in contrast to direct,
- Receivables turnover ratio: total operating revenues divided by average receivables. Used to measure how
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8700160384178162,
"language": "en",
"url": "https://cepr.org/active/publications/discussion_papers/dp.php?dpno=14165",
"token_count": 262,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.291015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:4e8ae29b-4c41-484d-8bcc-3c0dfa3790bc>"
}
|
DP14165 The Effects of Immigration on the Economy: Lessons from the 1920s Border Closure
Author(s): Ran Abramitzky, Philipp Ager, Leah Boustan, Elior David Cohen, Casper Worm Hansen
Publication Date: December 2019
Keyword(s): Immigration Restrictions, labor mobility, Local Labor Markets
JEL(s): J61, J70, N32
Programme Areas: Economic History
Link to this Page: cepr.org/active/publications/discussion_papers/dp.php?dpno=14165
In the 1920s, the United States substantially reduced immigrant entry by imposing country-specific quotas. We compare local labor markets with more or less exposure to the national quotas due to differences in initial immigrant settlement. A puzzle emerges: the earnings of existing US-born workers decline after the border closure, despite the loss of immigrant labor supply. We find that more skilled US-born workers - along with unrestricted immigrants from Mexico and Canada - move into affected urban areas, completely replacing European immigrants. By contrast, the loss of immigrant workers encouraged farmers to shift toward capital-intensive agriculture and discouraged entry by unrestricted workers.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9560434818267822,
"language": "en",
"url": "https://deepsweep.com/blog/should-water-scarcity-be-a-financial-boon-to-california-nonprofits/",
"token_count": 225,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1494140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c87361f3-a2bd-4570-bf50-9ee8d1d99a9f>"
}
|
California also has an ever-growing population, which has almost reached 40 million individuals and is growing at 0.9% annually. For a U.S. state that is larger than all but 34 of the world's countries, maintaining a reliable water supply is important not only for California residents but also for the resiliency of agriculture. California produces about $47 billion in agricultural output, accounting for 12.5% of total agricultural production across all 50 states, and exports a whopping 28% of that production to other markets. When you consider that 80% of the state's water consumption is allotted to agriculture, the thought of a diminishing water supply raises many concerns.
Additionally, a new report from Nonprofit Quarterly has prompted the question of whether nonprofits should act the way disaster capitalists might in this situation. Is it acceptable to profit from resource scarcity and to use a humanitarian disaster as a means of extending fundraising? This is a very interesting topic, particularly for nonprofits in Los Angeles, who know the risk that water scarcity poses to the city.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.918429434299469,
"language": "en",
"url": "https://hpl-production-01.havenpower.com/news/countries-on-track-for-cop26/",
"token_count": 1024,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.1826171875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:dea764ff-3c8b-4408-9219-eda53dc91a78>"
}
|
The low carbon league table: Which countries are on track for COP26?
1st December 2020
Independent analysis conducted by academics from Imperial College London for Drax Electric Insights deep-dives into the changes driving Great Britain's electricity system over the last 3 months. And as we pass the one-year countdown until COP26 – the first time the Conference of the Parties will be held in the UK – it investigates whether countries' power supplies are on track to achieve their climate goals.
What is COP26?
COP stands for Conference of the Parties, and is a yearly meeting attended by countries that signed the United Nations Framework Convention on Climate Change (UNFCCC) - a treaty agreed in 1994. COP26 was due to take place in 2020 but has been postponed to 1-5 November 2021, and will be the 26th meeting of these countries.
What’s so important about COP26?
Not only will COP26 be the biggest summit the UK's ever hosted, but it'll also be the first 'global stocktake' of countries' environmental progress since the Paris Agreement was created.
The Paris Agreement was a clear, collective target negotiated at the 2015 United Nations Climate Change Conference. It stated nations must:
- Keep the increase in global average temperatures to well below two degrees Celsius
- Reduce the amount of harmful greenhouse gases (GHG) being produced, and increase renewable energy
- Review progress every five years
As well as this global stocktake of progress, COP26 will also ask all countries to submit their new long-term goals. It’ll be a key opportunity for the UK to demonstrate what we’re doing to combat climate change and create a lower-carbon future. How fast we are decarbonising our power system to enable the electrification of our economy is a major part of our climate action.
So, which countries are on track?
The latest Electric Insights report shares the low carbon electricity league table, ranking the world’s 30 largest electricity systems by the carbon content of the generation mix.
Out of the 30 markets studied, South Africa has the highest carbon intensity – that is, the number of grams of carbon dioxide emitted to generate one kilowatt-hour (kWh) of electricity.
Meanwhile, the UK sits in a strong position of number five. But it’s the change in carbon intensity over the last 10 years where the UK is full speed ahead.
The UK’s leading the low carbon revolution, with our electricity system having decarbonised at almost twice the pace of any other major economy.
The UK’s carbon intensity fell 58%, from 450 to 195 g/kWh. British households are emitting just shy of one tonne of CO2 less per year from changes in the power system alone.
Drax is a key player in keeping the UK top of the leader board. Bioenergy and carbon capture and storage (BECCS) technologies could create the world’s first negative emissions power station and be the ‘anchor’ for the UK’s first zero carbon industrial cluster in the Humber region.
And it’s not just the power sector the UK is looking to lead on. The Prime Minister’s recent announcement to end the sale of new petrol and diesel cars by 2030 – five years sooner than previously promised – will put the UK on course to be the fastest G7 country to decarbonise road transport, too.
The growing share of renewables
Countries like USA and China also sit high up on the list of carbon intensity reduction over the last decade and are perhaps not places many would associate with world-leading shifts towards clean-energy. But these countries embody the two big macro trends in electricity generation: the shrinking role for coal, and growing share of renewables.
The chart below demonstrates this shift, with the UK out in front. Over the last 10 years, coal fell from 30% to just 2% of the electricity mix, while renewables rose from just 8% to 42%.
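Carbon intensity of a generation mix is just a weighted average of each fuel's emissions factor. The sketch below is illustrative only: the per-fuel emissions factors are rough ballpark assumptions (not Drax or Imperial figures), and everything outside coal and renewables is lumped into an assumed "other" bucket, so the outputs land in the same ballpark as the 450 to 195 g/kWh fall quoted above rather than reproducing it.

```python
# Illustrative only: compute grid carbon intensity (gCO2/kWh) as a
# generation-weighted average. Emissions factors are rough ballpark
# values and the "other" bucket is an assumption, not source data.

EMISSIONS_FACTORS = {"coal": 900, "renewables": 30, "other": 450}  # gCO2/kWh, assumed

def carbon_intensity(mix):
    """mix maps fuel -> share of generation; shares should sum to ~1."""
    return sum(share * EMISSIONS_FACTORS[fuel] for fuel, share in mix.items())

mix_2010 = {"coal": 0.30, "renewables": 0.08, "other": 0.62}
mix_2020 = {"coal": 0.02, "renewables": 0.42, "other": 0.56}

for year, mix in [("2010", mix_2010), ("2020", mix_2020)]:
    print(year, f"{carbon_intensity(mix):.0f} gCO2/kWh")  # ~551 then ~283 under these assumptions
```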
The pandemic has shaken up all agendas, both in the UK and globally. But while it may have disrupted short-term plans, it's also presented an opportunity to place sustainability at the core of economic recovery.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9260646104812622,
"language": "en",
"url": "https://pclegko.ru/en/it-servisy/internet-security-pay.html",
"token_count": 776,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.30078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e133c125-1369-4aa3-9952-6fa20248455d>"
}
|
Internet security is extremely important. Most likely, everyone who regularly surfs the Internet has at least once made a deal through the network or paid for certain services online. And every time such a payer enters data related to his bank card, he risks material losses. Back in 2016, the total amount of funds lost by users to fraud came to one billion rubles.
To ensure safety on the Internet and protect payments, it is worth connecting a billing system for the site, such as the one at www.wellcoinpay.com
A common scheme involves fake Internet resources that look like online stores but in reality aim to obtain customers' card data, after which funds are withdrawn from those cards.
The security of online payments matters to consumers, but equally to online commerce, banks and payment systems. These players keep creating new schemes aimed at preventing fraud on the network.
Participants in the process
Any payment involves the interaction of a number of parties, including:
- Cardholder making a purchase;
- Final recipient of funds, online store or other organization;
- The issuing bank that issued the buyer’s card;
- Acquiring bank transferring funds;
- Service providers;
- Payment system.
Sometimes the issuer and acquirer will be the same credit institution.
The buyer enters the data required for payment on the online store's site. The payment system then sends this data to the acquiring bank working with the store, and the acquiring bank reports the transaction to the issuing bank, which checks the information (including the funds available on the card).
The issuing bank then approves the operation (or may refuse); if the result is positive, it sends confirmation to the payment system, and the payment system notifies the online store.
Payment security mechanisms
The participants in the process have developed a number of schemes designed to protect users’ money.
- SSL (Secure Sockets Layer) protocol. Its purpose is to guarantee the secure transmission of data from the customer to the server over HTTPS.
- PCI DSS (Payment Card Industry Data Security Standard). It serves to protect data related to bank cards; all companies handling online payments must comply with this standard.
- 3-D Secure technology. This technology is designed to verify the identity of the payer. When carrying out a transaction using a bank card, you must enter the code sent by the card issuing bank.
- Buyer identification in payment systems. Organizations such as Apple Pay and PayPal perform user authentication within their own systems.
- Anti-fraud systems. These mechanisms track financial transactions and, if they deem any of them suspicious, have the right to block those transactions. Many parameters can mark a payment as suspicious, including an excessive amount, unusual customer behavior during a payment, or too many actions from the same IP address (see the sketch below).
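At their simplest, such anti-fraud checks are rule evaluation over transaction attributes. The following toy sketch is purely illustrative - the field names and thresholds are invented, and real anti-fraud systems use many more signals and statistical models:

```python
# Toy rule-based anti-fraud check. Field names and thresholds are
# invented for illustration; production systems are far more sophisticated.
from collections import Counter

MAX_AMOUNT = 1_000.00    # "excessive amount" threshold (assumed)
MAX_ATTEMPTS_PER_IP = 5  # "too many actions from one IP" threshold (assumed)

ip_attempts = Counter()  # running count of payment attempts per IP address

def is_suspicious(payment):
    """Return (flagged, reason) for a payment dict with 'ip' and 'amount' keys."""
    ip_attempts[payment["ip"]] += 1
    if payment["amount"] > MAX_AMOUNT:
        return True, "amount exceeds threshold"
    if ip_attempts[payment["ip"]] > MAX_ATTEMPTS_PER_IP:
        return True, "too many attempts from this IP"
    return False, ""

flagged, reason = is_suspicious({"ip": "203.0.113.7", "amount": 2_500.00})
print(flagged, reason)  # True amount exceeds threshold
```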
Internet Security – Recommendations for Internet Payers
Industry participants, as the above shows, pay close attention to security issues. But it also makes sense for the buyer to follow some tips to avoid financial losses:
- It makes sense to connect SMS notifications and use the Internet bank.
- Do not make deals through websites that look suspicious; the addresses of protected sites contain https.
- Use SMS authorization (3-D Secure).
- Check that the site carries instructions for MasterCard SecureCode or Verified by Visa.
- Make online payments through a special card, do not share information about this card with anyone.
- If you do not fully trust the online store, make the transaction using PayPal or Apple Pay.
- Buy from Android devices only if anti-virus protection is installed. Apple devices running iOS do not require separate antivirus programs, since protection is built in.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9656451344490051,
"language": "en",
"url": "https://smallbusiness.chron.com/success-failure-rate-sole-proprietorship-60080.html",
"token_count": 287,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.004486083984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:189bd3b2-e7d0-4d7e-8ab3-bd4c98f31f82>"
}
|
What Is the Success or Failure Rate for a Sole Proprietorship?
Sole proprietorships are the most common type of small business formed in the U.S. They do not have special filing requirements common to more complex business structures and are inexpensive to start. Proprietorships are not taxable entities; income from a proprietorship is declared on an owner's personal return where the calculation for self-employment tax is shown. These and other advantages are attractive to millions of business owners.
The U.S. Census data for 2008 identified a total of more than 27.2 million business firms in the country. According to the IRS, over 22 million tax returns were filed in 2009 that reported sole proprietorship activity. Most sole proprietorships are owned and operated by one person. The Small Business Administration reports that sole proprietorships accounted for 86 percent of businesses with no employees in 2010.
The SBA reports that about half of all new businesses survive five years or more, and about one-third survive more than 10 years. These rates have remained more or less constant for many years. The SBA's survival-rate information does not break the figures down by type of business structure.
Retired investigator Chris Bradford has been writing since 1988. His work has appeared in "Security Journal," as well as various online publications. Bradford is a certified information-technology professional and fraud examiner.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9517083764076233,
"language": "en",
"url": "https://smallbusinessavenues.com/example-of-a-business-plan/how-to-write-a-business-plan",
"token_count": 2504,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.058837890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:94b5b9e9-1a6c-4790-80c9-d6943d261661>"
}
|
How to write a business plan
A business plan is a business tool that is necessary for the successful operation of an organization.
A well-prepared plan is a forecast document for the successful development and functioning of the company in the near future.
Major mistakes in compilation
- An overly optimistic forecast. In volatile market conditions it is difficult to realistically estimate potential returns, but it is worth trying. It is better to draw up three scenarios: optimistic, realistic and pessimistic; this will keep you from overestimating your financial capabilities. To do this, a high-quality, comprehensive marketing analysis of the market is necessary.
- Lack of cost planning for the near future. All types of expenses must be considered in the business plan. Most entrepreneurs pay a lot of attention to one-time costs while completely forgetting about recurring ones (banking fees, communications, visits to suppliers, etc.). As a result, when faced with these additional expenses, an already operating organization incurs losses and may lack the money needed for the company's development.
- Underestimation of implementation time. A business plan typically covers the next 3-5 years. As a rule, a successful project pays off in the first three years; however, during implementation, circumstances may arise that delay payback: obtaining permits, delivery times for materials, etc. To avoid this, set deadlines with room to extend them, or calculate the project over three years broken down by months.
- Poor calculation of the working capital requirement. Inventories, equipment, funds for wages and the like are not taken into account, nor is the cash reserve that may be required in case of delays in payments from buyers.
- Insufficient development of risks. This point is the most important, since risks can lead to unpredictable consequences, up to the bankruptcy of the company. They can be minimized at the development stage of the plan, before it is put into practice.
Existing Business Planning Standards
A business plan is a document, which means it follows certain standards and requirements. Today there are three recognized planning standards; whichever one is used, a plan typically contains the following sections:
- cover page;
- privacy statement;
- general description of the project;
- products (services);
- industry analysis;
- target market analysis;
- promotion strategies;
- financial analysis;
The description, or summary, includes a short overview of the plan and reveals the nature of the services offered and the mission and goals of the organization.
The "Products" section includes a description of the goods or services offered, and also presents related products that can bring profit or enhance the organization's image in the eyes of potential customers.
Industry and market analysis is carried out in order to identify the need for the organization's services. It includes demographic analysis, competition analysis and SWOT analysis. Such analytical tools make it possible to identify market opportunities and threats, as well as to determine possible strategies.
In the section on target markets, it is necessary to indicate the target consumer group, the geographic market and the pricing methods.
Many aspiring entrepreneurs, in an ardent desire to start their own business as soon as possible, begin without realizing what their actions may lead to in the future. Others, on the contrary, do not have a clear idea of how best to approach the project. Before starting a business, it is very important to draw up a business plan that will serve as a guide for implementing your own idea.
A business plan is a document that highlights all the characteristics of an organization, analyzes possible risks and problems, predicts methods that will help to avoid them. In other words, a business plan for an investor is, first of all, an answer to the question: "Is it worth financing this project at all, or is it better to immediately abandon it?"
Business plan structure and content - main sections
Success in business planning consists of 3 main factors:
- Awareness of your current capabilities, which is the starting point "A";
- A clear idea of the goal that you want to achieve - point "B";
- A sequence of steps that will lead from point "A" to point "B".
A business plan is drawn up on paper, following certain rules and a certain sequence. In this form, your idea begins to materialize, and it demonstrates your readiness to work and develop. In addition, a plan committed to paper is easier for outsiders to understand.
When drawing up a business plan, it is necessary to identify all the advantages and disadvantages of the idea, and only then calculate the income and take any action. Pay attention to competitiveness and the resilience of the market.
When conducting a preliminary analysis, pay attention to the payback of the product or service, as well as the period after which you can expect the first profit; this will help you determine the amount of initial investment required. If, after this cursory analysis, you still have not abandoned your idea, then it is time to start creating the business plan.
When compiling it, the following information is indicated on the title page:
- project name;
- organization name;
- project manager details;
- developer information;
- compilation date;
- the most important data on the financial calculation of the project.
There is a widespread belief that the question of how to draw up a business plan is relevant only for aspiring entrepreneurs who first decided to start their own business. But this is not at all the case.
Content of the business plan
A business plan is an obligatory part of every trade, production or organizational operation.
- A program of actions aimed at achieving industrial, commercial or organizational goals by the enterprise.
- A description of the goals the project is intended to achieve.
- Calculation of the profit expected from the project.
- An assessment of the available opportunities and funds that are supposed to be used for the implementation of the project. This assessment includes the material and technical justification of the project, as well as an audit of the financial capabilities of the enterprise.
- Assessment of risks and obstacles that may be encountered on the way of project implementation.
- Calculation of the costs required to implement the project.
- Calculation of investors' funds that must be attracted for the successful implementation of the project.
The content of the plan is not limited to these points, since each business and enterprise has many of its individual features. To this we must add that the company exists in certain circumstances, which also have their own characteristics.
But the listed points are enough to get an idea of the complexity and versatility of the document.
Purpose of the business plan
Where to start when creating a plan? Before you draw up a business plan, you should clearly understand its purpose, as well as determine who it is addressed to. This will help you understand which points need to be emphasized and which points are secondary.
In other words, an entrepreneur, before writing a business plan, must understand what aspects of the document are important to him. The purpose of a business plan can be:
- Drawing up a program of action for the implementation of the project.
- Presentation of the project to potential investors.
Business plan addressees
A detailed, correctly drawn up business plan is necessary for every aspiring entrepreneur who decides to start his own business. It is the business plan that becomes the key to success, setting out an orderly plan of action that can lead a businessman to the final positive result. The founder of a business must know clearly what steps to take so that his actions bring him closer to the goal; this is exactly what a business plan provides. Most modern entrepreneurs needlessly underestimate the significance of this document, perceiving it as an empty formality.
Even with fairly serious initial capital, an entrepreneur who has no clear understanding of the plan of action, calculated and taken into account in advance, risks significant losses. Only a business plan thought out to the smallest detail makes it possible to avoid the ridiculous mistakes that arise while a young organization is being formed. A well-established sequence of steps also helps convey the main objectives and goals to each member of the team that comes together to achieve success.
Simple business plan
Before drawing up a business plan, an entrepreneur must decipher this concept for himself and fully grasp its meaning. By "business plan", modern economic theory means a plan of action that includes all available information about organizational issues and about the further successful functioning of the business. The pages of a business plan contain data on the services provided or the goods produced, information about the intended sales market, marketing strategies, raw materials, the minimum set of required equipment, and so on.
The document is a serious strategic tool that is essential for effective management and planning. In addition, as a result of drawing up such a plan, the entrepreneur will be able to calculate the amount of material resources needed to start, the profitability indicator, and the payback period.
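Those last two indicators are simple to compute once monthly income and expenses have been estimated. A minimal sketch - every figure below is an invented assumption, not data from this article - might look like this:

```python
# Illustrative estimate of payback period and simple return on investment
# for a startup business plan. All numbers are invented assumptions.

initial_investment = 1_500_000   # startup capital (e.g. rubles), assumed
monthly_revenue    = 400_000     # assumed
monthly_expenses   = 310_000     # rent, payroll, banking fees, communications, ... (assumed)

monthly_profit = monthly_revenue - monthly_expenses
payback_months = initial_investment / monthly_profit
annual_roi = (monthly_profit * 12) / initial_investment

print(f"Monthly profit: {monthly_profit:,}")           # 90,000
print(f"Payback period: {payback_months:.1f} months")  # ~16.7 months
print(f"Annual ROI: {annual_roi:.0%}")                 # ~72%
```

In a real plan these inputs would come from the market analysis and cost planning described above, ideally under optimistic, realistic and pessimistic scenarios.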
A well-written business plan will help an entrepreneur attract a sufficient amount of investment. When potential investors see realistic forecasts laid out in the document, they are more willing to meet the businessman halfway, conclude a deal and invest in the business being opened. Preparing a business plan is particularly important for an entrepreneur who expects to receive a bank loan, because a bank that agrees to issue a loan to open a business takes on risk just as the novice businessman does. Banking organizations want to familiarize themselves with information about the proposed business before approving a loan.
Therefore, the information contained in the business plan must be complete and concise. The data should be presented simply, without ambiguity. Every person who reads the document is looking for the answer to a simple question: is it worth investing money in the business being opened, and if not, why not?
How to write a business plan
Drawing up a business plan is a laborious and difficult task that requires knowledge in several fields. Diverse skills and knowledge together give the desired result, helping to shape the business correctly and to work out the optimal strategy for reaching the chosen goal. Today, there are three ways to produce a business plan:
- use the services of professionals specializing in the preparation of this kind of documentation;
- write the business plan yourself. Before doing so, however, an entrepreneur needs to assess his own competence: a businessman must have appropriate knowledge not only of the chosen business area but also of the legislation of the Russian Federation;
- buy on the Internet.
By choosing the first option, the entrepreneur can save a lot of time: an employee of a specialized organization will, in exchange for a pre-agreed sum, draw up a clear business plan. As for the advantages and disadvantages of this method, the plan will most likely be written in the shortest possible time, but its quality will be highly questionable. This is because such companies work from pre-prepared templates that cannot fully express all the necessary information. As practice shows, entrepreneurs who use the services of an outside specialist are most often denied bank loans.
Templates for business plans often omit the competitive advantages, the costs associated with each operation, the expected profit stream, and other details that may interest a potential investor or lender. A further disadvantage is that drawing up the plan must be paid for, and at first not every entrepreneur has the required amount at his disposal.
As for the second option, an entrepreneur needs to understand that before drawing up a business plan on his own, he must acquire the necessary theoretical knowledge and practical skills. Ideally, a businessman can also involve knowledgeable acquaintances who are competent in one area or another.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9555641412734985,
"language": "en",
"url": "https://termpapernow.com/samples/cash-management-research-term-paper-4149/",
"token_count": 2136,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.028564453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:edd30f16-a893-4549-a3e1-b01425975325>"
}
|
Cash management refers to the management of an organization's short-term assets that are used in its ongoing activities. Cash management is also linked to the concept of treasury management, with its emphasis on liquidity across various processes and on enhancing profitability. Ineffective management of money can lead to the bankruptcy of a company. This study examines the factors that can be controlled in organizing and managing the corporate cash of an organization: credit risk, cash holding and the cash conversion cycle (CCC). Using data retrieved from the FMCG sector, this paper also measures the impacts of RCP on the CCC as well as on cash holding for effective cash management (Addae-Boateng, & Brew, 2013).
In the world of today, business activities cannot be conducted without cash. Cash makes payments easy and can be held for future use; it is therefore treated as a store of funds used to meet emergencies. In the current world, businesses use credit as opposed to cash for most of their activities. Nowadays, the use of drafts, bills, debit cards, ECS and internet fund transfers has replaced paper currency and coins. Cash also refers to currency money as well as bank account balances held at various commercial banks (Owen 2009).
Management of cash is both the science and art of managing the short-term resources of a company so as to sustain mobilization of funds, ongoing activities and liquidity optimization. Management of cash is composed of:
i). Proper use of a firm's current assets and current liabilities throughout the operating cycle;
ii). Synchronized and proper planning, monitoring and management of the business's collections, disbursements and account balances;
iii). Collection and management of information to make effective use of available resources and to identify the risks involved. Improper handling of risks in cash management can lead to the bankruptcy of a company. It is worth noting that efficient cash management prevents bankruptcy, improves profits and reduces the risks a company faces, and it is necessary for new and growing companies seeking to capture market share.
2.1. Changing Consumer Behavior
Consumers in different parts of the world are changing; therefore, consumer-driven businesses need to understand the forces that can transform the FMCG sector over the next ten years. It is important for companies to understand how existing transfer pricing structures and policies should be adapted to reflect market trends and stricter pricing regulations around the world. With continued innovation, businesses serving the customers of tomorrow require sustainable pricing structures and tax-effective supply chain management systems to drive sales and value. Leading specialists at Transfer Pricing Associates also have in-depth experience and knowledge in supporting and advising MNEs with operations in the FMCG industry. Examples include:
Implementing and designing global transfer pricing chain management systems for MNEs with activities in the FMCG industry. Benchmarking activities common in the FMCG industry, including centralized procurement, merchandising functions, licensing of trade names/trademarks and centralized credit management operations. Preparing pan-regional transfer pricing documentation solutions that help MNEs reduce the costs of preparing local transfer pricing documentation while enhancing control and consistency in the disclosure of transfer pricing policies to stakeholders and tax authorities.
Standard financial ratios may be used to predict the financial performance of businesses, and different studies have tried to demonstrate the predictive value of various estimation techniques. Foster reviewed the literature describing theories and methods for predicting and evaluating financial performance, revealing that many methods are complex and that few researchers are in a position to address the problem adequately. For instance, ratio analysis studies often use multivariate analysis based on the assumption that financial ratios are normally distributed. Without confirming that the ratio distribution is approximately normal, researchers risk drawing erroneous inferences. When considering the distribution of financial ratios in a database, normality can be skewed by negative denominators, recording errors and denominators that approach zero. The only way to assess future financial performance fully is to include subjective measures as well.
Debt ratios show whether the organization uses borrowed money and how efficiently it uses other people's money. Most researchers divide financial ratios into groups such as solvency, profitability, activity and liquidity ratios. Financial ratios are a vital and well-established technique of financial analysis, and there are many benefits to using them: they are designed to evaluate financial statements, they can be used as a planning and control tool, and they can be used to evaluate an organization's performance.
2.2. Decisions on Working Capital in FCMGs
A working capital decision is important to an organization because it affects the firm's liquidity position. Accountants see working capital as the difference between current assets and current liabilities; it is the company's investment in current assets. Working capital decisions affect a firm's profits through their impact on operating costs, sales and interest expense. They also affect the firm's risk through their impact on cash flow volatility, the ability to generate cash during a crisis and the chance of a cash flow shortfall (Juan Garcia-Teruel & Martinez-Solano 2007).
Working capital policy touches all functional areas of a business's operations. A company must maintain a balance between profitability and liquidity while conducting daily operations. Working capital management is important because it consumes a large proportion of the financial manager's time: much of that effort goes into identifying non-optimal levels of current assets and liabilities and bringing them to optimal levels. Working capital plays a major role in a firm's risk, profitability and value. A company can choose a strict working capital management policy, with a low level of current assets as a percentage of total assets, or apply strictness to financing decisions in the form of a high level of current liabilities as a percentage of total liabilities.
Maintaining an optimal balance among all working capital components is a major objective of working capital management. A close relationship exists between sales growth and the level of current assets. Liquidity is a precondition for ensuring that a firm is able to satisfy its short-term obligations. The goals of liquidity and profitability often conflict in the decisions the finance manager makes. For instance, if inventories are kept high in anticipation of rising raw material prices, the profitability goal is served but the firm's liquidity is endangered.
By using liberal credit policies, a firm can push its sales while its liquidity decreases. A company that manages its working capital well can borrow less, and cash can be invested so that it generates a proper return for investors. Companies can minimize financing costs or increase the funds available for expansion by minimizing the funds tied up in current assets.
Strategic determinants of working capital help explain why different businesses hold varying levels of it. Small batch production, capital intensity, order backlog and the relative breadth of the product line correlate positively with working capital, while capacity utilization, continuous process production and made-to-order products were associated with varying capital levels. Working capital divided by sales correlated positively with industry concentration (Parker 2012).
Examining financial ratios across industries shows significant differences between them, including in measures of working capital. Improving working capital by delaying payment to creditors is inefficient and damaging both to practitioners and to the economy as a whole. Stock reduction strategies, by contrast, draw on techniques of effective production.
People seeking a strategic reduction in working capital should therefore focus on reducing stock. In the author's experience, many finance managers take the view that such arguments are good in theory but not in practice. When cash flow comes under pressure, suppliers are the first to feel the draught; despite being ethically questionable, this reflects a dangerous short-termism. A study of the relationship between corporate profitability and working capital management across 1,009 firms found a negative correlation between the cash conversion cycle and gross operating income, as well as with inventories and accounts receivable. McNeil & Embrechts (2015) researched the impact of working capital on the profits of Hindalco Industries over about seventeen years; the study showed that the liquid ratio, current ratio, working capital and receivables-turnover-to-asset ratio had a statistically significant impact on Hindalco's profits.
The relationship between corporate profitability and the working capital management of organizations listed on the Athens Stock Exchange was also studied. A sample of 131 listed companies between 2001 and 2004 was used to examine the relationship. Regression analysis found a statistically significant relationship between the cash conversion cycle and gross operating profit. The results showed that managers can create value for shareholders by correctly handling the cash conversion cycle and keeping its components at an optimum level.
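The cash conversion cycle used throughout these studies is conventionally computed as days inventory outstanding plus days sales outstanding minus days payables outstanding. A minimal sketch with invented balance-sheet figures (none of these numbers come from the studies cited) might look like this:

```python
# Cash conversion cycle (CCC) = DIO + DSO - DPO.
# Balance and flow figures below are invented for illustration.

def days_outstanding(balance, annual_flow, days=365):
    """Average number of days an account is held, relative to its annual flow."""
    return balance / annual_flow * days

dio = days_outstanding(balance=120_000, annual_flow=800_000)    # inventory vs cost of goods sold
dso = days_outstanding(balance=150_000, annual_flow=1_200_000)  # receivables vs credit sales
dpo = days_outstanding(balance=90_000,  annual_flow=800_000)    # payables vs purchases (~COGS)

ccc = dio + dso - dpo
print(f"DIO={dio:.0f}, DSO={dso:.0f}, DPO={dpo:.0f}, CCC={ccc:.0f} days")  # CCC ~59 days
```

A shorter CCC means cash is tied up in operations for less time, which is the mechanism behind the negative CCC-profitability correlations reported above.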
The impact of working capital management on the performance of companies in the telecom industry was also analyzed, using variables such as cash conversion efficiency, days sales outstanding, payments to vendors, income to total assets and the number of days of inventory. The findings showed an insignificant, negative relationship between working capital requirements and profitability in that industry.
A literature review prepared by Anuar and Tahir covering 2008 to 2010 found a positive and significant influence of a firm's sales on its profitability. They also showed that some studies indicated a negative relation between the dependent variables and the ratio of total assets to fixed financial assets. A study by Reheman et al. (2010) focused on the working capital management policies of 204 firms from Pakistan's manufacturing sector between 1998 and 2007. Results indicated that manufacturing firms in Pakistan followed a set working capital management policy, and that companies need to improve their payment and collection policies.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9490628242492676,
"language": "en",
"url": "https://cryptstorm.com/new-study-reveals-cryptocurrencies-could-go-mainstream/",
"token_count": 381,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.049560546875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:37ea1e2c-58e0-4ffe-a4bd-a25df041ee6b>"
}
|
Cryptocurrencies could soon replace fiat currency as the preferred choice. According to a study undertaken by professors from Imperial College London and Imperial College Business School, cryptocurrencies have the potential to go mainstream. Bitcoin, the world’s first cryptocurrency, could soon become a preferred choice for sending payments.
Findings of the study
Professor William Knottenbelt and Dr. Zeynep Gurguc state in their study that cryptocurrencies already fulfil one of the three basic roles of paper money. Fiat currency has been used as a store of value for ages. The professors note that digital currency is already being used as a store of value and thus satisfies one key criterion.
The researchers state that the three important criteria are store of value, medium of exchange and unit of account. Store of value implies that individuals can save the asset for use at a later time; digital currencies like Bitcoin and Ethereum do fulfil this criterion. To be widely used, a currency must also serve as a medium of exchange. The professors list unit of account as the third criterion.
According to them, Bitcoin and cryptocurrencies will have to make progress on scalability, design and regulation to achieve the remaining two criteria.
Bitcoin replacing fiat money
Professor William Knottenbelt said that decentralized cryptocurrencies are evolving rapidly. He added,
“There’s a lot of skepticism over cryptocurrencies and how they could ever become a day-to-day payment system used by the man on the street. In this research we show that cryptocurrencies have already made significant headway towards fulfilling the criteria for becoming a widely accepted method of payment. These decentralized technologies have the potential to upend everything we thought we knew about the nature of financial systems and financial assets.”
The research also sets out six challenges that cryptocurrencies must overcome if they are to become a preferred payment method: scalability, usability, regulation, volatility, incentives and privacy.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9366850256919861,
"language": "en",
"url": "https://uniquesubmission.com/research-topic-the-impact-of-cryptocurrency/",
"token_count": 4739,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.345703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:26044b93-534a-440a-a567-31dbfb2f11f6>"
}
|
Research topic: The impact of cryptocurrency on economy
This research proposal illustrates the impacts that cryptocurrency has on the economy. Over the last decade, cryptocurrency has gained huge popularity in different countries.
It opposes the idea of a centralised entity such as a government or bank controlling the monetary system and shares more power with its own users. The impacts of cryptocurrency include removing the need for middlemen, eradicating entry barriers, complicating regulations, segregating transactions from the USD, allowing international transactions and the development of ETFs.
Only reliable and authentic secondary sources, and no unscrupulous forms of secondary sources, are considered for the research work.
Cryptocurrency is often described as a digital currency developed following cryptographic protocols which ensure that transactions cannot be faked and remain secure.
It is not managed by a central authority; in other words, it remains theoretically immune to interference and control by governments due to the decentralised nature of blockchain (Narayanan, et al., 2016).
With the use of private and public keys, users can carry out transactions quite easily, which also adds substance to its privacy and security features. There are many cryptocurrencies available in the market, such as Bitcoin, Litecoin, Monero, EOS, Ethereum and NEO.
Since its inception in the form of Bitcoin by Satoshi Nakamoto in 2008, it has created a radical transformation in the way payments are made (Telegraph, 2018).
Such has been the growth of cryptocurrency in the last decade that it has completely changed perceptions of the form of money. With increasing investment in cryptocurrencies across the world, they have started to take financial entities by storm and to show a major impact on the economy.
Despite the huge popularity gained in the last decade, cryptocurrency still suffers from its highly volatile nature, and questions have been raised over its effectiveness.
The way cryptocurrency has gained momentum in the last few years is quite astonishing. Bitcoin has shown a massive surge in the current calendar year: the price per bitcoin was under $4,000 at the start of the year and by the end of May had gone up to $9,000 (Forbes, 2019).
This staggering surge of more than 120% in the price of bitcoin reflects the fact that people see it as a once-in-a-lifetime opportunity, and it is expected to increase further in future. Other cryptocurrencies such as Ethereum, Dash and Ripple have also gained huge popularity among consumers of cryptocurrency.
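As a quick check on that figure, using the rounded prices above (approximations rather than exact quotes): a rise from $4,000 to $9,000 works out to (9,000 - 4,000) / 4,000 = 1.25, i.e. roughly a 125% gain, consistent with a surge of more than 120%.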
Despite the massive investment made by consumers in these cryptocurrencies, a few major concerns have been raised, such as a shortage of price uniformity, price manipulation and cybercriminal activities (Raymaekers, 2015).
The aim of this study is to examine the impact that cryptocurrency has on the economy. The objectives of this research are as follows:
- To illustrate the concept of cryptocurrency.
- To highlight the impacts of cryptocurrency on the economy.
- To exemplify the recommended solutions for the challenges due to cryptocurrency.
The arrival of cryptocurrency into the global market is only a decade old, and it has created quite a stir in the worldwide economy. The fact that governments and financial institutions play no role in the financial transactions carried out by investors in crypto assets has created major issues for these entities (Narayanan, et al., 2016).
This subject is appealing in the sense that cryptocurrency gives its owners the power to carry out transactions without involving any protocols of governments or financial institutions. The rise in prices of crypto assets such as Bitcoin and the increasing number of people investing in cryptocurrency have also impacted the economy.
Therefore, through this work I would like to highlight the impacts cryptocurrency has on the economy and the challenges it presents. In addition to the intriguing nature of this research, little has been elaborated in the academic literature on the impacts it has on the global economy. Hence, this research proposal reflects on the impacts of cryptocurrency on the economy.
According to Narayanan, et al. (2016), cryptocurrencies are digital assets developed with the use of blockchain technology. As the name suggests, this form of digital currency applies cryptography to secure the transaction process.
More than a thousand cryptocurrencies are present across the world. This digital form of currency is often viewed as the lynchpin of a future economy. The fundamentals of cryptocurrency are based on the notion of decentralisation, which is exactly the opposite of the centralised systems that the currencies of different economies have followed over the ages.
Cryptocurrency opposes the idea of a centralised entity such as a government or bank controlling the monetary system. Bitcoin, the first cryptocurrency, was developed to offer an alternative to the centralised monetary system, and it does not involve the need for intermediaries, for example payment processors and banks.
Bypassing the centralised system has allowed cryptocurrency users to have immutable and transparent transactions, lower transfer fees, and secure and faster confirmation times. Although decentralisation is prioritised in cryptocurrency, many cryptocurrencies have evolved in the last few years based on a centralised system (DeVries, 2016).
Centralised cryptocurrencies focus on either token ownership or mining centralisation. The best example of a centralised cryptocurrency is Ripple, where the founding company owns almost 60% of the pre-mined currency supply. Mining centralisation, on the other hand, has two key elements: centralised nodes and hashing power centralisation.
Centralised nodes refers to the degree to which nodes are controlled by the entity itself; the best example is NEO, which has 7 validation nodes. Hashing power centralisation concerns the nodes' computational power, and occurs when most of the computational power is restricted to a single entity.
Whether in a centralised or a decentralised form, cryptocurrency has garnered massive attention from consumers across the globe. Since the inception of the first cryptocurrency, Bitcoin, in 2008, crypto assets have in the current year reached a total market capitalisation of $823 billion USD, thus influencing economies worldwide.
The major impacts of cryptocurrency on the economy can be seen in different ways: segregating transactions from the USD, removing the need for middlemen, eradicating entry barriers, complicating regulations, allowing international transactions and the development of ETFs.
As Elliott (2017) notes, a decentralised cryptocurrency such as Bitcoin does not need an intermediary: it discards the involvement of centralised institutions such as banks and government entities and instead gives the power of verifying transactions to the users of the currency. This eradication of service requirements from financial institutions has made them worried.
Although this gives more power and privacy to the owners of crypto assets, it has raised some serious concerns, because no protocols are followed during these financial activities, and this has serious implications for international-level transactions (Elliott, 2017).
For ages, the U.S. dollar has been considered the currency of the global economy and used as a benchmark for mainstream financial activities across the globe. But with the adoption of cryptocurrency as a global currency, the worldwide status of the U.S. dollar would be in jeopardy. Different countries have already introduced cryptocurrencies, the prime example being Venezuela. ICOs are another feature of cryptocurrency with an impact on different economies, as stated by Zetzsche, et al. (2017).
Crypto assets have allowed entrepreneurs to circumvent the traditional approach of seeking loans from banks or venture capitalists. With the help of an ICO, users are able to sell a portion of their crypto assets in return for funding for their business. This has not gone down well with banks in many countries; the People's Bank of China, for example, has banned them completely. With the growing use of cryptocurrencies, financial institutions are taking a massive hit, and this is likely to impact the economies of countries: to recover from such losses, banks might increase charges on the general public.
Due to its anonymous nature, cryptocurrency does not adhere to financial regulations, and this has presented serious concerns for governments. For example, many investors bought illegal items from Silk Road with the use of cryptocurrency until the organisation was shut down by the FBI (BBC, 2013). This kind of anonymity often leads to scams, and people end up losing their investment.
Many consumers even invest in crypto assets so that they can circumvent paying taxes, which has prompted different governments to take serious action on cryptocurrency. According to DeVries (2016), a major impact on the economy can be seen in the rise of international transactions with the help of crypto assets.
The fees charged by banks for international transactions would no longer be required with cryptocurrency, highlighting the losses banks could incur in the coming days. Money laundering would also become easier with cryptocurrency: as the transactions are not regulated, a large chunk of cash can be transferred from one country to another without even being noticed.
Based on the article by Raymaekers (2015), challenges such as a highly volatile nature, price manipulation, pumping and dumping of ICO schemes, and attention from cybercriminals have been linked with cryptocurrency. The highly volatile nature of crypto assets can be explained by the sudden rise of asset prices followed by a fall within a short time interval.
"Whales" with large holdings of cryptocurrency assets have the ability to manipulate the market. Zetzsche, et al. (2017) suggested that the pumping and dumping of ICO schemes is another challenge that has affected the market: tokens can be introduced to the market through ICOs, and entrepreneurs use this bargaining chip to drive the price up and attract attention from investors.
Once they are able to increase the price, these entrepreneurs tend to cash out, leaving the other investors with coins of no value. As stated by Eyal (2017), due to the amount of money involved in crypto assets and the shortage of regulations to monitor them, cryptocurrency has gained attention from cybercriminals, and there have been incidents of many heists targeting cryptocurrency trading.
The unexpected exponential surge in the prices of cryptocurrencies such as Bitcoin has also raised eyebrows. Therefore, to address all these challenges, an ETF (exchange-traded fund) is prioritised, which would control the pricing of Bitcoin in the coming days and mitigate the vulnerability and volatile nature of cryptocurrency, as suggested by Mikhaylov, et al. (2019). Stringent protocols need to be developed for both decentralised and centralised cryptocurrencies so that price manipulation can be mitigated.
As the research aim is to highlight the impact of cryptocurrency on the economy, the first step towards achieving it is to address the research objectives through appropriate research methods and techniques.
The current subject of discussion provides a deeper understanding of the concept of cryptocurrency, its impacts on the economy, and recommended solutions to mitigate the challenges presented by it. The research purpose of the study is highly important for the development of the research work, as without clearly defining the research purpose it would be hard to address the stated objectives of this proposal (Clarke, Tamaschke, & Liesch, 2013).
To clearly define the research purpose, the reason behind the research work needs to be understood. As cryptocurrency has gained huge success in the last decade, the impact of cryptocurrency on the economy needs to be evaluated. In order to address the objectives, the research philosophy needs to be outlined.
A research philosophy ensures that researchers have the knowledge and the idea of how they should collect information and then analyse and use it. The positivism research philosophy specifies the use of factual information, allowing the researcher to limit their focus to the collection and interpretation of that information; it will therefore be utilised in this proposal (Saunders, 2011).
Positivism enables researchers to be independent while performing the research, which means less interaction with participants of the study, as the research is based on facts. Factual information from authentic and reliable published journal articles, books, online news articles and magazines will form the content of this study.
Similarly, a research approach needs to be selected for this proposal. A research approach can be defined as the plan spanning broad assumptions down to the detailed methods of data gathering and analysis (Mackey & Gass, 2015).
A deductive approach will be used for this proposal. A deductive approach normally takes a general subject of discussion and guides it into being more specific. In this proposal, the impact of cryptocurrency on the economy is quite a general subject, and with a deductive approach the study can be narrowed down to be more specific.
Research design has a very broad meaning: some researchers consider it the choice between quantitative and qualitative methods of research, whereas others see it as the choice of methods for information gathering and analysis (Flick, 2015).
The research design also signifies the importance of methods and strategies with respect to the gathering and analysis of information. The main characteristics of a research design are to provide neutrality, validity, reliability and generalisation. A descriptive research design will be chosen for this proposal. With a descriptive research design, researchers have the latitude to describe the research subject based on theories and content collected from different data sources.
As the research subject is quite general, the scientific method of descriptive design enables the researcher to observe and explain the behaviour of the subject without influencing it in any form. It is often used where there is no requirement for quantifiable measurements and the subject is more concentrated on qualitative methods, such as the use of secondary sources, rather than quantitative research.
As an alternative to quantitative experiments, which are relatively costly and time consuming, this research design enables the study to be finished on time. The validity of the research reflects its soundness: as this research proposal tries to meet the research objectives developed in the earlier section, its validity is justified.
On the other hand, the reliability of the research depends on its contents, which have been developed from valid and authentic sources of secondary information, hence also justifying the reliability of the research.
Research methods highlight the strategies and activities considered for this study so that the research work can be finished within a predetermined time frame. The research strategy comprises the consideration of the research philosophy, design and approach for this particular work (Olsen, 2011).
As the research subject is general at its core, a qualitative analysis is preferred over a quantitative analysis. Data collection methods often involve the application of primary and secondary information, but in this research work only secondary information will be used. The qualitative analysis will be carried out by means of a thematic analysis based on the literature review developed from secondary sources.
Secondary information is gathered from journal articles, books, peer reviews, online news articles and magazines (Blair & Blair, 2014). Thematic analysis is often described as the simplest way of performing qualitative research analysis. It involves the generation of elements such as themes, codes and topics from the qualitative content.
Thematic analysis guides researchers to step away from broad observation of the data towards exploring more meaningful patterns in the study so that it can be aligned with the research objectives. It allows researchers to distil the information gathered and find broader patterns in it.
In this research proposal, the thematic analysis will be developed from the secondary content of the literature review. The elements for the thematic analysis will be drawn from the objectives addressed in the literature review.
Ethical issues to be considered while formulating this research work are the use of reliable and authentic secondary sources and the avoidance of any unscrupulous form of secondary source in the formulation of the literature review (Wiles, 2012).
As the literature review will be based on content gathered from different secondary sources such as books, journal articles, online news articles and magazines, the focus needs to be on the authenticity and reliability of the information gathered from these sources. As the research concerns cryptocurrency and its impact on the economy, priority should be given to gathering viable information on these subjects.
Information from personal blogs should be discarded, because the content gathered here will be used in framing the thematic analysis from the literature review. Unscrupulous forms of secondary sources need to be avoided while framing the literature review, as they would only diminish its quality and content.
Despite taking all precautionary measures while conducting the research work and formulating the methods and techniques, certain gaps might still be visible. There are various limitations to this research work, the first being that primary data is not considered (Shipman, 2014).
Given the essence of this study, there is no scope for primary research, because the selected study covers quite a broad aspect. Primary research would have provided ground-level insight into how cryptocurrency is influencing the global market and the operations of government and financial bodies, and thus the economies.
Had the study been narrowed down to a particular entity, there would have been more scope for primary research. The second limitation is the limited involvement of quantitative approaches in the research work.
The absence of a questionnaire survey and its analysis points to the limited use of quantitative research, except that numbers are used in the literature review section to show the impact cryptocurrency has on the global market and economy.
The third limitation is that some researchers do not consider a descriptive design a viable method for meeting a study's objectives. As a descriptive design is used in this research work because of the nature of the study, this choice is somewhat at odds with the methods and techniques preferred by those researchers.
A mixed-methods strategy, involving the utilisation of both secondary and primary sources, is often regarded as the ideal way of approaching a research study; it has not been followed here, and this might present itself as a limitation of this study.
This research proposal has outlined the impact of cryptocurrency on the economy. Cryptocurrency started from the notion of decentralisation, which is exactly the opposite of the centralised systems that the currencies of different economies have followed over the ages.
Decentralisation is prioritised in cryptocurrency, but many cryptocurrencies have evolved in the last few years based on a centralised system. The impacts of cryptocurrency on the economy have been highlighted as segregating transactions from the USD, removing the need for middlemen, eradicating entry barriers, complicating regulations, allowing international transactions and the development of ETFs.
The challenges associated with cryptocurrency are its highly volatile nature, price manipulation, pumping and dumping of ICO schemes, and attention from cybercriminals. The adoption of an ETF can be seen as an action that could mitigate these challenges.
A positivism philosophy, deductive approach and descriptive design will be preferred for this research work. No quantitative analysis will be performed; qualitative research through thematic analysis will be used in this study.
This involves the generation of elements such as themes, codes and topics from the qualitative content of the literature review. The use of reliable and authentic secondary sources, and the avoidance of any unscrupulous form of secondary source, are the ethical considerations.
BBC, 2013. FBI shuts down Silk Road website. [Online]
Available at: https://www.bbc.com/news/av/technology-24378137/fbi-shuts-down-silk-road-website
[Accessed 24 June 2019].
Blair, E. & Blair, J., 2014. Applied Survey Sampling. 1st ed. London: SAGE Publications.
Clarke, J., Tamaschke, R. & Liesch, P., 2013. International experience in international business research: A conceptualization and exploration of key themes. International Journal of Management Reviews, 15(3), pp. 265-279.
DeVries, P., 2016. An Analysis of Cryptocurrency, Bitcoin, and the Future. International Journal of Business Management and Commerce, 1(2), pp. 1-9.
Elliott, A., 2017. Collection of Cryptocurrency Customer-Information: Tax Enforcement Mechanism or Invasion of Privacy. Duke L. & Tech. Rev., 16(1), p. 1.
Eyal, I., 2017. Blockchain technology: Transforming libertarian cryptocurrency dreams to finance and banking realities. Computer, 50(9), pp. 38-49.
Flick, U., 2015. Introducing research methodology: A beginner’s guide to doing a research project. 1st ed. London: Sage.
Forbes, 2019. Bitcoin And Cryptocurrency Investment A ‘Once-In-A-Generation Opportunity. [Online]
Available at: https://www.forbes.com/sites/billybambrough/2019/05/29/bitcoin-and-cryptocurrency-investment-a-once-in-a-generation-opportunity/#4620c4052691
[Accessed 24 June 2019].
Karlsson, C., Andersson, M. & Norman, T., 2015. Handbook of Research Methods and Applications in Economic Geography. 1st ed. London: Edward Elgar Publishing.
Mackey, A. & Gass, S., 2015. Second language research: Methodology and design.. 1st ed. London: Routledge.
Mikhaylov, A., Sokolinskaya, N. & Lopatin, E., 2019. Asset allocation in equity, fixed-income and cryptocurrency on the base of individual risk sentiment. Investment Management & Financial Innovations, 16(2), p. 171.
Narayanan, A. et al., 2016. Bitcoin and cryptocurrency technologies: A comprehensive introduction. 1st ed. Princeton: Princeton University Press.
Olsen, W., 2011. Data Collection: Key Debates and Methods in Social Research. 1st ed. London: SAGE.
Raymaekers, W., 2015. Cryptocurrency Bitcoin: Disruption, challenges and opportunities. Journal of Payments Strategy & Systems, 9(1), pp. 30-46.
Saunders, M., 2011. Research methods for business students. 1st ed. London: Pearson Education.
Shipman, M., 2014. The limitations of social research. 1st ed. NY: Routledge.
Telegraph, 2018. A decade of cryptocurrency: from bitcoin to mining chips. [Online]
Available at: https://www.telegraph.co.uk/technology/digital-money/the-history-of-cryptocurrency/
[Accessed 24 June 2019].
Wiles, R., 2012. What are Qualitative Research Ethics?. 1st ed. New York: A&C Black.
Zetzsche, D., Buckley, R., Arner, D. & Föhr, L., 2017. The ICO Gold Rush: It’s a scam, it’s a bubble, it’s a super challenge for regulators. University of Luxembourg Law Working Paper, 1(11), pp. 17-83.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9571980237960815,
"language": "en",
"url": "https://www.eanvt.org/news/vtdigger-curbing-emissions-could-save-vermonters-800-million-report-says/",
"token_count": 1239,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ea8fc8a8-c906-46d1-bcfa-852ddbdb8ee2>"
}
|
Taking action to curb carbon emissions could save Vermonters almost $800 million over the next 15 years, according to a new report from the Energy Action Network.
But doing so would require significant changes in consumer behavior, like halting the purchase of new gas or diesel vehicles.
The analysis came in an annual report released Wednesday tracking Vermont’s progress in meeting emissions reductions and renewable energy goals. Neale Lunderville, former Burlington Electric Department general manager, referred to the report put out by the network of nonprofits, businesses and state agencies as the “Dow Jones of our fight against climate change.”
“This transition is not a sacrifice, it is an opportunity with great economic benefit for both individual Vermonters and the state economy,” said Jared Duval, director of the Energy Action Network.
The state needs “comprehensive” policies and regulations to reduce emissions in transportation and heating, which account for around 70% of emissions, says the report. Multiple speakers stressed that the transition off fossil fuels needs to be “equitable” for all Vermonters, which the report says will require incentives and low-interest financing for low- and moderate-income Vermonters.
While Vermont’s greenhouse gas pollution went down slightly in 2016 — the latest year for which state data are available — the Green Mountain State is lagging behind the rest of the Northeast and Quebec, according to the report.
Gov. Phil Scott was one of 24 governors to recommit to the Paris Agreement after Trump pulled out, meaning Vermont is supposed to reduce emissions by 25% below 2005 levels.
But Vermont now has the highest per capita emissions in New England and New York, and has made the least progress toward the Paris Agreement goals. Many of the reductions other states have seen in their heating sectors are due to the expansion of natural gas infrastructure, which is not favored by climate activists because of methane leaks during the fracking process.
The Agency of Commerce and Community Development analyzed the economic impacts of a series of proposed measures for meeting the Paris Agreement for the report, like adding more than 86,000 electric vehicles on the road and over 78,000 heat pump water heaters. The agency found that those steps would decrease out-of-state spending by more than $1 billion and invest $323 million in Vermont’s economy over the next 15 years.
Duval said that the conversation around energy has been “backwards for too long,” with people asking how much it would cost to switch off fossil fuels rather than asking what the economic benefits and overall savings could be in doing that.
“The majority of Vermonters will save money by weatherizing their homes, by buying an EV (electric vehicle) instead of a gas-powered vehicle,” said Lunderville. “And by making smart choices on how to heat the water and heat their homes … Vermonters don’t have to choose between getting real economic benefits and reducing their reliance on fossil fuels.”
While Duval stressed that EAN does not endorse specific policies, all the case studies highlighted in the report included an emissions cap or a renewable technology standard beyond what is currently in place in Vermont. For example, Norway put in place a requirement that all new vehicles sold must be “zero emissions” starting in 2025. The Scandinavian country put in place measures like electric vehicle incentives for lower-income people and a combined sales tax exemption for EVs with progressively higher taxes for more polluting vehicles — called a “feebate.” As of 2017, Norwegians purchased more electric vehicles than gas and diesel cars.
When asked whether the Legislature has the appetite for such measures, Senate President Pro Tem Tim Ashe said that there was an “ongoing discussion” in Senate Transportation about a feebate program following a report on the matter from the state Agency of Transportation.
“I would say that the conversations are in their infancy, but it’s an area that I think we all have to grapple with that can help drive us to more efficient vehicles,” he said.
Climate policy has been center stage this session, with the House passing the Global Warming Solutions Act, which would turn the state’s greenhouse gas emissions reductions goals into legally enforceable mandates. The House has also passed a series of updates to Act 250 that include requirements for climate change adaptability in new developments.
The Vermont Senate has been working on proposals this session to increase renewable energy requirements for electrical utilities and to put more efficiency dollars toward transportation and heating efficiency.
The Senate Transportation Committee has been grappling with how to move ahead with a regional cap and trade emissions reduction effort known as the Transportation and Climate Initiative (TCI).
The goal of the program, which is modelled after a similar effort in the electric sector, is to cap greenhouse gas emissions from transportation. The final TCI agreement is not expected to come out until after the session adjourns. Signing on to TCI could provide Vermont with an estimated $18 million-$66 million in the first year of the program for emissions reductions efforts.
The Senate will consider an amendment to authorize the governor to sign onto TCI if Massachusetts and New York join, but the timing is unclear as the Legislature recesses amid coronavirus containment concerns. Ashe said that his committee has heard that if most of the region is in the TCI, then related gas price increases would be felt in Vermont whether or not it participates.
While Duval said that it would take “carrots and sticks,” meaning incentives and regulations, to meet the goals of the Paris Agreement, Gov. Phil Scott has expressed reservations about emissions reductions mandates.
On Wednesday, Peter Walke, commissioner of the Department of Environmental Conservation and Scott’s lead on climate matters said that “the governor is pretty clear about his preference for the carrots.”
DISCLOSURE: Neale Lunderville is on the board of the Vermont Journalism Trust, the parent organization of VTDigger.org.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9303603768348694,
"language": "en",
"url": "https://www.iasabhiyan.com/tribunal-appellate-tribunal-and-other-authorities-rules-2020/",
"token_count": 220,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.06396484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:60dbacf4-c084-4d65-845a-c517c8158cbf>"
}
|
- Recently, the Union Ministry of Finance framed a new set of rules called the Tribunal, Appellate Tribunal, and other Authorities (Qualifications, Experience and other Conditions of Service of Members) Rules, 2020 that prescribe uniform norms for the appointment and service conditions of members to various tribunals.
- The new rules have been framed by the government as the previous Rules of 2017 were struck down by the Constitution Bench of the Supreme Court in November 2019 in the case Rojer Mathew vs South Indian Bank.
Back to Basics:
- A quasi-judicial institution that is set up to deal with problems such as resolving administrative or tax-related disputes.
- Performs a number of functions like adjudicating disputes, determining rights between contesting parties, making an administrative decision, reviewing an existing administrative decision and so forth.
- Not part of the original Constitution.
- Incorporated in the Indian Constitution by the 42nd Amendment Act, 1976.
- Article 323-A deals with Administrative Tribunals.
- Article 323-B deals with tribunals for other matters.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9223937392234802,
"language": "en",
"url": "https://www.pcca.com/publications/cotton-market-weekly/cotton-market-weekly-glossary-of-terms/",
"token_count": 866,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0162353515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ccb36541-f89a-417d-866d-e414b98c5184>"
}
|
Definitions courtesy of the National Futures Association.
Basis – The difference between the current cash price of a commodity and the futures price of the same commodity.
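As an illustrative (hypothetical) example: if December cotton futures trade at 80.00 cents per pound and the local cash price is 76.00 cents, the basis is 76.00 - 80.00 = -4.00 cents, commonly quoted as “4.00 under December.” Sign conventions can vary; some markets quote basis as futures minus cash.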
Bear Market (Bear/Bearish) – A market in which prices are declining. A market participant who believes prices will move lower is called a “bear.” A news item is considered bearish if it is expected to result in lower prices.
Bull Market (Bull/Bullish) – A market in which prices are rising. A market participant who believes prices will move higher is called a “bull.” A news item is considered bullish if it is expected to result in higher prices.
Cash Commodity – The actual physical commodity as distinguished from the futures contract based on the physical commodity.
Cash Market – A place where people buy and sell the actual commodities (i.e., grain elevator, bank, etc.).
Cash Settlement – A method of settling certain futures or options contracts whereby the market participants settle in cash (payment of money rather than delivery of the commodity).
Commodity Futures Trading Commission (CFTC) – The federal regulatory agency established in 1974 that administers the Commodity Exchange Act. The CFTC monitors the futures and options on futures markets in the United States.
Contract Month – The month in which delivery is to be made in accordance with the terms of the futures contract.
Delivery – The transfer of the cash commodity from the seller of a futures contract to the buyer of a futures contract. Each futures exchange has specific procedures for delivery of a cash commodity. Some futures contracts, such as stock index contracts, are cash settled.
First Notice Day – The first day on which notice of intent to deliver a commodity in fulfillment of an expiring futures contract can be given to the clearinghouse by a seller and assigned by the clearinghouse to a buyer.
Last Trading Day – The last day on which trading may occur in a given futures or option.
Limit – See position limit, price limit, variable limit.
Long – One who has bought futures contracts or options on futures contracts or owns a cash commodity.
Low – The lowest price of the day of a particular futures or options on futures market.
Managed Account – Also referred to as a discretionary account. An arrangement by which the owner of the account gives written power of attorney to someone else, usually the broker or a Commodity Trading Advisor, to buy and sell without prior approval of the account owner.
Margin – An amount of money deposited by both buyers and sellers of futures contracts and by sellers of options contracts to ensure performance of the terms of the contract (the making or taking delivery of the commodity or the cancellation of the position by a subsequent offsetting trade). Margin in commodities is not down payment, as in securities, but rather a performance bond.
Nearby Delivery Month – The futures contract month closest to expiration.
Open Interest – The total number of futures or options contracts of a given commodity that have not yet been offset by an opposite futures or option transaction nor fulfilled by delivery of the commodity or options exercise. Each open transaction has a buyer and a seller, but for calculations of open interest, only one side of the contract is counted.
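A hypothetical illustration of the one-side counting rule: if Trader A buys 5 contracts and Trader B sells those same 5 contracts, and both are opening new positions, open interest rises by 5, not 10. If A later sells those 5 contracts back to B, and both are closing positions, open interest falls by 5.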
Position – A commitment, either long or short, in the market.
Position Limit – The maximum number of speculative futures contracts one can hold as determined by the CFTC and/or the exchange where the contract is traded.
Price Limit – The maximum advance or decline, from the previous day’s settlement price, permitted for a futures contract in one trading session.
Range – The difference between the high and low price of a commodity during a given trading session, week, month, year, etc.
Short – One who has sold futures contracts or plans to purchase a cash commodity.
Variable Limit – A price system that allows for larger than normal allowable price movements under certain conditions. In periods of extreme volatility, some exchanges permit trading at price levels that exceed regular daily price limits.
Volatility – A measurement of the change in price over a given time period.
Volume – The number of purchases and sales of futures contracts made during a specified period of time, often the total transactions for one trading day.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9512297511100769,
"language": "en",
"url": "https://ec.europa.eu/clima/policies/transport/aviation_es",
"token_count": 11065,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.005950927734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:842d6fcc-c9a8-4877-ab2e-fdb80546fee2>"
}
|
Aviation is one of the fastest-growing sources of greenhouse gas emissions. The EU is taking action to reduce aviation emissions in Europe and working with the international community to develop measures with global reach.
The revision of the EU ETS Directive concerning aviation will serve to implement the Carbon Offsetting and Reduction Scheme for International Aviation (CORSIA) by the EU in a way that is consistent with the EU’s 2030 climate objectives. The initiative will also propose to increase the number of allowances being auctioned under the system as far as aircraft operators are concerned.
The proposal, planned for the second quarter of 2021, will be part of the broader European Green Deal.
The Inception Impact Assessment (Roadmap) on the legislative initiative was open for feedback until 28 August 2020.
The open public consultation on the legislative initiative is open until 14 January 2021.
In the EU in 2017, direct emissions from aviation accounted for 3.8% of total CO2 emissions. The aviation sector creates 13.9% of the emissions from transport, making it the second biggest source of transport GHG emissions after road transport.
Before the COVID-19 crisis, the International Civil Aviation Organization (ICAO) forecasted that by 2050 international aviation emissions could triple compared with 2015.
Aviation also has an impact on the climate through the release of nitrogen oxides, water vapour, and sulphate and soot particles at high altitudes, which could have a significant climate effect. A November 2020 study conducted by the European Aviation Safety Agency (EASA) looks into the non-CO2 effects of aviation on climate change, and fulfils the requirement of the EU Emissions Trading System Directive (Art. 30.4). Overall, the significance of combined non-CO2 climate impacts from aviation activities, previously estimated to be at least as important as those of CO2 alone, is now fully confirmed by the report.
To achieve climate neutrality, the European Green Deal sets out the need to reduce transport emissions by 90% by 2050 (compared to 1990 levels). The aviation sector will have to contribute to the reduction.
CO2 emissions from aviation have been included in the EU emissions trading system (EU ETS) since 2012. Under the EU ETS, all airlines operating in Europe, European and non-European alike, are required to monitor, report and verify their emissions, and to surrender allowances against those emissions. They receive tradeable allowances covering a certain level of emissions from their flights per year.
The system has so far contributed to reducing the carbon footprint of the aviation sector by more than 17 million tonnes per year, with compliance covering over 99.5% of emissions.
In addition to market-based measures like the ETS, operational measures – such as modernising and improving air traffic management technologies, procedures and systems – also contribute to reducing aviation emissions.
The legislation, adopted in 2008, was designed to apply to emissions from flights from, to and within the European Economic Area (EEA) – the EU Member States, plus Iceland, Liechtenstein and Norway. The European Court of Justice has confirmed that this approach is compatible with international law.
The EU, however, decided to limit the scope of the EU ETS to flights within the EEA until 2016 to support the development of a global measure by the International Civil Aviation Organization (ICAO).
In light of the adoption of a Resolution by the 2016 ICAO Assembly on the global measure (see below), the EU has decided to maintain the geographic scope of the EU ETS limited to intra-EEA flights from 2017 onwards. The EU ETS for aviation will be subject to a new review in the light of the international developments related to the operationalisation of CORSIA. The next review should consider how to implement the global measure in Union law through a revision of the EU ETS legislation. In the absence of a new amendment, the EU ETS would revert back to its original full scope from 2024.
In 2016, the European Commission held a public consultation on market-based measures to reduce the climate change impact from international aviation. The consultation sought input on both global and EU policy options.
In total, 85 citizens and organisations responded.
In October 2016, the International Civil Aviation Organization (ICAO) agreed on a Resolution for a global market-based measure to address CO2 emissions from international aviation as of 2021. The agreed Resolution sets out the objective and key design elements of the global scheme, as well as a roadmap for the completion of the work on implementing modalities.
The Carbon Offsetting and Reduction Scheme for International Aviation, or CORSIA, aims to stabilise CO2 emissions at 2020 levels by requiring airlines to offset the growth of their emissions after 2020.
Airlines will be required to monitor and report their emissions on international routes, and to offset the growth in their emissions above 2020 levels by purchasing eligible emission units generated by projects that reduce emissions in other sectors.
During the period 2021-2035, and based on expected participation, the scheme is estimated to offset around 80% of the emissions above 2020 levels. This is because participation in the first phases is voluntary for states, and there are exemptions for those with low aviation activity. All EU countries will join the scheme from the start.
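As a rough illustration of how offsetting obligations scale under this kind of scheme (a minimal sketch only: CORSIA's early phases apply a sector-wide growth factor, but the simplified single-factor formula and all figures below are assumptions for illustration, not official ICAO parameters or data):

# Simplified CORSIA-style offsetting: in the early phases an operator's
# obligation is driven by sector-wide emissions growth above the baseline.
def offsetting_requirement(operator_emissions, sector_emissions, sector_baseline):
    growth_factor = max(0.0, (sector_emissions - sector_baseline) / sector_emissions)
    return operator_emissions * growth_factor

# Illustrative numbers only (tonnes of CO2):
print(round(offsetting_requirement(1_000_000, 600_000_000, 540_000_000)))
# -> 100000: the operator offsets 100,000 tonnes, matching the 10%
#    share of sector emissions that sits above the baseline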
A regular review of the scheme is required under the terms of the agreement. This should allow for continuous improvement, including in how the scheme contributes to the goals of the Paris Agreement.
Work is ongoing at ICAO to develop the necessary implementation rules and tools to make the scheme operational. Effective and concrete implementation and operationalisation of CORSIA will ultimately depend on national measures to be developed and enforced at domestic level.
Historic aviation emissions are the basis for calculating the cap on aviation emissions applied when the sector is included in the EU ETS from January 2012. Today's decision by the European Commission publishes the mean of the annual emissions for the years 2004, 2005 and 2006 of all flights that would be covered by the EU ETS performed by air carriers to and from European airports. Based on this average of annual historical aviation emissions for the period 2004-2006, the number of aviation allowances to be created in 2012 amounts to 212,892,052 tonnes (97% of historic aviation emissions), and the number of aviation allowances to be created each year from 2013 onwards amounts to 208,502,525 tonnes (95% of historic aviation emissions).
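The two published allowance figures are mutually consistent with a single 2004-2006 average, which can be checked directly. A minimal sketch (the back-calculated average below is derived from the figures in this decision, not a separately published number):

# Back out the 2004-2006 average from the published caps.
cap_2012 = 212_892_052       # tonnes, 97% of historic aviation emissions
cap_from_2013 = 208_502_525  # tonnes per year, 95% of historic emissions

print(round(cap_2012 / 0.97))       # ~219,476,342 tonnes
print(round(cap_from_2013 / 0.95))  # ~219,476,342 tonnes, the same average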
The Commission has been assisted by Eurocontrol – the European organisation for the safety of air navigation. The comprehensive air traffic data contained in Eurocontrol's databases from the Central Route Charges Office (CRCO) and the Central Flow Management Unit (CFMU) were considered the best available data for calculation of the historic emissions. These provide among other things a calculation of the actual route length for each individual flight. Emissions were then calculated on a flight-by-flight basis using the ANCAT 3 (Abatement of Nuisances Caused by Air Transport) methodology and the CASE (Calculation of Emissions by Selective Equivalence) methodology.
In addition to Eurocontrol's data, the Commission also used information on actual fuel consumption from almost 30 aircraft operators of different types and sizes. This data was for aircraft types that were responsible for 93% of emissions in the base years.
Thirdly, additional calculations were carried out to account for fuel consumption associated with the use of the auxiliary power units (APUs). APUs are small engines that are used to provide lighting and air conditioning when the aircraft is stationary at airports. They are used when the aircraft is not connected to ground source electrical power and ventilation services. The approach taken was first to determine the average APU fuel consumption for different aircraft types. The individual emission factors of APU fuel consumption were then extrapolated to calculate total APU emissions applying a process which took into account the actual share of fuel burn for the flights under the EU ETS of each aircraft type and the use of ground power in airports. The emissions corresponding to the resulting total APU fuel consumption were included in the historical aviation emissions for each of the years 2004, 2005 and 2006.
The 2004-06 baseline period is defined in the legislation on the inclusion of aviation in the EU ETS. The baseline period for aviation allocation under the EU ETS is different from the 1990 baseline for the EU's overall reduction commitment as it takes into account the significant growth in aviation over the last 15 years.
This decision has been adopted later than originally foreseen in order to spend more time collating data on the historic emissions. Additional studies were done to increase the accuracy of the estimations of historic aviation emissions, in particular in relation to the fuel used by auxiliary power units (APU). With support from Eurocontrol and contributions from the aviation sector, a methodology to assess APU use was developed and the fuel consumption by APUs was estimated. This figure was then added to the flight-based CO2 emissions.
The subsequent steps foreseen in the implementation of the Directive are to determine free allocations to aircraft operators and the volume of allowances to be auctioned.
82% of the allowances will be given for free to aircraft operators and 15% of the CO2 allowances are allocated by auctioning. The remaining 3% will be allocated to a special reserve for later distribution to fast growing airlines and new entrants into the market.
The free allowances will be allocated by a benchmarking process which measures the activity of each operator in 2010 in terms of the number of passengers and freight that they carry and the total distance travelled. The benchmark should be published by 30 September 2011.
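To make the allocation mechanics concrete, here is a minimal sketch of the benchmark logic: the free-allocation pool is divided by the total reported tonne-kilometre activity, and each operator receives allowances in proportion to its own activity. The activity figures below are hypothetical; the real benchmark is the one the Commission publishes.

# Free allocation via a tonne-kilometre benchmark (hypothetical figures).
annual_cap = 208_502_525           # allowances per year from 2013
free_pool = 0.82 * annual_cap      # 82% allocated free of charge

total_tonne_km = 3.0e11            # hypothetical total 2010 activity
benchmark = free_pool / total_tonne_km   # allowances per tonne-kilometre

operator_tonne_km = 5.0e8          # hypothetical operator activity in 2010
print(round(benchmark * operator_tonne_km))  # operator's annual free allowances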
Member states have agreed that all revenues from auctioning should be used to tackle climate change including in the transport sector.
The disruption caused by the Icelandic volcano eruption in 2010 will have no effect whatsoever on the total size of the emissions cap for aviation under the EU ETS or the total number of allowances that will be allocated free of charge to aircraft operators.
We have not seen data to suggest that the ash cloud will have a material impact on the distribution of free allowances between aircraft operators. Redistribution might occur if certain airlines had to cancel a greater proportion of flights than others, but the vast majority of operators were impacted by the flight restrictions resulting from the volcanic ash cloud. Indeed, all the estimations that we have seen confirm that distributional impacts are very small.
For the regulator to change or adapt the 2010 benchmarking year for the allocation of free allowances to aircraft operators, it would require a change in primary EU legislation. Adopting such legislation usually takes 2 years and there are no plans to start this process.
The EU ETS will cover any aircraft operator, whether EU- or foreign-based, operating international flights on routes to, from or between EU airports. All airlines will thus be treated equally. Very light aircraft will not be covered. Military, police, customs and rescue flights, flights on state and government business, and training or testing flights will also be exempted.
To reduce administrative costs, each operator will be administered by a single Member State regarding emissions from the total of its flights to, from and within the EU.
The list of aircraft operators that may be covered by the system includes over 4000 operators. The list has been created with the support of Eurocontrol and was based on actual flight information; it was last updated in February 2011 to take account of all changes that happened in 2010.
The EU is the strongest advocate for global action to reduce climate impacts of aviation. States have not been able to agree on a common global system through either the United Nations Framework Convention on Climate Change (UNFCCC) or the International Civil Aviation Organisation (ICAO). In the Resolution on climate change adopted at its most recent Assembly in October 2010, states in ICAO called for further work to explore the feasibility of a global market-based measure. The Resolution also recognized that states may take action prior to 2020. The EU ETS provides a good model for applying market-based measures to aviation. The development of other national programmes covering international aviation, compatible with the EU ETS, is a pragmatic way in which global action can be implemented.
While a number of airlines support action by the EU to address the climate change impacts from aviation, a challenge to the EU Directive has been launched by a number of US airlines. This has been referred to the European Court of Justice, and the European Commission, European Parliament, Council and a number of Member States have submitted observations, in addition to other organisations intervening in the case. The airlines involved are complying with the Directive's requirements in full pending the resolution of this challenge.
The environmental impact of including aviation in the EU ETS will be significant because aviation emissions, which are currently growing rapidly, will be capped at below their average level in 2004-2006. By 2020 it is estimated that a total of 183 million tonnes of CO2 will be saved per year on the flights covered, a 46% reduction compared with business as usual. This is equivalent, for instance, to twice Austria's annual greenhouse gas emissions from all sources. Some of these reductions are likely to be made by airlines themselves. However, participation in the EU system will also give them other options: buying additional allowances on the market – i.e. paying other participants to reduce their emissions - or investing in emission-saving projects carried out under the Kyoto Protocol's flexible mechanisms. Providing aviation with these options does not reduce the environmental impact of the proposal since the climate impact of emission reductions is the same regardless of where they are made.
Including aviation in the EU ETS will not directly affect or regulate air transport tickets. However, aircraft operators may have to invest in more efficient planes or buy emission allowances in the market in addition to those allocated to them. The impact on ticket prices will probably be minor. Assuming airlines fully pass on these extra costs to customers, by 2020 the ticket price for a return flight within the EU could rise by between €1.8 and €9. Due to their higher environmental impact, long-haul trips could increase by somewhat more depending on the journey length. For example a return flight to New York at current carbon prices of around €15 might cost an additional €12. However, ticket price increases are in any case expected to be significantly lower than the extra costs airlines have passed on to consumers due to world oil price rises in recent years. Including aviation in the EU ETS will also have a smaller impact on prices than if the same environmental improvement were to be achieved through other measures such as a fuel tax or an emissions charge.
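The New York example can be sanity-checked against the stated carbon price (a back-of-the-envelope inference from the figures above, not an official per-passenger estimate): €12 divided by €15 per tonne implies roughly 0.8 tonnes of CO2 attributed to each passenger for the return trip.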
Direct emissions from aviation account for about 3% of the EU’s total greenhouse gas (GHG) emissions. The large majority of these emissions comes from international flights, i.e. flights between two Member States or between a Member State and a non-EU country. This figure does not include indirect warming effects, such as those from NOx emissions, contrails and cirrus cloud effects. The overall impact is therefore estimated to be higher. The Intergovernmental Panel on Climate Change (IPCC) has estimated that aviation’s total impact is about 2 to 4 times higher than the effect of its past CO2 emissions alone. Recent EU research results indicate that this ratio may be somewhat smaller (around 2 times). None of these estimates take into account the uncertain but potentially very significant effects of cirrus clouds.
EU emissions from international aviation are increasing fast – doubling since 1990 – as air travel becomes cheaper without its environmental costs being addressed. For example, someone flying from London to New York and back generates roughly the same level of emissions as the average person in the EU does by heating their home for a whole year. Emissions are forecast to continue growing for the foreseeable future.
Emissions from aviation are higher than from certain entire sectors covered by the EU ETS, for example refineries and steel production. When aviation joins the EU ETS it is forecast to be the second largest sector in terms of emissions, second only to electricity generation.
Airlines have been monitoring their emissions during 2010, and are required to verify and report these emissions to their administering Member States by 31 March 2011. By that same date, airlines may also apply for free allocations of emissions allowances on the basis of their activities in 2010. Based on information submitted by the Member States, the European Commission will calculate the benchmark that will define how many free allowances aircraft operators will receive. This benchmark decision will be published by 30 September 2011.
By the end of September 2011 the Commission will also publish the emissions cap and the percentages of allowances to be auctioned, given away for free, and allocated to the special reserve.
The definition in Article 3(o) of the EU ETS Directive determines who is an "aircraft operator" for the purposes of the EU ETS. This definition refers to a natural or legal person which operates an aircraft at the time it performs an aviation activity specified in Annex I to the EU ETS Directive (i.e. a flight departure or a flight arrival at an aerodrome in the territory of the EU). If the identity of the operator cannot be ascertained then the aircraft owner is deemed to be the operator unless the owner identifies the relevant operator.
The legal requirements of the EU ETS apply when an aircraft operator first performs an aviation activity in Annex I of the EU ETS Directive which is not covered by any of the exemptions in that Annex. The specific obligations which an operator needs to fulfil are explained in FAQs 3.1 and 3.2 below.
An aircraft operator that does not perform any flight activity in Annex I of the EU ETS Directive for a complete calendar year X is not required to comply with EU ETS requirements for that calendar year. However, verified emissions reports and the surrender of allowances will still be required in year X in respect of any relevant flight activity performed in calendar year X-1.
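The year-by-year logic of this rule can be expressed compactly. The following sketch is illustrative only, and the function and field names are our own:

```python
# Sketch of the rule above: no obligations arise for a year with no
# Annex I activity, but reporting and surrender in year X still cover
# any relevant activity performed in year X-1.

def obligations_in_year(x: int, activity_years: set[int]) -> dict:
    return {
        "monitor_in_year": x in activity_years,
        "report_and_surrender_for_prior_year": (x - 1) in activity_years,
    }

print(obligations_in_year(2012, {2010, 2011}))
# {'monitor_in_year': False, 'report_and_surrender_for_prior_year': True}
```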
Annex XV of the Monitoring Decision states in Part 2 that, for the purpose of identifying the aircraft operator defined by Article 3(o) of the EU ETS Directive, the ICAO designator in box 7 of a flight plan is to be used or, in the absence of such a designator, the aircraft registration marking. There appears to be no uniform system, set of criteria or procedure for the application for and issue of ICAO designator codes, so it is unclear whether all operators will have a designator or whether aircraft operators within the same corporate group will share the same designator or have separate and distinct ICAO designators. Further complications may arise in identifying an aircraft operator due to the various types of aircraft leasing, the use of management companies, or the use of multiple ICAO designators by the same aircraft operator. Where the aircraft operator cannot be identified, the legislation stipulates that the owner will be responsible unless the owner can identify the relevant operator. Naturally, complications will not arise if each operator possesses and uses its own distinct ICAO designator.
The relevant test in the EU ETS Directive for an aircraft operator is simply that there is a legal person responsible for flights arriving or departing from EU aerodromes which are not covered by the exemptions in Annex I of the EU ETS Directive. Individual companies that have been duly incorporated each possess their own distinct legal personality. It follows, therefore, that each company responsible for flights covered by Annex I is a different aircraft operator for the purposes of the EU ETS Directive even if they are in the same corporate group of companies.
In addition, Article 18(a) of the EU ETS Directive identifies an administering Member State, in relation to a particular commercial aircraft operator, by reference to the mandatory operating licence issued to that operator by the Member State concerned. There is a presumption, therefore, that each legal person issued with an operating licence by a Member State should be treated as a distinct and separate aircraft operator.
There is no explicit requirement for an aircraft operator to have a unique identifier. Recital 15 of the Aviation Directive states that an aircraft operator may be identified by the use of an ICAO designator or any other recognised designator used in the identification of a flight and that, if the identity of the operator is not known, the owner of the aircraft should be deemed to be the operator unless proven otherwise. The crucial point for the operation of the EU emissions trading scheme is that the activities of a given aircraft operator can be attributed unequivocally to that operator. As such, and given the absence in Community law of any requirement to be identified by a single and unique identifier, it follows that there is no legal obstacle to an aircraft operator being identified by multiple ICAO designators so long as these are associated with a single aircraft operator. Obviously, it is administratively simpler if an operator uses only a single identifier when filing its flight plans.
Under a wet lease arrangement an aircraft is operated by the lessor for the benefit of the lessee; the lessor essentially remains responsible for the state and maintenance of the aircraft, i.e. the lessor retains effective control of the flight. The presumption, therefore, is that the lessor is the aircraft operator and that the flight plan will contain the ICAO designator of the lessor/owner or the registration marking of the aircraft. However, the lessor and lessee may agree and indicate alternative responsibility for the flight activity by, for example, using the ICAO designator of the lessee in the flight plan.
Under a "dry lease agreement" an aircraft is operated by the lessee under the AOC of the lessees and control of the aircraft effectively passes to the lessee. The presumption, therefore, is that the lessee is the operator and the ICAO designator of the lessee should appear in the flight plan.
Some aircraft operators employ the services of management companies to file flight plans and pay route charges on their behalf. Some management companies also provide services related to the ETS obligations of their clients. However, management companies are not aircraft operators for the purposes of the EU ETS Directive unless they also operate flights covered by Annex I of the EU ETS Directive.
It is entirely possible for a service company to be empowered to represent an aircraft operator before the competent authorities of the administering Member State in relation to EU ETS matters. The extent of the powers of the service company will depend upon what is agreed between the operator and the service company.
It is possible, therefore, for a management company to file monitoring reports, and applications for free allowances on behalf of a particular aircraft operator if the management company is duly empowered. The issue of allowances can only be made directly to a registry account held by the aircraft operator. However, the Registries Regulation permits an aircraft operator to nominate an "additional authorised representative" who has limited rights on the account (the exact scope of these limited rights can be set by the account holder). Naturally, administering Member States will wish to be certain about the identity of the aircraft operator represented by a management company.
The Commission also has a duty to ensure the efficient operation of the EU ETS and so it will continue to identify and to include in the list of aircraft operators it publishes those operators who may nonetheless be represented by service companies for the matters relating to the EU ETS.
There are several categories of flight which are exempt from the EU ETS. These are listed in Annex I of the EU ETS Directive and include search and rescue flights, police flights, and state flights transporting third countries' Heads of State, Heads of Government and Government ministers, among others. There are special codes designating these types of flight which should be inserted into the flight plan filed by the operator so that the flight can be correctly excluded. More information about the types of flight excluded and the associated codes to be inserted in the flight plan can be found in the Annex I Decision.
There is a de minimis exemption in subparagraph (j) of Annex I to the EU ETS Directive below which an entity ceases to be an aircraft operator covered by the provisions of the EU ETS. This exemption only applies to commercial air transport operators. Flights may also be provided by commercial operators without remuneration but this factor is not relevant when determining whether the de minimis threshold is exceeded.
In summary, all flights of a commercial operator which are not covered by any of the other exemptions in Annex I of the EU ETS Directive must be considered when assessing whether the de minimis threshold is exceeded.
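For illustration, the de minimis test can be sketched as a simple check. The thresholds come from this document (fewer than 243 flights per period for three consecutive four-month periods, or annual emissions below 10 000 tonnes); the function signature is our assumption:

```python
# Minimal sketch of the de minimis test under subparagraph (j) of Annex I.
# The exemption only applies to commercial air transport operators.

def is_de_minimis_exempt(is_commercial: bool,
                         flights_per_four_month_period: list[int],
                         annual_emissions_t: float) -> bool:
    """Return True if the operator falls under the de minimis exemption.

    `flights_per_four_month_period` holds counts of non-exempt Annex I
    flights for three consecutive four-month periods.
    """
    if not is_commercial:
        return False
    few_flights = all(n < 243 for n in flights_per_four_month_period)
    low_emissions = annual_emissions_t < 10_000
    return few_flights or low_emissions

# Example: a commercial operator with 200/180/150 flights per period.
print(is_de_minimis_exempt(True, [200, 180, 150], 12_000))  # True
```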
The primary function of the list of aircraft operators published by the Commission is to facilitate the good administration of the EU ETS by providing information on which Member State will be regulating a particular operator. This prevents double regulation.
It must be emphasised that inclusion on the list of aircraft operators published by the Commission is not determinative as to whether a natural or legal person is an aircraft operator. This is clearly spelled out in Part 1 paragraph (3) of the Annex to the Annex I Decision. Moreover, a separate information note has been published on the Europa web site on the role of the list whose primary function is to facilitate the good administration of the EU ETS by informing regulators and aircraft operators about who is regulating whom. Conversely, aircraft operators that are on the list do not fall under the EU ETS if they only perform aviation activities that are exempt under Annex I to Directive 2003/87/EC.
It is possible that the list published by the Commission contains inaccuracies or does not reflect the most up to date information about aircraft operators' activities. The Commission will update the list from time to time and where appropriate bring inaccuracies to the attention of competent authorities. Member States are not bound only to regulate those entities contained in the list published by the Commission but have some flexibility to regulate "off-list", for example, where a Member State issues an operating licence to a new operator.
The Commission intends to publish an updated list each year around the beginning of February on the basis of the best available information. The aim of this update is to include new aircraft operators that have undertaken flight activities covered by Annex I of the EU ETS Directive in the previous calendar year. In addition, this represents an opportunity to correct manifest errors in the designation of operators or administering Member States.
It is not so important to remove operators that cease their activities given that obligations arise under the ETS from performing relevant flight activities in Annex I of the EU ETS Directive rather than from inclusion on the list. However, to keep the list manageable administratively, where operators have clearly ceased to be covered by the ETS and will not return to it because, for example, they are no longer in existence or because they have rescinded their operating licence, then the Commission will remove such operators from the list at the time of its update. It should be remembered that the activities of some operators may be such that in one year they are not covered by the ETS but activity levels may increase so that in subsequent years they are covered. It does not make sense to amend the list in such circumstances.
Airspace users using services companies for flight planning and payment of route charges may not necessarily be included in the list.
Whilst an aircraft operator is defined by Article 3(o) of the EU ETS Directive, in practice the call sign used for Air Traffic Control (ATC) purposes has been used. The call sign appears in field 7 of the flight plan. The call sign either starts with the 3-letter ICAO designator of the operator or, if not available, represents the registration marking of the aircraft. In the latter case, the aircraft operator is identified by the operator indicated in field 18 of the flight plan or the operator identified by EUROCONTROL’s Central Route Charges Office (CRCO) with alternate sources of information (such as States’ registries or States’ administrations).
An airspace user may not appear as a distinct aircraft operator in the current list if all of its flights have been (a) operated under the ICAO designator of a service company; or (b) identified by the aircraft registration marking and the service company has indicated to the CRCO that it is responsible for the payment of route charges. In such cases, all the flights of the airspace user have been attributed to the service company.
If an aircraft operator has a 3-letter ICAO designator, the aircraft operator should ensure that this code is used in its flight plans or that box 18 of the flight plan indicates its ICAO designator as the operator of that flight. Alternatively, the operator can place the registration marking of the aircraft in field 18 of the flight plan and submit to EUROCONTROL an annual declaration, including information on the composition of their fleet.
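The identification rule described in the preceding paragraphs can be sketched as a lookup with the same precedence: field 7 ICAO designator first, then the operator in field 18, then CRCO records, falling back on the owner. The data structures below are hypothetical:

```python
# Sketch of the operator-identification precedence described above.
# Field names and the lookup tables are illustrative.

def identify_operator(flight_plan: dict,
                      known_designators: set,
                      crco_registry: dict) -> str:
    callsign = flight_plan["field7"]         # call sign used for ATC
    prefix = callsign[:3]
    if prefix in known_designators:
        return prefix                        # operator's ICAO designator
    # Otherwise the call sign is the aircraft registration marking:
    # fall back on the operator in field 18, then on CRCO records.
    operator = flight_plan.get("field18_operator")
    if operator:
        return operator
    return crco_registry.get(callsign, "OWNER")  # owner deemed operator

plan = {"field7": "DLH400"}                  # ICAO designator "DLH"
print(identify_operator(plan, {"DLH"}, {}))  # -> "DLH"
```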
The aircraft operator responsible for a flight has been identified on the basis of the information inserted in field 7 of the flight plan. Consequently, flights of subsidiaries operated under the ICAO 3-letter designator of the parent company will have been allocated to the parent company. Subsidiaries operating flights under their own ICAO 3-letter designator may also have been allocated to the parent company when the parent company took responsibility for the flights for air navigation charges purposes.
If the parent company has been identified as the aircraft operator for all the flights of a subsidiary, the latter will not appear as a distinct aircraft operator in the current list as there are no flights attributed to it. Aircraft operators which are subsidiary companies should ensure that they identify their flights using a separate ICAO designator and/or that they include all aircraft under their company in the fleet declaration submitted to EUROCONTROL’s Central Route Charges Office (CRCO).
Two conditions need to be fulfilled in order for an aircraft operator to benefit from the de minimis exemption under subparagraph (j) of Annex I to the EU ETS Directive: the operator must be a commercial air transport operator, and it must operate either fewer than 243 flights per period for three consecutive four-month periods or flights with total annual emissions lower than 10 000 tonnes of CO2 per year.
If these conditions are met, the most probable reason for inclusion in the list is that for its present functions EUROCONTROL does not retain comprehensive records about AOCs for all operators flying in the EU region. As a result, EUROCONTROL may not be aware of the commercial status of particular operators (as defined in Article 3 of the EU ETS Directive). When this AOC information is missing, the operator is deemed not to be a commercial air transport operator.
An operator may also be included in the list because the last condition above is not satisfied. This means that, according to the air traffic information held by EUROCONTROL and the CO2 emissions estimations produced by EUROCONTROL, in at least one of the years since 2006 the operator both exceeded the 243-flights-per-period threshold and emitted 10 000 tonnes of CO2 or more.
If your AOC contains information confirming that you are a commercial air transport operator, please provide a copy of it to EUROCONTROL. Please also keep your competent authority informed that you have sent your AOC to EUROCONTROL.
For non-EU operators it may not be possible in all cases to determine your commercial status from the national certificate that is equivalent to the AOC (e.g. US Air Carrier Certificates). This is due to differences in the types of information contained in these certificates. However, you are still welcome to submit a copy of your certificate to EUROCONTROL, who may contact you for additional supporting documents.
The maximum take-off mass that has been used to determine whether flights should be exempted under subparagraph (h) of Annex I to the EU ETS Directive was that held by EUROCONTROL for the calculation of route charges. If you consider that all the flights you have operated were flown only with aircraft of less than 5.7 tonnes, please discuss this issue with your competent authority. The Commission is not in a position to decide whether an operator is exempt from the EU ETS. You may also wish to contact EUROCONTROL for further information.
If you are on the list it means that you have been identified as the aircraft operator of at least one flight since 2006 that was not considered exempted according to Annex I of the EU ETS Directive.
This situation could be the case for ferrying flights operated, for instance, during the delivery of the aircraft or for bringing it to or back from maintenance facilities. Such ferrying and positioning flights are not exempt from EU ETS. If you consider that all the flights you have operated are exempted under either of the subparagraphs of Annex I of the EU ETS Directive, please discuss this with your competent authority. The Commission is not in a position to decide whether an operator is exempt from the EU ETS. You may wish to contact EUROCONTROL for further information.
If you are on the list it means that you have been identified as the aircraft operator of at least one flight since 2006 that was flown to, from, or within the EU and that was not considered exempted according to Annex I of the EU ETS Directive.
This can be the case for ferrying flights operated, for instance, during the delivery of the aircraft or when bringing it to or back from maintenance facilities. If you consider that you have never operated any flight to, from or within the EU, or you do not plan to have any flights in the future, please discuss this with your competent authority. You may also wish to contact EUROCONTROL for further information.
The name of the operator is the name used by EUROCONTROL’s Central Route Charges Office (CRCO) when establishing the invoices for route charges. If you wish to correct the name of the operator on the list, please notify EUROCONTROL about the name change, providing sufficient evidence as to the correct name of the aircraft operator.
The list has been defined on the basis of air traffic information gathered since 2006. An operator has been included in the list if it operated at least one eligible flight in those years.
EUROCONTROL can determine when the most recent flight was flown by a given operator but does not hold comprehensive information on whether such an operator is still in operation. If you consider that an operator should NOT be on the list because it no longer exists or because it has ceased or suspended its aviation activities in the EU, please inform the competent authority about this. Please also notify the European Commission by sending a message to:
You may wish to contact EUROCONTROL for further information (e.g. the date of the most recent flight in the EU).
The EU ETS Directive stipulates that the administering Member State for any given operator in receipt of an operating licence in the EU is the Member State that issued that licence. Unfortunately, a complete and comprehensive database of all the operating licences granted by Member States in accordance with the provisions of Council Regulation (EC) No. 1008/2008 is not available, nor does EUROCONTROL hold this information. There is no definitive way, therefore, for the Commission or EUROCONTROL to check which Member State has issued AOCs and operating licences to particular operators, and so there may be discrepancies in the list.
If you possess an operating licence from an EU Member State, but in the list you are allocated to a different Member State, please provide a copy of your operating licence to EUROCONTROL.
The administering Member State has been determined on the basis of the information available for the operator’s base year as defined by Article 18a(5) of the EU ETS Directive. The fact that an operator no longer operates or does not fly mainly from (or to) aerodromes located in such a State does not change the designation of the administering Member State.
Different companies operating flights covered by Annex I of the EU ETS Directive are considered as separate aircraft operators (see question 1.5). Administering Member States are attributed either on the basis of which Member State issued the operating licence or the State with the greatest attributed emissions for that operator. It is for the parent company to decide how to organise its corporate structure and flight activities in relation to the administration of the EU ETS and the allocation of administering Member States.
Article 18a(1) of the EU ETS Directive sets the rules on the initial attribution of an aircraft operator to an administering Member State. Attribution is done on the basis of which Member State has issued the operating licence or which is the Member State with the greatest attributed emissions from flights performed by that operator in the base year (2006).
However, reattribution of an operator to a new Member State may be necessary if it turns out that the initial attribution does not meet the conditions set under Article 18a(1) of the EU ETS Directive.
Reattribution may be necessary where, for example, it emerges that the operator's operating licence was in fact issued by a different Member State, or that the greatest attributed emissions from the operator's flights in the base year belong to a different Member State than the one initially determined.
Reattribution is different from the transfer of aircraft operators based on Article 18a(2) of the EU ETS Directive. Such transfer occurs where in the first two years of any trading period, none of the attributed aviation emissions from flights performed by an aircraft operator without an operating licence granted by a Member State are attributed to its administering Member State. That aircraft operator must be transferred to another administering Member State in respect of the next period. The new administering Member State will be the Member State with the greatest estimated attributed aviation emissions from flights performed by that aircraft operator during the first two years of the previous period.
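As a minimal sketch of the initial attribution rule in Article 18a(1) (the inputs and their shapes are our assumptions; the precedence follows the text):

```python
# Sketch of Article 18a(1): an operator with an operating licence granted
# by a Member State is administered by that State; otherwise it is
# administered by the State with the greatest attributed emissions from
# its flights in the base year (2006).

def administering_member_state(licensing_state, base_year_emissions_by_state):
    """Return the administering Member State for an aircraft operator.

    `licensing_state` is None for operators without an EU operating licence;
    `base_year_emissions_by_state` maps Member State -> attributed tonnes.
    """
    if licensing_state is not None:
        return licensing_state
    return max(base_year_emissions_by_state,
               key=base_year_emissions_by_state.get)

# A third-country operator whose 2006 emissions were mostly attributed to France:
print(administering_member_state(None, {"FR": 120_000.0, "DE": 80_000.0}))  # FR
```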
After an aircraft operator is reattributed on the basis of Article 18a(1) or transferred on the basis of Article 18a(2) of the EU ETS Directive to a new administering Member State, the monitoring plan will have to be transferred from one administering Member State to another, or resubmitted by the operator to the new administering Member State. This process has to be agreed between the Member States on a case-by-case basis, taking account of the views of the aircraft operator affected and seeking to minimise the financial costs and administrative burden to the aircraft operator.
The timing of the transfer or resubmission of the monitoring plan should also be agreed between the Member States and the operator.
The list now contains a unique identification number (code) for each aircraft operator. This code will be used for compliance purposes. The code coincides with the number used by EUROCONTROL’s Central Route Charges Office (CRCO) for identifying airspace users in the route charges system. This identification number is shown in the reference of air navigation charges bills.
In the list, a number of aircraft operators may be identified only by their ICAO designator or the registration mark of the plane. The majority of such aircraft operators are associated with flights operated entirely outside of the region for which EUROCONTROL provides the Central Route Charges Office function, such as flights from the French overseas territories to the Americas. In these cases EUROCONTROL does not have full information about the identity of the operator at this stage. In future versions of the list, the intention is to replace these notations with a complete company name.
For new entrants the EU ETS requirements will start from the moment an operator performs an aviation activity laid down in Annex I of the EU ETS Directive i.e. it departs or arrives at an aerodrome in the EU. The Administering Member State responsible for all aspects of administering the ETS in respect of the operator is the Member State that issued the operating licence. The following steps will need to be followed by the new aircraft operator and administering Member State for an activity which commences in Year X:
First, the operator should submit a monitoring plan to the competent authority of its administering Member State and monitor its emissions during calendar year X. Second, it must submit a verified report of its year X emissions by 31 March of year X+1. Finally, the operator must surrender sufficient emissions allowances to cover its emissions in calendar year X by 30 April of year X+1.
The same basic procedure in 3.1 above should be followed. However, the administering Member State is determined according to the greatest attributed emissions in the first year of operation which may not be immediately clear and may not be established definitively until the operator is included in a revised list published by the Commission. As such, the operator cannot submit a monitoring plan for approval to its administering Member State.
In such circumstances, the operator is required to determine its emissions with retrospective effect for the time it falls under the scope of EU ETS. For the period when it has not been attributed to an administering Member State, the operator can determine its emissions according to the approach in section 5 of Annex XIV of the Monitoring Decision to fill "data gaps". This allows an operator to determine its emissions which are missing for reasons beyond its control by a simplified method.
Where the administering Member State is clear from the nature of the operator's flight activity, operators can submit monitoring plans on an informal basis to the administering Member State before formal inclusion on a revised list of operators published by the Commission.
An operator could apply to its administering Member State by 31 March 2011 for free allowances and provide verified tonne-kilometre activity reports to support the application. Before forwarding the applications to the Commission by 30 June 2011, the Member State should assess the admissibility of the reports and check for potential irregularities. This could be complemented by inspections of the monitoring activities of the operator during the monitoring year as well as supervision of verifiers. Nonetheless, the Member States should also be able to rely upon the verification process to establish the reliability and correctness of the activity data submitted by the operator.
Article 3f of the EU ETS Directive permits new operators who commence flight activity after 2010, or operators who experience growth in tonne-kilometre activity in excess of 18% on average annually between 2010 and 2014, to apply for free allowances from the "special reserve". Any application must be made by 30 June 2015 and be supported by verified tonne-kilometre activity data and documentary proof that the operator meets either of the two eligibility criteria. Before forwarding the application to the Commission (within 6 months) the administering Member State should assess compliance with the eligibility criteria using the material provided by the operator in support of the application, as required by Article 3f(3) of the EU ETS Directive. The Commission may provide further guidance on how to perform this assessment at a later date.
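The two eligibility tests can be sketched as follows. Note that whether "18% on average annually" means a compound or a simple average is our reading; the sketch uses the compound (geometric) rate:

```python
# Sketch of the two special-reserve eligibility tests in Article 3f.
# The compound-growth interpretation of "18% on average annually between
# 2010 and 2014" is an assumption.

def eligible_for_special_reserve(started_after_2010: bool,
                                 tkm_2010: float, tkm_2014: float) -> bool:
    if started_after_2010:
        return True   # new operators qualify directly
    if tkm_2010 <= 0:
        return False
    avg_annual_growth = (tkm_2014 / tkm_2010) ** (1 / 4) - 1
    return avg_annual_growth > 0.18

# Existing operator whose tonne-kilometre activity doubled over 2010-2014:
print(eligible_for_special_reserve(False, 1.0e9, 2.0e9))  # True (~18.9%/yr)
```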
Article 3f(1) states that allowances in the special reserve will not be allocated in respect of the flight activities of a new operator or the sharply increased growth of an existing operator if this new activity or increase in activity is a continuation of the activity (either in part or in whole) of another aircraft operator.
The above provision is designed to prevent the free allocation of allowances for flight activities that have already been the subject of a free allowance allocation, albeit to a different operator. As such, the competent authorities in the administering Member States will need information to establish that the new or sharply increased flight activity is not a continuation, in whole or in part, of flight activity previously performed by another aircraft operator, and that the activity concerned has not already attracted a free allocation of allowances.
A small emitter is a non-commercial air transport operator (i) whose flights in aggregate emit less than 25 000 tonnes of CO2 per annum; or (ii) which operates fewer than 243 flights per period for 3 consecutive 4-month periods. A small emitter can take advantage of a simplified procedure to monitor its emissions of CO2 from its flight activity. This procedure is described in Section 4 of Annex XIV of the Monitoring Decision and involves the use of a calculation tool developed by EUROCONTROL or a similar tool developed by other organisations.
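As a toy illustration of the kind of calculation such a tool performs: the per-kilometre burn rates below are invented, while the 3.15 t CO2 per tonne of jet kerosene is the standard EU ETS emission factor. The real EUROCONTROL tool estimates fuel burn far more carefully, by aircraft type and distance:

```python
# Toy sketch of a simplified small-emitters estimate: derive fuel burn
# from a flat per-kilometre rate, then convert to CO2.

JET_A1_EMISSION_FACTOR = 3.15  # t CO2 per t fuel (EU ETS monitoring rules)

FUEL_BURN_T_PER_KM = {          # hypothetical average burn rates
    "light_jet": 0.0009,
    "narrow_body": 0.0035,
}

def estimated_emissions_t(aircraft_class: str, distance_km: float) -> float:
    fuel_t = FUEL_BURN_T_PER_KM[aircraft_class] * distance_km
    return fuel_t * JET_A1_EMISSION_FACTOR

# A 1 500 km narrow-body flight:
print(f"{estimated_emissions_t('narrow_body', 1500):.1f} t CO2")  # ~16.5 t
```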
Aircraft operators emitting less than 25 000 tonnes of CO2 per year, both commercial and non-commercial, can choose an alternative to verification by an independent verifier. The alternative involves determining their emissions by using the small emitters tool approved under Commission Regulation No 606/2010. In such cases, data used for determining emissions must originate from EUROCONTROL. As a result, aircraft operators taking advantage of this simpler method need to use data from the ETS Support Facility without any modification. Of the two types of small emitters defined by Article 54 of Regulation No 601/2012, this simplification only applies to aircraft operators operating flights with total annual emissions lower than 25 000 tonnes of CO2 per year. It should be noted that the exemption threshold of 25 000 tonnes of CO2 per year is based on the full scope of the EU ETS as defined in Annex I to the EU ETS Directive.
Article 16 of the EU ETS Directive establishes a limited harmonisation of the financial penalties that will be paid by operators that fail to surrender the necessary number of emissions allowances (i.e. €100 per tonne of CO2). More generally, the co-legislators decided that the Member States should adopt rules on penalties for breaches of national legislation which transposes the Directive's requirements, and that these penalties should be "effective, proportionate and dissuasive". This formulation allows the Member States to choose between criminal or administrative penalties and provides flexibility to implement a system of penalties that best fits their national legal systems, whilst respecting the obligation to treat breaches of Community law in a manner similar to a breach of a wholly national rule or law. The degree of harmonisation decided by the co-legislators is arguably sufficient while respecting the principles of subsidiarity and proportionality, by which action is to be taken only in so far as it cannot be sufficiently taken by the Member States alone and does not exceed what is necessary to achieve the desired objective.
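The one harmonised element is easy to state numerically. A small sketch follows; the function is ours, and note that under the EU ETS paying the penalty does not release the operator from the obligation to surrender the missing allowances:

```python
# The harmonised financial penalty quoted above: EUR 100 for each tonne
# of CO2 for which no allowance is surrendered. The operator must still
# surrender the missing allowances on top of paying the penalty.

EXCESS_EMISSIONS_PENALTY = 100  # EUR per tonne of CO2

def excess_emissions_penalty(verified_emissions_t: float,
                             surrendered_allowances: float) -> float:
    shortfall = max(0.0, verified_emissions_t - surrendered_allowances)
    return shortfall * EXCESS_EMISSIONS_PENALTY

print(f"EUR {excess_emissions_penalty(50_000, 48_500):,.0f}")  # EUR 150,000
```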
Further harmonisation of administrative penalties could be envisaged under the EU ETS Directive but that would have to be decided by the co-legislators following a proposal from the Commission. There is also scope for establishing certain common criminal offences and penalties under the new Treaty on the Functioning of the European Union but again this will require a proposal from the Commission or a quarter of the Member States.
The Council has put in place a framework for the mutual recognition of financial penalties in the form of Framework Decision 2005/214/JHA. This means that financial penalties due to offences arising from breaches of instruments adopted to comply with Community law that are committed in one Member State (the issuing State) can be recognised and enforced in another Member State (the executing State). A central authority is responsible in each Member State for the administration of the scheme. Monies obtained from the enforcement go to the executing Member State unless there is a contrary agreement between the two Member States concerned.
The Agreement on the European Economic Area (EEA), which entered into force in 1994, is an agreement between the 27 EU Member States and three of the Member States of the European Free Trade Association (EFTA). The latter states, which are Iceland, Liechtenstein and Norway, are collectively called the 'EEA-EFTA countries'. The EEA Agreement provides for the extension of selected EU legislation to the EEA-EFTA countries.
The EEA-EFTA countries have been part of the EU ETS since October 2007, when the EU ETS Directive was incorporated into the EEA Agreement. The aviation part of the EU ETS was incorporated into the EEA Agreement by EEA Joint Committee Decision 6/2011.
The extension of the scheme entails that in addition to the 27 EU Member States the EU ETS covers also the 3 EEA-EFTA countries (Iceland, Liechtenstein and Norway). As a result, flights which depart from or arrive in an aerodrome situated in the territory of an EEA-EFTA country, collectively called 'EEA additional flights', are subject to EU ETS rules. More precisely, EEA additional flights are flights within and between the EEA-EFTA countries, and flights between those countries and third countries; in other words, flights touching an aerodrome in an EEA-EFTA country that were not already covered by the scheme's original EU scope.
The list of exemptions from the scope of the EU ETS in Annex I of the EU ETS Directive also applies for the EEA additional flights.
Equal treatment of aircraft operators is a fundamental element of the EU ETS for aviation. The EU and the EEA-EFTA countries therefore have ensured that the design of the scheme is not altered by the extension to the EEA-EFTA countries. In particular, the same benchmark and harmonized allocation rules are applied for the EEA additional flights as for other flights covered by the scheme.
Aircraft operators which are already covered by the EU ETS are only affected by the extension of the system if they perform EEA additional flights (see the answer to question 8.2). These operators have to include their EEA additional flights in their monitoring and reporting activities.
These operators should have already updated their monitoring plans to cover their EEA additional flights.
Operators who update their monitoring plans should notify their competent authority without delay of any changes made. In case of substantial changes to the monitoring methodology, the operators need to submit their updated plans for re-approval. Substantial changes are described in the EU ETS monitoring and reporting guidelines and include:
If a commercial aircraft operator is exempted from the scope on grounds of point (j) of Annex I of the EU ETS Directive (i.e. because it operates either fewer than 243 flights per period for three consecutive four-month periods or flights with total annual emissions lower than 10 000 tonnes per year; the de minimis rule), the exemption could cease to apply if EEA additional flights cause the aircraft operator to exceed the aforementioned limits. Those aircraft operators should submit monitoring plans as soon as possible to the competent authority in their administering state.
The criteria set under Article 18a(1) of Directive 2003/87/EC to determine an aircraft operator's administering Member State must take into account the extension of the aviation part of the EU emissions trading scheme to the EEA-EFTA countries (Iceland, Liechtenstein and Norway). Thus, certain aircraft operators, previously allocated to one of the EU 27 Member States, are allocated to the EEA-EFTA countries for administration. Regulation (EC) No 748/2009 has therefore been amended.
To facilitate a smooth changeover of the affected aircraft operators, the former administering Member State should complete all its obligations related to the aviation activities carried out during the calendar year before the reattribution of an aircraft operator to an EEA-EFTA country took place. The new administering State (Norway or Iceland) will take over the obligations related to the calendar year in which the reattribution took place and for the following calendar years.
The aircraft operator will need to deal with two authorities for the changeover period, as it completes its obligations in relation to aviation activities carried out in the previous year to the former administering Member State and progressively develops its relationship with the newly attributed authority.
The key steps are as follows:
If the former administering Member State has modified the data before submitting it to the Commission, it should inform the new administering State about the modifications made.
The new administering State should:
The change of administrative responsibility, from an EU 27 Member State to Iceland or Norway, for those aircraft operators which are marked with an asterisk in the EEA list of operators may be subject to a specific timeline. This is to be agreed in conformity with Decision of the EEA Joint Committee No 6/2011 of 1 April 2011 amending Annex XX (Environment) to the EEA Agreement (published in OJ L 93, 7.4.2011, p. 35).
Those aircraft operators attributed to Iceland and Norway under the EEA list which are marked with an asterisk can request to remain under the administration of their former administering Member State until 2020 at the latest, as provided in Decision of the EEA Joint Committee No 6/2011 of 1 April 2011 amending Annex XX (Environment) to the EEA Agreement.
Such a request can be made by an affected aircraft operator to its former administering Member State within six months from the adoption by the Commission of the EEA-wide list of aircraft operators. The Member State concerned may agree to administer that operator for another year or longer, but only until the end of the trading period in 2020. The EEA-wide list was adopted on 20th April 2011, thus the requests can be made until 20th October 2011.
If the former administering Member State agrees to continue administering the aircraft operator concerned, it should inform the Commission about this agreement and indicate the date from which the aircraft operator will be administered by the new administering State.
Data from the EEA-EFTA countries will be taken into account when calculating the EEA historical aviation emissions. The EU 27 historical aviation emissions will thus increase to reflect the extended scope of the EU ETS. Likewise, the total amount of allowances to be allocated free of charge, the total amount of allowances to be auctioned and the size of the special reserve will increase proportionally.
The following note was added on the Commission's website on aviation:
'Please note that all references to Member States on the templates should be interpreted as including all 30 EEA States. The EEA comprises the 27 EU Member States, Iceland, Liechtenstein and Norway.'
In addition to this, references to the EEA-EFTA countries have been added to the list of Member States in several places in the templates:
All commercial aircraft operators registered in Iceland and Norway have been informed about the extension. Information has been sent to the EU Member States administering other operators who are known to be affected by the extension, including a standard letter that can be used to inform these operators. In addition the EEA-EFTA countries, the EFTA Secretariat and the European Commission hosted an information meeting with European and international aviation associations on 11 December 2009 to inform them of the changes.
In advance of biofuels becoming more commonly used in aviation, the following approach proposes a solution for monitoring and reporting biofuel used in relation to an EU ETS aviation activity. This approach is based on the understanding that it is currently not technically feasible, or not possible within reasonable costs, to determine biofuel content at the point of uptake to an aircraft.
The monitoring and reporting guidelines (Commission Decision 2007/589/EC as amended) provide the possibility, in Annex I Section 13.4, for the aircraft operator to propose an estimation method for approval by the competent authority where it is technically not feasible or disproportionately expensive to determine the biomass fraction of certain aviation biomass fuels.
In addition, Section 2.3 of Annex XIV of the monitoring and reporting guidelines provides for the possibility of using fuel purchasing records for the purpose of determining the biomass content of the fuel.
Therefore, the following type of methodology could be proposed to the competent authority:
It will be important to demonstrate two important criteria in the proposed methodology:
The calculation of biofuel use shall be independently verified. In particular, the verifier must be satisfied that the percentage of fuel purchased by the aircraft operator which was used in EU ETS Annex I aviation activities has been correctly calculated.
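A purchase-records methodology of the kind outlined above might look like the following sketch. The apportionment rule, the numbers and the zero-rating of the biomass share are illustrative assumptions, not the approved method:

```python
# Hedged sketch of a purchase-records approach: derive a fleet-wide
# biomass fraction from fuel purchase records and apply it to fuel
# burned on Annex I flights. The biomass share is rated at a zero
# emission factor; fossil kerosene at 3.15 t CO2 per t of fuel.

FOSSIL_EF = 3.15  # t CO2 per t fossil jet fuel

def annex_i_emissions_t(biofuel_purchased_t: float,
                        total_fuel_purchased_t: float,
                        annex_i_fuel_burn_t: float) -> float:
    biomass_fraction = biofuel_purchased_t / total_fuel_purchased_t
    fossil_burn = annex_i_fuel_burn_t * (1 - biomass_fraction)
    return fossil_burn * FOSSIL_EF  # the biomass share counts as zero

# 2% of all fuel purchased was biofuel; 40 000 t burned on Annex I flights:
print(f"{annex_i_emissions_t(800, 40_000, 40_000):,.0f} t CO2")  # 123,480 t
```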
Buyers' attitudes towards products may be determined not merely by the products as manufactured in factories, but also by what is added in the form of packaging, services, advertising, customer advice and other things that people value.
Thus, advertising plays an important economic role in the introduction of new products to the market (Sidhu, 2003).
The relatively stable banking environment is being altered by innovation, opportunism, and government intervention.
Increasing knowledge among societies is forcing the financial institutions to adopt international best practices to remain in business.
The same period witnessed a decrease of as much as 5.2% in expenditure on magazines dealing with business articles, reports and business statistics.
A survey of advertising industry trends also suggested that the total amount spent on advertising across all categories of media fell by 0.3% in the first three months of 2007 compared with the last quarter of 2006.
For example, a bank may offer home loans at attractive EMIs or fixed-deposit accounts at higher rates of interest, while a post office offers various savings schemes and corporate loans; each of these financial institutions must adopt the best mode of communication so that people become aware of its offerings.
Due to paradigm changes in societal behavioural patterns and in technology, many new advertising opportunities are emerging, such as pop-up ads, Flash ads, banner ads, email ads (often a form of spam) and social networking sites.
For these purposes, advertisements sometimes combine their persuasive message with factual information.
Advertising can be used to change the behaviour of the reader or viewer towards the product or service, to influence public opinion, to gain political support, to advance a particular idea, or to bring about some other effect desired by the advertiser.
To vitalize rural America and improve rural life, the Morrill Acts of 1862 and 1890 established land-grant universities and colleges (LGU) to educate citizens in agriculture, home economics, and other practical professions. In 1908, President Theodore Roosevelt appointed a Commission on Country Life to "make rural civilization as effective and satisfying as other civilization" (Bailey, 1920). Based on the Commission's recommendation of a nationalized extension service, and built upon the pre-established LGU system, in 1914 the Smith-Lever Act created a unique U.S. agricultural Cooperative Extension System (extension). The extension system established a partnership among a federal partner (the U.S. Department of Agriculture (USDA)), state partners (LGU and state governments), and local partners (city or county governments). Today, the USDA's National Institute of Food and Agriculture (NIFA), which was created through the Food, Conservation, and Energy Act of 2008 to replace its predecessor, the Cooperative State Research, Education, and Extension Service (CSREES), provides annual grants to LGU, including formula funds based on population-related formulas and funds for specific programs. States are requested to match this formula portion of federal funding. In addition to this major grant, NIFA also provides competitive funding to award projects that target USDA's priority mission areas.
Since it was first established 100 years ago, extension has played critical roles in various time periods, including World War I, the Great Depression, and World War II. It helps to secure national food and fiber needs through education, marketing, and organization. It also has helped USDA implement its main objectives in developing the rural economy, training tomorrow’s leaders, disseminating knowledge, and pursuing sustainable agriculture and the environment since WWII. Although the contribution of extension to the farm economy seems to be straightforward, the economic benefit of extension is not easy to quantify. In addition, there has been an ongoing tension in extension regarding its focus on agriculture versus its role for broader rural development (Bishop, 1969).
According to USDA’s agricultural productivity estimates in 2011, total U.S. agricultural production was more than 2.5 times its 1948 level with inputs growing by a mere 4% between 1948 and 2011 (USDA Economic Research Service (ERS), 2013). Therefore, productivity growth accounted for nearly all of agricultural output growth in the period 1948 to 2011. While research and development (R & D) investment is the major driver of productivity growth (Alston et al., 2000; Evenson, 2001; Huffman and Evenson, 2006; Fuglie and Heisey, 2007; and Wang et al., 2013), the new technology or practice cannot have its intended impact if farmers do not adopt those skills or techniques. It is widely agreed that extension has played an important role in disseminating new technology and bridging the gap between innovation in the lab and practice on the farm (Huffman, 1976; Feller, 1987; Birkhaeuser, Evenson, and Feder, 1991; and Ahearn, Yee, and Bottom, 2003).
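The implied total factor productivity (TFP) arithmetic can be checked directly from the figures quoted above. This is a sketch; only the 2.5-fold output growth, the 4% input growth, and the 1948-2011 window come from the text:

```python
# Back-of-the-envelope TFP arithmetic from the quoted figures.
# TFP is output per unit of aggregate input.

output_growth = 2.5       # 2011 output relative to 1948
input_growth = 1.04       # 2011 inputs relative to 1948
years = 2011 - 1948       # 63 years

tfp_ratio = output_growth / input_growth
annual_tfp_growth = tfp_ratio ** (1 / years) - 1

print(f"TFP in 2011 relative to 1948: {tfp_ratio:.2f}x")          # ~2.40x
print(f"Implied average TFP growth: {annual_tfp_growth:.2%}/yr")  # ~1.40%/yr
```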
The U.S. extension system has changed over time in terms of its budget, funding composition, and extension staffs’ program focus.
Over the years, nominal (in current dollars) federal extension appropriation has continued to grow, while real total federal extension funding (in inflation-adjusted dollars using ERS's research price index as the deflator (ERS, 2013)), as well as real formula funding, has declined since 1980 (Figure 1). The share of formula programs as a proportion of federal funding has also been reduced. In 1964, the formula programs accounted for more than 80% of total federal extension appropriation; this share had shrunk to below 70% by 2010.
Under the Cooperative Extension System, in addition to federal funding, state and local governments also provide funding to LGU to support extension activities. The state’s role in funding extension has continued to grow since 1936 after a decline between 1928 and 1935 (Figure 2). In 1928, total funding by the states accounted for 66% of the total extension budget. By 1936, this share declined to 41%. Since 1936, the share of total funding from the states continued to grow except for declines in a few short periods, including an energy shock period of 1969-1973. In recent years, overall state funding has grown to account for about 80% of the total extension budget. While the state’s role in funding extension has become increasingly important, the total extension spending as well as total number of extension full-time-equivalent (FTEs) staff people are quite diverse across regions.
According to USDA's "Salary Analysis of Cooperative Extension Service Positions" report, the number of extension FTEs declined between 1980 and 2010 (Figure 3). However, the changes are unevenly distributed between specialists and county extension agents. They also differ among the 10 USDA production regions. Feller (1987) addressed concerns over the uneven decline in the numbers of specialists and county agents, as the former shrank much faster than the latter between 1975 and 1984. He cited Congress' Office of Technology Assessment (OTA) as warning that the decline of specialists was particularly alarming "since the specialist staff has the largest level of training and is the best equipped to educate both county agents and farmers on evolving agricultural technologies," and indicated that "Extension has opted to protect county agents rather than extension specialists." Yet this trend seemed to be reversed between 1980 and 2010. In 1980, specialist and county agent FTEs numbered 3,714 and 11,441, respectively, accounting for 22% and 67% of total FTEs. In 2010, the number of specialist FTEs increased to 3,972 while that of county agents declined to 7,974, accounting for 30% and 60% of total FTEs, respectively (Table 1). Most of the increase in the number of specialist FTEs occurred during the 1980-1990 period. This trend may have been a response to concerns from Congress as well as the public. However, total specialist FTEs still declined along with county agents in the two decades thereafter, adjusting to overall budgetary constraints. Split appointments among extension, research, and teaching, as well as nine-month appointments, have also been considered among the factors causing the declining trend in specialists.
While the Appalachian, Corn Belt, and Northeast regions have more extension FTEs than all other regions and remained in the top three in both 1980 and 2010 (Figure 3), their total FTEs still declined considerably along with that of all other regions. The Southeast region, including South Carolina (SC), Alabama (AL), Georgia (GA), and Florida (FL), experienced a much more significant 45% decline in its total FTEs. Its FTEs’ ranking has, therefore, dropped from fourth place in 1980 to seventh place in 2010, surpassed by the Southern Plains, Lake States, and Delta regions. The Pacific region, including Oregon (OR), California (CA), and Washington (WA), had the least FTEs among the 10 regions in both 1980 and 2010.
USDA-NIFA identifies national priorities for the extension programs, while funding allocations are still up to each individual university. In addition to formula programs, NIFA also provides competitive grants to LGU to attract proposals that best address NIFA’s priority topics. Formula funding, on the other hand, is more flexible and to be used in addressing regional or state’s priority subjects and emerging issues. With various budget conditions and priority preferences, extension program portfolios differ from one region to another.
Since extension program areas have changed over time, there is no single classification method that can be used over long periods. For the period between 1977 and 1992, Ahearn, Yee, and Bottom (2003) showed that the program area "Agriculture and Natural Resources" ranked first among four major program areas and accounted for about 45% of total FTEs. On the other hand, many FTEs shifted from the 4-H and youth, and community programs to the home economics program over time. In 1992, 26% of total FTEs were dedicated to home economics programs, up from a 22% share in 1977. After the Agricultural Research, Extension, and Education Reform Act of 1998 (AREERA) required that states submit plans of work (POW) in order to receive federal funding, the program areas shifted along with changes to NIFA's reporting system. Therefore, data for the previously reported major program areas no longer exist. Nevertheless, NIFA's POW reporting system can help to provide information on extension program portfolios across regions in more recent years.
According to reported POWs, in 2010, for the 48 contiguous states, most FTEs were dedicated to sustainable agricultural systems and the family and consumer sciences areas, which accounted for about a quarter each of total FTEs. Youth development was the third largest component reported in POWs, accounting for 17% of total extension activities in 2010. Nonetheless, each state has its own goals and extension priorities. Among the 10 regions, the Lake States and Corn Belt dedicated about one-third of their total FTEs to the family and consumer sciences area, with less than one-fifth of their total FTEs dedicated to the sustainable agricultural systems. On the other hand, the Pacific region dedicated nearly 40% of its total FTEs to the sustainable agricultural systems area and only 12% to the family and consumer sciences area (Table 2). For the Corn Belt, Delta, and Southern Plains regions, more than a quarter of their extension FTEs were dedicated to the youth development program while that share in other regions only ranged from 10% to 18%.
U.S. agriculture has experienced structural changes in the past few decades. Studies show that U.S. farmers have relied more heavily on contracting with food processors to allow risks to be spread over a wider set of stakeholders; the value of production under contract increased roughly 10 percentage points between 1991 and 2007 (O'Donoghue et al., 2011). There was also a shift of production to larger farm operations. The long-term shifts in farm size have been accompanied by greater specialization—beginning with a separation of livestock farming from crop farming (MacDonald, Korb, and Hoppe, 2013). Along with these changes, private firms have played an increasing role in providing production-related information to farmers, such as pest management and other chemical usage (Padgitt et al., 2000). Still, the public extension system is unique in providing a multi-functional portfolio of programs as a public good.
Although it is widely agreed that extension has played an important role in disseminating new technology, given its smaller budget relative to R&D it is difficult to quantify extension's economic benefit or to separate it from that of R&D and other local resources. Its economic impact is also constrained by local extension capacity.
Historically, extension has been authorized and expected to play a leading role in assisting the diffusion of information in farm practice and home economics to improve agricultural productivity; promote better human nutrition and health; strengthen children, youth, and families; revitalize rural American communities; and much more. Yet, the economic performance of extension is difficult to evaluate given the unique nature of extension as a public good, an educational system, and an information communicator. Its performance is also subject to extension density. For a given number of FTEs, extension with higher FTE density could be more productive in reaching out to people. FTE density could be measured by FTEs per number of farms, FTEs per thousand square land miles, or even FTEs per million dollars of sales; the choice depends on the purpose of the measurement. In 2010, based on the criterion of FTEs per thousand square land miles (Figure 4, Panel A), states in the Mountain and Pacific regions had much lower extension density than states in other regions given their wide territories. South Dakota (SD), Virginia (VA), and North Carolina (NC) are the only states outside the Mountain and Pacific regions with comparably low extension density. Yet, based on the criterion of FTEs per thousand farms (Figure 4, Panel B), Nevada (NV), Arizona (AZ), and Idaho (ID) from the Mountain region are among the highest extension density states. Other high extension density states include North Dakota (ND) and Nebraska (NE) from the Northern Plains; Louisiana (LA) from the Delta region; West Virginia (WV) from the Appalachian region; and Vermont (VT), New Hampshire (NH), Connecticut (CT), and Rhode Island (RI) from the Northeast region.
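The density measures discussed here reduce to a simple ratio. The sketch below uses placeholder numbers rather than values from the report:

```python
# Sketch of the FTE density measures discussed above.

def fte_density(ftes: float, scale_quantity: float, per: float = 1_000) -> float:
    """FTEs per `per` units of some base (square miles, farms, $ sales)."""
    return ftes * per / scale_quantity

# Hypothetical state: 420 FTEs, 69,700 sq. miles of land, 74,500 farms.
state = {"ftes": 420.0, "land_sq_miles": 69_700, "farms": 74_500}
print(f"{fte_density(state['ftes'], state['land_sq_miles']):.1f} FTEs "
      f"per 1,000 sq. miles")
print(f"{fte_density(state['ftes'], state['farms']):.1f} FTEs per 1,000 farms")
```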
Besides extension density, there are other local resources, such as R&D or infrastructure, which could influence extension capacity and hence its economic performance. With a combined effect from extension and those local resources, each state and region could perform differently in their agricultural productivity growth, and rural community development. Indeed, as shown in the Appendix table for the 48 contiguous states for 1960 and 2004, there were only three states—California (CA), Florida (FL), and Iowa (IA)—that were ranked in the top four by productivity level in both 1960 and 2004. On the other hand, with extraordinary productivity growth, some states have significantly improved in their ranking between 1960 and 2004. For example, during the 44-year period, Michigan (MI), Oregon (OR), Rhode Island (RI), Massachusetts (MA), and Indiana (IN) were among the top five states with the highest productivity growth rates. In 1960, each had a ranking of 47, 46, 35, 28, and 27, respectively, and performed at rankings of 28, 15, 8, 10, and 7 by 2004. While productivity growth is mainly driven by innovation from R&D, it can also be affected by infrastructure and extension (Paul et al., 2007; and Wang et al., 2012). Therefore, when measuring the economic performance of extension we need to be cautious about distinguishing its contribution from other factors.
While extension's contribution to disseminating technology, shortening the period of technology adoption, bridging the gap between findings in the lab and practices on the farm, and enhancing the return on research funding is widely agreed upon, only in recent decades have researchers tried to quantify the economic impact of U.S. agricultural extension by identifying its independent influences and untangling its combined impacts with other sources. NIFA has designed an extension performance evaluation system based on desired outcomes and program areas. The indicators include the number of people reached, the number of preferred tasks implemented, the number of policy changes, the number of environmental changes, and so on (USDA-NIFA, various years). These outcomes and evaluation results can help direct LGUs and other local partners in implementing USDA's goals and performing extension activity more efficiently.
From a different angle, researchers have tried to measure the economic benefit of U.S. agricultural extension using either a combined research and extension capital stock (the accumulation of investment in R&D and extension based on different assumptions about their lagged impacts in each time period) (Alston et al., 2010 and 2011), or separate variables for R&D stock and extension stock to analyze the individual economic impacts of each (Huffman and Evenson, 1993; and Yee et al., 2002). There are also studies evaluating extension's contributions through its interaction with local research capital stock (Wang et al., 2012) or its impacts on production efficiency (Schimmelpfennig, O'Donnell, and Norton, 2006). According to the literature, the economic impacts of extension can be summarized into two main points:
Under the Cooperative Extension System, extension has, since 1914, helped improve agricultural productivity growth, strengthen the rural economy, educate youth, promote better human health, sustain the environment, and much more. Yet the priorities of extension's mission have varied through time and among states and regions. While extension is built on a unique partnership among federal, state, and local governments and LGUs, over the past decades extension funding has relied more heavily on sources within the states. Extension funding in constant dollars has declined, leading the number of extension FTEs to fall significantly over time and across regions. Given this downsizing, extension's program portfolio has changed to address the evolving priorities of its mission and to adjust to tightening budget constraints.
U.S. agriculture continues to experience structural and organizational changes. Over time, farmers have relied more heavily on contracting to manage their risk, and agricultural production has shifted to larger and more specialized farm operations even as the number of small farms has grown. Although private firms have played an increasing role in providing production-related information to farmers alongside these structural changes, the public extension system still has an irreplaceable role in providing a multi-functional array of programs as a public good.
The economic benefits of, and returns on investment in, extension are not easy to measure or to distinguish from those of public research funding and other local resources. Yet extension's overall contribution to agricultural productivity growth has been well recognized. Nevertheless, challenges await extension in its second century, including the changing roles of state specialists and county agents, budget constraints, and emerging issues, such as climate change's impact on production and greenhouse gas emissions, as well as its focus on agriculture versus a broader role addressing rural development, youth, and human health and nutrition.
Ahearn, M., Yee, J. and Bottom, J. (2003). Regional trends in extension system resources. Washington, D.C.: U.S. Department of Agriculture, Economic Research Service. Agricultural Information Bulletin No. 781.
Alston, J., Chan-Kang, C., Marra, M., Pardey, P. and Wyatt, T. (2000). A meta-analysis of rates of return to agricultural r&d: ex pede herculem? Research Report No. 113, International Food Policy Research Institute, Washington, D.C.
Alston, J.M., Anderson, M.A., James, J.S., and Pardey, P.G. (2011). The economic returns to U.S. public agricultural research. American Journal of Agricultural Economics 93:1257-1277.
Alston, J.M., Anderson, M.A., James, J.S., and Pardey, P.G. (2010). Persistence pays: U.S. agricultural productivity growth and the benefit from public r&d spending. New York, N.Y.: Springer.
Bailey, L.H. (1920). The country-life movement in the United States. New York: The Macmillan Company.
Ball, E., San-Juan-Mesonada, C., and Ulloa, C.A. (2013). State productivity growth in agriculture: catching-up and the business cycle. Journal of Productivity Analysis. July.
Bishop, C. E. (1969). Urbanization of rural America alters extension responsibilities. Journal of Cooperative Extension. Fall.
Birkhaeuser, D., Evenson, R.E., and Feder, G. (1991). The economic impact of agricultural extension: a review. Economic Development and Cultural Change, 39, 607-650.
Evenson, R. (2001). Economic impacts of agricultural research and extension. In Gardner. B. and Rausser, C. (eds) Handbook of Agricultural Economics, Volume 1, Part A, Elsevier Science, New York, 573-628.
Feller, I. (1987). Technology transfer, public policy, and the cooperative extension service-OMB imbroglio. Journal of Policy Analysis and Management, Vol. 6, No. 3 (Spring), pp. 307-327
Fuglie, K., and Heisey, P. (2007). Economic returns to public agricultural research. Washington, D.C.: U.S. Department of Agriculture, Economic Research Service, Economic Brief No. 10.
Huffman, W. E. (1976). The productive value of human time in U.S. agriculture. American Journal of Agricultural Economics 58, no. 4: 672-83.
Huffman, W.E., and Evenson, R.E. (2006). Science for agriculture: a long-term perspective (second edition). Blackwell Publishing.
MacDonald, J., Korb, P., and Hoppe, R. (2013). Farm size and the organization of U.S. crop farming. Washington, D.C.: U.S. Department of Agriculture, Economic Research Service, ERR-152, August.
O’Donoghue, E.J., Hoppe, R.A., Banker, D.E., Ebel, R., Fuglie, K., Korb, P., Livingston, M., Nickerson, C., and Sandretto, C. (2011). The changing organization of U.S. farming. Washington, D.C.: U.S. Department of Agriculture, Economic Research Service, Economic Information Bulletin No. 88, December.
Padgitt, M., Newton, D., Penn, R., and Sandretto, C. (2000). Production practices for major crops in U.S. agriculture, 1990-97. Washington, D.C.: U.S. Department of Agriculture, Economic Research Service, Statistical Bulletin No. 969.
Pardey, P.G., Craig, B., and Hallaway, L. (1989). U.S. agricultural research deflators: 1890-1985. Research Policy 18: 289-296.
Paul, C., Ball, V. E., Felthoven, R. and Nehring, R. (2001). Public infrastructure impacts on U.S. agricultural production: panel analysis of costs and netput composition. Public Finance and Management 1, 2, 183-213.
Schimmelpfennig, D.E., O’Donnell, C.J., and Norton, G.W. (2006). Efficiency effects of agricultural economics research in the United States. Agricultural Economics 34: 273-280.
U.S. Department of Agriculture, Economic Research Service. Agricultural productivity in the United States. Available online at http://ers.usda.gov/Data/AgProductivity/
U.S. Department of Agriculture, National Institute of Food and Agriculture. Salary analyses of state extension service positions. Various years (a).
U.S. Department of Agriculture, National Institute of Food and Agriculture. AREERA State Plans of Work. Various years (b).
Wang, S.L., Ball, E., Fulginiti, L., and Plastina, A. (2012). Accounting for the impacts of public research, r&d spill-ins, extension, and roads in U.S. agricultural productivity growth, in agricultural productivity: an international perspective, Fuglie, K.O., Wang, S.L., and Ball, V.E. (eds.), CABI.
Wang, S. L., Heisey, P., Huffman, W., and Fuglie, K. (2013). Public r&d, private r&d, and U.S. agricultural productivity growth: dynamic and long-run relationships. American Journal of Agricultural Economics 95(5): 1287–1293.
Yee, J., Huffman, W.E., Ahearn, M., and Newton, D. (2002). Sources of agricultural productivity growth at the state level, 1960-1993. In Ball, V.E., and G.W. Norton, Agricultural productivity: measurement and source of growth, Norwell, Mass.: Kluwer Academic Publishers, 187-209.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9572129845619202,
"language": "en",
"url": "https://www.dewoskinlaw.com/blog/2015/01/when-can-missouri-children-obtain-social-security-benefits/",
"token_count": 404,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.11181640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d0eb40fe-6776-4e73-a8b1-1be44a138d1b>"
}
|
Residents of Saint Louis, Missouri, work hard for their family’s financial stability. However, an unfortunate accident or injury resulting in disability or death can lead to financial difficulties for them and their children.
Thankfully, there are provisions under Social Security benefits that help stabilize the financial future of a family and their children if one or both parents are disabled, about to retire or pass away. About 4.4 million children get financial support from the Social Security Administration, which pays approximately $2.5 billion each month to family members in order to provide the basic requirements of life and to help children until they complete high school.
A biological child, adopted child or dependent stepchild may be able to get benefits under the Social Security provisions due to a parent’s death, retirement or disability. In some cases, a child could also be eligible for benefits based on a grandparent’s earnings. To qualify, a child must have a disabled or retired parent who is eligible for Social Security benefits, or a deceased parent who worked long enough to pay taxes into the Social Security system.
Other requirements for benefits eligibility include that the child must be unmarried and below 18 years of age. The benefits cease after the child reaches the age of 18, unless that child is a high school student or has some form of permanent disability. A disabled child, however, may continue receiving benefits until the age of 22, if the disability began before that age.
At the time of applying for benefits for a child, one must provide the child’s birth certificate and Social Security numbers for the parents and the child. The Social Security Administration may ask for other documents depending on the type of benefits involved, such as medical evidence when applying for benefits for a disabled child, or proof of the parent’s death when applying for survivors’ benefits.
Source: Social Security Administration, “Benefits for Children,” accessed Dec. 26, 2014
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9384889006614685,
"language": "en",
"url": "https://www.eia.gov/outlooks/ieo/section_issue_upstream.php",
"token_count": 3723,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.06982421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ea8f11ef-af0a-424d-bf9c-c0b0bc429116>"
}
|
The effect of oil prices on natural gas production
Release Date: 1/15/2020 | Full report
In this analysis, the U.S. Energy Information Administration (EIA) evaluates the effect of oil prices on natural gas production. Depending on the nature of oil and natural gas resources specific to particular regions, changes in future oil prices can produce very different results. Relatively higher oil prices push investment toward oil projects and away from natural gas projects, and relatively lower oil prices typically produce the opposite effect. In regions where oil and natural gas resources do not tend to be comingled, such as Australia, higher oil prices increase oil production without much effect on natural gas production. However, in regions with comingled oil and natural gas resources, such as Brazil, the competition is more complex with less straightforward results. We model production under three oil price cases in Australia and Brazil to illustrate how the combination of resource configurations and price incentives result in different production projections.
Oil and natural gas reserves
Oil and natural gas are fossil fuels that are produced from organic matter and are formed through many of the same decomposition, burial, temperature, and pressure processes. Whether oil or natural gas forms depends on the combination of organic material, heat, and pressure. As a result, oil and natural gas are often found close to one another, and in many cases, they are mixed together in underground deposits.
In a typical production scenario, both oil and natural gas are produced from a single well or field, with one considered the primary product and the other, the secondary. In crude oil production, natural gas is often comingled, saturating the oil, and is released when the pressure and temperature change as the oil is brought to the surface. The same happens in reverse with primarily natural gas production; comingled hydrocarbon liquids are separated from the natural gas, typically at natural gas processing plants. Comingled products are handled in various ways.
Oil has typically been the more valuable commodity, and in some cases, the natural gas produced alongside the oil is simply vented or flared. However, global demand for natural gas is growing, and more natural gas infrastructure is coming online to transport natural gas long distances between supply and demand centers. The increased demand for natural gas creates additional options for oil producers. By producing natural gas as a marketable commodity, producers can get more value from their investments.
Despite the deep interconnection between oil and natural gas resources, limited analysis is available on the relationship between oil price changes and natural gas production. Because of the resource connection, changes in the price of oil can affect the production of natural gas through two primary mechanisms:
- Associated gas production. A rise in oil price can prompt increased oil production, which can raise natural gas production by increasing production of associated gas. Conversely, a decline in oil prices can lead to decreased associated gas production.
- Pure gas production. A decline in oil price can also encourage natural gas production by shifting the relative economics and encouraging producers to shift their resources to pure gas projects. In these projects, the primary product is natural gas, and little associated oil is produced. The opposite is also true: a rise in oil price can shift resources to predominantly oil projects.
The configuration of resources in a region can be such that a change in oil price can trigger both mechanisms, and as a result, the relationship between oil price and natural gas production is difficult to predict.
The Global Hydrocarbon Supply Module
For a more thorough analysis of oil and natural gas production, we developed a new component of the World Energy Projection System Plus (WEPS+) for the International Energy Outlook 2019. The Global Hydrocarbon Supply Model (GHySMo) consists of three modules that reflect the economics of hydrocarbon extraction, transformation (refining), and movement. The three modules produce estimates of
- Natural gas
- Crude oil
- Refined product production, processing, and transport
Unlike our other international tools, GHySMo considers the interrelationship between oil and natural gas resources.
The GHySMo upstream module, which is the only component we used in this analysis, represents the volume of global resources and production as a function of resource extraction costs. The module estimates the annual production of oil, natural gas, and associated products in world regions based on externally determined future oil prices. To support the analytic capabilities of this module, we assembled datasets based on multiple independent sources that describe the world’s oil and natural gas resources, as well as the costs, taxes, investment requirements, and drill rig resources associated with their extraction from the earth.
The activities modeled in the GHySMo upstream module include
- Projected expenses to bring products to surface for each of the world’s oil and natural gas fields
- Estimated physical and chemical characteristics for oil and natural gas fields (depth, chemical composition, etc.)
- Expenses for transportation, operation, and maintenance, based on the nations’ discount rates and contemporaneous oil prices
- New discoveries, based on forecast investments as well as past and expected future activity
- Estimated drilling requirements to develop parcels within reservoirs
- Annual production profiles (fractions of total production accomplished each year) based on build out and decline rates for any given development
- Sales income and net present value for each parcel (a parcel is a subunit of an oil or natural gas field within a certain capital cost range)
- Scheduled activation (drilling) of parcels, as well as the associated production years
- Production rates based on price forecasts
- Commodities produced each year
Based on this process, the GHySMo upstream module estimates the future production of both oil and natural gas, and it includes the coproduced products of each. By running the upstream module with different oil prices, which vary the sales income and net present value for all crude oil types, we can estimate the change in crude oil and natural gas volumes. The difference between any two crude oil price paths and the accompanying changes in natural gas production allow us to estimate the impact of oil price changes on natural gas production.
For this analysis, we consider oil and natural gas accumulations in three gas-oil-ratio (GOR) categories using physical characteristics of saturated and non-saturated oils, rock properties such as porosity and permeability, and resulting buoyancy:
- Fields with a GOR lower than two are primarily oil with little associated gas
- Fields with a GOR higher than five are mostly natural gas
- Fields with GORs between these two values range from gassy oil at the low end of the scale to wet gas at the upper end
Tight formations can also span the range of GOR, but they have generally higher GOR because they have low permeability and so natural gas can move through the formations more easily than oil. Further, to produce oil, most low-permeability rocks rely on that natural gas movement as a drive mechanism for oil production.
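As a concrete illustration of this binning, the short Python sketch below classifies a field by its GOR using the thresholds given above. The function name and labels are illustrative only; they are not part of GHySMo.

```python
def classify_by_gor(gor):
    """Bin an accumulation by gas-oil ratio (GOR), per the thresholds above."""
    if gor < 2:
        return "primarily oil, little associated gas"
    elif gor > 5:
        return "mostly natural gas"
    else:
        return "mixed: gassy oil (low end) to wet gas (high end)"

# Classify a few hypothetical fields
for gor in [0.5, 3.0, 8.0]:
    print(f"GOR {gor}: {classify_by_gor(gor)}")
```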
The GHySMo database contains global resource data, excluding the United States.[5,6] Globally, oil is almost exclusively available at GORs lower than five, and natural gas is available both at GORs lower than five and higher than five (Figure 1). With greater resolution, the data show that most oil is available at GORs lower than one (Figure 2). Crude oil resources gradually decrease at a GOR higher than one, and associated gas also increases as the GOR increases. Each extracted unit progressively contains more associated gas. GOR is a continuum where an increase in GOR corresponds to a change from primarily oil wells with associated natural gas production to primarily natural gas wells with associated liquid production.
Different regions have resources at different GORs
This paper presents a sensitivity analysis focusing on two countries: Brazil and Australia. We selected these two countries because their oil and natural gas resources are very different.
Australia is rich in fossil fuel reserves, and consequently it exports significantly more energy than it imports. Almost all of Australia’s conventional gas resources (about 95%) are located in the North West Shelf (NWS) offshore in the Carnarvon, Browse, and Bonaparte Basins and in the Gippsland Basin in the southeastern region. By contrast, Australian petroleum production is largely onshore and has been declining since its peak in 2000 (Figure 3). Production from new, smaller offshore oil fields generally lasts less than 10 years and does not offset the production declines of larger, mature onshore fields.
Geoscience Australia reported economic reserves, which include proved and probable reserves, of nearly 5.4 billion barrels (22% crude oil, 52% condensates, and 26% liquid petroleum gas) in 2014. For natural gas, the same organization estimated total proved plus probable commercial reserves at 114 Tcf (61% conventional natural gas, 38% coal bed methane, and less than 1% tight gas) as of 2014. GHySMo, as used in this analysis, also includes unconventional and yet-to-be-found resources and reserves, including resources that would be available with a higher price or lower cost.
The geographic split between onshore oil production and offshore natural gas production is consistent with resources that are largely pure gas resources and pure oil resources. Each can independently rise and fall without bringing along much associated secondary product. Australian coalbed methane resources are also considered pure gas without associated liquids.
A chart of Australia’s resources by GOR (Figure 4 and Figure 5) demonstrates this characterization of resources. The geographic distinction means that most Australian oil reserves and resources fall into a low GOR category, and the natural gas resources fall into a higher GOR category. Almost all natural gas is at GORs higher than 5, and almost all oil is at GORs lower than 5. As a result, the resources can mostly be extracted independently.
Unlike Australia, Brazil is a growing crude oil and natural gas producing region. EIA estimates that Brazil had 12.8 billion barrels of proved oil reserves in 2019. More than 94% of Brazil’s oil reserves are offshore, and 80% of all reserves are offshore near the state of Rio de Janeiro. EIA estimates that Brazil had 13 Tcf of proved natural gas reserves, most of which (84%) are likewise offshore, with 73% of offshore reserves concentrated off the coast of Rio de Janeiro. Relative to Australia, Brazil contains more oil reserves at GORs lower than two (Figure 6 and Figure 7), and the associated natural gas volumes from these fields dominate Brazil’s natural gas production. As a result, unlike in Australia, Brazil’s natural gas production rises with crude oil production (Figure 8).
Price Case Analysis
The analysis presents three oil price cases that generate production estimates for Australia and Brazil from 2018 to 2050 (Figure 9):
- Reference: The IEO2019 Reference case oil price path
- Double-Price: Twice the IEO Reference price in all years from 2018 to 2050
- Half-Price: A 50% reduction in the IEO Reference price path from 2018 to 2050
We developed these crude oil price cases in order to explore the basic sensitivities of natural gas production at the regional level. These cases are not intended to serve as predictions of a probable future. Natural gas prices remain constant across cases.
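To make the relationship between the cases concrete, the sketch below scales a hypothetical Reference price path to produce the Double-Price and Half-Price paths. The dollar values are invented for illustration and are not EIA projections.

```python
# Hypothetical Reference oil price path in dollars per barrel
# (values invented for illustration; not EIA projections).
reference = {2020: 60.0, 2030: 75.0, 2040: 90.0, 2050: 100.0}

# The sensitivity cases scale the Reference path uniformly in all years.
double_price = {year: 2.0 * price for year, price in reference.items()}
half_price = {year: 0.5 * price for year, price in reference.items()}

# Side-by-side comparison; feeding each path to the upstream module and
# differencing the resulting gas output isolates the oil-price effect.
for year in sorted(reference):
    print(f"{year}: half={half_price[year]:6.1f}  "
          f"ref={reference[year]:6.1f}  double={double_price[year]:6.1f}")
```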
The results of this sensitivity analysis reflect only the effects of changes in price on crude oil and natural gas supply. Absent other factors, within the WEPS+ framework, a price change would generate a demand response that would affect overall energy consumption and other energy supply sources. Although we accounted for these effects within the overall WEPS+ system, we did not include them in this analysis.
In Australia, the region with the sharper geographic split between oil and natural gas, crude oil price changes do not produce a strong effect on natural gas production. Changes in oil price and development have only a minor effect on natural gas production volumes because the region produces few associated resources, or those resources have GORs lower than five.
In the Half-Price case, oil production decreases relative to the Reference case (Figure 10). In the Double-Price case, production increases in the near term, but cumulative oil production by 2050 ends up about the same as in the Reference case. Even with doubled prices, Australia runs out of profitable oil at near-Reference levels of cumulative production. Production is advanced in time, but few new resources are produced.
As a result of the resource geography and the few resources containing associated gas with liquid production, natural gas production changes very little in Australia when the oil price changes (Figure 11). Australia’s physically distinct resources result in little coproduction. Regardless of the change to oil price, the resulting change in onshore oil production does not affect the offshore natural gas production. The onshore oil and offshore gas do not compete for the same physical extraction gear. As a result, in Australia, the three price cases are virtually indistinguishable in their natural gas production.
As discussed previously, oil price changes across cases can have several consequences at a regional level, depending on the resource types and characteristics involved. In the case of Brazil, higher and lower oil price paths generate multiple impacts.
Unlike Australia, Brazil has large amounts of crude oil resources with associated natural gas, leading to a high level of dependency between total natural gas production and crude oil production. However, Brazil also has large resources of pure gas, and projects targeting that resource are generally independent of crude oil price changes. This collection of hydrocarbon resources allows producers in Brazil to rank and compete crude oil and pure gas development against each other, an opportunity not available to Australia. As a result, the modeled results of varying crude oil price assumptions in Brazil are more complex than those seen in Australia.
The Double-Price case projects more oil production than the Reference case, which in turn projects more production than the Half-Price case (Figure 12).
Across the three cases, assumed natural gas prices are the same, yet the natural gas production varies (Figure 13) to reflect the competition for rigs in pure gas plays, as well as the role of associated gas.
In the Half-Price case, which lowers oil prices, pure gas projects are preferred, and projected natural gas production increases through the entire projection period. Because natural gas prices are unchanged from the Reference case, the lower oil prices increase the relative value of the pure gas projects, and these projects begin development earlier than expected. At the same time, lower crude oil production reduces the development of associated natural gas, offsetting some of the overall natural gas production growth.
In the Double-Price case, the higher oil price leads to a preference for oil projects, with two different consequences during the projection period. All other project economics being equal, the projects with the most associated gas are selected early in the projection period because the natural gas is viewed as a bonus to the oil production. This preference indicates that additional investment is diverted to natural gas processing and shipping to take advantage of the bonus gas. In the near- and mid-term, the increased oil price leads to substantial increases in crude oil production and accompanying increases in natural gas production.
With time, however, this effect diminishes, in part because another effect of doubling oil price is an acceleration of projects, as well as developing what would otherwise have been less economic projects sooner. As a result, cumulative natural gas production is higher at an earlier stage but slows during the mid-to-late projection period. With the higher GOR projects that were selected and developed sooner, fewer projects are available later in the projection period to sustain those higher rates of natural gas production. However, because of the raised oil prices, oil projects continue to take precedence in development capital and infrastructure, and little pure gas development occurs to mitigate the declining volumes of oil projects with associated gas.
Oil and natural gas production is increasingly intertwined. Through these two case studies, we have shown how changes in oil price can produce different results in natural gas production depending on the local resource availability and configuration. For example, associated gas production in oil fields where drilling levels are dictated by the oil price can affect drilling levels in areas where drilling is dictated by the natural gas price. Our GHySMo tool helped us estimate these changes in a global context and showed how production in Australia and Brazil is affected differently based on the underlying resources.
- “EIA projects a nearly 50% increase in world energy usage by 2050, led by growth in Asia,” Today in Energy, September 24, 2019.
- “Australia is on track to become world’s largest LNG exporter,” Today in Energy, August 12, 2019.
- For more information see the Global Hydrocarbon Supply Model Fact Sheet.
- An upstream GHySMo parcel represents that volume of an oil or natural gas that may be produced from a single resource when the unit price is raised from one user-defined cost point (e.g., $30 per barrel) to the next (e.g., $35 per barrel).
- In this paper, we generally refer to accumulations of extractable commodities as “resources and reserves.” While the distinction is not critical to this analysis, further details on these categories is available in “Oil and natural gas resource categories reflect varying degrees of certainty,” Today in Energy, July 17, 2014.
- U.S. resources, which are analyzed in greater depth, are available via EIA’s National Energy Modeling System (NEMS). Documentation of the Oil and Gas Supply Module.
- Geoscience Australia. Australian Energy Resources Assessment.
- Geoscience Australia. Australian Energy Resources Assessment.
- EIA International Energy Statistics Database.
- EIA Country Analysis Brief: Brazil.
- EIA International Energy Statistics Database.
- Resources were first discovered in Brazil’s offshore Santos Basin by state-controlled Petrobras, the dominant participant in Brazil’s oil sector. Further exploration in the Santos, Campos, and Espirito Santo Basins revealed an estimated 5 billion to 8 billion barrels of oil equivalent in a presalt zone 18,000 feet below the ocean surface.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9554724097251892,
"language": "en",
"url": "https://www.fullyprepped.ca/blog-posts/learn-how-to-create-a-budget-with-rbc-campus-advisors-and-prepped",
"token_count": 666,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.037841796875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9f836cd9-0d4c-40d8-8639-414e94af7563>"
}
|
The Prepped Team
February 20, 2020
In its simplest terms, a budget is a tool that helps you manage and track your money. To utilize a budget to the best of your ability, you will need to plan and consider all of the factors that will ultimately impact how much you can save. After all, being able to save money for your future is the real end goal when it comes to budgeting. A common misconception of budgeting is that you need to have a lot of money to even consider being able to save. But the truth is: it’s not about how much money you have, it’s about how you manage it.
Creating a plan for your financial future can feel overwhelming, but it doesn’t have to feel that way. In this Budgeting 101 Webinar, Prepped partnered with RBC Campus Advisors, Fabiana Sutter and Jared Estrada, to break down everything you need to know about budgeting to set yourself up for financial success.
Income: The dollar amount that comes to you through wages, allowance or other means; money that belongs solely to you.
Expenses: The amount of money you owe for things like groceries, rent, bills, commuting and more.
Disposable income: The balance left after all of your non-negotiable expenses are covered.
Savings: An estimate of what you can afford to set aside based on timelines that relate to your specific budgeting needs.
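For illustration with invented numbers: suppose your monthly income is $2,500 and your non-negotiable expenses (rent, groceries, bills and commuting) total $1,900. The remaining $600 is what you can split between wants and savings; setting aside even half of it builds $3,600 in savings over a year.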
When you are living within your budget it means you likely have less stress and will be able to prioritize how much money will go toward your wants versus your needs. If you find yourself in a situation where you are constantly going over your budget, you may experience additional stress and begin overspending. Understanding the difference and how to navigate it is key to budgeting success.
Trimming down your expenses will be easier when you can identify what your essential items are so you can build them into your budget. These items are generally things you can’t live without like a roof over your head, clothing, and food. It also includes the non-negotiables in your budget, like bills and loans you have to pay every month.
The first goal you should have in mind is paying down any debt you have acquired from loans, credit cards and other financial obligations. Once that is taken care of, you will be able to identify where you can save money. And once you know how much money you can save each day, week or month, you will be able to set goals that give you the ability to pay for a dream vacation or a large ticket item you otherwise wouldn’t be able to afford.
These specific callouts are just the basics to help get you started. Watch the full Budgeting 101 webinar to learn additional information about how to manage your money like where to find extra money, ways to monitor your money and tips on how to build a personalized template for budgeting tracking. Booking an appointment with a Financial Advisor is a smart way to connect with an expert who can answer your questions in detail, and help you build a plan. And don’t forget, Prepped offers free tools that can help you further improve your networking skills. Sign up with Prepped and immediately gain access to resources that will help you grow your career.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8941754102706909,
"language": "en",
"url": "https://www.neuralnine.com/plot-candlestick-charts-in-python/",
"token_count": 1136,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0235595703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d67f2619-a7e4-4c95-881e-58c5a8b215f4>"
}
|
Candlestick charts are one of the best ways to visualize stock data because they give us very detailed information about the evolution of share prices. In fact, they give us information about four major values at the same time. In this tutorial, we are going to implement a candlestick chart visualization using Python because it is a great choice for financial analysis and data science. This is due to the powerful libraries like Matplotlib, NumPy and Pandas.
We are going to write a little script that plots a professional candlestick chart for a specific company at the stock market. If you are not familiar with the concept of that diagram type, take a look at the following image. There you can see two different types of candlesticks.
As you can see, a candlestick can be either positive (green) or negative (red). The former means that the close price is higher than the open price and the latter means the opposite. We can gather four different values out of one candlestick:
- The highest share price of that day (top point of the white line)
- The lowest share price of that day (bottom point of the white line)
- The share price when the market opened (top point of the red area or bottom point of the green area)
- The share price when the market closed (top point of the green area or bottom point of the red area)
Additionally, we can also look at the price span of the respective day which is visualized in the colored area. Thus, this type of chart provides a lot of information.
For our script we will need to import a couple of libraries. Most of them are external and need to be installed.
Let us take a quick look at each of these:
- datetime: We will use this one to define our desired time span
- matplotlib.dates: This library will convert our dates into the necessary number format
- matplotlib.pyplot: Will be used for displaying our chart in the end
- pandas_datareader: The module that will load the desired stock data
- candlestick_ohlc from mpl_finance: Our main library for plotting
Except for the datetime module, none of these libraries is included in core Python. This means that you will need to install them with pip.
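The import block itself is not reproduced on this page; a reconstruction consistent with the list above might look like this (the aliases are an assumption):

```python
import datetime as dt
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import pandas_datareader as web
from mpl_finance import candlestick_ohlc
```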
Preparing The Data
In order to plot our data properly, we will first need to load it and to set it up. We will start by defining our desired time span.
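The snippet is omitted here; based on the description that follows, it could look like:

```python
# Time span: January 1st, 2010 up to the current date
start = dt.datetime(2010, 1, 1)
end = dt.datetime.now()
```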
Our start date is January 1st, 2010 and our end date is defined as the current date. This means that we are looking at the data from our start date up until now. The next step is the definition of our data reader.
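A plausible reconstruction of the data reader definition:

```python
# Load daily AAPL price data from the Yahoo Finance API
data = web.DataReader('AAPL', 'yahoo', start, end)
```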
Here, we specify that we want to use the Yahoo Finance API in order to download the data for the ticker symbol AAPL (which is Apple). The time span we are looking at is defined by start and end. What we get is a data frame that contains our requested values. We can print out the first few rows to see the structure.
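For example:

```python
print(data.head())
```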
Right now, we have two columns that we don’t need for our chart. These are Volume and Adj Close. For our candlestick chart, we need the values Open, High, Low, and Close in that exact order. Also, we will need Date for our x-axis. In pandas, it is quite simple to select and reorder columns in a data frame.
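A reconstruction of the column selection described below:

```python
# Keep only the four columns the candlestick function needs, in order
data = data[['Open', 'High', 'Low', 'Close']]
```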
Basically, we are just selecting the four relevant columns in the right order and replacing our current data frame. Notice that we use double square brackets here.
Now, we have our columns in the right order but there is still a problem. Our date doesn’t have the right format and since it is the index, we cannot manipulate it. Therefore, we need to reset the index and then convert our datetime to a number.
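The corresponding snippet, reconstructed from the description below:

```python
# Turn the Date index back into a column and convert it to
# matplotlib's numeric date format
data.reset_index(inplace=True)
data['Date'] = data['Date'].map(mdates.date2num)
```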
So what we do is reset the index, which turns Date back into an ordinary column, and then map the date2num function onto that column. Now our datetime values are converted into numbers that matplotlib can deal with, and we can start with the plotting.
Plotting The Data
For the final step, we will define our plots and visualize the data we have prepared.
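A reconstruction of the plotting code described below (the exact width and colors are assumptions):

```python
# Define the subplot (axis) and draw the candlesticks
ax = plt.subplot()
candlestick_ohlc(ax, data.values,
                 width=0.5, colorup='green', colordown='red')
ax.xaxis_date()  # interpret x-axis values as dates
ax.grid(True)
```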
First, we define a new subplot (also called axis) for our data. Then, we use the candlestick function, in order to plot our values. Also, we define the colors and the width of the sticks and we put the dates on the x-axis and turn on the grid.
Now, to make our chart look a bit more professional, we will make some changes in its style.
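The styling changes described below might be reconstructed as follows (the title text and exact color values are assumptions):

```python
ax.set_axisbelow(True)  # place the grid below the candlesticks
ax.set_title('AAPL Share Price', color='white')
ax.figure.set_facecolor('#121212')  # dark gray figure background
ax.set_facecolor('#121212')         # dark gray plot background
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
plt.show()
```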
What we do here first is to place the grid below the candlestick chart itself. After that we define a title for our plot. Then we set the background of the figure but also of the plot itself to a dark gray color. Of course we also need to set the color of the axis-ticks to white. Finally, we show our plot. After these style changes, the end result looks like this:
Of course, the more you zoom in, the more you will be able to see the individual candlesticks. This is a great way to visualize four different values in a single chart.
I hope this tutorial was helpful to you! You can check out the detailed YouTube video for this tutorial. Also you can download the source code or follow NeuralNine on Instagram.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9111271500587463,
"language": "en",
"url": "http://www.bayareaeconomy.org/issue/science-innovation/",
"token_count": 214,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.010009765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1f407858-e76d-4301-8764-be1806f88325>"
}
|
California’s stock in trade is innovation. From ideas to capital to talent, the critical mass exists in California’s complex business networks to deliver almost any concept to its fullest potential. As California produces success in innovations impacting every industry, industries have responded with new approaches to scouting and investing in the startup universe. In addition, California’s diverse economy and history of innovation make it well positioned to capture future growth in many manufacturing sectors, but state and local governments will need more targeted policy tools to stimulate commercialization of new products, close the workforce gap, and incentivize more manufacturers to locate within the state. While the offshoring of production and automation technology have fundamentally transformed the industry, high-skilled manufacturing jobs are essential to a balanced, competitive economy.
Science and Innovation
Ecosystem for Entrepreneurship
This analysis documents the economic contributions of companies founded by UC Berkeley students, alumni and faculty and estimates a minimum economic impact of entrepreneurial activity associated with UC Berkeley. It was prepared by the Economic Institute for UC Berkeley, which funded the project.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8995321393013,
"language": "en",
"url": "https://californiapolicycenter.org/a-version-of-prosperity-that-california-ought-to-show-the-world/",
"token_count": 1916,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2099609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e636eeab-c94c-40c7-a991-1a256f1efc77>"
}
|
As reported earlier this month in the Los Angeles Times, California policymakers are expanding their war on “climate change” at the same time as the rest of the nation appears poised to reevaluate these priorities. In particular, California’s legislature has reaffirmed the commitment originally set forth in the 2006 “Global Warming Solutions Act” (AB 32) to reduce the state’s CO2 emissions to 40% below 1990 levels by 2030.
Just exactly how California policymakers intend to do this merits intense discussion and debate. As the Los Angeles Times reporter put it, “The ambitious new goals will require complex regulations on an unprecedented scale, but were approved in Sacramento without a study of possible economic repercussions.”
At the risk of providing actual quantitative facts that may be extraordinarily challenging for members of California’s legislature, most of whom have little or no formal training in finance or economics (ref. California’s Economically Illiterate Legislature, 4/05/2016), the following chart depicts data that helps explain the futility of what California’s citizens are about to endure:
CALIFORNIA ENERGY CONSUMPTION, POPULATION, GDP, AND CO2 EMISSIONS
Comparisons to the rest of the USA, China, India, and the world
(For links to all sources for this compilation, scroll down to “FOOTNOTES”)
The first row of data in the above table is “Carbon emissions,” column one shows California’s total annual CO2 emissions including “CO2 equivalents” – bovine flatulence, for example, is included in this number – expressed in millions of metric tons (MMT). As shown, in 2014 (the most recent year with complete data available) California’s CO2 emissions were down to 358 MMT. That’s 73 MMT lower than 1990, when they were 431 MMT. While this is a significant reduction, it is not nearly enough according to California’s state legislature. To hit the 40% reduction from 1990 levels by 2030, CO2 emissions still need to be reduced by another 100 MMT, to 258 MMT. That’s another 28% lower than they’ve already fallen. But California is already way ahead of the rest of the world.
As shown on row 8 of the above table, California’s “carbon intensity” – the amount of CO2 emissions generated per dollar of gross domestic product – is already twice as efficient as the rest of the U.S., twice as efficient as the rest of the world, more than three times as efficient as China, and nearly twice as efficient as India. We’re going to do even more? How?
A few more data observations are necessary. As shown, California’s population is 0.5% of world population. California’s GDP is 2.0% of the world GDP. California’s total energy consumption is 1.4% of world energy consumption, and California’s CO2 emissions are 1.0% of the world’s total CO2 emissions.
These stark facts prove that nothing Californians do will matter. If Californians eliminated 100% of their CO2 emissions, it would not matter. On row 1 above, observe the population of China – 1.4 billion; the population of India – 1.3 billion. Together, just these two developing nations have seventy times as many people as California. The per capita income of a Californian is four times that of someone living in China; nine times that of someone living in India. These nations are going to develop as much energy as they can, as fast as they can, at the lowest possible cost. They have no choice. The same is true for all emerging nations.
So what is really going on here?
If California truly wanted to set an example for the rest of the world, it would be developing clean, safe, exportable technologies for nuclear power and clean fossil fuel. Maybe some of California’s legislators should take a trip to Beijing, where coal-generated electricity and poorly formulated gasoline create killer fogs that rival those of London in the 1900s. Maybe they should go to New Delhi, where diesel generators supplement unreliable central power sources and raise particulate matter to 800 PPM or worse. Maybe they should go to Kuala Lumpur, to choke on air filled with smoke from forests being incinerated to grow palm oil diesel (a “carbon neutral” fuel).
According to the BP Statistical Review of Global Energy, in 2015, renewables provided 2.4% of total energy. Hydroelectric power provided 6.8%, and nuclear power provided 4.4%. Everything else, 86% of all energy, came from fossil fuel. In the real world, people living in cities in emerging nations need clean fossil fuel. So they can breathe. Clean fossil fuel technology is very good and getting better all the time. That is where investment is required. Right now.
Instead, purportedly to help the world, California’s policymakers exhort their citizens to accept a future of rationing enforced through punitive rates for energy and water consumption that exceed approved limits. They exhort their citizens to submit to remotely monitored, algorithmic management of their household appliances to “help” them save money on their utility bills. Because supposedly this too averts “climate change,” they restrict land development and exhort their citizens to accept home prices that now routinely exceed $1,000 per square foot anywhere within 50 miles of the Pacific coast, on lots too small to even put a swing set in the yard for the kids. They expect their citizens to avoid watering their lawns, or even grow lawns. And they will enforce all indoor restrictions with internet enabled appliances, all outdoor restrictions with surveillance drones.
This crackdown is a tremendous opportunity for a handful of high-technology billionaires operating in the Silicon Valley, along with an accompanying handful of California’s elites who benefit financially from politically contrived, artificial resource scarcity. For the rest of us, and for the rest of the world, at best, it’s a misanthropic con job.
The alternative is tantalizing. Develop clean fossil fuel and safe nuclear power, desalination plants, sewage recycling and reservoirs to capture storm runoff. Loosen restrictions on land development and invest in road and freeway upgrades. Show the world how to cost-effectively create clean abundance, and export that culture and the associated enabling technologies to the world. Then take credit as emerging nations achieve undreamed of prosperity. With prosperity comes literacy and voluntarily reduced birthrates. With fewer people comes far less pressure on the great wildernesses and wildlife populations that remain, as well as fisheries and farmland. And eventually, perhaps in 25 years or so, renewables we can only imagine today, such as nuclear fusion, shall come to practical fruition.
That is the example California should be showing to the world. That is the dream they should be selling.
* * *
Ed Ring is the vice president of policy research for the California Policy Center.
Invest California’s Pension Funds in Water and Energy Infrastructure, November 14, 2016
California Needs Infrastructure, and Unions Should be Helping, September 6, 2016
California’s Misguided Water Conservation Priorities, August 17, 2016
How Gov’t Unions and Crony Capitalists Exploit Global Warming Concerns, June 21, 2016
The Alternative to Crony Capitalism and Phony Shortages, June 15, 2016
Government Unions and the Financialization of America, May 24, 2016
Investing in Infrastructure to Lower the Cost of Living, March 14, 2016
Why Aren’t Unions Fighting California’s Bullet Train Boondoggle?, November 24, 2015
Libertarians, Government Unions, and Infrastructure Development, May 5, 2015
Desalination Plants vs. Bullet Trains and Pensions, April 7, 2015
Raise the Minimum Wage, or Lower the Cost of Living?, March 31, 2015
The Abundance Choice, December 24, 2014
An Economic Win-Win For California – Lower the Cost of Living, December 3, 2014
How to Create Affordable Abundance in California, July 1, 2014
California’s Green Bantustans, May 21, 2014
The Unholy Trinity of Public Sector Unions, Environmentalists, and Wall Street, May 6, 2014
World Population Clock:
Directorate-General of the European Commission:
US Census Bureau – California:
U.S. Energy Information Administration:
United Nations Framework Convention on Climate Change:
Total Energy Consumption
BP Statistical Review of World Energy:
California per capita energy consumption:
US Dept of Commerce – Bureau of Economic Analysis:
Note: There are only minor differences between the nominal US GDP and PPP (purchasing power parity) US GDP:
https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal). With other nations, such as China and India, however, the differences are significant. Using purchasing power parity GDP figures for comparisons yields ratios that more accurately reflect energy intensity and carbon intensity among nations.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9387092590332031,
"language": "en",
"url": "https://cloudfriday.com/what-does-ai-mean-for-small-businesses/",
"token_count": 1569,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.060546875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a21e460c-9bb5-4089-869c-e17b2ec4bbc2>"
}
|
What Does AI Mean for Small Businesses?
The use of machine learning, deep learning, and artificial intelligence (AI) are on the rise, especially in consumer-related industries. Some trendsetters, such as Amazon, Google, Microsoft, and Netflix, are quickly integrating these as a way of improving the customer experience. These technological advancements provide a more efficient way of getting things done.
Although the big companies have already embraced machine learning and deep learning, most small businesses are yet to join them. While these technologies seem complicated, there are already numerous software-as-a-service solutions available that make harnessing the power of your company’s data simple. Before making these tools an essential part of your business processes, it’s crucial to gain an understanding of how they work and what’s behind them.
What Is Machine Learning?
Machine learning is simply an application of artificial intelligence. It is a combination of statistics and computer science concepts, concerned with developing computer programs that access data and use it to learn by themselves. Simply put, machine learning gives systems the ability to work with increasing accuracy and effectiveness, learning from past data without being manually programmed.
How Does Machine Learning Work?
Machine learning uses computational methods to identify any patterns in the input data and make predictions on future outcomes. The process begins with providing input data for the system. This could be through observation, instruction or direct experience. The machine learning algorithms then learn from this data and make decisions based on the patterns they have learned from the data.
There are two major types of techniques that machine-learning algorithms use:
Supervised machine learning focuses on training a model to predict future outcomes by using a known set of input data and their known output. The model makes the predictions based on the given past evidence.
Unsupervised machine learning is a more complicated process that is currently used less than supervised learning but accounts for much of the excitement surrounding AI. Unsupervised learning is used for datasets that contain input data for which the response, or result, is not already known.
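To make the distinction concrete, here is a minimal, hypothetical sketch using scikit-learn, which the article itself does not mention: the supervised model maps known inputs to known labels, while the unsupervised model finds groupings in unlabeled inputs.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy inputs: six one-dimensional observations
X = [[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]]

# Supervised: the outputs (labels) for each input are known in advance
y = [0, 0, 0, 1, 1, 1]
clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.5], [10.5]]))  # predict labels for new inputs

# Unsupervised: same inputs, no labels; the model discovers the groups
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)  # cluster assignments learned from the data alone
```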
How Can Small Businesses Benefit From Machine Learning?
Many small businesses will disregard the use of machine learning. Although it may appear a bit complex, machine learning can be incredibly powerful when it comes to boosting productivity in many areas of your organization. Here are some of the ways that small firms can incorporate the use of machine learning to improve customer experience.
Providing excellent customer service is a goal for many businesses. However, this may not be possible as customers have different needs. By using machine learning, you can enhance your business’ customer care. One solution for this is Adobe’s Experience Cloud, which claims “You can rely on data science to analyze user behavior, preferences, feedback, and characteristics to predict behavior and deliver unique, personalized experiences. This helps you increase engagement and offer dynamic, one-to-one touchpoints with ease.”
Additionally, the integration of natural language processing and historical data on customer care, machine learning algorithms can learn from business interactions and provide any answers that a customer may need right away. From automated livechats to online customer knowledge portals, companies are exploring new ways to make the experience easier for their customers. With this system in place and operating effectively, customer care representatives are needed only in extreme cases.
Companies, both small and large, lose a lot of money through fraudulent activities. By using machine learning, small businesses can prevent business disasters through fraud. The machine-learning model studies the historical data on transactions and social network information and can spot any anomalies. One solution for Ecommerce Business Owners is Fraugster, a fraud-detection product that claims “you will never pay for a chargeback again.”
Most equipment nowadays uses built-in IoT sensors. Machine-learning programs analyze data from sources such as fuel gauges and tires, and they can also predict future outcomes based on readings like temperature and humidity.
What Is Deep Learning?
Although both machine learning and deep learning can be classified as AI, deep learning is one of the many techniques used in machine learning. Like the latter, deep learning is used to make predictions of the future outcomes based on the input data provided. However, deep learning is more accurate.
How Does Deep Learning Work?
Deep learning models are usually trained by using a large set of data mainly from images, text or voice. The models also use artificial neural network architectures. From these, the models learn how to perform classification tasks with utmost accuracy that may at times exceed human performance.
The neural network architectures usually contain a large number of hidden layers. ‘Deep’ refers to these layers, which can number as many as 150, whereas traditional neural networks contain a maximum of about 3.
Many deep learning techniques eliminate the need for manual feature extraction, meaning that the models can extract features directly from 2D images without requiring you to identify those features beforehand. This often makes deep learning more accurate than traditional machine learning, and hence more suitable for complex problems such as computer vision and natural language processing.
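As an illustration of what stacking hidden layers looks like in code, here is a minimal, hypothetical Keras sketch; the article does not reference any particular framework, and a genuinely “deep” model would simply extend this pattern to many more layers.

```python
import tensorflow as tf

# A small fully connected network with three hidden layers; deep
# architectures extend this same pattern to dozens of layers or more.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),  # hidden 1
    tf.keras.layers.Dense(64, activation='relu'),                       # hidden 2
    tf.keras.layers.Dense(32, activation='relu'),                       # hidden 3
    tf.keras.layers.Dense(10, activation='softmax'),  # output probabilities
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.summary()
```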
How Can Deep Learning Help Small Businesses?
As is the case with machine learning, the use of deep learning in small businesses can help improve productivity. Here’s how.
While the most severe data breaches often affect large companies, small companies are still at risk of numerous online threats. Despite even the best security systems, ultimately your network is only as secure as your least computer-savvy employee. Fortunately, solutions exist to help business owners to protect themselves from cyber threats. Darktrace, one of the leading AI cybersecurity firms, uses unsupervised learning to determine the normal behavior for each of your employees, allowing them to identify any event that is out of the ordinary. Deep learning can spot any suspicious behavior or dangerous URL. It isolates any threats that may put valuable data at risk. Many other companies are already utilizing AI for cybersecurity, including Fortinet, Sophos, Symantec, and Cynet.
Small businesses can use machine learning to increase their productivity through automation. Machine-learning programs can handle repetitive tasks such as scheduling, day-to-day organization, and other paperwork. This saves time, increases overall productivity, and can enhance your employee experience.
Many small businesses can benefit from AI in their accounting processes by automating tasks like auditing of expense submissions, risk assessment, and analytic calculations. Solutions like Pegg, for the Sage accounting platform, make this simple. Many of the most popular accounting solutions already utilize some form of artificial intelligence: Quickbooks uses machine learning to automate tasks like invoice categorization and mileage tracking.
For any business, marketing is a crucial strategy. Deep learning programs can help small business owners improve their marketing: by predicting the likely outcomes of different marketing activities, the algorithms can help managers decide which approach to use. Many marketing solutions already use machine learning in their core tools to deliver more personalized experiences to consumers, such as Google Ads and Facebook Ads. Additionally, Salesforce and HubSpot, two of the most widely used CRMs, both make use of AI in their marketing tools.
Wrapping it Up
Deep learning is a subset of machine learning. This means that all deep learning is machine learning, but the converse is not true. The significant difference between the two is that deep learning computes predictions with greater accuracy, sometimes above human capacity. Both are great AI techniques that small businesses can use to boost their productivity.
Are you looking for More Artificial Intelligence Powered Tools for Small Businesses? Check out this list from toprankblog.com.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.935903787612915,
"language": "en",
"url": "https://definitions.uslegal.com/e/electronic-payment/",
"token_count": 550,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.11572265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:58e675e8-90d4-4fb6-aff3-256c8ce19efc>"
}
|
Electronic Payment Law and Legal Definition
Electronic payment is becoming a commonly used payment method in a wide variety of transactions. Many people believe that the electronic payment option offers more convenience, safety, and efficiency than paper-based methods. Electronic payment may be used in transactions such as banking, utility bill payment, tax payment, and consumer purchases, among others.
Electronic payments are generally subject to the same contract laws as more traditional payment methods. In the case of mistakes due to technology failure, the entity being paid usually has its own policies applicable to refunds. The federal Electronic Fund Transfer Act covers some electronic consumer transactions. The Fair Credit Billing Act (FCBA) and Electronic Fund Transfer Act (EFTA) establish procedures for resolving mistakes on credit billing and electronic fund transfer account statements, including:
- charges or electronic fund transfers that you — or anyone you have authorized to use your account — have not made;
- charges or electronic fund transfers that are incorrectly identified or show the wrong amount or date;
- computation or similar errors;
- failure to reflect payments, credits, or electronic fund transfers properly;
- not mailing or delivering credit billing statements to your current address, as long as that address was received by the creditor in writing at least 20 days before the billing period ended;
- charges or electronic fund transfers for which you request an explanation or documentation, due to a possible error.
The FCBA generally applies only to "open end" credit accounts — credit cards, revolving charge accounts (such as department store accounts), and overdraft checking accounts. It does not apply to loans or credit sales that are paid according to a fixed schedule until the entire amount is paid back, such as an automobile loan. The EFTA applies to electronic fund transfers, such as those involving automatic teller machines (ATMs), point-of-sale debit transactions, and other electronic banking transactions.
For transactions covered by the EFTA, certain rules apply to erroneous or unauthorized fund transfers. The rules require that the payor be provided with periodic statements, and the payor has a duty to report errors within 60 days of the statement containing the error. Financial institutions must investigate the claim, and if funds are due to the payor, they must be returned within 10 days, along with any applicable interest.
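As a rough illustration of that timeline (a sketch of the statutory windows only, not legal advice), the deadlines can be computed directly:

```python
# Sketch of the EFTA error-resolution timeline described above.
# Illustrative only; not legal advice.
from datetime import date, timedelta

def efta_deadlines(statement_date: date, claim_date: date) -> dict:
    report_by = statement_date + timedelta(days=60)  # consumer's window
    refund_by = claim_date + timedelta(days=10)      # institution's window
    return {
        "report_by": report_by,
        "claim_timely": claim_date <= report_by,
        "refund_due_by": refund_by,
    }

print(efta_deadlines(date(2021, 3, 1), date(2021, 4, 10)))
```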
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9756966233253479,
"language": "en",
"url": "https://wanttoknowit.com/what-is-a-hedge-fund/",
"token_count": 286,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.01556396484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:50215fcb-7d40-47b1-9b4e-2b5f536eb353>"
}
|
There are many different types of investment funds designed to increase personal wealth. One of the fastest-growing types is the hedge fund. The first hedge fund was created by Alfred W. Jones in 1949. Today, it is estimated that almost $2 trillion is invested in hedge funds throughout the world. Unlike many other investments, a hedge fund is designed to increase the investment under any financial conditions. Continue reading to find out more about hedge funds.
What is a hedge fund?
A hedge fund is similar to most other investment funds in that money is invested with the goal of increasing it. A hedge fund differs in its investment strategy. Generally speaking, a hedge fund is designed to make money regardless of the current economic situation and to protect investors from market downturns. However, the actual investment strategies can vary considerably from one hedge fund to another. It is not uncommon for people who invest in a more traditional investment fund to also keep some money in a hedge fund as a kind of insurance against a market collapse.
Other types of hedge funds take on more risk and are designed to outperform other types of investments. These are managed much more aggressively than a traditional hedge fund and may include speculative investments. This type of hedge fund has become more common over the last decade and carries far greater risks to the investor. As with any financial decision, it is important to seek qualified financial advice before making any investment.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.942015528678894,
"language": "en",
"url": "https://www.catalyst-commercial.co.uk/services/carbon-offsetting/",
"token_count": 962,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0869140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b182236c-8164-4a15-bfca-2db5b5778fd2>"
}
|
Carbon Offsetting
Carbon offsetting solutions allow companies and individuals to invest in environmental projects around the world in order to offset their own carbon footprints.
The more a business can do to reduce its carbon footprint the better for the planet, but sometimes that’s not possible. So, offsetting your carbon usage is a great way to bridge that gap and help reduce emissions.
Most carbon offsetting projects are based in developing countries and are usually designed to reduce future emissions.
These types of schemes involve the planting of trees which work by soaking up CO2 directly from the air through the planting of woods and forests.
These types of projects also create jobs, improve health and can help sustain communities in rural areas.
In simple terms, offsetting means buying carbon credits equivalent to your business or personal carbon impact.
This allows you to compensate for every tonne of CO2 you emit by ensuring there is one tonne less in the atmosphere.
Regardless of location, one unit of CO2 is the same and has the same climate impact wherever it is emitted. So, the benefit is the same wherever it is reduced or avoided too.
How Does Carbon Offsetting Work
“Frequently, carbon offsetting reduces emissions much faster than you can as a company. Carbon offsetting projects help to combat global climate change as well as caring for local communities”
3 Steps to Carbon Offsetting
calculate your emissions
This is easy for a single transaction like a flight or journey, but more complicated for those businesses looking to offset company-wide carbon.
Before offsetting, you should explore other ways to reduce your carbon on a more permanent basis, then decide how much of your remaining emissions to offset.
choose an offset project
Choose a project that fits your goals and buy carbon credits equivalent to your business impact, compensating for every tonne of CO2 you emit by ensuring there is one tonne less in the atmosphere.
When to offset your carbon emissions
Some organisations choose to offset their entire carbon footprint, while others prefer to target the impact of a specific activity, such as a flight or long journey. Others, for example, offset their gas usage annually to support their sustainable business goals.
Many organisations now provide products with carbon neutrality included in the overall price. In other words, the price of the product covers offsetting the carbon used to make it or to deliver it.
Some logistics companies, for example, provide carbon neutrality on delivered products.
How to calculate your carbon footprint
This is relatively simple for a single transaction, as several websites allow you to calculate the emissions from, say, a single flight or journey and then pay the offset company to reduce emissions elsewhere in the world by the same amount. This would make the journey or flight carbon neutral.
For more complicated requirements, a simple website calculator probably won't work, particularly for organisations looking to offset more than a single transaction.
A more robust method must be defined, and often this becomes an ongoing monthly transaction as opposed to a single offset action.
The cost of carbon offsetting
The cost of carbon offsetting varies widely, although planting trees is among the biggest and cheapest ways to tackle the climate crisis. Offset schemes differ considerably in price, though a fairly typical fee is around £8-£10 for each tonne of CO2 offset.
> So, a typical British family would pay around £60 to offset a year’s worth of gas and electricity use by planting approximately 160 trees a year.
> Whereas a small SME business (14,900 kWh/year) with a carbon footprint of 6.14 tonnes would pay £370 to offset a year’s worth of electricity use by planting approximately 1,000 trees a year.
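A back-of-the-envelope sketch of the arithmetic behind these examples follows. The grid emission factor, credit price, and trees-per-tonne ratio are assumptions that vary widely by country, year, and scheme (the two examples above imply different effective rates), so treat the parameters as placeholders:

```python
# Back-of-the-envelope offset calculator. All three parameters are
# assumptions that vary by country, year, and offset scheme.
EMISSION_FACTOR_KG_PER_KWH = 0.412  # assumed grid carbon intensity
TREES_PER_TONNE = 160               # assumed scheme-specific ratio

def offset_estimate(annual_kwh: float, price_per_tonne_gbp: float = 10.0) -> dict:
    tonnes = annual_kwh * EMISSION_FACTOR_KG_PER_KWH / 1000
    return {
        "tonnes_co2": round(tonnes, 2),
        "cost_gbp": round(tonnes * price_per_tonne_gbp),
        "trees": round(tonnes * TREES_PER_TONNE),
    }

# SME example from the text: 14,900 kWh/year is about 6.14 tonnes
# and roughly 1,000 trees.
print(offset_estimate(14_900))
```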
So for companies looking to improve their carbon credentials and enhance their environmental outlook, these schemes offer low costs and high gains.
Offsetting your business carbon footprint is easier than you might think, and global climate experts agree that forests are an essential part of fighting climate change.
Planting trees to carbon offset
Planting trees is a great way to offset your company's carbon emissions: by investing in sustainable reforestation projects, you support one of the most efficient actions to mitigate climate change.
Through photosynthesis, trees absorb and store carbon and release oxygen back into the atmosphere. By ensuring that the trees planted are native species, you can help preserve the local environment and its biodiversity.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9274736046791077,
"language": "en",
"url": "https://www.coherentmarketinsights.com/market-insight/lng-bunkering-market-1085",
"token_count": 864,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:39629739-60eb-4b3c-9bf6-dfbde400370d>"
}
|
Impact Analysis of COVID-19
The complete version of the report will include the impact of COVID-19 and the anticipated changes to the industry's future outlook, taking into account the political, economic, social, and technological parameters.
Global LNG Bunkering - Insights
Bunkering is the process of transferring fuel, whether conventional marine fuels or LNG, to a vessel or facility. The density of LNG is around half that of heavy fuel oil, which means around 1.8 times the volume of LNG must be bunkered to obtain the same range as with heavy fuel oil.
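The 1.8x figure follows from density and energy content. A quick check with typical published fuel properties (the exact values below are assumptions and vary by fuel specification):

```python
# Why roughly 1.8x the LNG volume is needed for the same range.
# Fuel properties are typical published values, assumed here.
LNG_DENSITY_KG_M3 = 450.0   # roughly half of heavy fuel oil
HFO_DENSITY_KG_M3 = 990.0
LNG_ENERGY_MJ_KG = 50.0     # lower heating value, assumed
HFO_ENERGY_MJ_KG = 40.5     # assumed

# Energy packed into each cubic metre of fuel.
lng_mj_per_m3 = LNG_DENSITY_KG_M3 * LNG_ENERGY_MJ_KG
hfo_mj_per_m3 = HFO_DENSITY_KG_M3 * HFO_ENERGY_MJ_KG

print(f"LNG/HFO bunker volume ratio: {hfo_mj_per_m3 / lng_mj_per_m3:.1f}")
# -> about 1.8
```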
An increasing number of vessels are using LNG owing to the need for cleaner fuels, coupled with stringent government regulations to reduce chemical emissions. This in turn is expected to drive growth of the LNG bunkering market. For instance, a regulation passed by the International Maritime Organization (IMO) in 2012 stated that ships must reduce the sulfur content of their fuel from 4.5% to 3.5%. Various end users are increasingly inclined to replace conventional fuels such as natural gas with LNG, given its significant cost advantage. LNG has high combustion efficiency, is easy to redeploy, and occupies a lower volume than natural gas, translating into easier and relatively cost-effective transportation and storage. This has led to rampant adoption of LNG across various industries and the commissioning of new LNG plants worldwide, in turn creating a highly conducive market for LNG bunkering.
Ship-to-ship LNG bunkering is projected to be the fastest-growing segment over the forecast period. This is attributed to its advantages, such as quick transfer operations and a high capacity of 700-7,500 tons. Moreover, ship-to-ship operations are feasible for all types of vessels. In 2017, the Port of Gothenburg in Europe conducted its first ship bunkering using liquefied natural gas.
The offshore support vessel segment dominated the global LNG bunkering market in 2016, owing to its relative cost-effectiveness in offshore exploration and production activities. In 2013, Harvey Gulf Marine Company invested US$ 400 million to build and operate LNG offshore support vessels and two LNG fueling docks with 0.27 million gallons of LNG storage and the capacity to transfer 500 gallons of fuel per minute.
Europe was the largest revenue contributor to the global market in 2016, accounting for a 42.17% share. This is due to the presence of the largest bunkering hub, Norway, which offers over 18,000 LNG bunker stations. Moreover, rising concern for minimizing environmental impact and growing investment in rebuilding and upgrading LNG infrastructure are anticipated to further boost growth of the LNG bunkering market in Europe. In 2017, European Union (EU) member states approved the European Commission's proposal to invest US$ 24.18 million in seven actions for developing efficient and sustainable transport and energy infrastructure, including that for LNG bunkering.
Asia Pacific is projected to be the fastest-growing market for LNG bunkering, exhibiting a CAGR of 61.8% over the forecast period. This is attributed to growing production activities in the region, coupled with increasing energy demand. According to the International Energy Agency, demand for energy in Southeast Asia increased by over 50% between 2000 and 2013. Petronas invested US$ 1.16 billion in an FLNG project in Malaysia with a capacity of 1.2 MTPA in 2016. The Maritime and Port Authority of Singapore also announced funding of US$ 1.45 million for six vessels under a pilot program in 2017, to test operational procedures and safety protocols for LNG bunkering. This is expected to drive growth of the market.
Figure 1. Global LNG Bunkering Market Share, By Region, 2016 (Source: Coherent Market Insights, 2017)
The major players in the global LNG bunkering market include Royal Dutch Shell Plc., Skangas, ENN Energy, Korea Gas Corporation, Prima LNG, Harvey Gulf International Marine LLC, Bomin Linde LNG GmbH & Co KG, Fjord Line, Crowley Maritime Corporation, and Polskie LNG.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9430776834487915,
"language": "en",
"url": "https://www.customearthpromos.com/eco-blog/solar-power-worlds-cheapest-energy",
"token_count": 474,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.1806640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6feb055b-eadd-48a4-9ccc-e3cac88cc633>"
}
|
Solar Power is Now the World’s Most Affordable Energy Source
This year has been a hot one for solar power. From Elon Musk’s unveiling of SolarCity’s stylish solar roofs to Floridians voting in favor of solar-friendly policies, we’ve seen a lot of positive vibes for solar in 2016. And now, for the first time, solar is becoming the most affordable form of new electricity, cheaper even than wind power.
“Renewables are robustly entering the era of undercutting” energy made by fossil fuels, Bloomberg New Energy Finance chairman Michael Liebreich wrote this week.
Solar is booming in the U.S., which is having a record-breaking year: 4,143 megawatts (millions of watts) of solar generating capacity were added in the third quarter of 2016, according to a new report by the Solar Energy Industries Association and GTM Research.
But the U.S. isn’t the only country benefiting from this solar energy explosion. This is also great news for developing nations, which typically lack the infrastructure that developed countries have dedicated to fossil fuels. As they build their energy infrastructure, they can start with a renewable option like solar that is not only cleaner but much cheaper.
Compared to wind, solar projects now cost less to build in emerging markets. Bloomberg data reveals the average cost of new wind and solar across 58 emerging-market economies, including China, India, and Brazil.
“While solar was bound to fall below wind eventually, given its steeper price declines, few predicted it would happen this soon,” according to Bloomberg Technology. It also predicts that a peak in fossil fuel use for electricity could be reached within the next decade.
Ethan Zindler, head of U.S. policy analysis at BNEF, credits China for the increases in solar power investments. “A huge part of this story is China, which has been rapidly deploying solar and helping other countries finance their own projects.”
With so many record breaking moments for solar power in 2016, we’re excited to see how solar energy will continue to grow and develop in 2017 and are looking forward to the day when we can say goodbye to those yucky fossil fuels for good.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9376832842826843,
"language": "en",
"url": "https://www.empirecenter.org/publications/costing-out-cuomos-green-tax/",
"token_count": 786,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.28125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f723d71b-7e38-4304-b2c5-99db7167ab45>"
}
|
Governor Andrew Cuomo’s new Clean Energy Standard is shaping up to be one of the largest tax hikes in state history.
Beginning in January, the standard will force electricity ratepayers to subsidize costly windmills and solar panel farms, along with money-losing upstate nuclear plants, by requiring utilities and other electricity customers to purchase “renewable energy credits” (RECs) and “zero-emissions credits” (ZECs) from the state.
The standard is part of the governor’s goal of having the state get 50 percent of its electricity from renewables by 2030—up from 23 percent as of 2015. Paradoxically, the PSC assumes that this policy can drive down prices even while mandating higher demand for renewables.
The credits are tied to the amount of power each utility purchases from the electrical grid, on a per-megawatt-hour basis. The state will be using the proceeds of the RECs and ZECs to pay renewable energy generators and upstate nuclear plants while they sell power to the grid at a loss.
In ordering the Clean Energy Standard on August 1, the four members of the Public Service Commission (PSC) set percentages of electricity usage that need to be offset with REC purchases each year beginning in 2017.
The New York State Energy Research and Development Authority (NYSERDA), which will play a key role in implementing the standard, hasn’t yet set a price for RECs. However, equivalent credits in Connecticut, Massachusetts and Rhode Island are trading for more than $40.
New York state’s previous attempt to boost renewable energy generators with subsidies between $20 and $35 fell considerably short of its goals, so it’s reasonable to assume REC prices will top $40.
NYSERDA will add to that amount “administrative costs and fees.” In other words, in true Albany form, the state will be taxing its own tax, by forcing utilities (and their customers) to pay a commission to NYSERDA for its troubles.
Assuming New York REC credits cost $40, and based on the PSC's own projections of how many RECs and ZECs would have to be purchased in the first five years of the program, the Clean Energy Standard will cost New Yorkers $521 million in 2017 alone. The cost will rise each year, reaching $891 million by 2021, for a five-year total of $3.4 billion.
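The arithmetic behind these estimates is simple multiplication of credit volumes by the assumed $40 price. The credit volumes in the sketch below are back-calculated illustrations from the dollar figures above, not the PSC's published projections:

```python
# Sketch of the cost arithmetic: credits required x credit price.
# Credit volumes are illustrative back-calculations, not PSC figures.
REC_PRICE_USD = 40.0  # assumed price per credit, per the text

credits_by_year = {2017: 13.0e6, 2021: 22.3e6}  # hypothetical volumes
for year, credits in sorted(credits_by_year.items()):
    print(f"{year}: ${credits * REC_PRICE_USD / 1e6:,.0f} million")
# 2017 -> $520 million, 2021 -> $892 million: the scale of the
# $521M and $891M estimates quoted above.
```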
It’s a sneaky tax: ratepayers won’t see it as a separate line in their bills. Instead, “supply” costs will go up, as utilities are forced to charge more to make up for their payments to NYSERDA.
Worst of all, Cuomo’s green tax is being levied without any vote by the Legislature—which hasn’t bothered to challenge the administrative process that produced a significant and costly new energy policy without legislative approval.
Including revenues raised from other surcharges, NYSERDA will soon be dispensing over a billion dollars a year outside any sort of transparent budget process.
*NOTE: RECs and ZECs are collected on different fiscal years, with the annual cost of RECs beginning Jan. 1 and ZECs beginning on Apr. 1.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9542554020881653,
"language": "en",
"url": "https://www.oecd.org/canada/oecdhealthataglance2009keyfindingsforcanada.htm",
"token_count": 569,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.19921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c58a68ce-b8c1-4998-8007-eb1803b16461>"
}
|
The OECD’s latest edition of Health at a Glance shows that all countries could do better in providing good quality health care.
Quality of Care
Screening rates for breast and cervical cancer are higher in Canada than in most other developed countries, and Canada’s survival rates for cervical, breast and colorectal cancer are among the highest in the OECD. Canada also does well in achieving low mortality rates for people admitted to hospitals with a heart attack, but mortality rates for people admitted for stroke are higher than the OECD average.
In 2006, 73% of eligible women in Canada were screened for cervical cancer (OECD average 64%), and 70% for breast cancer (OECD average 62%).
The 5-year relative survival rate for cervical cancer during 2000-2005 was 72%, the second highest after Korea (OECD average 66%). For breast cancer, it was 87%, the third highest after the United States and Iceland (OECD average of 81%).
The 5-year relative survival rates for colorectal cancer during 2000-2005 were 62% for females and 60% for males, lower than in Japan, Iceland and the United States, but higher than the OECD average (58% for females and 56% for males).
In-hospital case-fatality rates within 30 days of admission for acute myocardial infarction (heart attack) are slightly lower than the OECD average (4.2% vs. 4.9% in 2007). However, Canada has higher rates of in-hospital deaths for both ischaemic stroke (7.0% vs. 5.0%) and hemorrhagic stroke (23.2% vs. 19.8%).
Canada spent 10.1% of GDP on health in 2007, more than the OECD average of 8.9%. Spending per person is also higher than the OECD average.
Total health spending accounted for 10.1% of GDP in Canada in 2007, compared with an average of 8.9% across OECD countries. The United States (16.0%), France (11.0%), Switzerland (10.8%), Germany (10.4%) and Belgium (10.4%) had a higher share.
Canada’s spending on health per person is also higher than the OECD average, at USD 3,895 in 2007 (adjusted for purchasing power parity), compared with an OECD average of USD 2,984. Per capita health spending over 1997-2007 grew in real terms by 3.8% in Canada, slightly less than the OECD average of 4.1%.
The public sector continues to be the main source of health funding in all OECD countries, except Mexico and the United States. In Canada, 70% of health spending was funded by public sources in 2007, less than the average of 73% for OECD countries.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9326610565185547,
"language": "en",
"url": "http://ag-groundwater.org/presentations/themes/?uid=1252&sharebar=share&ds=517",
"token_count": 502,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.052490234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f352e602-d72c-4760-8a71-50a9a9b938df>"
}
|
Constraints to Smallholder Livelihoods in Irrigated Agriculture in Groundwater-Dependent Parts of Asia
Geological Survey of Denmark and Greenland
Groundwater is of paramount importance as a resource input to smallholder irrigated agriculture in many parts of Asia today, both for securing subsistence farming and as part of economic livelihood strategies. It is estimated that 1 billion farmers across India, China, Pakistan, Bangladesh and Nepal rely on groundwater for their farming. However, despite (and in some places because of) effective and widespread technologies for accessing and utilizing groundwater, farmers often encounter constraints in further developing and optimizing the benefits of this resource. As part of devising policies and programs that contribute towards sustainable farming systems, integrated land use planning, effective use of water resources, increased food production, and adaptation to global changes in climate, demography, and economic conditions, it is key to understand the realities of farmer interaction with, and impediments to utilizing, groundwater in these parts of the world. Data and results are presented from action research carried out in the alluvial sedimentary basins of the Indo-Gangetic and Yellow River systems (Fig. 1) as part of a major training and research capacity building effort for groundwater professionals from these five Asian countries. A subsidiary objective to the capacity building aim was to gain insight into, and collect key figures and comparative descriptions of, the physical, agricultural, and household economic conditions under which poor farmers engage in groundwater irrigation. Major constraints on groundwater use relate to exhaustion of the resource (Yellow River Basin, the North China Plains and western India) and to lack of reliable or affordable energy sources for pumping groundwater (eastern India and Bangladesh). Agricultural production levels are relatively low in a global context, particularly in the poorest areas, reflecting other constraints such as lack of other production inputs and supporting market and service infrastructure. Nowhere is groundwater managed actively and directly, though a few examples of local and social schemes for management were encountered. Adaptation or coping strategies of the farmers varied from drilling deeper wells and installing more efficient pumps in over-exploited areas to substituting subsidized kerosene, a cooking fuel, for expensive diesel in areas with plenty of groundwater but poor energy sources (Table 1). In most places, farmers respond by diversifying crops and livelihood income sources. Migration is also practiced, though not always to the effect of relieving stress on groundwater. General recommendations are provided for addressing the groundwater-related constraints in the diverse landscape of groundwater-based economies.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9090935587882996,
"language": "en",
"url": "http://www.indiaenvironmentportal.org.in/content/467793/environmental-performance-index-2020/",
"token_count": 260,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0289306640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c515a0be-d105-450f-8da1-f05825a66ee9>"
}
|
Environmental Performance Index 2020
The 2020 Environmental Performance Index (EPI) provides a data-driven summary of the state of sustainability around the world. Using 32 performance indicators across 11 issue categories, the EPI ranks 180 countries on environmental health and ecosystem vitality. These indicators provide a gauge at a national scale of how close countries are to established environmental policy targets. The EPI offers a scorecard that highlights leaders and laggards in environmental performance and provides practical guidance for countries that aspire to move toward a sustainable future. The metrics on which the 2020 rankings are based come from a variety of sources and represent the most recent published data, often from 2017 or 2018. Thus the analysis does not reflect recent developments, including the dramatic drop in air pollution in 2020 in the wake of the COVID-19 pandemic or the greenhouse gas emissions from the extensive Amazonian fires in 2019. These indicators provide a way to spot problems, set targets, track trends, understand outcomes, and identify best policy practices. Good data and fact-based analysis can also help government officials refine their policy agendas, facilitate communications with key stakeholders, and maximize the return on environmental investments. The EPI offers a powerful policy tool in support of efforts to meet the targets of the UN Sustainable Development Goals and to move society toward a sustainable future.
|