{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9610479474067688,
"language": "en",
"url": "https://philanthropicprofessor.org/biases-4-page-paper-2/",
"token_count": 1163,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.28515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:90b7adb9-24ec-4f12-9f46-e019c4fd802a>"
}
|
Even the most intelligent manager is prone to personal biases and pitfalls that can lead to bad decisions. We all carry biases based on our personal experiences. And we can all fall into various traps that lead to decisions that seem perfectly logical at the time, but that in retrospect we should have known better than to make.
In the background materials, including Bolland and Fletcher (2012), Kourdi (2011), and Hammond, Keeney, and Raiffa (1998), several specific decision-making biases and pitfalls are discussed. Collectively these are known as cognitive biases. Some of the common pitfalls and biases discussed in these readings include overconfidence bias, confirmation (self-confirming) bias, sunk-cost bias, framing bias, and hindsight bias.
Carefully review all three of these readings and make sure you understand the different types of biases. Then read through the scenarios below and think about what kind of biases are demonstrated in each scenario. For each scenario, carefully explain which specific bias or biases is demonstrated by the decision and what can be done to avoid this bias in the future. Make sure to pick at least one specific bias that you read about for each scenario, and explain your reasoning. Use references to at least one of the three required readings from the background materials in your discussion of each scenario below. Your paper should be 4–5 pages in length:
- The Chief Financial Officer (CFO) of a corporation is of the strong belief that marketing is not a good use of the company's money. Someone shows her data from several years ago showing that during a period of high spending on marketing, sales did not go up. She says, "See, I told you marketing is not a good use of our budget!" and cuts the marketing budget to almost zero. Following the cut in the marketing budget, sales also start to drop dramatically. When asked by an employee if the drop in sales is due to the cut in the marketing budget, she says, "No!" and insists there must be a different explanation. What kind of decision-making bias do you think this represents, and why? What steps would you recommend to this CFO to reduce this kind of bias? Support your answer with references to at least one of the three background readings.
- A CEO decides that he wants to greatly expand the company’s market by purchasing a major rival. This acquisition would double the company’s market share. However, several of his top managers warn him that such a purchase would require the company to take out a huge amount of debt to finance this merger, and that many of these large mergers have failed. They also point out that the organizational culture of the other company is very different and that managing this merger would be very difficult. Nonetheless, the CEO insists that he can overcome the odds and plans to go through with the merger. What kind of decision-making bias do you think this represents, and why? What steps should this leader take to avoid this bias? Support your answer with references to at least one of the three background readings.
- A CEO wants to purchase a new factory. He is currently deciding between two factories. The owner of Factory A brags that 94% of products produced at the factory are free of defects. The owner of Factory B cautions that his factory has a 5% defect rate but management and staff are working very hard to reduce the rate. The CEO decides to purchase Factory A citing its strong 94% rate of success in producing defect-free products even though Factory B actually has a 95% rate of success. What kind of decision-making bias do you think this represents, and why? What steps should this leader take to avoid this bias?
- A CEO of an automobile company decides to introduce a new hybrid vehicle using cutting-edge technology. A huge amount of money is spent on research and development as well as advertising. But when the car is completed, sales are very slow and the price has to be cut so low that the company loses money on every hybrid vehicle sold. She is advised to simply abandon the car to avoid further losses and to focus her energy on selling profitable vehicles. However, she insists it is unwise to abandon the hybrid vehicle given that so much money has already been put into the project. What kind of decision-making bias do you think this represents, and why? What steps should this leader take to avoid this bias? Support your answer with references to at least one of the three background readings.
- Conclude the paper with a discussion about which one of the decision-making biases you think is the most dangerous to a leader, and explain your reasoning.
Lombardo, J. (2014). Common biases and judgment errors in decision making [Video]. Organizational Behavior, Education Portal. https://www.youtube.com/watch?v=cAbdmV3VOwA
Now go through the following three readings to get a deeper understanding:
Bolland, E., & Fletcher, F. (2012). Solutions: Business problem solving. (Available from Trident Online Library. Read only the relevant chapters.)
Kourdi, J. (2011). Chapter 10: Avoiding the pitfalls and developing an action plan. Effective Decision Making: 10 Steps to Better Decision Making and Problem Solving. London: Marshall Cavendish International [Asia] Pte Ltd. [eBook Business Collection]
Hammond, J. S., Keeney, R. L., & Raiffa, H. (1998). The hidden traps in decision-making. Harvard Business Review, 76(5), 47-58. [Business Source Complete]
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9429447650909424,
"language": "en",
"url": "https://project-management.info/cost-of-quality-coq/",
"token_count": 2901,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1396484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1f823706-9895-4f0d-8dba-27c2fb3e9536>"
}
|
Cost of quality is a technique under the Project Quality Management Knowledge Area of the PMI framework (PMBOK®, 6th edition). The concept comprises a short- and long-term consideration of quality expressed in its two components: cost of conformance and cost of non-conformance.
In this article, we will shed light on the technique and its implications for the different areas of project management.
- What Is Cost of Quality?
- How to Calculate Cost of Quality in Projects?
- How to Interpret and Optimize Cost of Quality in Projects?
- Example of Cost of Quality Considerations
What Is Cost of Quality?
Cost of Quality (COQ) is primarily a measure of all costs related to quality and the lack thereof. In other words, it is an integrated concept of the costs to achieve quality and the costs that occur due to quality issues. Thereby, COQ refers to the entire lifecycle of the product (or outcome) created by a project.
The cost of quality technique addresses the issue of a project being temporary in nature while the quality of its deliverables lasts for the entire lifecycle of the product. Thus, it forces a project and its organization to measure and consider quality aspects during and beyond the duration of the project.
In this context, the PMBOK suggests that COQ be managed by a unit outside of the project such as the program management or product management divisions.
Cost of quality consists of two components:
- Cost of conformance and
- Cost of non-conformance, also referred to as failure cost.
These building blocks of COQ represent the interdependencies between investing in quality during the project and the future costs of not doing so. We will introduce these components in the next sections, followed by an illustrative example.
What Is Cost of Conformance?
Cost of conformance describes the amount of resources needed to achieve the quality requirements and targets of a project. The underlying rationale is to spend money on the prevention of quality issues rather than for fixing them.
In practice, cost of conformance often exhibits diminishing marginal utility (see the example below). This means that the first dollars spent increase the quality of the project deliverables substantially, while the curve flattens with every additional dollar spent. In other words, the 'bang for the buck' is high in the beginning and declines as the cost of conformance increases. This effect is illustrated in the chart below.
Cost of conformance consists of two components:
- Prevention cost and
- Appraisal cost.
The term prevention cost refers to money spent on actions and material that facilitate the creation of a quality deliverable (e.g. a product). This basically includes all the resources required to build quality into the product.
The PMBOK lists
- equipment, and
- sufficient time
as examples of prevention measures. However, this group may also include other areas such as
- proper staffing of projects (getting the “best person for the job”),
- market intelligence and lessons learned with respect to similar projects,
- creating “spikes” (i.e. pilots) to test different approaches and choose the right one, as well as
- any other activity that helps increase the quality of the output.
Appraisal costs are the (financial and non-financial) resources that are consumed to assess and measure the quality of the deliverables of a project. This relates to quality assurance and money invested in activities that identify quality issues. Thus, corrective actions can be taken, and issues can be fixed during the project.
Examples provided by the PMBOK are:
- destructive testing loss (i.e. testing the durability of a product which can involve losing the amount spent to produce that piece; crash tests are an example of destructive testing), and
- inspections or quality checks.
Other types of appraisal costs are
- external quality audits,
- measurement of quality indicators (e.g. quality gates), and
- mystery shopping (i.e. testing a product or service from a customer’s perspective).
The constituents of this and other cost types may vary among different organizations. At the end of the day, this type of appraisal cost should cover any resource consumption related to the assessment of quality during a project.
What Is Cost of Non-Conformance?
Cost of non-conformance is used as a synonym for failure cost. It refers to the resources that are required to fix failures and take corrective actions but also to indirect effects from quality issues, such as negative business impact.
While the amount spent on cost of conformance determines the level of quality, the cost of non-conformance is a function of that level of quality. As a rule of thumb, the higher the quality, the lower the cost of non-conformance. This chart illustrates the effect with a sample curve of non-conformance cost.
Cost of non-conformance consists of two elements:
- Internal failure costs and
- External failure costs.
Internal failure costs are costs for those failures that are discovered by the project or the organization itself. External failure costs refer to the resources required to address customer complaints or lost business due to customer dissatisfaction.
Internal Failure Costs
Internal failure costs relate to corrective actions taken to fix failures that were identified within a project or an organization. The PMBOK mentions rework and scrap as examples.
In practice, IT projects sometimes encounter a large number of defects, while construction projects may find, for instance, that not all fire-protection requirements have been met.
Both examples would inevitably lead to rework which requires additional resources, i.e. the internal failure costs.
External Failure Costs
The category external failure costs covers the costs that are spent to respond to customer complaints on the quality of a product or deliverable. It also considers indirect effects such as a negative impact on sales and overall business.
According to the PMI methodology, this type of non-conformance cost includes, inter alia,
- warranty work,
- lost business (i.e. losses arising from customers not doing business with a company because of previous or current quality issues).
While this type of failure cost is often hard to measure and even harder to predict, estimating it is crucial to making the COQ concept work. This is because external failure costs tend to cause a massive impact if they occur.
Customer satisfaction is a goal of almost all companies, and quality issues can lead to a quickly deteriorating reputation of a product or a brand – which is often followed by declining sales, lower prices and lower revenues.
In projects driven by legal and regulatory requirements, external failure costs can be fatal for an organization and even put its viability at risk. Examples are plants or buildings that cannot be used or products that cannot be launched due to noncompliance with legal requirements. In some industries, companies could even lose their license to continue or commence their business.
Although COQ is not a brand-new concept, quality issues occur regularly and even reputable companies have been heavily hit by external failure costs. Just think of exploding phone and laptop batteries, delayed infrastructure projects or pharmaceutical companies with billions of sunk costs for medicine that eventually did not get approved by the authorities.
How to Calculate Cost of Quality in Projects?
While the assessment and prediction of the components of cost of quality (especially the failure costs) are challenging, the calculation of COQ itself is rather straightforward. The formula to calculate cost of quality is:
COQ = Cost of Conformance + Cost of Non-Conformance (or failure costs)
where cost of conformance = sum of prevention and appraisal costs, and
cost of non-conformance = sum of internal and external failure costs.
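As a minimal illustration of the formula (a sketch only; the class and field names are our own, not PMBOK terminology, and the figures are invented):

```python
from dataclasses import dataclass

@dataclass
class QualityCosts:
    prevention: float         # building quality in (training, equipment, sufficient time)
    appraisal: float          # measuring quality (tests, inspections, audits)
    internal_failure: float   # rework and scrap found before handover
    external_failure: float   # warranty work and lost business after handover

    @property
    def cost_of_conformance(self) -> float:
        return self.prevention + self.appraisal

    @property
    def cost_of_non_conformance(self) -> float:
        return self.internal_failure + self.external_failure

    @property
    def cost_of_quality(self) -> float:
        return self.cost_of_conformance + self.cost_of_non_conformance

# Example with made-up figures (man-days or any currency unit)
coq = QualityCosts(prevention=30, appraisal=10, internal_failure=15, external_failure=5)
print(coq.cost_of_conformance, coq.cost_of_non_conformance, coq.cost_of_quality)  # 40 20 60
```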
How to Interpret and Optimize Cost of Quality in Projects?
The concept of cost of quality may look a bit theoretical at first sight. However, there are certain practical considerations stemming from this concept:
As absolute perfection is usually not achievable, costs of quality are subject to a cost-benefit analysis (which is also suggested in the PMBOK).
To optimize the overall cost of quality, the project manager has to determine the optimal balance between the cost of conformance incurred during the project and the cost of non-conformance that is accepted for the entire lifecycle of the product.
The PMBOK states that COQ is optimized at the point of the smallest sum of cost of conformance and failure costs. The following chart, showing the curves of conformance and non-conformance costs, illustrates that optimization.
In other words, the optimum is the point where the cost of quality is lowest. Beyond that point, any additional dollar spent on conformance saves less than a dollar of non-conformance cost, so the marginal return is negative.
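To make the optimization idea concrete, here is a small sketch that models conformance spend against a falling non-conformance cost and scans for the lowest total. The cost curve is an invented assumption for illustration only; real curves would come from an organization's own data.

```python
def non_conformance_cost(conformance_spend: float) -> float:
    # Illustrative assumption: failure cost falls off as conformance spend rises.
    base_failure_cost = 400.0
    return base_failure_cost / (1.0 + 0.05 * conformance_spend)

def total_cost_of_quality(conformance_spend: float) -> float:
    return conformance_spend + non_conformance_cost(conformance_spend)

# Scan a range of conformance budgets and pick the one that minimizes COQ.
candidates = range(0, 201, 5)
optimum = min(candidates, key=total_cost_of_quality)
print(f"Optimal conformance spend: {optimum}, "
      f"total COQ: {total_cost_of_quality(optimum):.1f}")
```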
In practice, these numbers are hard to compile though, given the unpredictability of certain components, particularly external failure costs. In fact, there are numerous examples of firms failing to get this equation right – refer to the examples of external failure costs mentioned in a previous section.
Nevertheless, the concept of optimizing COQ is relevant for and applied by many projects, even though it usually requires some tailoring. The following example illustrates a simple use case of COQ considerations.
Example of Cost of Quality Considerations
This section contains a simplified yet realistic example of cost of quality considerations in an IT project.
Identifying the Cost of Conformance and Non-Conformance
In a Data Warehouse project, a project manager is trying to balance the cost of documenting interfaces and data flows and the expected long-term cost of not doing so.
The estimates are as follows:
- Documenting all changes to the system on a detailed level would require 100 man-days.
- The experience of similar projects shows that a lack of detailed documentation leads to an average additional resource requirement of 30 to 40 man-days per year.
- Even basic documentation would reduce this additional annual effort to 5 man-days, while detailed documentation would not require any additional effort.
- The cost of this basic documentation would be 30 man-days for the current project.
- The lifecycle of the IT solution is estimated to be 10 years.
The creation of the documentation is an example of prevention costs (hence part of the cost of conformance). The additional work that will occur in the future to deal with documentation gaps is cost of non-conformance.
The estimates are summarized in the following table based on this classification:
| Scenario | Cost of Conformance | Cost of Non-Conformance |
|---|---|---|
| No documentation (min) | 0 | 300 |
| No documentation (max) | 0 | 400 |
| Basic documentation | 30 | 50 |
| Detailed documentation | 100 | 0 |
Calculating and Interpreting the Cost of Quality
As previously stated, the cost of quality is the sum of conformance and non-conformance costs.
Summing up these costs for each scenario, the cost of quality over the entire lifecycle is as follows:
| Scenario | Cost of Conformance | Cost of Non-Conformance | Cost of Quality |
|---|---|---|---|
| No documentation (min) | 0 | 300 | 300 |
| No documentation (max) | 0 | 400 | 400 |
| Basic documentation | 30 | 50 | 80 |
| Detailed documentation | 100 | 0 | 100 |
The basic documentation would obviously be the best choice in this case. With 80 man-days in total, it has the lowest cost of quality.
Basic documentation requires 30 man-days (cost of conformance) for its creation and incurs 50 man-days of non-conformance cost accumulated over 10 years. The total cost of quality is 80 man-days, while the failure cost is reduced by 250 to 350 man-days throughout the lifecycle of the solution.
The generation of detailed documentation requires 100 man-days and would result in 0 cost of non-conformance. Although a failure cost of 0 looks appealing, the overall cost of quality is higher than in the basic documentation scenario (this is an example of the previously mentioned diminishing marginal utility of cost of conformance).
In other words, the detailed documentation would save 300-400 man-days failure costs compared to the ‘no documentation’ scenario over the lifecycle of the data warehouse, but that’s only 50 man-days additional savings compared to the ‘basic documentation’ option.
All in all, the overall benefit of the basic documentation is 220 to 320 man-days (250 and 350, respectively, less 30 man-days for the creation) while the benefit of the detailed documentation would be 200 to 300 man-days (300 to 400 man-days saved for 100 man-days spent).
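The same comparison can be sketched in a few lines of Python; the figures are the man-day estimates from the example above, and the scenario labels are ours.

```python
# (conformance man-days, non-conformance man-days over the 10-year lifecycle)
scenarios = {
    "no documentation (min)": (0, 30 * 10),   # 30 extra man-days per year
    "no documentation (max)": (0, 40 * 10),   # 40 extra man-days per year
    "basic documentation":    (30, 5 * 10),   # 5 extra man-days per year
    "detailed documentation": (100, 0),
}

for name, (conformance, non_conformance) in scenarios.items():
    print(f"{name:>26}: COQ = {conformance + non_conformance} man-days")

best = min(scenarios, key=lambda s: sum(scenarios[s]))
print("Lowest cost of quality:", best)   # basic documentation (80 man-days)
```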
However, the efforts are incurred during the project (hence subject to the project budget and the project manager’s responsibility) while the benefits of a proper COQ consideration are realized after the project has been completed.
Thus, it becomes clear why these considerations are often assigned to a central PMO or portfolio management (as suggested in the PMBOK) rather than the management of a single project.
Cost of quality is an important concept in both project quality management and project cost estimation (it is also a common topic in PMP exams). Finding the right balance between conformance cost and acceptable failure cost is key to delivering a project with sustainable success beyond its duration.
However, this is easier said than done. External failure costs are hard to predict, yet they can have a fatal impact on an organization. In many projects, project goals and budget pressure might incentivize a short-term focus, while the potential long-term cost of non-conformance might not receive the ideal amount of management attention. The PMBOK therefore suggests that COQ considerations be handled by program management or other organizational units that are responsible for the long-term effects of quality decisions.
Lastly, there’s also another interesting aspect to COQ and project management – the cost of quality of a project itself (rather than its deliverable). When you are managing a project, you might consider using a COQ-like concept with respect to your stakeholder communication and engagement (see PMI website).
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9303160309791565,
"language": "en",
"url": "https://thefanatic.net/you-are-preparing-the-first-sustainable-bitcoin-mining-platform/",
"token_count": 412,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.11181640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:507723e0-26ba-4ee5-b5ed-89ca70cf7a02>"
}
|
Cryptocurrency mining is an activity that involves using huge computing devices called mining rigs that use enormous amounts of energy, which has a direct impact on the environment.
With the aim of aligning the cryptocurrency sector with renewable energies, two blockchain companies are preparing the first mining platform that will run exclusively on renewable energy.
DMG Blockchain Solutions and Argo Blockchain, two companies dedicated to the world of cryptocurrencies and blockchain, have teamed up to launch a mining platform operated exclusively with renewable energy. According to a company statement, it is the first of its kind in the world.
This joint venture between the two companies is named "Terra Pool" and will mine Bitcoin using hydropower alone, eliminating fossil fuels, and therefore greenhouse gas emissions, from the equation.
While important details of the project are unknown, such as the start date or the mode of operation (whether the companies' own power generators will be used or miners must bring their own clean energy sources), the launch statement says: "Terra Pool offers a barrier-free access platform and incentives for miners who want to produce bitcoins sustainably."
The move is more symbolic than practical (it will be barely a drop in the ocean of Bitcoin mining), and spokespeople for both companies say Terra Pool will serve as a proof of concept to help improve the image of Bitcoin and other cryptocurrencies in terms of their climate impact.
The DMG Blockchain and Argo Blockchain initiative follows similar efforts. Such is the case with Internet payments firm Square, which donated $10 million last December to projects promoting the use of clean energy in the Bitcoin environment.
Interest in pairing renewable energy with cryptocurrencies seems to be mounting as criticism of Bitcoin's environmental impact peaks due to the exponential growth of mining operations. According to the University of Cambridge's Center for Alternative Finance, Bitcoin mining consumes an estimated 128.77 terawatt-hours of energy per year, which means it uses more energy in a year than countries such as Poland, Argentina or the Netherlands.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9198563694953918,
"language": "en",
"url": "https://www.excellenceineducation.com/mm5/merchant.mvc?Screen=PROD&Product_Code=MK&Store_Code=EIE&search=money+wise+kids&offset=&filter_cat=&PowerSearch_Begin_Only=&sort=&range_low=&range_high=",
"token_count": 152,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0810546875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:766ef89c-6be1-4c2e-a9f7-755f38a29d76>"
}
|
Teach kids the meaning of money with two simple games. Young children practice making change while they earn a $100 bill. Then they learn to budget their money while spending the $100 on life's necessities, such as housing, clothing, taxes and more. Along the way, kids will hone addition, subtraction, multiples and place value skills while having a great time. For 2 players ages 7 and up.
Math Skills covered by this game:
Counting bills in denominations of $1 to $100
Exchanging smaller bills for larger bills of equal value
Making correct change
Multiples of 5, 10, 15, ...
Budgeting and money management
Normal Retail Price: $15.00
Our Price: $13.50
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9492855072021484,
"language": "en",
"url": "https://www.wikiaccounting.com/note-receivable/",
"token_count": 956,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.044677734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2808c3d9-c9aa-424b-845b-d2334ff6eab3>"
}
|
Notes receivable describe promissory notes that represent loans extended by a company or business to another party. The note comes with a promise from the borrower that it will repay the lender at a future point in time.
Similarly, a note receivable gives the holder, or the lender, the right to receive the amount from the borrower.
A note receivable shows a legally binding agreement between two parties. Usually, companies extend loans in exchange for a note over a short term.
Therefore, notes receivable are current assets. However, if a note is repayable after more than a year, companies must classify it as a non-current asset. At each reporting date, a company should evaluate all its notes receivable for classification.
Notes receivable can come from different sources. For example, a company may provide a loan to another company in exchange for a note. Mostly, however, it comes from customers who transfer or convert their overdue accounts receivable balance to notes.
Notes receivable come in the form of a written document that borrowers pay to their lenders. Unlike usual trading balances and credits, notes receivable balances come with additional terms.
Notes receivables are similar to loans given by a company rather than credit due to its operations. Therefore, they have characteristics of a loan.
A note receivable will mention the two parties involved, the payee and the payer. The payee is the party that provides the loan, also known as the lender.
The payee holds the note and is, therefore, due to receive a payment from the payer. The payer, or the maker, is the borrower who gets the loan from the payee. The maker promises to pay the holder in the future.
A note receivable also comes with a predetermined interest rate after a mutual agreement of both the parties. The note may also consist of the terms of interest payments.
The maker of the note receivable must repay the principal amount and also pay interest on it. The principal amount of the note receivable represents its face value, or the value that the payee will receive.
Finally, a note receivable will also mention the timeframe of the loan. It is similar to the maturity date of loans, which represents a future point in time at which the borrower will repay the lender.
For note receivable, the timeframe is the time before or on which the maker must reimburse the holder. Unlike other loans, note receivables do not usually come with prepayment penalties.
The journal entry for recording note receivable is straightforward. If a company pays another party directly in exchange for a note receivable, the journal entry will be as follows.
| Dr | Note receivable | x |
| Cr | Cash or bank | x |
However, if the company converts an accounts receivable balance to a note receivable, the accounting entry will be as follows.
| Dr | Note receivable | x |
| Cr | Accounts receivable | x |
As mentioned above, the company must determine, using the timeframe of the note receivable, whether it classifies as a current asset or non-current.
For non-current asset classification, the company must reevaluate the note receivable at the end of each accounting period to identify if its classification has changed.
On repayment, the note holder will record the receipt and any associated interest on the note. The accounting entry to record repayment is as follows.
| Dr | Cash or bank | x |
| Cr | Note receivable | x |
| Cr | Interest income | x |
A company, ABC Co., has total receivables of $20,000. Among these, one customer with a balance of $5,000 wants to convert the balance to a note receivable.
ABC Co. agrees to do so and converts the balance to a note. The customer promises to repay the amount after one year. Both parties also agree that the customer must repay the principal amount plus 10% interest on the note.
To record the conversion of the accounts receivable balance to a note receivable, ABC Co. uses the following double entry.
| Dr | Note receivable | $5,000 |
| Cr | Accounts receivable | $5,000 |
After a year’s time, when the customer repays the loan, ABC Co. must record the receipt. However, the customer will also pay an interest of $500 ($5,000 x 10%) on the note. Assuming the customer makes the repayment to ABC Co.’s bank account, ABC Co. can use the following journal entry to record the receipt.
| Dr | Cash or bank | $5,500 |
| Cr | Note receivable | $5,000 |
| Cr | Interest income | $500 |
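The interest and total receipt in this example follow from a one-line calculation. The sketch below assumes simple interest for the one-year term, as in the example; the variable names are ours.

```python
principal = 5_000.00
annual_rate = 0.10       # 10% agreed on the note
term_years = 1

interest = principal * annual_rate * term_years   # simple interest for the term
total_receipt = principal + interest

print(f"Interest due:  ${interest:,.2f}")         # $500.00
print(f"Cash received: ${total_receipt:,.2f}")    # $5,500.00
```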
A note receivable is a promissory note made by a maker to a payee promising to repay a specified amount at a future point in time.
Characteristically, notes are similar to loans because they come with interest and a principal amount. Recording notes receivable is straightforward, as shown above.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9464198350906372,
"language": "en",
"url": "http://eatatnakama.com/romantic-picnic-dvulmw/archive.php?tag=compound-annual-growth-rate-formula-d4ee37",
"token_count": 2667,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.10107421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6920b154-28cf-43c4-8b27-1952c0eda9a9>"
}
|
Compound Annual Growth Rate (CAGR) is the average rate at which an investment grows over time, assuming that the returns are compounded (re-invested) annually or periodically. Put differently, it is the rate of return that would be required for an investment to grow from its beginning balance to its ending balance, assuming profits are reinvested each year and growth is perfectly steady. CAGR can be used to evaluate anything that can fluctuate in value, such as assets and investments; it does not matter what the investment is in or how large the original investment is. CAGR takes the initial investment value and projects an ending investment value while assuming compound growth over a set period of time.

The formula for calculating compound annual growth rate is:

CAGR = (FV / PV)^(1/n) - 1

or, equivalently, CAGR = (Ending balance / Beginning balance)^(1/n) - 1. It is calculated by dividing the ending value by the beginning value, raising that figure to the power of one divided by the number of periods, and subtracting one. In general notation, CAGR = (V(t_n) / V(t_0))^(1/(t_n - t_0)) - 1, where V(t_0) is the initial value, V(t_n) is the final value, and t_n - t_0 is the number of time periods over which the growth has been realized (years, months, etc.). For an investment, the period may be shorter or longer than a year, so n is calculated as 1/Years or 365/Days, depending on whether you want to specify the period in years or days. You can do the calculation by hand or in an Excel spreadsheet: select a blank cell, enter =((End Value/Start Value)^(1/Periods))-1, and press Enter. The formula requires only three inputs: the ending value of the investment, the beginning value, and the number of compounding periods.

Why is CAGR so useful? In actuality, the growth rate varies from year to year; a company's annual growth might change from 16% to 34% to 21.30% to 8.40%, never remaining consistent. CAGR is a "smoothed" growth rate that, if compounded annually, would be equivalent to what the investment actually achieved over the specified period. Because it smooths the performance of the investment over time, it allows comparison between various investments as well as rough projections of future value. The compound annual growth rate formula is therefore a great tool for investors who want to analyze the return rate of their investments, and it helps management and investors compare investments based on their returns: management can use a CAGR calculation to compare, say, a $1M capital investment in new machinery with a $500,000 investment in a new building. A few illustrations: a company that makes an initial investment of $10M in the year 2000 and sees it grow to $15M by 2005 can express that growth as a single annual rate; a company that earned $10,000 in 2011 and $65,000 four years later in 2015 has a CAGR of (65,000 / 10,000)^(1/4) - 1, roughly 60% per year; a firm whose average market capitalisation grew at 31.5% every year for the past five years while earnings registered an average growth of 26.7% year on year is quoting CAGRs; Sam, who wants to determine the steady growth rate of his investment, calculates its CAGR and finds that over the five-year period his investment grew by 2.8%; and Jerry, attempting to make pro forma statements for his company, wants to grow the predictions out for five years and believes the same amount of historical information is needed to support them. In each case, the steady growth rate being sought is exactly the compound annual growth rate.

CAGR should not be confused with the average annual growth rate (AAGR). AAGR is a linear measure that does not account for the effects of compounding: it simply averages the individual yearly growth rates. Suppose four annual growth rates average 3.56%; to see what that rate implies when compounded, use the following calculation: $1 x (1 + 3.56%)^4 = $1.15. Compound growth means that, because your investment's value gains a little bit each year (you hope!), you have more to invest in the following year. So if you grow 10% per year over three years, you actually grow from 100 in the first year to about 133 at the end of the third year. The generic compound growth formula is y = a(1 + r)^x, where y is the value of the variable after x periods (the future compounded value), a is the initial value of the variable, r is the compound growth rate, and x is the number of periods. Discrete compounding refers to the method by which interest is calculated and added to the principal at certain set points in time, and the closely related annual percentage yield is APY = (1 + r)^n - 1, where r is the rate per period and n is the number of compound periods per year.

CAGR also has limitations. It is not an absolute value; because it uses only the starting and ending values, it gives no insight into how uneven the growth was in the middle years, and we cannot assume the growth rate will be the same in the future. The Compound Annual Growth Rate is a useful tool for a quick comparison of the average growth rate of different assets and investment opportunities, either historically or in terms of a forecast, but however useful its simplicity for a first assessment, it should never be the single deciding factor in serious financial or investment advice.
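As a sketch (the function name is ours), the CAGR formula translates directly into code; the figures reuse the $10M-to-$15M illustration above, so the printed rate is purely illustrative.

```python
def cagr(beginning_value: float, ending_value: float, periods: float) -> float:
    """Compound annual growth rate: (FV / PV)^(1/n) - 1."""
    if beginning_value <= 0 or periods <= 0:
        raise ValueError("beginning value and number of periods must be positive")
    return (ending_value / beginning_value) ** (1.0 / periods) - 1.0

# Investment of $10M in 2000 that is worth $15M by 2005 (5 periods)
rate = cagr(10_000_000, 15_000_000, 5)
print(f"CAGR = {rate:.2%}")   # roughly 8.45% per year
```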
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9127073884010315,
"language": "en",
"url": "https://commons.pacificu.edu/work/ns/c4fed2d1-3aac-42d1-8393-73f480310457",
"token_count": 285,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1865234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6e1085a5-6f11-433b-bd0d-f517c784f5c9>"
}
|
Credential inflation refers to a decline in the value of earned degrees. It exists throughout many fields and is evident in job requirements that once called for a lower degree or certificate but now necessitate an advanced degree. In healthcare professions, credential inflation is visible in the increase in entry-level degree requirements for several fields, such as occupational therapy, physical therapy, pharmacy, and audiology. These professions once required a Bachelors degree to enter the field, then a Masters degree, and will soon require a Doctorate for entry-level practice, if not already a requisite. Providing the best and most ethical care to patients is of the utmost importance to me, and I wonder whether credential inflation inhibits or promotes this endeavor. Credential inflation impacts education, yet more research is needed to objectively determine the extent to which this applies to healthcare practices, rather than relying on broad anecdotal theories. Current research indicates credential inflation leads to increased cost of education, decreased access to education, and a shortage of qualified instructors. Potential research methodology may include examining phenomenology through assessment of healthcare professionals' lived experiences, differences in nurturing professional growth, analyzing application and admission criteria to allied health professional schools, and examining healthcare provider burnout. Predicted conclusions: poor returns on financial investments for clinical doctorates, decreased diversity among future providers, and that nurturing professional growth among institutions will be paramount for facilitating innovative providers.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9518847465515137,
"language": "en",
"url": "https://norcaldrivers.com/share-market-terms-every-beginner-should-know/",
"token_count": 896,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0274658203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2bd487f2-3cb7-4176-9abc-191a81f06ed5>"
}
|
Share Market Terms Every Beginner Should Know
Understanding the stock market is not an easy task. When beginners enter the market, they have many questions in their minds. In fact, most newcomers spend a lot of time on Google learning about the various aspects of the stock market. There are many terminologies used in the share market, but it is not possible for a newcomer to know each and every one. However, there are a few important terminologies that every investor must know. By learning the basic terminologies, you will be able to understand most of the share market concepts and technicalities. In this article, we list some of the common terms which every beginner must know.
Important Stock Market Terminologies
- Stock Market
The stock market is an exchange where the traders indulge in buying and selling of stocks of companies. You can trade or invest in the market either through online mode or offline mode.
- Buying
Buying means investing money by purchasing the shares or taking a position in a company's stock.
- Selling
Selling means getting rid of the shares of the company in the share market. A trader or an investor sells his shares when he has earned profits or wants to cut down his losses.
- Ask
Ask is the price at which people are willing to sell their stocks.
- Bid
Bid is the price that people are willing to pay to purchase the stocks.
- Ask-Bid Spread
Ask-Bid spread is the difference between what people are willing to pay for a stock and what they are getting it for.
- Bull Market
A bull market is a phase or condition of the market where the investors expect a rise in the stock prices.
- Bear Market
A bear market is a phase or condition of the market where the investors expect stock prices to fall.
- Market Order
Market order is a type of order in which the transaction to purchase or sell a stock is executed quickly at the market price.
- Limit Order
Limit order is a type of order in which the transaction to purchase or sell a stock is executed only when the stock reaches the specified price level.
- Day Order
A day order is given by the client to its broker to execute a transaction at a specific price level. If the specified price level of the stock is not reached then the order expires at the end of the trading session.
- Volatility
Volatility is the pace at which the price of a stock moves up or down.
- Going Long
It means purchasing a stock at a low price with the hope that it will go higher.
- Averaging Down
Averaging down means purchasing the stock when the price is falling down so as to lower the overall purchase price of the stock.
- Market Capitalization
Market capitalization is the total market value of the company.
- Float
Float is the number of shares that are available for trade after deducting the shares held by the insiders.
- Authorized Shares
Authorized shares are the total number of shares of a company available for trade.
- Initial Public Offering (IPO)
Initial Public Offering is brought when a private company wants to go public by listing itself for the first time on the stock exchange.
- Secondary Market
After the shares of the company get listed on the stock exchange, anyone can purchase or sell them through the open market which is also called the secondary market.
- Dividend
When the company shares a portion of its profits with shareholders, it is called a dividend.
- Broker
A broker is a person who buys or sells shares on the stock exchanges on your instructions and in your account.
- Exchange
An exchange is a place where different types of financial instruments are traded.
- Portfolio
A portfolio is the collective pool of your various investments.
- Margin
When you borrow money from the broker to buy shares and no upfront payment is involved, it is called margin.
- Stock Symbol
It is one to three character alphabet that represents the name of the company listed on the stock exchange.
- Sector
In the stock market, the listed companies belong to different sectors of the economy.
The above mentioned are some of the popular terminologies that are used in day-to-day trading on the stock exchange. By understanding them, you can become a better trader or investor. When you deal in the stock market regularly, they will also become a part of your vocabulary. If you want to learn more about the stock market, you can contact Kotak Securities. They teach beginners how to trade in the stock market and help them become successful traders.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9042996168136597,
"language": "en",
"url": "https://pharmastate.blog/pareto-analysis/",
"token_count": 1174,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.035400390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9bb29ef5-a783-4aaf-847c-4bcdd280f5a1>"
}
|
Pareto analysis is a statistical technique that is used in decision making for the selection of the limited number of tasks that produce the most significant overall effect.
- It is based on identifying the top 20% of causes that need to be addressed in order to resolve 80% of the problems.
- 'In any series of elements to be controlled, a selected small factor in terms of the number of elements almost always accounts for a large factor in terms of effort.'
It is a methodology for separating the vital few problems from the trivial many; by identifying and ordering problems according to their importance, it shows where to focus scarce manufacturing problem-solving resources.
It can be used to identify areas where the greatest reliability improvements can be made, or where the time between failures or the repair times are excessive.
- Root Cause Investigation
By using a reiterative multilayered approach, the Pareto concept can assist in root cause investigations by helping to identify the principal causes of the principal failures.
- Risk Analysis
In risk analysis, it is used to identify the principal risks that have the most impact on a project.
- Quality Defect Analysis
The analysis and prioritization of quality defects frequently uses the Pareto concept in situations where a few operations account for the bulk of the quality defects.
- Cost Analysis
Pareto Analysis in cost analysis is used for the identification of the critical warranty repairs of a product and the items they are attributable to.
- Supply Chain Management
In supply chain management, the top percentage of the items inventoried represents the bulk of the total cost or the majority of the usage.
- Setting Work Priorities
When the Pareto distribution is used to list work tasks in order of money lost (including the risk of money lost), it becomes a priority list for attacking the business problems that have the greatest impact on the enterprise.
- Pareto Analysis can be further enhanced by combining with other analytical tools such as Failure Mode and Effects Analysis and Fault Tree Analysis, Fishbone diagrams, Scatter Diagram, Run Charts and Flow Charts in order to correctly identify critical areas.
- The method of carrying out a Pareto Analysis is normally by the construction of a Pareto diagram.
- The steps necessary to construct a Pareto diagram are as follows (a short code sketch after the list illustrates the counting and cumulative-percentage steps):
- Define the purpose of using the diagram and the type of category to use.
- Identify the most appropriate measurement parameter.
- If necessary, group categories into a workable number. A further breakdown of each category can be carried out at a later stage.
- List each category with its associated data count.
- Sort the categories in descending order, placing the one with the largest count first.
- Label the left-hand vertical axis. Make sure the labels are spaced in equal intervals from 0 to a round number equal to or just larger than the cumulative total of all counts.
- Label the horizontal axis. Make the widths of all of the bars the same and label the categories from largest to smallest.
- Plot a bar for each category.
- Plot the cumulative counts.
- Draw a line at 80% of the cumulative value down to the x-axis. This point on the x-axis separates the important causes on the left from the less important causes on the right.
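The counting, sorting, and cumulative-percentage steps can be sketched in a few lines of Python. The defect categories and counts below are invented for illustration, and the actual bar and line plotting is left to whichever charting tool you prefer (Excel, matplotlib, etc.).

```python
# Invented example data: defect category -> count
counts = {
    "labeling": 42, "packaging": 27, "mixing": 9,
    "filling": 6, "sealing": 4, "other": 2,
}

total = sum(counts.values())
cumulative = 0
vital_few = []

# Sort categories in descending order of count and accumulate percentages.
for category, count in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    cumulative_pct = 100.0 * cumulative / total
    print(f"{category:<10} {count:>4} {cumulative_pct:6.1f}%")
    if cumulative_pct <= 80.0 or not vital_few:
        vital_few.append(category)

print("Focus first on:", vital_few)   # the categories covering roughly 80% of defects
```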
4. Further Considerations
- The numbers don’t have to be 20% and 80% exactly – the purpose is to identify the categories accounting for the majority of the results, then tools like the Ishikawa diagram or Fishbone Analysis can be used to identify the root causes of the problems.
- If there are any categories marked ‘other’ in the list of possible causes, make sure that this category does not become too large. If the ‘other’ category accounts for more than 25% it should be broken down.
- Regular reviews of the Pareto distribution are important in order to keep account of who has solved what problems and to define what new failures have come over the horizon that require immediate attention.
5. When To Use A Pareto Chart
- When you want to break a big effort into smaller pieces and identify major contributors
- When you want to focus and prioritize your efforts
- When there are multiple problems or reasons and you want to focus on the most significant
- When analyzing the frequency of causes or reasons
- When data can be categorized and you can determine the number of incidents in each category
- When you want to communicate your findings about the data to others
6. How To Create A Pareto Chart
- Collect your data
- Analyze and categorize your data
- Add your data to Excel
- Sort your data in descending order
- Determine percentages
- Graph your data
- Polish your graph
- Communicate your findings
Although Pareto analysis is a creative way of looking at the causes of problems, it can be limited by the exclusion of possibly important problems which may be small initially but grow with time. It can also be limited by a lack of understanding of how it should best be applied to particular problems, or by the choice of the wrong categories of data.
- Working smarter is good – working smarter on the right things is better
- Determining the 20% of things that are really important can help show where to concentrate improvement efforts
- The 80/20 rule can apply to nearly every aspect of both your work and personal lives
For any feedback or suggestions write to us at [email protected]
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9406332969665527,
"language": "en",
"url": "https://rogersdvs.com/how-to-value-a-company/",
"token_count": 1237,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.064453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:bd59c0b1-c582-4861-b130-e53f71dd66cd>"
}
|
This post outlines how to determine the value of a company – both private and public. You can calculate or evaluate a company's true valuation through a variety of different methods. Whether you are considering purchasing a company or investing, these principles hold true. Bear in mind, we are not investment advisors and all content is simply information on this subject. Please consult your own advisors when possible in all related matters.
How Valuation of a Company is Done
The value of a business ultimately depends on the buyer’s price. There is no right way to objectively determine the value. However, as Steven Robins states in his question and answer column (Entrepreneur, January 11, 2004), “you could probably come up with several wrong ones.”
The idea of business valuation is to predict how likely you are to win or lose in this business situation.
• Start by looking at the value of the business assets. What does the business own (equipment, inventory, etc.), and what would it cost to replace those assets?
• The business’ books. A company’s financial documents (i.e. balance sheets, income statements, etc.) may give a good idea of the value of assets and the level of profit.
• You can look at the business as a money stream. A company’s revenue may be able to give you an estimate of value. Utilizing business valuation databases, business valuation consultants may be able to determine an estimate of value based on a multiple of its revenue. Looking at purchase prices for similar businesses you can sometimes work out what that multiplier is.
• Perhaps profit is the fundamental measure of business value. Estimation of the profit over several years can give you a sense of how much the business is actually worth to you, as a practical matter.
• You need a goodly sample of profit numbers to take a reliable measure of value, since profits fluctuate with the activities of competitors and market conditions.
• Many estimators use the technique of factoring interest rates into their calculations, as if you were going to invest the business profits every year, to determine value. This will give you a measure of how much in treasury bills you would have to buy to equal the business’ profit.
Sometimes non-financial and, perhaps, less quantifiable things can enter into business value estimates. As an example, location can play a role. A business may be more valuable if it is located conveniently to the owner. If the business meets some ideal or some long-held dream, it could be worth more. However, someone estimating the value of a business should be careful not to let the heart too thoroughly rule the head.
How to Determine the Value of a Company
Many entrepreneurs purchase a company already in existence, enticed by the advantages of an already running company. However, it is essential to look at a company’s valuation before completing any purchase, or even before considering the business as a viable transaction. What is the return on investment that the investor can expect? How is the business doing in sales? Can the investor expect to gain back the investment, and how long will it take to gain back the investment? Is the cash flow of the business enough to support the current debt or any proposed future debt? If the entrepreneur purchases the business utilizing debt, can that debt be paid back using the business’s profits? These are just some of the questions that can be answered by a thorough valuation analysis.
The following are a few methodologies credentialed valuation experts use to discover the actual valuation of a company.
- Multi-period Cash Flow Method
This methodology is used by investors who are knowledgeable about the business’ future or are knowledgeable about the relevant industry’s future. The multi-period cash flow method models a business’ cash flows 5 to 10 years into the future making educated assumptions regarding a business’ future sales penetration rate, market share and competitive landscape. These future cash flow expectations are then brought back to today’s dollars utilizing company and industry risks and long-term growth expectations all captured by a business’ discount rate.
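As a rough illustration of the mechanics, the sketch below discounts a hypothetical stream of projected cash flows plus a terminal value back to today's dollars. All figures, the discount rate and the terminal growth rate are invented for the example and are not taken from any particular valuation.

```python
# Hypothetical multi-period cash flow (DCF) sketch -- all figures are invented for illustration.

def present_value(cash_flows, discount_rate, terminal_growth):
    """Discount projected annual cash flows plus a Gordon-growth terminal value."""
    pv = sum(cf / (1 + discount_rate) ** year
             for year, cf in enumerate(cash_flows, start=1))
    terminal_value = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return pv + terminal_value / (1 + discount_rate) ** len(cash_flows)

# Five years of projected cash flows, a 15% discount rate and 3% long-term growth.
projected = [100_000, 110_000, 120_000, 125_000, 130_000]
print(round(present_value(projected, discount_rate=0.15, terminal_growth=0.03)))
```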
- Capitalized Earning (Single Period) Method
Similar to the multi-period cash flow method, the capitalized earnings approach is used to determine the value of a business by calculating the net present value (NPV) of expected future profits as measured by cash flows. The capitalization of earnings method determines a single period cash flow (i.e., next year’s cash flow) and capitalizes that cash utilizing the capitalization rate (cap rate). This will take into account the risk that earnings will stop or be lower than the estimate. This method should only be used when there is an expectation of normal or consistent cash flows.
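A minimal sketch of the single-period calculation follows; the cash flow and capitalization rate are assumptions chosen only to show the arithmetic.

```python
# Capitalized earnings (single period) sketch -- numbers are hypothetical.
next_year_cash_flow = 150_000   # expected cash flow for the coming year
cap_rate = 0.20                 # capitalization rate reflecting risk and long-term growth

indicated_value = next_year_cash_flow / cap_rate
print(f"Indicated value: ${indicated_value:,.0f}")   # $750,000
```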
- Market Approach Method
This methodology is used by investors to determine the value of a business or an asset based on the selling price of similar businesses. In the real estate industry, a property's value can be estimated by looking at comparable sales (i.e., recently sold properties that are similar in size and features and that are located within a close geographic proximity to the property being valued). In a business valuation, the market approach can be used to calculate the value of a business by identifying 'like' companies in similar industries and making adjustments for differences in size, entity type, management, etc.
- Assets Method
This methodology is used to determine a business' value by estimating the value of all of an entity's assets (net asset value) minus the net value of its liabilities. The asset-based approach basically asks what it would cost to recreate the business. This methodology can be useful if the assets and liabilities as carried on a business' financial statements (book value) would be a good estimate of the assets' fair market value.
Determining the value of a business is done by using one or a combination of these methods. An entrepreneur interested in purchasing a business should consider all or some of these methods to discover whether the business is worth its asking price.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9599926471710205,
"language": "en",
"url": "https://thegreenrevolt.com/solar/4-cool-facts-about-solar-power-most-people-are-unaware/",
"token_count": 780,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1201171875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:debc1d1a-3d86-4c14-9421-531f77a7543b>"
}
|
The economic structures of many of the world's countries are increasingly based on industry that requires large quantities of non-renewable resources such as fossil fuels (coal, natural gas, petroleum). Many researchers in the past decade have turned their attention to more sustainable and efficient sources of energy, particularly the promising source of solar energy.
The potential for a more effective and plausible long-term solution to the potentially impending environmental consequences of human-caused climate change, some say, lies in the replacement of fossil fuels and other non-renewable resources with renewable power sources such as the sun and wind. Solar power is gradually becoming adopted by many businesses and residential property owners as a way to both save money and transition to a cleaner energy future. Here are four interesting facts about solar power.
Fastest Growing Industry
Solar power is the 10th fastest growing industry in the United States. Currently, solar energy supplies less than 1% of America's energy needs; however, within a one-year span beginning in the third quarter of 2011, the United States solar energy market grew 140 percent, and it continues to grow. The drop in solar panel prices and the incentives offered by governments around the world looking to reward their nations for adopting technologies that encourage environmental friendliness are facilitating the introduction of solar power to areas of the world that previously relied only on non-renewable resources for fuel. In America, solar is the most rapidly growing energy source, creating power for an increasing number of homes and business establishments.
Uses of Solar Power
Solar power can be used to carry out a multitude of tasks while polluting less. The sun is the primary source of energy for all life forms on planet earth. The energy itself is clean, and can be used to light buildings, cook food, heat and cool rooms, heat water, and to conduct other tasks that require electrical power. This source of energy is considered at least as reliable a fuel source as other currently used non-renewable resources, and does not emit the greenhouse gases with which fossil fuels pollute the atmosphere.
Solar vs. Fossil Fuels
The energy we can get from all of the earth's coal, oil, and natural gas reserves can be matched by a supply of only 20 days of sunshine. As light from the sun makes its way down to earth, some of its energy is reflected back into space and some is absorbed by the atmosphere; however, as an average over the entire surface, each square meter of planet earth accumulates approximately the energy equivalent of nearly a barrel of oil per year. For obvious reasons, this figure is not applicable to every segment of land on the planet, but it does suggest that the potential for solar energy harvesting is significant.
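That "barrel of oil per square meter" figure can be sanity-checked with rough numbers. The sketch below assumes an average surface insolation of about 170 watts per square meter and roughly 6.1 GJ of energy per barrel of oil; both are ballpark assumptions, not figures from this article.

```python
# Rough check of the "nearly a barrel of oil per square meter per year" claim.
avg_insolation_w_per_m2 = 170            # assumed long-term average at the surface
seconds_per_year = 365.25 * 24 * 3600
energy_per_m2_joules = avg_insolation_w_per_m2 * seconds_per_year   # ~5.4e9 J

barrel_of_oil_joules = 6.1e9             # assumed energy content of one barrel
print(round(energy_per_m2_joules / barrel_of_oil_joules, 2))         # ~0.88 barrels
```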
Air and Space
Solar energy has been used to power aircraft and spacecraft, some since 1958. In fact, solar technologies provide most of the power used during space missions, since other fuel sources are virtually nonexistent in space. The International Space Station (ISS) is notable for the solar arrays used to power the craft's sensors, propulsion, and the technologies within the vessel. Solar energy is also used in air travel. NASA created a prototype solar-powered aircraft, Pathfinder, which is powered only by sunlight and could stay aloft all day. The concept was applied in real life in 1990 by Eric Scott Raymond, who built and flew a sun-fueled aircraft over 4,000 kilometers across the United States.
Designers of greener technologies such as Raymond, and scientists alike, continue to look toward solar power as a practical solution for creating a more sustainable future, and the status of solar power as an entirely viable option continues to gain supporters around the world.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9304457902908325,
"language": "en",
"url": "https://www.bankofgreece.gr/en/statistics/external-sector/balance-of-payments",
"token_count": 417,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.042724609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b3c26e27-18bc-4e71-be55-109a6be558be>"
}
|
The balance of payments is a statistical table that records transactions between residents and non-residents, irrespective of the transaction currency, during a specified time period.
The notions of ‘resident’ and ‘non-resident’ are based on the definitions set out by the International Monetary Fund (IMF) in the latest (6th edition) of its Balance of Payments Manual, and used for statistical purposes only.
Since reference month January 2015, the Bank of Greece has adjusted its methodology for compiling Greece's balance of payments to the recommendations and definitions of IMF-BPM6-Balance of Payments Manual (6th edition, 2009). Further information relevant to the transition from the 5th to the 6th edition is available in the special BPM6 Press Release. Historical data based on the new methodology are available from January 2002.
Today, the data collection mechanism used by the Bank of Greece for compiling the balance of payments is based on a ’mixed’ system. The main sources from which balance of payments data are drawn are:
- The direct reporting system of external sector transactions (known as DIREQT– direct reporting questionnaires by transacting parties), regardless of whether or not such transactions are processed with the intermediation of domestic credit institutions.
- The resident Monetary Financial Institutions (including the Bank of Greece), which are required to report monthly data to the Bank of Greece on all transactions between Greek residents with non-residents carried out either on their own behalf or on behalf of their customers.
Other sources of statistical data for the balance of payments include the information provided by EL.STAT. on external trade statistics (based on intrastat/extrastat declarations), the General Accounting Office (Ministry of Finance), the Border Survey and the sea transport services data. A detailed presentation of the methods and statistical sources used by the EU countries for compiling their balance of payments and international investment position is included in the European Central Bank publication: European Union Balance of Payments and International Investment Position statistical sources and methods, November 2016.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9588218331336975,
"language": "en",
"url": "https://www.futurenetzero.com/2020/07/15/green-hydrogen-production-could-become-cost-competitive-by-2030/",
"token_count": 380,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.08642578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:54a8c702-d4b5-4698-a029-42a4cd89e0d8>"
}
|
Green hydrogen production could become cost-competitive by 2030.
That’s the suggestion made in a new analysis from the IHS Markit Hydrogen and Renewable Gas Forum, which says the method of “splitting” the gas from water using renewable electricity could become as affordable as the currently predominant methods that require the use of natural gas as a feedstock.
It notes this form of hydrogen generation is rapidly developing from pilot to commercial-scale operation in many regions of the world and predicts investment in such ‘power-to-x’ projects will grow from around $30 million (£23.75m) in 2019 to more than $700 million (£554m) in 2023.
The study suggests this fuel will be increasingly used to decarbonise the transport, heating, industry and power generation sectors and says that, paired with blue hydrogen production (a natural gas-based method coupled with carbon capture technology), green hydrogen is likely to play a significant role in the future energy mix.
Simon Blakey, IHS Markit Senior Advisor, Global Gas, said: “Costs for producing green hydrogen have fallen 50% since 2015 and could be reduced by an additional 30% by 2025 due to the benefits of increased scale and more standardized manufacturing, among other factors.”
“The work that we have done for the IHS Markit Hydrogen Forum – very much focuses on economies of scale as a way of reducing costs, developing dedicated renewables in order to get the load factor on the electrolyser up and, of course, continued expectations of falling costs for renewables.”
“We’re all pretty clear that the trends are in that direction in all three of those areas,” Blakey said moderating a recent panel on hydrogen for the CERAWeek Conversations.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9351686835289001,
"language": "en",
"url": "https://www.investopedia.com/ask/answers/013015/how-do-i-determine-face-value-life-insurance-policy.asp",
"token_count": 709,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.08251953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c2c30e78-20c5-49fc-836d-a5000cef3bb2>"
}
|
A life insurance policy has a face value and a cash value, and they are two different numbers.
- The face value is the death benefit. This is the dollar amount that the policy owner's beneficiaries will receive upon the death of the insured. This figure is recorded in the schedule of benefits for the policy.
- The cash value is the amount you would receive if you surrendered the policy early, forfeiting the death benefit in return for cash up front. This is recorded on the monthly statements that insurers send their customers.
The cash value may also be referred to as the net surrender value.
About the Face Value of Life Insurance
To calculate the full benefit that will be paid out to beneficiaries in the event of the insured person's death, consult the schedule of benefits in the policy.
Most life insurance companies also offer riders, which are additional benefits that can be included in a plan. For example, some riders stipulate that the face value doubles if the insured dies due to a specific type of accident.
Altogether, the face value plus the value of any additional benefits constitute the policy's total death benefit.
In most cases, the face value of life insurance is transferred to the beneficiaries tax-free.
- The face value of a life insurance policy is the death benefit, while its cash value is the amount that would be paid if the policyholder opts to surrender the policy early.
- Face value is the primary factor in determining the monthly premiums that will be owed.
- Face value can be found in the statement of benefits, while cash value is on the monthly statement policyholders receive.
How Face Value Influences Cost
Face value is one of the most important factors that contribute to the cost of a life insurance policy.
For example, a person who seeks to buy a term life insurance policy from Company XYZ would expect to pay more for a $500,000 face value policy than for a $100,000 face value policy.
What Can Cause Face Value to Change?
There are many events that can trigger a change up or down in the face value of a policy.
On the plus side, the cash value can grow large enough that it actually causes a corresponding increase in the face value of the policy.
On the minus side, unpaid loans taken from the policy balance by the policyholder will be deducted from the policy's face value.
Any potential change in the face value of the policy will be addressed in the terms of the policy.
Steve Kobrin, LUTCF
The firm of Steven H. Kobrin, LUTCF, Fair Lawn, NJ
The key thing is to determine how big a face value to buy. To calculate it, start off by asking yourself these questions:
- How much money will my spouse and children need to maintain their current quality of life?
- How much will they need to pay my debts, taxes, and other estate-related costs?
- How much will my favorite charities need to replace my donations?
- Next, figure out the maximum length of time the coverage would be needed. For example, if your youngest child is two years old now, you’d want to make sure he or she has a sufficient income through college. That's another 20 years.
It may be more cost-effective to use several policies of different face amounts and guarantee periods to cover these various needs. Or, it may be simpler to have one big fat policy to cover everything.
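One way to pull the answers to those questions together is a simple needs-based calculation, sketched below. Every figure in it is a made-up example for illustration, not advice; a real calculation should be worked through with an advisor.

```python
# Hypothetical needs-based face value estimate -- all figures are illustrative only.
annual_income_to_replace = 60_000
years_of_support = 20                  # e.g., until the youngest child finishes college
debts_taxes_estate_costs = 250_000
charitable_giving = 25_000
existing_savings_and_coverage = 150_000

face_value_needed = (annual_income_to_replace * years_of_support
                     + debts_taxes_estate_costs
                     + charitable_giving
                     - existing_savings_and_coverage)
print(f"Estimated face value to buy: ${face_value_needed:,}")   # $1,325,000
```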
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9450203776359558,
"language": "en",
"url": "https://www.theparliamentmagazine.eu/news/article/commission-presents-eu-plastics-strategy",
"token_count": 967,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.03857421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6c4176f1-dca1-4eff-8602-d0dbaf7cf8ce>"
}
|
Part of the transition to a circular economy, the strategy lays out measures to cut plastic packaging and plastic waste.
It was prepared by Commissioners Frans Timmermans, Jyrki Katainen, Karmenu Vella and Elżbieta Bieńkowska and focuses on incentives to make the design of plastics better so that recycling (especially of single-use plastics, 90 per cent of which are not recycled) is easier.
The strategy is due to be debated by MEPs in Parliament in Strasbourg on Wednesday.
Figures show that plastics production is 20 times higher than in the 1960s, and is forecast to almost quadruple by 2050. Although there are thousands of types of plastics, 90 per cent of plastics are derived from virgin fossil fuels.
About six per cent of global oil consumption is used to produce plastics; by 2050, this share could reach 20 per cent.
In Europe, about 40 per cent of post-consumer plastic waste is incinerated with energy recovery, and the rest is either landfilled or recycled. About half of the plastic waste collected and recycled is treated in the European Union; the other half is exported, mainly to China.
Commission Vice-President Jyrki Katainen, responsible for jobs, growth, investment and competitiveness, said, “With our plastic strategy we are laying the foundations for a new circular plastics economy, and driving investment towards it. This will help to reduce plastic litter in land, air and sea while also bringing new opportunities for innovation, competitiveness and high-quality jobs.
“This is a great opportunity for European industry to develop global leadership in new technology and materials. Consumers are empowered to make conscious choices in favour of the environment. This is true win-win.”
Under the strategy, new rules on packaging will be developed to improve the recyclability of plastics used on the market and increase the demand for recycled plastic content.
New rules will also be introduced to curb the use of microplastics in products, and fix labels for biodegradable and compostable plastics.
Additionally, the Commission will put forward new rules to tackle marine litter, while reducing the administrative burden on ports, ships and competent authorities.
In an effort to support investment and innovation, €100m will be made available to finance the development of smarter, more recyclable plastics materials, making recycling processes more efficient, and tracing and removing hazardous substances and contaminants from recycled plastics.
The Commission’s proposals have generally been praised by MEPs. The EPP group said, “Plastics make up more than 85 per cent of our litter. We need a new plastics economy, otherwise by 2050 there will be more plastic in the ocean than fish. Our MEPs support decisions that protect the environment by building on technological innovation and create new jobs.”
Speaking ahead of the strategy launch, S&D group Vice-Chair Kathleen Van Brempt said, “The EU is the first to make such a move and we should be proud of that because this is very important. If we don’t do something soon we will have more plastic than fish in the seas.
“We need to do more to make sure plastic is re-useable and can be recycled. And we need to say to people that it is no longer acceptable to use plastic bags.”
Writing recently in the Parliament Magazine, she said, “Plastics are a good example of what goes wrong with our linear take-make-dispose economy. Some 98 per cent of our plastics are produced using virgin feedstocks; only two per cent are recycled in a closed loop. More than half of the plastics put on the world market are landfilled or incinerated. Almost one third ends up as litter on land or in the oceans.”
As part of its ‘circular economy’ package, the European Commission presented in December 2015 an action plan for the circular economy.
The action plan presented measures in five priority sectors, among which was plastics. The Commission pledged specifically to undertake the following actions: develop a strategy on plastics in the circular economy (by 2017); and take specific action to reduce marine litter with a view to implementing the 2030 Sustainable Development Goals (from 2015 onwards).
In a resolution of 14 January 2014 on plastic waste in the environment, Parliament called for binding targets for collection, sorting and recycling, as well as mandatory criteria for plastics recyclability.
It advocated recycling as the best option to meet environmental targets, and urged that plastic waste be used for energy recovery only in cases where all other possibilities have been exhausted.
It also called for phasing out the most dangerous plastics and those which contain substances hampering recycling processes.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9474279880523682,
"language": "en",
"url": "https://www.windpowerengineering.com/development-banks-key-unlock-trillions-wind-solar/",
"token_count": 538,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.00970458984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:7b344f02-ac87-467e-b90d-4ca0dad75b4c>"
}
|
National and multilateral development banks are key to scaling up clean energy investment to at least $1 trillion per year, according to a new paper released ahead of the World Bank-IMF Spring Meetings.
Investing at Least a Trillion Dollars a Year in Clean Energy, from the New Climate Economy, explores different policies and instruments for reducing financing costs. It finds that cooperation among multilateral development banks (MDBs), governments, and the private sector can lower the risk of clean energy investments and lower the cost of capital.
“Everyone knows that we need to invest more in clean energy,” said Helen Mountford, Program Director of the New Climate Economy. “What we need to recognize is that the capital is already available. Investors are on the hunt for new opportunities. We need to scale up risk mitigation approaches to match the risk-return profile that they need, and the money will pour in.”
Development banks can take on the risks that no other actors are willing to take. The paper recommends that they expand their risk mitigation instruments and increase their direct investment in clean energy projects. For every US$1 the MDBs invest, they can leverage up to US$20 in private finance. Meanwhile, new financing vehicles like green bonds and YieldCos are growing rapidly, and can reduce liquidity risk for investors.
“Clean energy projects face financing models and electricity markets designed for fossil fuels,” said Ilmi Granoff, co-author of the paper and Senior Research Associate at the Overseas Development Institute, “but renewables have no fuel costs, low operating costs, and can be flexibly deployed—we need to change the paradigm so that clean energy assets are priced to reflect their low risk.”
If clean energy projects could access low-cost, long-term financing, the cost of clean electricity could be reduced by as much as 20% in developed economies and the cost of clean energy support could be reduced by as much as 30% in emerging economies.
In 2015, clean energy attracted US$329 billion in global investment, a new record but still not enough to limit global warming below 2°C or provide energy access to the 1.1 billion people who lack it.
The paper also finds that scaling up investment in clean energy and energy efficiency to US$1 trillion per year by 2030 could reduce annual GHG emissions by up to 7.5 Gt CO2e, more than the annual emissions of the United States.
“Clean energy investment is potentially a massive triple win for the economy, the climate, and for rapidly achieving energy access,” said Mountford.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9571868181228638,
"language": "en",
"url": "https://zophra.com/is-it-effective-to-pursue-my-master-degree-of-commerce-with-distance-learning",
"token_count": 586,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.052490234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:48650c2d-9b6d-4ef6-b757-b6460a90eb33>"
}
|
Commerce is a part of business and economics studies that teaches aspirants all the activities involved in the exchange of products and services from the manufacturer or producer to the client. It does not comprise only the buying and selling of products; it is an extensive stream covering careers in investing, finance, distribution of products, meeting market demand, and other business opportunities. Being a graduate is valuable, but having a Master's degree in Commerce is especially beneficial and opens the door for commerce aspirants to become chartered accountants. E-commerce, likewise, is a part of commerce with a touch of technology.
Master of Commerce graduates can readily get jobs in larger companies and banks, and they need the Master's degree in Commerce to proceed further in such a career.
Pursuing an M.Com is easier when you can attend college or university daily, but due to the coronavirus, colleges and schools have not yet physically reopened, which creates a major problem for students.
A Little History of Distance Learning
The widespread outbreak of the Covid-19 pandemic has forced the government to close schools and encourage distance learning from home. Various methods are used to ensure that learning activities continue even though there are no face-to-face sessions.
Basically, distance education is a method where students and teachers are in different locations, so an interactive telecommunication system is needed to connect them with one another. In distance learning the role of technology is essential, considering that learning is done online.
LPU distance learning methods actually existed long before the Covid-19 pandemic hit, and the methods of distance learning have continually evolved. With the widespread public use of the internet in various countries from 1996 onwards, it became a growing phenomenon, followed by the emergence of a wide variety of digital content.
M.Com distance education is offered across time and space so that students gain the flexibility to learn at different times and in different places, using a variety of learning resources. Distance education has evolved from correspondence education to education through e-learning across time and space.
The sudden shift from face-to-face in classrooms to distance learning from home also shows the need for capacity building of teachers. Unequal internet access, gaps in teacher qualifications, and quality of education, as well as a lack of communication and technology skills are vulnerabilities in distance learning initiatives in India.
Definition of E-Learning
- Individual / independent or group learning with the help of LPU distance learning institutes.
- Provides flexibility for students to study anytime, anywhere alone, and with anyone.
- Can be blended with face-to-face learning, and has innovative value because it gives a new feel to the teaching and learning process that differs from ordinary face-to-face learning.
LPU emphasizes independent, structured and guided learning by using a variety of learning resources;
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9184722900390625,
"language": "en",
"url": "http://core-cms.prod.aop.cambridge.org/core/books/essential-microeconomics/firms/C0BA9D3C7D233FF934C134940F8FCF86",
"token_count": 235,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.10205078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:560f9d87-7726-4d64-be4f-9cb95f2ea00e>"
}
|
What Is a Firm?
Key ideas: firm as a transformer of inputs into outputs, production sets, production functions, net supply and net demand
In this chapter the focus switches to the transformation of commodities by firms. Within a firm, raw materials and other commodity inputs are processed by labor and managerial inputs to produce goods and services. These outputs may be for consumption (final products) or for sale as inputs to other firms (intermediate products). The amount of output that can be produced depends on the technology (machinery, buildings, etc.) held by the firm.
This is relatively straightforward. Consider, for example, a newsprint manufacturer. It transforms the primary raw materials of lumber, energy, and labor into giant rolls of paper ready for delivery to daily newspapers, using an array of machines. However, from a broader perspective, the machines are also inputs. In addition to purchasing labor inputs and raw materials, the firm can purchase additional capital equipment (for the same plant or to build a new plant) and so alter the set of available outputs. From this perspective, the technology of a firm is a set of blueprints for the transformation of commodities.
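To make the blueprint idea concrete, the sketch below evaluates a simple Cobb-Douglas production function for the newsprint example; the functional form and the parameter values are purely illustrative and are not taken from the chapter.

```python
# Illustrative Cobb-Douglas production function: output from labor and capital inputs.
def output(labor_hours: float, capital: float,
           A: float = 2.0, alpha: float = 0.6, beta: float = 0.4) -> float:
    """Hypothetical tonnes of newsprint produced from labor hours and capital."""
    return A * labor_hours ** alpha * capital ** beta

# With alpha + beta = 1 this technology has constant returns to scale:
print(output(1000, 500))        # baseline output
print(output(2000, 1000))       # doubling both inputs doubles output
```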
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9384583234786987,
"language": "en",
"url": "https://anteprimafriuli.com/qa/question-what-are-the-four-macroeconomic-objectives.html",
"token_count": 884,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.11474609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:da5640a0-6393-43ae-829b-600748495428>"
}
|
- What are the 5 economic values?
- What are the 5 main economic goals?
- What are the microeconomic objectives?
- What are the three macroeconomic policies?
- Why is economic growth a macroeconomic objective?
- What are the features of macroeconomics?
- What are macroeconomic goals?
- What are the four goals of most economic systems?
- What are the 5 macroeconomic objectives?
- What is the concept of macroeconomics?
- What are the 3 main macroeconomic goals?
- What is the most important macroeconomic objective?
- What is the importance of macroeconomics?
- What are the 7 major economic goals?
What are the 5 economic values?
What Are ‘Economic Values’.
There are nine common Economic Values that people consider when evaluating a potential purchase: efficiency, speed, reliability, ease of use, flexibility, status, aesthetic appeal, emotion, and cost..
What are the 5 main economic goals?
ECONOMIC GOALS: Five conditions of the mixed economy, including full employment, stability, economic growth, efficiency, and equity, that are generally desired by society and pursued by governments through economic policies.
What are the microeconomic objectives?
The objective of microeconomic theory is to analyse how individual decision-makers, both consumers and producers, behave in a variety of economic environments.
What are the three macroeconomic policies?
The key pillars of macroeconomic policy are: fiscal policy, monetary policy and exchange rate policy. This brief outlines the nature of each of these policy instruments and the different ways they can help promote stable and sustainable growth.
Why is economic growth a macroeconomic objective?
Economic growth means an increase in real GDP – which means an increase in the value of national output/national expenditure. Economic growth is an important macro-economic objective because it enables increased living standards, improved tax revenues and helps to create new jobs.
What are the features of macroeconomics?
The features of macroeconomics are: Macroeconomics is the branch of economics that studies the aggregate units of the economy, such as national income, employment and inflation. Macroeconomics uses the lumping method for the purpose of economic study.
What are macroeconomic goals?
MACROECONOMIC GOALS: Three conditions of the mixed economy that are most important for macroeconomics, including full employment, stability, and economic growth, that are generally desired by society and pursued by governments through economic policies. … They are full employment, stability, and economic growth.
What are the four goals of most economic systems?
The Goals of Economic Policy. There are four major goals of economic policy: stable markets, economic prosperity, business development and protecting employment.
What are the 5 macroeconomic objectives?
Economists usually distinguish five objectives of macroeconomic policy, which in its turn can also be used to appraise the performance of the economy. The macroeconomic objectives are: economic growth, full employment, price stability, income equality and balance of payment equilibrium.
What is the concept of macroeconomics?
Definition: Macroeconomics is the branch of economics that studies the behavior and performance of an economy as a whole. It focuses on the aggregate changes in the economy such as unemployment, growth rate, gross domestic product and inflation.
What are the 3 main macroeconomic goals?
Goals. In thinking about the overall health of the macroeconomy, it is useful to consider three primary goals: economic growth, full employment (or low unemployment), and stable prices (or low inflation).
What is the most important macroeconomic objective?
Economic growth is normally seen as the most important long-term macroeconomic objective. Without economic growth, so it is argued, people will be unable to achieve rising living standards.
What is the importance of macroeconomics?
The study of macroeconomics is very important for evaluating the overall performance of the economy in terms of national income. The national income data helps in anticipating the level of fiscal activity and understanding the distribution of income among different groups of people in the economy.
What are the 7 major economic goals?
National economic goals include: efficiency, equity, economic freedom, full employment, economic growth, security, and stability.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9288519024848938,
"language": "en",
"url": "https://emerj.com/ai-future-outlook/an-ai-cybersecurity-system-may-detect-attacks-with-85-percent-accuracy/",
"token_count": 914,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.4921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9ccf4cc5-c6a5-4980-8dd0-a43a165c3b87>"
}
|
How secure is your company’s online data?
Probably not as secure as you think. Recent statistics from a security risk benchmarking startup called SecurityScorecard suggest that the United States federal government ranks dead last in cybersecurity among major industries, despite having spent $100 billion on cybersecurity measures over the past decade.
IT and security teams are dangerously understaffed, with over 200,000 cybersecurity jobs going unfilled. A Rand Corporation study estimates there are about 1,000 top-level cybersecurity experts, compared to a global need for 10,000 to 30,000. Perhaps most remarkable is the expense. The British insurance company Lloyd’s puts the annual cost of cyber attacks at $400 billion – and that’s without including the significant portion of cybercrime that the World Economic Forum (WEF) claims goes undetected.
So what’s a company to do? Better artificial intelligence may be the answer.
When it comes to detecting cyber attacks, today’s security systems come in two forms: analyst-driven and machine-driven. Analyst-driven solutions are developed and maintained by security experts and rely on rule sets to scan for potential attacks. The weakness with these solutions is that any attack that doesn’t fit neatly into the experts’ set of rules is disregarded and allowed to slip by. Thus, the system overlooks new and unfamiliar attack methods.
The machine-driven form utilizes anomaly detection that’s generated by a machine learning algorithm. Anomaly detection has the opposite weakness, in that it tends to flag too many false positives. This often requires constant feedback from cybersecurity analysts, who tend to have too much on their plates to effectively address and re-label every false positive.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) in collaboration with machine learning-startup PatternEx have combined analyst-driven solutions with anomaly detection to develop what they claim is a drastically better solution. In fact, they claim the system – dubbed AI2 – can predict 85 percent of cyberattacks with only the occasional oversight of human experts needed. The teams presented their findings in a paper at last week’s IEEE International Conference on Big Data Security in New York City.
The “AI-driven predictive cybersecurity platform” works by first combing through data and attempting to detect suspicious activity through unsupervised (or, anomaly detection) methods. Once the system finishes filtering, it presents the suspicious activity to a human cybersecurity expert, who confirms the fraudulent activity and denies any legitimate threats.
AI2 then creates what’s called a supervised model from the expert’s feedback. This model becomes a reference tool for the system when it detects future attacks. AI2 refers to the analyst’s supervised model as it combs through additional data. Again, it presents the suspicious activity to an analyst, who confirms the actual attacks. This feedback is fed again into the supervised model, and the system’s detection becomes progressively more refined.
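The loop described above (unsupervised scoring, analyst review, then supervised re-training) can be sketched with off-the-shelf tools. The code below is not the actual AI2 system; it is a minimal illustration using scikit-learn on synthetic data, with a single isolation forest standing in for AI2's unsupervised detectors and a random forest for the supervised model.

```python
# Minimal sketch of an anomaly-detection plus analyst-feedback loop (not the real AI2 system).
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
events = rng.normal(size=(5000, 10))             # stand-in feature vectors for log events

# 1. Unsupervised pass: score events and surface the most anomalous ones for review.
detector = IsolationForest(random_state=0).fit(events)
anomaly_score = -detector.score_samples(events)  # higher = more anomalous
review_queue = np.argsort(anomaly_score)[-200:]  # top 200 shown to the analyst

# 2. Analyst feedback: a human would label these; here the labels are faked at random.
analyst_labels = rng.integers(0, 2, size=review_queue.size)

# 3. Supervised model built from the feedback, used to score future events.
clf = RandomForestClassifier(random_state=0).fit(events[review_queue], analyst_labels)
attack_probability = clf.predict_proba(events)[:, 1]

# 4. Next cycle: blend anomaly scores with the supervised model to pick the new queue.
combined = attack_probability + anomaly_score / anomaly_score.max()
next_review_queue = np.argsort(combined)[-200:]
```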
When these steps are repeated just a few times, AI2's researchers claim you'll have a system with an 85 percent success rate at predicting cyberattacks.
AI2 stands out by utilizing three different unsupervised learning methods before presenting the data to analysts. The addition of the supervised model – developed from the analysts' feedback – means the system can scale back the number of events flagged as suspicious five-fold in just a few days.
CSAIL research scientist Kalyan Veeramachaneni, who helped lead the project, describes AI2 thus: “The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions…That human-machine interaction creates a beautiful, cascading effect.”
Meanwhile, even those unaffiliated with the project are excited. Nitesh Chawla, the Frank M. Freimann Professor of Computer Science at the University of Notre Dame, told MIT News, “This paper brings together the strengths of analyst intuition and machine learning, and ultimately drives down both false positives and false negatives.”
If AI2 works as planned, the system may provide IT and security teams with a valuable alternative to anomaly detection and analyst-driven solutions. By combining the two approaches, the researchers have helped refine machine learning methods for cybersecurity while freeing up analysts to focus on other projects.
Image credit: Pixabay
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9270220994949341,
"language": "en",
"url": "https://en.anexas.net/Cost-of-Quality-COQ",
"token_count": 887,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0771484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:451813ad-eac5-4455-8704-7336a957d7fb>"
}
|
Cost of Quality (COQ)
Cost of Quality is a methodology used to define and measure where, and in what amount, an organization's resources are being used for prevention activities and maintaining product quality, as opposed to the costs resulting from internal and external failures. The Cost of Quality can be represented as the sum of two factors: the Cost of Good Quality plus the Cost of Poor Quality equals the Cost of Quality, as represented in the basic equation below:
CoQ = CoGQ + CoPQ
The Cost of Quality equation looks simple, but in reality it is more complex. The Cost of Quality includes all costs associated with the quality of a product, from preventive costs intended to reduce or eliminate failures, to the cost of process controls that maintain quality levels, to the costs related to failures, both internal and external.
The methods for calculating Cost of Quality vary from company to company. In many cases, organizations determine the Cost of Quality by calculating total warranty dollars as a percentage of sales. Unfortunately, this method only scratches the surface of the Cost of Quality without looking inside the organization. In order to gain a better understanding, a more comprehensive look at all quality costs is required.
The Cost of Quality can be divided into four categories: Prevention, Appraisal, Internal Failure and External Failure. Within each of the four categories there are numerous potential sources of cost related to good or poor quality. Some examples of typical sources of Cost of Quality are listed below.
The Cost of Good Quality (CoGQ)
1. Prevention Costs – costs incurred from activities intended to keep failures to a minimum. These can include, but are not limited to, the following:
o Establishing Product Specifications
o Quality Planning
o New Product Development and Testing
o Development of a Quality Management System (QMS)
o Proper Employee Training
2. Appraisal Costs – costs incurred to maintain acceptable product quality levels. Appraisal costs can include, but are not limited to, the following:
o Incoming Material Inspections
o Process Controls
o Check Fixtures
o Quality Audits
o Supplier Assessments
The Cost of Poor Quality (CoPQ)
3. Internal Failures – costs associated with defects found before the product or service reaches the customer. Internal Failures may include, but are not limited to, the following examples:
o Excessive Scrap
o Product Re-work
o Waste due to poorly designed processes
o Machine breakdown due to improper maintenance
o Costs associated with failure analysis
4. External Failures – costs associated with defects found after the customer receives the product or service. External Failures may include, but are not limited to, the following examples:
o Service and Repair Costs
o Warranty Claims
o Customer Complaints
o Product or Material Returns
o Incorrect Sales Orders
o Incomplete BOMs
o Shipping Damage due to Inadequate Packaging
These four categories can now be applied to the original Cost of Quality equation. Our original equation stated that the Cost of Quality is the sum of the Cost of Good Quality and the Cost of Poor Quality. This still holds true, but the basic equation can be expanded by applying the categories within both the Cost of Good Quality and the Cost of Poor Quality.
• The Cost of Good Quality is the sum of Prevention Cost and Appraisal Cost (CoGQ = PC + AC)
• The Cost of Poor Quality is the sum of Internal and External Failure Costs (CoPQ = IFC + EFC)
By combining the equations, the Cost of Quality can be more accurately defined, as shown in the equation below:
COQ = (PC + AC) + (IFC + EFC)
One important factor to note is that the Cost of Quality equation is nonlinear. Investing in the Cost of Good Quality does not necessarily mean that the overall Cost of Quality will increase. In fact, when resources are invested in the right areas, the Cost of Quality should decrease. When failures are prevented or detected before leaving the facility and reaching the customer, the Cost of Poor Quality is reduced.
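To see the arithmetic in one place, the short sketch below plugs hypothetical annual figures into the expanded equation; every amount is invented for illustration.

```python
# Cost of Quality with hypothetical annual figures (all amounts are illustrative).
prevention_cost  = 40_000    # training, quality planning, QMS development
appraisal_cost   = 60_000    # inspections, audits, process controls
internal_failure = 120_000   # scrap, rework, downtime
external_failure = 180_000   # warranty claims, returns, complaints

cost_of_good_quality = prevention_cost + appraisal_cost        # CoGQ = PC + AC
cost_of_poor_quality = internal_failure + external_failure     # CoPQ = IFC + EFC
cost_of_quality = cost_of_good_quality + cost_of_poor_quality  # COQ = (PC + AC) + (IFC + EFC)

print(f"CoQ = {cost_of_quality:,}")   # 400,000
```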
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9408532381057739,
"language": "en",
"url": "https://encyclopedia.kaspersky.com/glossary/miner/",
"token_count": 93,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.42578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8ef36227-afbd-44bf-9b27-ac79c95aca84>"
}
|
A program for generating (mining) cryptocurrency. Most cryptocurrencies are issued in a decentralized manner by creating new blocks of “money” according to certain rules. The generation of each new unit of currency requires considerable computational resources. Miners utilize those resources to find new hash sums and earn cryptocurrency for their owners. A miner installed on a device without the consent of its owner is malware (see Trojan miner).
The name is sometimes applied to people who engage in mining.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9703677296638489,
"language": "en",
"url": "https://sandhill.com/article/the-difference-between-birthrates-following-911-and-covid-an-economic-outlook/",
"token_count": 654,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.07666015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f49789bb-58a8-4eab-9067-6c67516e7b8e>"
}
|
At the start of the “COVID Spring” there was a common social outlook that in nine months there would be an increase in birth rates, thanks to couples having more time together.
However, as the months carried on, the social conversation shifted as the reality of COVID’s impacts around the world settled in.
Before 2020, the last time the world was arguably unified in unprecedented tragedy was September 11, 2001. So one might observe how the birth rate was impacted in the year following the attacks – and decipher if we can expect a similar pattern in a post COVID year. Take it one step further, and we can trace how the birth rate will impact our economy in the coming decades.
Economist Lindsay Tedds observes that this pandemic will likely have the opposite effect to the one 9/11 had. “After [the Sept. 11, 2001, attacks] we saw an increase in births, particularly in New York state,” she said. “It was a kind of event that made people really think of the value of human life and what they wanted out of life.”
“People without kids are going to be seeing the risks that parents have faced with schools closing and daycares not available, and having to walk away from careers and jobs because there’s no child care, and [conclude] that we’re still living in a society where women still take on more of the parental duties.”
Melissa S. Kearney, Professor in the Department of Economics at the University of Maryland, explains that income and birth rates are expected to move together: “Apart from the question of how many children to have, parents also face the decision of when to have them. If credit markets are perfect, parents can borrow and save in order to finance the cost of children and optimally choose when to have children. But it is difficult for people who are credit constrained to choose to have a child when their income is low. If money matters for fertility, we would therefore expect to see births move with the business cycle.”
A forecast published in June by the Brookings Institution, a Washington, D.C.-based think-tank, said there could be 500,000 fewer babies born in the U.S. as a result of the pandemic.
Nora Spinks, CEO of the Vanier Institute for the Family shares that “What it means for families and for policy-makers and for communities is that if we see a drop in pregnancy in 2020, we’ll see a drop in demand for child care in 2023 to 2024, a drop in kindergarten in 2025 and a drop in adolescents available for summer jobs and part-time work by 2030.” And the ripple effect splashes into future work placements, retirement funds, health insurance etc.
We understand that the details of an economic forecast due to the predicted drop in birth rates remain to be seen; however, Sand Hill welcomes our readers to share their expert economic views so we can all plan for brighter days. Because, as we were on September 11, 2001: we are all in this together.
Clare Christopher is an editor at SandHill.com.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.938425600528717,
"language": "en",
"url": "https://smallbusiness.chron.com/labor-productivity-ratio-14591.html",
"token_count": 635,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0031585693359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:86afddca-3d3e-4b4f-9bb7-698e98e4d2b8>"
}
|
The Labor Productivity Ratio
As a business owner, you must measure productivity to know if the money you spend on labor is paying off in terms of output. The labor productivity ratio is the simplest way to find out if you're getting the production you need. Use this ratio on a regular basis, and you'll remain aware of your employees' productivity.
Elements of the Productivity Ratio
The labor productivity ratio, in its simplest form, looks like this: output/input. Simply divide the amount of output you're getting by the amount of work you're putting into it. This requires you to assign numbers to both input and output so that you get a meaningful figure.
Assign a Number to Input
The most useful number by which to measure input is the number of hours worked. Examine one full work shift and determine how many hours each production employee is working. Take out lunch breaks, unless they're paid lunch breaks. Leave in coffee breaks, as you're supposed to pay employees for these periods. If you want to measure the input of all employees, combine the hours for your complete crew. If you want to measure one employee's productivity, use only that employee's hours.
Assign a Number to Output
Your output number is the number of units produced. If you don't run a manufacturing business, assign a number to productivity to measure, for example, the number of new customers contacted, number of words written or number of meetings set. Find a number that measures an important task for each employee. You can measure the output of your staff as a whole, or concentrate on the output of one employee.
Divide Output by Input
If an employee's output is 1,000 units and it takes her 8 hours to produce this number of units, 1000/8 equals 125 units per hour. You can measure this figure against those of other employees and determine if you're getting the production you need from any particular individual, or you can use the formula to measure your entire output and the number of hours all employees put into it.
Turn Figures Into Dollars
You can assign a dollar figure to each element of the ratio. Do this after you've applied the formula. If you know, for example, that an employee produces 125 units in an hour, you can assign a market value to those 125 units. You may then use the hourly wage of the employee to find out how much it costs you each hour to produce that value. Example: If 125 units sell for $40 each, you have a total revenue of $5,000. The employee made these in one hour, and you pay that employee $30 per hour. For every $30 spent on this employee's wages, you realize $5,000 worth of income.
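The same arithmetic is easy to automate. The sketch below reproduces the example figures from this article; the function and variable names are just for illustration.

```python
# Labor productivity ratio and revenue per hour, using the example figures above.
def productivity_ratio(units_produced: float, hours_worked: float) -> float:
    """Output divided by input: units produced per hour worked."""
    return units_produced / hours_worked

units, hours = 1000, 8
unit_price, hourly_wage = 40.0, 30.0

units_per_hour = productivity_ratio(units, hours)   # 125 units per hour
revenue_per_hour = units_per_hour * unit_price      # $5,000 of output per hour
print(f"{units_per_hour:.0f} units/hour; ${revenue_per_hour:,.0f} revenue per "
      f"${hourly_wage:.0f} of hourly wages")
```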
Kevin Johnston writes for Ameriprise Financial, the Rutgers University MBA Program and Evan Carmichael. He has written about business, marketing, finance, sales and investing for publications such as "The New York Daily News," "Business Age" and "Nation's Business." He is an instructional designer with credits for companies such as ADP, Standard and Poor's and Bank of America.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9226997494697571,
"language": "en",
"url": "https://spriglobal.org/portfolio/sa-moda-training-2/",
"token_count": 358,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0235595703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8b32d8e7-a8a5-42b8-913e-4f63d5252f51>"
}
|
The public budget of a Government is the most powerful tool that can be used to address poverty. Indeed, the vast majority of investments in social services for children and their families are now funded by national governments. This emphasizes the importance of how national resources are distributed and the central role of the Ministries of Finance.
To ensure budgets benefit children and their families, allocation and expenditure processes need to be participatory, transparent, accountable, sustainable, effective, efficient and equitable. The Public Finance for Children initiative thus seeks to make children visible in national budgets. This initiative is a response to the rapid decline in development assistance that middle-income countries are experiencing.
In order to effectively monitor the budget, citizens and policy makers need to understand the national budgeting process and its implications for child well-being and rights realization. To support the fruitful delivery of Public Finance for Children courses in Lesotho, SPRI Global is closely collaborating with Unicef Lesotho and Lesotho’s Ministry of Finance in preparing a training manual and holding a 3-day Training of Trainers course in October 2018. The aim of the course is to strengthen Public Finance for Children literacy among members of parliament and CSOs, among other stakeholders.
SPRI Global’s expertise and experience in the fields of capacity building, child rights realization, micro and macroeconomics, public finance management and social protection is essential for developing both the training manual and the Training of Trainers course. Training cycles are expected to be 2 or 3 days long, and a didactic approach that is content-oriented and interactive will be adopted.
Through this project, SPRI Global continues to contribute to strengthening linkages of policy and planning to budget allocations and thus to long-term beneficial outcomes for children.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9652496576309204,
"language": "en",
"url": "https://studymoose.com/a-comprehensive-analysis-of-impact-of-goods-and-services-tax-essay",
"token_count": 3341,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0247802734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a69e2f75-54c8-4bef-85c4-578518966384>"
}
|
The Goods and Services Tax (GST), implemented on July 1, 2017, is regarded as the most significant taxation reform implemented in India since independence in 1947. GST was originally slated for April 2010 but was postponed due to conflicts among stakeholders and political issues. GST is one of the most critical tax reforms in India and had long been an awaited decision. It is a comprehensive tax system that subsumes all indirect taxes of the State and Central Governments and unifies the whole economy into a seamless national market.
It is expected to remove the burden of the existing indirect tax system and play an important role in the growth of India. GST subsumes all indirect taxes, which will help the growth of the economy and prove more beneficial than the existing tax system. GST will also help to accelerate the growth of the country's overall Gross Domestic Product (GDP).
GST is a blanket indirect tax that subsumes several indirect state and federal taxes such as Value Added Tax (VAT) and Excise Duty, as well as different State Taxes, Central Surcharges, Entertainment Tax, Luxury Tax and many more.
GST was first introduced in France in 1954, making France the first country ever to introduce GST. Its introduction was required because very high sales taxes and tariffs encourage cheating and smuggling. After France, it was adopted by 165 nations, and with its adoption India has become the 166th nation to do so. In India, the idea was first mooted 16 years earlier, in 2000, by Shri Atal Bihari Vajpayee, but it received little attention and, for various reasons, was not passed.
On 28th February 2006, finance minister P. Chidambaram announced a target date of 1 April 2010 for the implementation of GST. The Constitution (122nd Amendment) Bill was introduced in the Lok Sabha by Finance Minister Arun Jaitley on 19th December 2014 and passed by the house on 6th May 2015. The bill was passed by the Rajya Sabha in August 2016. The bill, after ratification by the States, received assent from President Pranab Mukherjee on 8th September 2016. The GST bill was brought in so that the different taxes paid at different rates would be brought under one roof, with all those taxes cancelled and only one tax paid, which is GST. Goods and Services Tax (GST) means one tax, one nation; this statement was given by the honourable Prime Minister of India, Mr. Narendra Modi. In today's scenario we pay 30% to 35% tax on different things, but with GST it will be only 18%, which shows it will be beneficial; one main point is that GST will remain uniform across the country.
Statement Of The Problem
Rationalization and harmonization of commodity taxation is a serious problem of the Indian tax system. The root of this problem lies in the federal structure of the Indian constitution, which makes intricate arrangements for the division of taxation powers between the central and state governments. The problem has been further compounded by the confrontational politics pursued by the different political parties ruling at the Centre and in the states. Many trading communities, enterprises, dealers and the general public are not aware of GST in its fullest context. In the existing tax structure there is the problem of a dual tax system on commodities. This study makes an attempt to examine GST and its implementation process. Among the 32 districts of Tamil Nadu, Tirunelveli is one of the prominent business and trading districts. The researcher believes that conducting a detailed and comprehensive study on the impact of GST in this district will reflect the position in Tamil Nadu as a whole. Hence the researcher has made an attempt to undertake the research study under the title “A Study on Impact of GST in Tirunelveli District”.
Scope Of The Study
The present study has been designed to analyse the problems of GST payers. Of the 32 districts in Tamil Nadu, the study confines itself to the district of Tirunelveli and focuses mainly on the impact of GST on manufacturers and dealers. Although more than 500 commodities are covered under GST, the researcher has taken only six essential categories: consumer durables, pharmaceutical products, textiles, real estate, cement, and small enterprises. The research therefore analyses the problems of GST payers and the introduction and implementation of GST.
Objectives Of The Study
The following are the objectives of the present study.
- To study the impact of GST after its implementation.
- To study the problems of GST payers in Tirunelveli district.
Methodology And Data Collection
The study is based on both primary and secondary data. The primary data were collected from GST payers in Tirunelveli district with the help of a well-structured questionnaire. The secondary data were collected from the records of the sales tax department in Tirunelveli district, journals, magazines, the Tirunelveli chamber of commerce, and web sources. For the purpose of comparing, contrasting, and describing the data, descriptive statistics were used. The data collected from both primary and secondary sources were arranged and presented in tables and were analysed and interpreted with the help of various statistical tools.
The researcher collected primary data through a structured questionnaire administered to GST payers in various businesses. The secondary data were collected from journals, periodicals, books, reports, and published articles, and also from data provided by the commercial tax office, Tirunelveli. The primary data collected from the respondents were classified and tabulated, and the analysis was carried out using the Statistical Package for the Social Sciences (SPSS) with appropriate coding for drawing inferences.
The researcher conducted a pilot study among 30 respondents using the questionnaire. After the pilot study, the questionnaire was modified based on the suggestions given by the respondents and the hurdles faced by the researcher during the pilot study.
Tirunelveli district comprises the taluks of Ambasamudram, Nanguneri, Palayamkottai, Sivagiri, Sankarankoil, Shenkottai, Tenkasi, Radhapuram, Alangulam, Veerakelamputhur, and Tirunelveli. From the total number of GST-registered payers, the researcher took 5 percent as the sample, giving approximately 120 respondents selected on the basis of the simple random sampling technique.
To give a specific focus to the objectives, the following hypotheses have been formulated to analyse the impact of GST.
- There is no relationship between socioeconomic factors and the level of problems arising from the GST act.
- There is no relationship between socioeconomic factors and the implementation of the GST structure.
Limitations Of The Study
The study suffers from respondents' recall bias and the inherent limitations of cross-sectional studies, namely the absence of proper records with the sample GST payers. These limitations were minimized by suitable interaction and cross-checks with the commercial tax office. As the study is based on the opinions of the sample respondents, its results cannot be generalized and should be used with caution.
Findings Of The Study
- Out of 120 respondents, 3 (2 percent) were below 30 years of age, 67 (56 percent) were in the 30-40 age group, 26 (22 percent) were 41-50 years, and 24 (20 percent) were above 50 years.
- Out of 120 respondents, 4 (3 percent) were illiterate, 2 (2 percent) had studied up to SSLC, 7 (6 percent) had studied up to HSC level, 72 (60 percent) held a degree, and 35 (29 percent) were professionals.
- Out of 120 respondents, 89 (74 percent) were Hindu, 7 (6 percent) were Muslim, and 24 (20 percent) were Christian.
- Out of 120 respondents, 6 (5 percent) had a monthly income of 15,000-25,000, 31 (26 percent) earned between 25,001 and 50,000, and 83 (69 percent) earned above 50,000 per month.
- Out of 120 respondents, 20 (17 percent) came to know of GST through family or friends, 59 (49 percent) through the mass media, and 41 (34 percent) through online sources.
- Out of 120 respondents, 11 (9 percent) were sole traders, 46 (38 percent) were partnerships, and 63 (53 percent) were companies.
- Out of 120 respondents, 99 (83 percent) chose the central government, 18 (15 percent) chose the state government, and 3 (2 percent) chose the manufacturer.
- Out of 120 respondents, 19 (16 percent) were manufacturers, 60 (50 percent) belonged to the business community, and 41 (34 percent) were consumers.
- Out of 120 respondents, 110 (92 percent) felt that prices had increased and 10 (8 percent) felt that they had decreased.
- Of the 110 respondents who reported an increase, 92 (84 percent) attributed it to high purchase prices, while 18 (16 percent) felt that the multi-stage tax concept leads to an increase in the price of goods.
- Out of 120 respondents, 33 (28 percent) answered yes and 87 (72 percent) answered no.
- Out of 120 respondents, 30 (25 percent) answered yes and 90 (75 percent) answered no.
- Of those 30 respondents, 25 (83 percent) answered yes and 5 (17 percent) answered no.
- Out of 120 respondents, 25 (21 percent) preferred GST and 95 (79 percent) preferred VAT.
- Out of 120 respondents, 94 (78 percent) reported problems regarding payment of GST, while 26 (22 percent) reported no such problems.
- Out of 120 respondents, 108 (90 percent) said the cost of maintaining accounts had increased after implementation, while 12 (10 percent) said it had not.
- Out of 120 respondents, 28 (23 percent) cited maintaining accounts as their main problem, 57 (48 percent) cited loss of revenue, 13 (11 percent) cited invoice issues, and 22 (18 percent) cited tax input credit.
- Out of 120 respondents, 107 (89 percent) felt that GST eliminates unfair trade practices, while 12 (11 percent) did not.
- Out of 120 respondents, 43 (36 percent) reported a good impact, 54 (45 percent) an average impact, and 23 (19 percent) a minor impact of GST on their daily routine work.
- Out of 120 respondents, 102 (85 percent) said yes and 18 (15 percent) said no when asked whether goods will be increased by this one country, one tax system.
- Out of 120 respondents, 31 (26 percent) said yes and 89 (74 percent) said no when asked whether GST favours business people.
- The overall mean of 3.94 reveals that the respondents are satisfied with the effects of GST on the factors studied. "GST has increased the price of goods" received the highest mean of 4.67, followed by "GST has decreased the cascading effect" (4.24), "GST has increased the cost of production" (4.15), "GST has decreased black marketing" (3.89), "GST has increased turnover" (3.43), and "GST has increased the profit margin" (3.30).
- The calculated value of the chi-square test is less than the table value at the 5 percent level of significance with 8 degrees of freedom, so the null hypothesis is accepted. Hence, it is concluded that there is no significant relationship between educational qualification and the level of problems.
- The calculated value of the chi-square test is less than the table value at the 5 percent level of significance with 4 degrees of freedom, so the null hypothesis is accepted. Hence, it is concluded that there is no significant relationship between religion and the level of problems of the respondents.
- The P-value (Sig. 0.271) is greater than 0.05 (the 5 percent level of significance), so the null hypothesis is accepted. Thus there is no significant difference across the monthly income groups of the respondents.
- The P-value (Sig. 0.353) is greater than 0.05 (the 5 percent level of significance), and hence the null hypothesis is accepted. Thus there is no significant difference across the age groups of the respondents. (The decision rule behind these significance tests is sketched after this list.)
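To make the decision rule used in these tests concrete, here is a minimal sketch of a chi-square test of independence on a cross-tabulation of survey responses. The table values are invented for illustration only and are not the study's data; the study itself used SPSS, whereas this sketch uses Python's scipy library.

```python
# Illustrative chi-square test of independence on a made-up cross-tabulation
# (rows: education level, columns: level of problems reported with GST).
from scipy.stats import chi2_contingency

observed = [
    [10, 25, 12],   # hypothetical counts: school-level education
    [18, 30, 24],   # hypothetical counts: degree holders
    [5, 15, 15],    # hypothetical counts: professionals
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, degrees of freedom = {dof}, p-value = {p_value:.3f}")

# Decision rule at the 5 percent level of significance: reject the null hypothesis
# of "no relationship" only when the p-value is below 0.05 (equivalently, when the
# calculated chi-square exceeds the table value for the given degrees of freedom).
if p_value < 0.05:
    print("Reject the null hypothesis: the two factors appear related.")
else:
    print("Accept the null hypothesis: no significant relationship detected.")
```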
Suggestions
- At present the GST act has five slab rates of tax: 5%, 12%, 18%, 28%, and exempted. These multiple rates have been creating classification and interpretational disputes and other complications for business people. It is suggested that the number of rates be reduced to two, in the range of 1%-4%, for all goods covered under GST.
- GST is believed to be a complicated tax system, and taxpayers may have many doubts and fears about GST requirements and about GST recording and accounting. It is suggested that the state government create a GST public unit to address tax queries.
- At present a single rate structure is followed under GST across all the states of India. It is suggested that state-specific rates be considered, since the states differ in their inter-state sales and patterns of business.
- India currently follows a dual tax system of SGST and CGST, under which the taxpayer faces a high overall rate. It is suggested that a single tax system be implemented.
- Under the GST act a large number of goods fall in the exempted category, which leads to a loss of tax revenue. It is suggested that exempted goods instead be taxed at a rate of 1% to support revenue growth.
- Under the GST system most dealers file their tax returns regularly through e-filing. Due to low server capacity, taxpayers spend a long time on e-filing. It is suggested that the commercial tax department expand its server capacity for quick filing of GST returns.
- In order to avoid unnecessary loss of revenue to the state government, the central government may consider allotting a suitable percentage of GST to the states, which would be helpful for all stakeholders of GST.
- The loss of tax revenue should be managed and compensated properly through appropriate diversification of funds, without burdening anyone.
- The central and state governments should maintain proper understanding and cooperation with each other for the successful implementation of GST.
Conclusion
The main motive for implementing GST is to increase revenue to the government by preventing the tax evasion and unfair trade practices prevailing in the modern market. The reform had been complicated by confrontation between the political parties ruling at the Centre and in the states. The GST system is basically structured to simplify the earlier indirect tax system in India. A well-designed GST is an attractive way to remove the distortions of the earlier process of multiple taxation, and the government has promised that GST will reduce the compliance burden; there is now no distinction between imported and Indian goods, which are taxed at the same rate. Many earlier indirect taxes, such as sales tax and VAT, have been subsumed because there is now one tax system, GST. GST will face many challenges after its implementation, but it should also deliver many benefits. Overall, this study concludes that GST plays a dynamic role in the growth and development of our country.
What is Cryptocurrency? The Definitive Guide to Understanding Crypto
It seems like everyone is talking about cryptocurrency, digital coins and the blockchain these days. And while everyone is talking about it, almost no one knows what cryptocurrency actually is. In this guide, you will learn what cryptocurrency (“crypto” for short) actually is, what it is not, where its value comes from and what it is being used for.
What it is not
Cryptocurrency is not a traditional currency like the US dollar or the UK pound.
- It is not a paper currency.
- It is not backed by any government.
- It has no physical form. You can’t carry it in your wallet.
- It is not a means to get rich quick.
- Cryptocurrency is not backed by any real assets (the way the US dollar was once backed by gold).
While cryptocurrency is unlike any country’s currency, it does share many of the same qualities of a typical paper currency we use every day. We’ll discuss more of that later.
What it actually is
Cryptocurrency, in its simplest form, is just an entry in a digital ledger (aka a database). And in many ways, banks have been using a simple form of this ever since they transitioned their financial records and transactions from paper to electronic.
In fact, cryptocurrencies have been around longer than many realize. During the 1990s, a company called DigiCash created the first form of electronic cash in which payments could be sent over a computer network. Unfortunately, DigiCash was far ahead of its time. Neither governments nor banks were ready for such a monumental change, and because of that, DigiCash went bankrupt.
The birth of Bitcoin
Fast forward to 2008. A mysterious figure (or some say a group of programmers) by the name of Satoshi Nakamoto published a brilliant whitepaper titled “Bitcoin: A Peer-to-Peer Electronic Cash System”. This idea would form the basis of all cryptocurrencies that exist today.
The primary goal of cryptocurrency was the elimination of third parties in the transaction process, thereby removing central control of that transaction and removing any single point of failure.
A simple example of how a traditional currency transaction takes place is an everyday banking transaction.
I have $200 in my bank account and I send a wire transfer to my buddy Brooks in North Carolina for $200. Since this is an electronic transfer and no actual cash is physically changing hands, a “trusted” third party needs to oversee the process. The bank reviews my request and changes their digital records which deducts my account by $200 and adds $200 to my buddy’s account.
Now, if no central authority was monitoring my transactions, I could simultaneously send $200 to Brooks and another $200 to my friend Otto in New York, even though I only have $200 in my account. Both would get the $200 and, technically, the transaction would go through. This is what is called a “duplicate transaction” or “double spending” problem that banks step in and prevent.
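A minimal sketch makes the bank's role concrete: the toy ledger below refuses any transfer the sender cannot cover, which is exactly the double-spending check described above. The names and amounts are hypothetical.

```python
# A toy centralized ledger: one trusted party checks balances before approving transfers.
balances = {"me": 200, "brooks": 0, "otto": 0}

def transfer(ledger, sender, receiver, amount):
    """Approve the transfer only if the sender still has enough funds."""
    if ledger[sender] < amount:
        return False  # rejected: this check is what stops double spending
    ledger[sender] -= amount
    ledger[receiver] += amount
    return True

print(transfer(balances, "me", "brooks", 200))  # True: the first $200 goes through
print(transfer(balances, "me", "otto", 200))    # False: the second $200 is rejected
print(balances)                                 # {'me': 0, 'brooks': 200, 'otto': 0}
```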
Peer to Peer networks
So, if we eliminate the banks, who can be trusted to ensure fraudulent transactions aren’t approved? This is where P2P technology comes in. The meaning of P2P is Peer to Peer. A Peer to Peer network is a fairly simple concept in which a task or file is distributed across a network of peers.
Think of this network of peers as a living organism with no need for a central brain or nervous system to receive its instructions. Rather the instructions and workload are shared among all peers in the network.
All cryptocurrency transactions take place on this peer to peer network which is made up of computers all over the world. Anyone who is sending a cryptocurrency transaction is participating in the P2P network.
A P2P network is naturally decentralized, meaning there is no central server or central authority controlling the network.
Say there are 1,000 computers on the network and 400 of them go down; the network and the data on that network are still safe, since every computer has the same data.
Consensus on the Blockchain and mining for digital gold
Remember how I mentioned that a cryptocurrency is simply an entry in a digital ledger? In blockchain networks such as Bitcoin, identical copies of that digital ledger are sent to every computer (or "node") in the P2P network, creating a distributed ledger that is visible to the public.
Any time a new transaction is transmitted to the network, it is grouped together in encrypted blocks with other transactions that have recently occurred. Crypto miners, who can be anyone with a powerful mining computer, then compete to validate that block of transactions by solving a complex math problem.
Once a miner solves the problem, the other miners then check the validity of the transactions and make sure the solution to the problem is correct. If enough miners grant approval, a consensus is made, the block is timestamped and cryptographically added to the ledger.
The ledger is then distributed across the network for approval. This process greatly reduces the chances of a “duplicate transaction” going through and eliminates the need for a central source of approval like a bank.
The miner who first solved the problem is programmatically given a reward by the network (in this case Bitcoin) plus the transaction fees paid by the senders. This incentivizes the bitcoin miners to continue validating and adding blocks to the ledger.
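In Bitcoin's case, the "complex math problem" is a brute-force search for a block hash below a target value. The sketch below mimics that idea with a toy difficulty expressed as a required number of leading zeros; real mining uses double SHA-256 against a numeric target and specialized hardware, so this is only an illustration.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Search for a nonce whose SHA-256 hash of (block_data + nonce) starts with
    `difficulty` zero characters -- a toy version of Bitcoin's proof of work."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

block_data = "alice pays bob 1 BTC; carol pays dan 2 BTC"
nonce, digest = mine(block_data)
print(nonce, digest)

# Verification is cheap: any node can recompute a single hash to check the miner's work.
check = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
print(check == digest)  # True
```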
A question I’m often asked is who pays the miners? My favorite way to explain this is to echo the words of Vitalik Buterin (founder of Ethereum).
“A blockchain is a magic computer that anyone can upload programs to and leave the programs to self-execute…”
In other words, a piece of software on the cryptocurrency network is actually creating problems for the miners to solve and rewarding them once a block has been added. And this software also lives on the distributed network.
So to recap, a miner’s job is to validate the transactions on the blockchain and ensure consensus in the distributed ledger. For that, the software on the network rewards them with the currency of the network.
Security, Trust and Transparency
The blocks of transactions mentioned earlier are added to the ledger in chronological order. Every block is tied to the block before it. Block after block is added this way, forming a chain that shows every transaction ever made on that blockchain.
And every ledger in the network is constantly updated so that they are all the same, giving every member of the network the ability to verify who owns what.
Because of this, the network is nearly impossible to hack. Unlike banks with central servers that are easily broken into, altering a record inside a block would require redoing every subsequent block in every copy of the ledger on the network, a practically impossible task.
And unless every node on the network goes down, the data on the network will always be safe.
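The chaining itself can be sketched in a few lines: each block stores the hash of the previous block, so editing any historical record changes that hash and breaks the link the next block relies on. This toy example leaves out mining, signatures, and the peer-to-peer layer entirely.

```python
import hashlib, json

def block_hash(block):
    """Hash a block's contents, which include the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a tiny chain of three blocks.
chain = []
prev = "0" * 64  # placeholder parent hash for the first (genesis) block
for tx in ["A->B: 5", "B->C: 2", "C->A: 1"]:
    block = {"tx": tx, "prev_hash": prev}
    prev = block_hash(block)
    chain.append(block)

def valid(chain):
    """Recompute every link; an edited block no longer matches the hash stored after it."""
    return all(later["prev_hash"] == block_hash(earlier)
               for earlier, later in zip(chain, chain[1:]))

print(valid(chain))           # True
chain[0]["tx"] = "A->B: 500"  # tamper with the oldest block...
print(valid(chain))           # False: the next block's stored prev_hash no longer matches
```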
The security and robustness of a decentralized network eliminates the “single point of failure” problem we mentioned earlier that all banks face.
The term cryptocurrency comes from the fact that transactions are protected by strong cryptography.
The Value of Cryptocurrency
Why is Bitcoin worth anything? It’s a common question, and has more to do with economics than technology. The answer is simple but has a complicated explanation.
What is currency?
The dictionary definition of currency is
- Something (such as coins, treasury notes and banknotes) that is in circulation as a medium of exchange
- Transmission from person to person as a medium of exchange
Digital currencies or cryptocurrencies are new concepts that governments and banks are wary of supporting. The mission of cryptocurrencies is to remove the need for central control, which is a threat to anyone who enjoys that control.
US Dollar Value
If asked what gives the US dollar value, most say gold, which is completely wrong. The gold standard ended in 1971. The US dollar’s value is actually derived from several key factors:
- Gov’t – The backing of the US government.
- Supply – The control of the money supply.
- Economy – The strength and value of the US economy.
- Faith – Our trust in the economy, government and the currency it distributes.
- Utility – The fact that it is a common means of exchanging value between parties.
- Perception – We believe it is valuable, therefore it is.
The most important factors being our faith in the currency’s controlling entity and our use of the currency as a means of holding and exchanging value.
Now let’s compare the same factors from above and see where the value of cryptocurrency is derived:
- Gov’t – Not a factor since control is decentralized.
- Supply – Total supply is finite and public knowledge.
- Economy – In essence, every cryptocurrency is its own micro-economy. The value of the currency increases as the value and size of that economy increases.
- Faith – Our trust in the business proposition that lies behind the cryptocurrency (ex. Bitcoin is in the business of storing and exchanging wealth. Ethereum is in the business of offering developers a platform to build and deploy decentralized applications).
- Utility – Is the cryptocurrency being actively used for the purpose it was created and is that usage growing.
- Perception – What we believe it is worth, what it is going to be worth, and what we are willing to pay for it. Many say that in the current crypto market, speculation is the biggest determiner of price. Similar to the stock market or traditional money, it's only worth as much as the market is willing to pay.
The majority of cryptocurrencies are not being utilized at this point. Most crypto platforms and businesses are still being developed. Therefore, speculation has been the primary driver of crypto prices.
We think therefore it is
There is no definitive explanation as to why any currency has value. Economists to this day still argue about what actually gives the US dollar value. Some say it is the supply others even argue it is actually the national debt. Some have even argued that since paper money can be burned to generate heat, it has intrinsic value!
In the end, many say that our trust and perceived value of the dollar are the most important factors. And I agree with this.
Determining the value of crypto relies on the same principles. While not controlled by a single entity, we still have trust in it, perceive it as valuable and are willing to pay a certain price to attain it.
What it is being used for
There are numerous cryptocurrencies in existence and even more ICOs are being launched, creating new coins. Every time a new business is launched on the blockchain, a cryptocurrency is created to be used on that network.
Blockchain networks utilize their currencies in different ways. Some simply use it as a method of storing and transferring value while others use it as form of payment to utilize the processing power of the network.
Below are several examples of cryptocurrencies being used today:
- Name of currency: Bitcoin
- Ticker symbol: BTC or XBT
The oldest digital currency and considered the digital “gold standard” for all other cryptocurrencies. Bitcoin’s purpose is to serve as a means of storing and exchanging wealth. It’s as simple as that.
Bitcoin can be used to purchase goods and services from businesses that accept it as a form of payment (just like a debit card). Or a person can transfer some of their wealth to another person just like using Paypal.
Want to get some Bitcoin? Check out our How to buy Bitcoin section to find the best exchanges for your needs.
- Name of currency: Bitcoin Cash
- Ticker symbol: BCH
A fork of Bitcoin that serves the same purpose of storing and transferring value. Bitcoin Cash was created to solve the scalability and performance issues that Bitcoin has been experiencing.
Generally, Bitcoin transactions can take anywhere from minutes to hours to confirm on the network. Bitcoin Cash attempts to solve that issue by increasing the number of transactions in a block.
Want to get some Bitcoin Cash? Check out our How to buy Bitcoin Cash section to find the best exchanges for your needs.
- Name of currency: Ether
- Ticker symbol: ETH
The second largest cryptocurrency in the market. Ethereum not only offers a way of storing and transferring value but also introduces the concept of smart contracts. These contracts allow developers to build decentralized applications utilizing the blockchain network.
Want to get some Ethereum? Check out our How to buy Ethereum section to find the best exchanges for your needs.
- Name of currency: Litecoin
- Ticker symbol: LTC
Litecoin is also a fork of Bitcoin. Its primary differences are a larger total supply and much faster transaction times. If Bitcoin is considered digital gold, then Litecoin is silver.
As Bitcoin works to solve its scalability issues, many speculate that Litecoin’s utilization and value will decrease.
Want to own some Litecoin? Check out our How to buy Litecoin section to find the best exchanges for your needs.
- Name of Currency: Monero
- Ticker Symbol: XMR
Monero aims to serve the same purpose as Bitcoin but with added privacy. Many don't realize that Bitcoin isn't completely anonymous. Anyone can trace any transaction on the network back to a wallet address, and while wallet addresses can be created anonymously, that traceability reduces anonymity.
Monero solves that issue claiming that transactions on their network are 100% untraceable. This has attracted a large following due to its privacy features, including support from users of the “Dark Web”.
- Native currency: NEO and Gas
- Ticker Symbol: NEO & GAS
NEO is also a network built around the idea of smart contracts, very similar to Ethereum. NEO was founded in China but has many of the same goals as Ethereum.
- Native currency: Ripple
- Ticker symbol: XRP
Considered the redheaded stepchild of the crypto industry, Ripple is often labeled a sellout. Why? Ripple was built to facilitate currency exchanges between banks at nearly instantaneous speeds. It achieves this by ditching the decentralized model and using a more centralized method of control, and many consider that crypto blasphemy.
And, if they succeed, they will only strengthen the position of the “evil banks”. However, I believe Ripple is smart in realizing that banks aren’t going anywhere for the foreseeable future. And playing with them would be far more profitable than battling against them.
Want to get some Ripple? Check out our How to buy Ripple section to find the best exchanges for your needs.
- Native currency: Lumen
- Ticker symbol: XLM
Stellar is very similar to Ripple in that it was created to facilitate fast currency transactions across the world. But that is where the similarities end. Ripple is a for-profit entity whose primary customers are banks, while Stellar is a non-profit whose primary customers are people who don't have access to a bank account.
Their goal is to facilitate payments for the unbanked. Stellar is also more decentralized than Ripple.
Check out our How to buy Stellar section to find the best exchanges for your needs.
- Native currency: Dragon Coin
- Ticker symbol: DRGN
Dragonchain was originally developed by Disney. It offers functionality similar to Ethereum, allowing developers to create applications on its platform.
The difference is they are creating a more friendly, flexible and secure environment when compared to Ethereum.
- Native currency: Tronix
- Ticker symbol: TRX
Tron is building a content distribution platform on the blockchain. Its target customer is the entertainment industry. Tron's goal is to directly connect creators of entertainment content (music, etc.) with consumers, eliminating the middleman.
Tronix would be the method of payment between the creators and consumers.
- Native currency: VeChain Token
- Ticker symbol: VEN
VeChain is often called the Ethereum for business. They are a blockchain platform focusing on supply chain management and business transparency.
- Native currency: Zcash
- Ticker symbol: ZEC
Zcash is also a fork of Bitcoin, and like Bitcoin it has a total supply of 21 million coins. Zcash offers the same functionality as Bitcoin with one key difference: increased privacy.
Zcash users can choose between 2 types of transactions: a transparent transaction or a private transaction. Choosing the private option ensures that the transaction is truly anonymous. So in some ways, it is similar to Monero.
Check out our How to buy Zcash section to find the best exchanges for your needs.
So to sum it up, cryptocurrency is a means to record transactions on a digital ledger which is both verified and stored multiple times on a decentralized network of computers (or nodes). Thereby eliminating a third party facilitator or central control.
And the price of crypto is based on several factors, the primary one being its perceived value.
In reality, cryptocurrency is not so different from the money you have in your pocket. But because it is such a new concept, it will take time to be understood and accepted.
Description: Mortgage rates in Canada dropped this week following the Bank of Canada's 50-basis-point cut to its rate. The site ratespy.com – https://www.ratespy.com/ – estimated that for each 0.5% cut in interest rates, Canadians would save $500 per year on each $100,000 borrowed. This is Canada's lowest interest rate since the recession of 2008. The reason for this cut: you guessed it, the coronavirus.
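As a rough check of the ratespy.com estimate, the annual interest saving is simply the rate cut applied to the outstanding balance. The sketch below ignores amortization and compounding, and the second balance is hypothetical.

```python
# Rough annual interest saving from a rate cut (ignores amortization and compounding).
def annual_saving(balance: float, rate_cut: float) -> float:
    return balance * rate_cut

print(annual_saving(100_000, 0.005))  # 500.0 -- about $500 per year per $100,000 for a 0.5% cut
print(annual_saving(350_000, 0.005))  # 1750.0 on a hypothetical $350,000 mortgage
```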
Date: March 5, 2020
1) Do you plan on buying a home after graduation? Does this cut in rates impact your timeline?
2) Why would the Bank of Canada be cutting its rate during a temporary crisis like this?
3) Read the “Accounting Matters” section on page 535 of Wiley’s Financial Accounting : Tools for Business Decision-Making. What has the Bank of Canada said is one of the top risks to Canada’s financial system?
Poor people in developing countries can benefit from saving to take advantage of profitable investment opportunities, to smooth consumption when income is uneven and unpredictable, and to insure against emergencies. Despite the benefits of saving, only 41% of adults in developing countries have formal bank accounts, and many who do rarely use their accounts. Improving the design and marketing of financial products has the potential to increase savings among this population.
Demand for savings seems to be high.
Access to low-cost savings accounts increases savings and improves measures of individual well-being.
Marketing campaigns and account features that try to overcome psychological obstacles to saving show promise in increasing take-up and use of savings accounts.
Technological innovations, including mobile banking and direct deposit options, have the potential to expand access to and use of savings accounts.
Simply increasing the interest rates on savings accounts does not seem to raise savings.
Most financial-literacy interventions have had little or no effect on savings but may be an important complement to other interventions.
Interventions that encourage people to open savings accounts do not guarantee that they will make deposits into those accounts.
Poor service and lack of trust in banks can reduce the effect of savings accounts.
Many poor people hold expensive debt, and reducing that debt might be a higher priority than accumulating savings.
Author's main message
By increasing the availability of low-cost savings products and matching their design to the needs and constraints of poor people, financial institutions and policymakers can provide tools that increase investment and improve welfare. Strategies that address behavioral factors related to savings show particular promise. Inexpensive account features or interventions such as labeled accounts and text message reminders have raised savings in several countries.
Poor households may not be able to afford not to save. They often have opportunities to make investments with high returns, and because farmers and the self-employed have irregular incomes, they need to be able to smooth their consumption across high- and low-earnings periods. Despite the importance of savings for poor households, only 41% of adults in developing countries have savings accounts, and, in many settings, fewer than half of the people who open accounts ever use them for deposits or withdrawals. Access to low-cost savings accounts can profoundly affect the amounts households save, invest, and consume. Products that address the specific needs and constraints of poor households in developing countries can make financial access more powerful in mobilizing deposits and improving economic well-being.
Discussion of pros and cons
Policies shown to increase savings
Demand for savings appears to be high, and access to low-cost accounts has substantial effects on savings and measures of individual well-being. Several randomized controlled trials have evaluated the effect of offering individuals access to savings accounts. In these experiments, participants are typically offered assistance in opening bank accounts, such as help in filling out forms, meeting identification requirements, and overcoming other informational barriers that might discourage them from opening an account. The experiments also usually waive account-opening fees and cover the minimum balance required to open an account, thus easing credit constraints that might prevent account take-up. Some experiments also temporarily waive transaction fees on withdrawals. All the experiments evaluate savings products that are either already locally available or very similar to an existing product available from local microfinance institutions or commercial banks. Data to assess outcomes (savings balances, assets, transfers, profits, consumption, health, and other welfare measures) come from administrative data on savings and from surveys that measure investments in businesses or other entrepreneurial activities.
High take-up rates for existing financial services
A key measure of the utilization of formal financial services is account opening, often called “take-up.” Take-up of savings accounts is often very high. In one study, 87% of market vendors and bicycle taxi drivers working from a trading center in rural Kenya who were offered accounts chose to open them . In another, 84% of female heads of household in Nepali slums opened accounts with a local non-governmental organization. However, other studies have found somewhat lower take-up: 53% of members of a Chilean microfinance organization who were offered savings accounts chose to open them. In Malawi, tobacco farmers were offered a bundle of financial products that included direct deposit into individual savings accounts; just before harvest, more than 80% opened accounts and enrolled in direct deposit . All of the experiments reduced the information barriers and financial costs of opening accounts, demonstrating that even small costs can be big obstacles. Simplifying account-opening processes and removing fees for no-frills accounts can sharply increase account ownership, especially for customers who are often illiterate, lack formal identification documents, and may be uncomfortable interacting with bank officials.
Large effects on financial assets despite low use
Despite high willingness to open savings accounts, use of those accounts lags behind take-up. The Kenya study found that less than half of those who opened accounts ever made a deposit or withdrew funds. In Chile, while 73% of participants who opened accounts either deposited more than the minimum amount initially or made a subsequent deposit, the remaining quarter of customers never used their accounts . About 20% of the Malawian farmers deposited at least some money into their accounts. Use of the accounts was highest in Nepal, where 80% of participants made at least two deposits within the first year of opening the account—and many made far more, with an average of 44 deposits in a one-year period.
Despite incomplete utilization of the accounts, all three studies found big effects of access to savings accounts on the amount saved and on other outcome measures. Monetary assets—concentrated on savings in the bank—increased, an almost mechanical effect of the savings accounts. Non-monetary assets did not decline, suggesting that money in the accounts represents an increase in savings, not just a change in assets.
Positive welfare implications, particularly through investment
The studies consider welfare measures appropriate to their populations—recognizing that increased savings do not automatically imply increased well-being—by looking at whether profits, consumption, or other indicators improve. The population studied in Kenya are business people, so investments, which have the potential to improve profits, are an important outcome. For account holders, investments in businesses increased an average of 60%. In Kenya, the benefits were concentrated among active account-users. For those who use their accounts actively, investments doubled. As a result, account users can spend more on food and other items, although the results are somewhat imprecise. Similarly, the farmers in Malawi increased their investments in fertilizer and had higher earnings the next season. In Nepal, those offered accounts increased spending on education, some food items, and on festivals, and they reported improvements in their subjective assessments of their financial situations . In Chile, those with savings accounts borrowed less, were better able to smooth consumption following an emergency, and were much less anxious about their financial situation .
Overall, there is considerable evidence that reducing barriers to formal savings accounts increases account take-up, the amount saved, investment, and consumption.
The benefits to those with business opportunities—the market vendors in Kenya and the cash crop farmers in Malawi—are especially notable, because they lead to investments with apparently high returns.
Overcoming psychological barriers through commitment savings accounts and direct deposits
There are clear advantages to designing products that match the behavioral as well as the economic factors that influence decisions about saving and consuming.
While simply reducing the informational and financial barriers to opening basic savings accounts has big effects, several studies have gone further, accounting for psychological and social obstacles to savings. People sometimes have trouble adhering to their own plans for saving money. Economists have tested products like commitment savings accounts that may mitigate the constraints imposed by social norms and help overcome time-inconsistent preferences. Accounts with commitment features can boost savings and investment when time-inconsistency is an obstacle. Simple interventions such as labeling accounts or reminding people of their savings goals are also effective strategies designed to address other psychological phenomena that may reduce savings.
Unlike ordinary or “liquid” savings accounts that allow money to be freely withdrawn, commitment savings accounts have features that allow users to voluntarily restrict their access to funds until they reach a specific target date or amount saved. Commitment accounts operate much like the fixed-deposit accounts available in many developed countries, but they allow users more flexibility regarding the maturation period. These accounts are useful for saving money for future use by guarding against the temptation to spend the money before reaching the goal. For example, farmers, who receive most of their annual income right after harvest, can use commitment savings accounts to save for next year’s agricultural investments. The self-employed and others who earn irregular incomes can use the accounts to save money from high-income periods to support consumption in low-income times. The accounts are also useful for accumulating money to make a large, indivisible purchase such as a piece of farming equipment or a major home appliance, since customer credit for purchases and many investments is severely limited. Additionally, commitment accounts are popular with banks as well as customers, because they stabilize bank portfolios.
The evidence about the impacts of these accounts is positive along two dimensions: the accounts appear to increase savings in the short term, and also generate longer-term benefits. An early study of commitment accounts in the Philippines demonstrated that such accounts could increase savings while the accounts were active and even change savings patterns after they expired. A study in Kenya tested the commitment mechanism without using savings accounts: It showed that advance purchase of fertilizer, which is effectively a commitment to spend money for fertilizer rather than for other purposes, sharply increased investment in this profitable agricultural input . In Malawi, one treatment arm offered farmers ordinary savings accounts while another gave them a choice between an ordinary account and a commitment account . Most farmers who opened commitment accounts chose release dates that corresponded with the upcoming planting season. These farmers appear to have invested more and had larger profits than the farmers who opened ordinary accounts, although the comparison between the account types is inconclusive.
Commitment accounts are not comprehensive savings vehicles and so are best paired with other savings tools. Commitment accounts are useful for people with specific preferences or constraints; they facilitate saving for an indivisible investment or purchase at a specific time of year, or saving to smooth consumption. These accounts are not appropriate for general precautionary savings or for savings for day-to-day use. The channel through which these accounts improve outcomes is still unclear. In Malawi, opening commitment accounts had big effects on farmers, even though farmers ultimately deposited little money into the accounts . That suggests that restricting access to funds may not be the mechanism that affects outcomes.
Since transaction costs and time-inconsistent preferences both reduce the tendency to make deposits, commitment and ordinary savings accounts can be augmented by other design features to increase the amount saved. Direct deposit, which is widely available to salaried workers in developed countries, can reduce these obstacles to saving. Direct deposit played a major role in increasing savings among farmers in Malawi . New studies are investigating whether making savings the default will have sustained welfare effects.
Encouraging saving through labeling and reminders
Other design features can enhance the effect of ordinary savings accounts without restricting access to the funds. Inexpensive features such as labels and reminders that incorporate behavioral factors like time-inconsistent preferences and limited attention can enhance the effect of standard savings products.
Labels, which earmark money for a specific purpose, can reduce the temptation to use the money for other purposes. Savings-group members in Kenya placed a high value on simple piggybanks to set aside money for health care costs, noting that having a separate place to save for health care needs helped them control spending on so-called “temptation” goods like snacks or alcohol . This simple intervention increased spending on preventative health measures by 66–75%. Keeping money in a separate health savings account greatly reduced the risk that a household would be unable to afford medical treatment. Designating the piggybank or the savings account for a specific purpose acted as a “mental accounting” device: people behaved as though the money was non-fungible and were more likely to use it for its designated purpose. This sort of mental accounting may also help explain the success of commitment savings accounts in Malawi by encouraging farmers to think about “fertilizer money” separately from other money and to adjust their savings accordingly .
Another strategy for increasing savings is to remind people of their savings goals and progress toward these goals. Researchers worked with banks in Bolivia, the Philippines, and Peru to test whether text-message reminders about savings goals would lead to increased deposits . Some of the reminders mentioned savings goals identified by clients, while others did not. The reminders increased savings by 6% on average. The increase was driven entirely by the messages that mentioned specific savings goals—those messages raised savings by 16%. Text messages also increased savings among account holders in Chile . Messages tripled the number of deposits and the average amount deposited each month. Reminders likely helped savers keep sight of their goals and strengthened their resolve to avoid the immediate gratification of smaller purchases in the short term.
Technological innovations to improve access
Rapid changes in technology present another opportunity to improve the effectiveness of savings products. Serving the poor through brick-and-mortar branches can be expensive for banks, which have to staff and maintain the branches, and for customers, who may have to travel long distances to deposit or withdraw money. Mobile money and other forms of electronic transfer have expanded rapidly, especially in sub-Saharan Africa. One study finds that 16% of adults in the region have used mobile money, although typically to send and receive funds rather than as a savings vehicle . There is well-documented evidence that mobile money reduces the costs of using informal insurance networks to smooth consumption.
Mobile money changes the way customers access money, often in ways that reduce their financial or time costs. Among adults who do not have savings accounts, 20% report that distance to the bank prevents them from opening accounts. The expansion of mobile network coverage and mobile cash agents can sharply reduce this barrier.
Electronic payments can reduce costs of administering aid programs and increase financial inclusion of the poor. Large-scale programs in several South American countries and in South Africa have transitioned to electronic payments, and India is making a similar shift. How such changes affect administrative costs and the savings and welfare of beneficiaries varies with the quality of the local banking infrastructure. A study in Niger of transitioning from cash distribution to mobile money distribution of aid payments found not only that mobile money was more cost-effective but also that it allowed recipients to consume a wider variety of foods and other goods . While this study does not evaluate savings directly, it demonstrates the possibility that simply changing how people access their money can have real welfare consequences.
The most basic form of mobile money allows value to be stored on a mobile phone handset and transferred to another handset using SMS (short message service). While this technology can be used as a savings account, it is costly—transaction fees are higher than typical withdrawal fees, and accounts accrue no interest.
Responding to perceived demand and market opportunity, several mobile providers have introduced products that are designed as savings accounts. One such product, Safaricom’s M-Shwari accounts, operates in partnership with a fully licensed bank in Kenya. Demand for the product is high: More than two million accounts were opened in the first four months after the product was launched, with deposits totaling $47 million .
Mobile providers in Mozambique and Rwanda are now offering commitment savings products; evaluations of these products are ongoing. Mobile savings products are new, but the evidence suggests that this technology has the potential to reach many currently unbanked households.
Policies that have not succeeded in increasing savings
Lack of savings response to interest rates
While expanding access to savings by reducing the price of opening an account led to high take-up, moderate utilization, and sizable welfare effects, efforts to encourage savings by increasing interest rates have been far less successful. And while text messages were effective in increasing savings in Chile, increased interest rates were not . In an earlier round of experiments, the Chile study compared low-fee bank accounts paying near-zero real interest with two alternatives: the same basic bank account plus enrollment in a self-help accountability group, and a bank account with the same low fees but a 5% real interest rate. While the self-help led to a tripling of deposits relative to those in the standard bank account group, the higher interest rate had no effect at all.
A study in the Philippines found similar results for interest rates. Accounts with a market rate of 1.5% interest were compared with accounts paying 3% interest, and with accounts paying the higher interest rate only if clients met savings goals they set for themselves. Neither the share of clients opening accounts nor the amount saved increased when the bank offered a higher interest rate .
Providing account opening assistance and reducing the up-front costs and fees associated with savings accounts have been very successful strategies for getting people to open accounts and moderately successful in getting people to use the accounts. However, interest rates appear to be less of a draw, at least among predominantly unbanked populations. Why fees and account-opening processes are bigger barriers to savings than low interest rates is unclear, but possibilities are that people are credit-constrained and cannot afford the up-front costs, that they are present-biased and place more weight on up-front costs than on the loss of future interest, or that people distrust banks. Regardless of the mechanism, current evidence supports a focus on lower fees and other obstacles to access rather than higher interest rates.
Limited effect of improving financial literacy
While studies have directly tested the effect of interest rates on savings in developing countries, there is less direct evidence about the effect of financial literacy training. Both financial literacy and savings levels are low in developing countries, but the correlation may be driven simply by low incomes. In the US, evidence on the effectiveness of financial literacy programs is decidedly mixed and focuses on outcomes without direct analogs in developing countries, such as retirement planning, portfolio mix, and propensity to declare bankruptcy. Two randomized controlled trials in Indonesia show very limited evidence of an effect of financial literacy programs on relevant outcomes. The first tests a short, classroom-based program offered to unbanked households in Java by a local non-profit organization. The training had negligible effects overall, although it did increase the probability of opening a bank account among illiterate households . The second study tested a longer, more comprehensive program that used video lessons delivered in five weekly sessions. That study included more nuanced measures of financial literacy, and the training strongly increased financial knowledge, but had no effects on financial behavior .
Ongoing work in Ghana and other developing countries is now evaluating the effect of financial literacy training for children and youth, with the idea that children may form good savings habits that serve them well throughout their lives, and may even influence the spending and savings patterns of their parents.
Many studies that test the effect of access to savings accounts also provide some financial literacy training. In the study of farmers in Malawi, for example, members of the control group were not offered any financial products, but they did receive training in budgeting. The treatment groups also received the same training. The study was thus able to estimate the marginal effect of access to financial products beyond the effect of the financial literacy component. Because it is often difficult to offer new financial products to unbanked populations without providing at least some financial information, the possibility cannot be ruled out that financial literacy, while not effective on its own, might enhance the effectiveness of other interventions.
Limitations and gaps
While high-quality, randomized controlled trials provide good guidance about the effects of access to formal bank accounts and the design features that make accounts most effective, more research is necessary to understand the mechanisms through which financial products improve savings and welfare outcomes.
More is known about strategies that are successful in generating take-up of savings accounts than about strategies that increase use of the accounts. More work is needed to understand the conditions and personal characteristics under which access to savings accounts is likely to result in increased amounts saved.
More research is needed to identify the combination of financial products that are most suitable for poor households. The poor have complicated financial needs, including:
- accumulating money to purchase assets; and
- building savings cushions to meet unexpected expenses.
Many studies do not differentiate among these goals or explicitly evaluate products for their ability to help households meet diverse needs. Moreover, most studies evaluate a single financial product without considering what combination of saving strategies might be appropriate for meeting households’ multiple goals (see Motives for saving).
Finally, few studies address the simultaneous borrowing and saving by poor households or ask whether households would be better off accumulating savings or reducing debt.
Summary and policy advice
Many well-designed randomized controlled trials have evaluated strategies to promote saving in developing countries. The evidence from these studies is largely encouraging: Increased access to low-cost savings accounts leads to higher savings and, even more important, to higher investments and consumption, and better health. Strategies that address behavioral factors related to savings, such as time inconsistency or limited attention, show particular promise. Commitment savings accounts led to long-term behavioral changes in the Philippines and big increases in investments in Malawi. Inexpensive account features or interventions such as labeled accounts and text-message reminders have raised savings in several countries in South America. Mobile banking, which is expanding rapidly across sub-Saharan Africa in particular, has the potential to give millions of people access to formal finance.
Service providers should focus on designing products for the specific needs of the poor. The products should be low cost in order to increase take-up, and, where trade-offs must be made, low fees are more important than high interest rates. It is important to offer a range of products specifically designed to help people save for multiple purposes, including investments, and to cope with emergencies. The poor—like households in developed countries—may benefit from access to multiple products. Low-cost account features and marketing schemes can be efficient ways for banks and customers to increase deposits. While much of the evidence speaks to financial service providers’ design of products, governments and other organizations can facilitate access to accounts by subsidizing fees and offering add-on services like reminders to save.
The author thanks an anonymous referee and the IZA World of Labor editors for many helpful suggestions on earlier drafts.
The IZA World of Labor project is committed to the IZA Guiding Principles of Research Integrity. The author declares to have observed these principles.
© Jessica Goldberg
NY - NYC Zoning Amendment Increases Building Options in Flood Zones
Climate change poses myriad risks to New York City, as exemplified by the effects of Hurricane Sandy in 2012.
The severe rains, winds, and flooding resulted in the damage and destruction of thousands of homes in coastal communities; the temporary displacement of thousands of New Yorkers; and major disruptions to the city’s power grid, subway system, and other critical infrastructure. The total cost of the storm to the city, including physical damage and loss of economic activity, has been estimated at $19 billion.
The city has developed a multi-faceted strategy for promoting climate resiliency, which includes comprehensive planning measures and significant investments in infrastructure. One initiative stands out for reducing flood risks at virtually no cost to the city — while also providing flexibility to developers in flood zones.
The proposed Zoning for Coastal Flood Resiliency (ZCFR) updates and makes permanent existing, temporary regulations that remove zoning barriers to complying with the Building Code’s flood-resistant construction standards.
These temporary zoning regulations were adopted on an emergency basis after Hurricane Sandy to facilitate the rebuilding of neighborhoods that were struck hard by that storm, such as Staten Island’s eastern shore and Queens’ Rockaway Peninsula, and to promote flood-resilient construction throughout the city’s flood-hazard areas.
While they have proven successful, however, they have also shown that there is room for improvement, as well as for a need to account for projected increases in sea level throughout the city.
ZCFR modifies and expands the application of existing zoning allowances for buildings that comply with the Building Code’s flood-resistant construction standards. Just like the existing regulations, the proposed regulations under ZCFR are technical and complex, but a few significant features illustrate their potential value to property owners and developers.
What Is Visible Supply?
Visible supply is the amount of a good or commodity that is currently being stored or transported that is available to be bought or sold. This supply is important as it identifies a definite quantity of goods available for purchase or delivery upon the assignment of futures contracts. For instance, all of the wheat held in granaries or storage facilities, along with the wheat being transported from farms constitutes part of the visible supply.
- The visible supply refers to the quantity of some good or asset that is available for sale, or is en route to be available.
- In securities markets, such as for muni bonds, the visible supply refers to the total volume in dollars of municipal bonds with maturities of 13 months or more that are expected to reach the market over the next 30 days.
- The visible supply gives an indication of the supply side of the market.
Understanding Visible Supply
Prices in the market are said to be determined by the law of supply and demand: the quantity of a good available for sale interacts with buyers' demand for it to set the price. Therefore, being able to account for the supply of commodities is of crucial importance to these markets and their related futures markets. In general, an increase in visible supply is considered to be a bearish signal, while a decrease is considered a bullish one.
However, the price of a good is not completely influenced by the amount of visible supply. Because commodities, such as wheat or oil, are often purchased through futures, options, or forward contracts long before the date of actual physical delivery, prices are more likely to be influenced by the future supply rather than what is available at that moment. Future supply, or supply which is currently in processing or preparation, is said to be part of the invisible supply, since it cannot (yet) be counted and accounted for.
Visible vs. Invisible Supply
Visible supply stands in contrast to invisible supply, which refers to an unknown or unquantifiable amount of physical stock of a commodity that will eventually be available for delivery upon settlement of a futures contract.
Unlike the visible supply, this amount of supply underlying a futures contract exists, but it hasn't yet been accumulated, stored, or set aside for delivery; whereas any such stock of a commodity that has been accounted for is the "visible" supply.
30 Day Visible Supply in Municipal Bond Markets
In municipal bond markets, the 30 day visible supply is used to estimate the health of the market for new issues. It is an indication of how much new debt is expected to come to market. The 30 day visible supply is published in The Bond Buyer, a trade publication for members of the municipal bond industry that began as a daily newspaper over 100 years ago, and now provides sophisticated real-time market data via a subscription-based digital version.
An increase in the visible supply of bonds is bearish for prices as more bonds will increase the supply of new debt. Likewise, a fall in the 30 day visible supply is bullish for bond prices.
Wastewater Heat Recovery is a story that needs to be told!
To schedule an interview or discuss an upcoming article please contact:
Dani Mueller – [email protected]
International Wastewater Systems is on the leading edge of wastewater heat recovery technology. The following story by Rowan Oloman provides a quick glimpse into the vast potential of our unique approach to renewable energy.
Turning Wastewater Into Energy: Clean Tech’s Best Kept Secret
Every day the average North American household flushes one full tank of hot water down the drain. In a city of 1 million homes, that is equivalent to approximately $500,000 in energy from natural gas casually flowing into our city sewers daily. Running underneath our homes and buildings there is an untapped energy goldmine.
Globally, it has been acknowledged that in regards to climate change mitigation energy efficiency is the lowest hanging fruit. China recently announced they will spend a whopping US$372 billion in energy conservation and the US plans to invest US$155 billion in energy efficiency projects.
Yet by and large North America continues to ignore the single most cost-effective and most profitable form of energy efficiency, which is to recycle the energy that we’re already wasting.
The simple fact is water enters our buildings at 7-9 degrees Celsius and leaves at 20-25 degrees Celsius. If captured, this wasted heat could be used to fulfill 40-50% of our buildings energy requirements.
Sometimes innovation is not about a quantum shift in thinking or spending millions in research it’s about re-inventing how we use the technologies already available. Lynn Mueller and his colleagues at International Wastewater Heat Exchange, all with long-time careers in the geothermal heat pump and renewable energy industries, saw the opportunity in wastewater heat recovery.
The company created the SHARC system, an innovation which filters raw sewage and extracts the heat in an easy, maintenance-free way using geothermal heat pumps and chillers. While sewage may not be as attractive as solar or wind power, with a 3-5 year payback period, the SHARC system is likely the most cost-effective renewable energy system currently available.
“We’re operating at 600 percent efficiency,” Mueller says. “So every dollar we spend recovering the heat out of the sewer we get $6 worth of heat out.” Mueller is speaking about his latest successful installation at Seven35 Condominiums complex, in Vancouver Canada.
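To put the quoted figure in context, here is a rough, hypothetical sketch of what a coefficient of performance (COP) of about 6 implies for heating costs and simple payback. All prices, demand figures and the installed cost are assumptions for illustration; none come from the article.

```python
# Rough, hypothetical comparison: gas boiler vs. sewage heat-recovery heat pump.
# All figures below are illustrative assumptions, not data from the article.

ANNUAL_HEAT_DEMAND_KWH = 200_000   # assumed heat demand of a mid-size building
GAS_PRICE_PER_KWH = 0.05           # assumed $/kWh of natural gas
ELECTRICITY_PRICE_PER_KWH = 0.12   # assumed $/kWh of electricity
BOILER_EFFICIENCY = 0.90           # assumed boiler efficiency
HEAT_PUMP_COP = 6.0                # the "600 percent efficiency" quoted above
INSTALLED_COST = 30_000            # assumed capital cost of the heat-recovery system

def boiler_cost(heat_kwh: float) -> float:
    """Fuel cost of meeting the heat demand with a gas boiler."""
    return heat_kwh / BOILER_EFFICIENCY * GAS_PRICE_PER_KWH

def heat_pump_cost(heat_kwh: float) -> float:
    """Electricity cost of meeting the same demand with a COP-6 heat pump."""
    return heat_kwh / HEAT_PUMP_COP * ELECTRICITY_PRICE_PER_KWH

gas = boiler_cost(ANNUAL_HEAT_DEMAND_KWH)
heat_pump = heat_pump_cost(ANNUAL_HEAT_DEMAND_KWH)
annual_saving = gas - heat_pump

print(f"Gas boiler: ${gas:,.0f}/yr, heat pump: ${heat_pump:,.0f}/yr, saving: ${annual_saving:,.0f}/yr")
print(f"Simple payback: {INSTALLED_COST / annual_saving:.1f} years")
```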
The SHARC system has reduced Seven35’s annual greenhouse gas emissions by 150 tonnes (averages are between 30%-85% reduction), is recovering 80% of the buildings wasted energy and has contributed to earning the condominium the first dual ‘green’ certification in Canada – LEED Platinum and Built Green Gold. It’s also the first time the technology has been used at a residential building in North America.
For residents the equation is simple, now they are recycling the same energy over and over again, instead of paying for the natural gas to re-heat their tanks daily.
Reliable, trouble free operation is the major challenge in recovering heat from waste water. The SHARC system has been designed to be clog-proof with an automatic back flush to filter sewage simply and effectively. It has full backup capacity with zero down time and is available in heat exchange or heat pump applications.
Added benefits of the SHARC system include full automation with a DDC or BACnet interface and a wireless or Ethernet connection for data retrieval and instantaneous calculation of COP and GHG savings. The system comes with a factory maintenance and warranty service and can be incorporated immediately into existing mechanical infrastructure.
Mueller, who was previously President of WaterFurnace and Earth Source Energy – the world’s largest installer of heat pumps, is humble about his company’s innovation. “We are a new company simply revolutionizing old technology,” says Mueller. “We saw a way to provide energy easily, cheaply and in a way that is ecologically sound.”
International Wastewater Heat Exchange has opened marketing and distribution channels across Canada and in forty US States. The applications for the SHARC system are multitudinous, from condominiums, to public facilities like sports and aquatic centers, to industrial complexes and district energy systems.
In a world where municipalities are progressively being held responsible for efficiently decreasing their own greenhouse gas emissions, systems like the SHARC will become more and more attractive. The biggest challenge however will be re-framing the way people view waste.
Despite the widespread use of waste-to-energy (WTE) projects in European countries, where innovative projects are supported because space for waste disposal is scarce, in North America waste-to-energy projects are still in their infancy. In Germany the majority of waste is recycled, composted or processed by biological or thermal methods, which is likely why Mueller has already received calls from German companies interested in the technology.
The Collins English Dictionary describes the saying ‘money down the drain’ as money ‘wasted’. The SHARC system turns this old adage on its head.
Rowan Oloman is a freelance writer living in Vancouver Canada. She has written for various greentech communications over the past 4 years, has an MBA in Sustainable Energy and a Master’s Degree in Natural Resource Management. Rowan is currently working for Radiant Carbon, a unique carbon offset provider.
Exercise : Solutions of Questions on Page Number : 49
Q1 :Explain the concept of a production function.
The production function of a firm depicts the relationship between the inputs used in the production process and the final output. It specifies the maximum output that can be produced from given quantities of the different inputs. The production function is written as:
Qx = f (L, K)
Qx represents units of output x produced.
L represents units of labour employed.
K represents units of capital employed.
The above equation states that Qx units of output x are produced by employing L units of labour and K units of capital with a given technology. As the level of technology improves, output increases even with the same quantities of capital and labour.
Q2 :What is the total product of an input?
Total product is defined as the sum total of output produced by a firm by employing a particular input. It is also known as the Total Physical Product and is represented as
TP = ∑Qx
Where ∑ represents the summation of all outputs and Qx represents the units of output x produced by the input.
Q3 :What is the average product of an input?
Average product is defined as the output produced per unit of the variable factor (labour) employed. Algebraically, it is the ratio of the total product to the units of labour employed, i.e.
AP = TP / L
Where:
TP = Total product
L = units of labour employed
Q4 :What is the marginal product of an input?
Marginal Product is defined as the additional output produced because of the employment of an additional unit of labour. In other words, it is the change in the total output brought about by employing one additional unit of labour. Algebraically, it is expressed as the ratio of the change in the total product to the change in the units of labour employed, i.e.
MPn = TPn – TPn – 1
Where:
TPn = Total product produced by employing n units of labour
TPn – 1 = Total product produced by employing (n – 1) units of labour
Q5 :Explain the relationship between the marginal products and the total product of an input.
The relationship between marginal product (MP) and total product (TP) can be shown graphically, with TP plotted in the first panel and MP in the second. The figure is not reproduced here, but its main features are:
1) TP increases at an increasing rate till point K, when more and more units of labour are employed. The point K is known as the point of inflexion. At this point MP (second part of the figure) attains its maximum value at point U.
2) After point K, TP increases but at a decreasing rate. Simultaneously, MP starts falling after reaching its maximum level at point U.
3) When TP curve reaches its maximum and becomes constant at point B, MP becomes zero.
4) When TP starts falling after B, MP becomes negative.
5) MP is derived from TP as MP = ΔTP/ΔL, i.e., MPn = TPn – TPn – 1.
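As an illustrative sketch (not part of the NCERT solution), the following Python snippet derives the AP and MP schedules from a hypothetical total product schedule, using exactly the definitions above; the TP numbers are assumed.

```python
# Derive the AP and MP schedules from a total product (TP) schedule.
# The TP numbers are purely illustrative.

tp_schedule = [0, 10, 24, 40, 50, 56, 57]   # TP at L = 0, 1, 2, ... units of labour

for L, tp in enumerate(tp_schedule):
    ap = tp / L if L > 0 else None                   # AP = TP / L
    mp = tp - tp_schedule[L - 1] if L > 0 else None  # MPn = TPn - TP(n-1)
    print(f"L={L}: TP={tp}, AP={ap}, MP={mp}")
```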
Q6 :Explain the concepts of the short run and the long run.
In the short run, a firm cannot change all the inputs, which means that output can be increased (decreased) only by employing more (less) of the variable factor (labour). It is generally assumed that in the short run a firm does not have sufficient time to vary its fixed factors, such as installing a new machine. Hence, output levels vary only because of varying employment levels of the variable factor.
Algebraically, the short-run production function is expressed as
Qx = f(L, K̄)
Where:
Qx = units of output x produced
L = labour input
K̄ = constant units of capital
In long run, a firm can change all its inputs, which means that the output can be increased (decreased) by employing more (less) of both the inputs – variable and fixed factors. In the long run, all inputs (including capital) are variable and can be changed according to the required levels of output. The law that explains this long run concept is called returns to scale. The long run production function is expressed as
Qx = f (L, K)
Both L and K are variable and can be varied.
Q7 :What is the law of diminishing marginal product?
Law of diminishing Marginal Product
According to this law, if the units of the variable factor keep on increasing while the level of the fixed factor is held constant, then initially the marginal product will rise, but eventually a point is reached after which the marginal product of the variable factor starts falling. Beyond this point, the marginal product eventually falls to zero and can even become negative.
Q8 :What is the law of variable proportions?
Law of Variable Proportions
According to the law of variable proportions, if more and more units of the variable factor (labour) are combined with the same quantity of the fixed factor (capital), then initially the total product will increase but gradually after a point, the total product will start diminishing.
Q9 :When does a production function satisfy constant returns to scale?
Constant returns to scale will hold when a proportional increase in all the factors of production leads to an equal proportional increase in the output. For example, if both labour and capital are increased by 10% and if the output also increases by 10%, then we say that the production function exhibits constant returns to scale.
Algebraically, constant returns to scale exists when
f(nL, nK) = n. f(L, K)
This implies that if both labour and capital are increased by ‘n’ times, then the production also increases by ‘n’ times.
Q10 :When does a production function satisfy increasing returns to scale?
Increasing returns to scale (IRS) holds when a proportional increase in all the factors of production leads to an increase in the output by more than the proportion. For example, if both the labour and the capital are increased by ‘n’ times, and the resultant increase in the output is more than ‘n’ times, then we say that the production function exhibits IRS.
Algebraically, IRS exists when
f(nL, nK) > n. f(L, K)
Q11 :When does a production function satisfy decreasing returns to scale?
Decreasing returns to scale (DRS) holds when a proportional increase in all the factors of production leads to an increase in the output by less than the proportion. For example, if both labour and capital are increased by ‘n’ times but the resultant increase in output is less than ‘n’ times, then we say that the production function exhibits DRS.
Algebraically, DRS exists when
f(nL, nK) < n. f(L, K)
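A small numerical check of these definitions (illustrative only, not part of the original solution): scale both inputs of an assumed Cobb-Douglas production function Q = A·L^a·K^b by a factor n and compare the result with n times the original output.

```python
# Classify returns to scale for an assumed Cobb-Douglas production function
# Q = A * L**a * K**b by scaling both inputs by n.

def output(L, K, A=1.0, a=0.5, b=0.5):
    return A * (L ** a) * (K ** b)

def returns_to_scale(n=2.0, L=10.0, K=10.0, **params):
    scaled = output(n * L, n * K, **params)          # f(nL, nK)
    proportional = n * output(L, K, **params)        # n * f(L, K)
    if abs(scaled - proportional) < 1e-9:
        return "constant"
    return "increasing" if scaled > proportional else "decreasing"

print(returns_to_scale(a=0.5, b=0.5))   # a + b = 1 -> constant
print(returns_to_scale(a=0.7, b=0.6))   # a + b > 1 -> increasing
print(returns_to_scale(a=0.3, b=0.4))   # a + b < 1 -> decreasing
```

For a Cobb-Douglas function the result depends only on a + b: equal to, greater than, or less than 1 gives constant, increasing, or decreasing returns to scale respectively.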
Q12 :Briefly explain the concept of the cost function.
The functional relationship between the cost of production and the output is called the cost function. It is expressed as
C = f(Qx)
C = Cost of production
Qx = Units of output x produced
In other words, the output-cost relationship for a firm is depicted by the cost function.
The cost function depicts the least cost combination of inputs associated with different output levels.
Q13 :What are the total fixed cost, total variable cost and total cost of a firm? How are they related?
Total Fixed Cost (TFC)
This refers to the costs incurred by a firm in order to acquire the fixed factors for production like cost of machinery, buildings, depreciation, etc. In short run, fixed factors cannot vary and accordingly the fixed cost remains the same through all output levels. These are also called overhead costs.
Total Variable Cost (TVC)
This refers to the costs incurred by a firm on variable inputs for production. As we increase quantities of variable inputs, accordingly the variable cost also goes up. It is also called ‘Prime cost’ or ‘Direct cost’ and includes expenses like – wages of labour, fuel expenses, etc.
Total Cost (TC)
The sum of total fixed cost and total variable cost is called the total cost.
Total cost = Total fixed cost + Total variable cost
TC = TFC + TVC
Relationship between TC, TFC, and TVC
1) TFC curve remains constant throughout all the levels of output as fixed factor is constant in short run.
2) TVC rises as the output is increased by employing more and more of labour units. Till point Z, TVC rises at a decreasing rate, and so the TC curve also follows the same pattern.
3) The difference between TC and TVC is equivalent to TFC.
4) After point Z, TVC rises at an increasing rate and therefore TC also rises at an increasing rate.
5) Both TVC and TFC are derived from TC, i.e., TC = TVC + TFC.
Q14 :What are the average fixed cost, average variable cost and average cost of a firm? How are they related?
Average Fixed Cost:
It is defined as the fixed cost per unit of output:
AFC = TFC / Q
Where:
TFC = Total fixed cost
Q = Quantity of output produced
Average Variable Cost:
It is defined as the variable cost per unit of output:
AVC = TVC / Q
Where:
TVC = Total variable cost
Q = Quantity of output produced
Average Cost (AC):
It is defined as the total cost per unit of output and is derived by dividing total cost by the quantity of output: AC = TC / Q.
AC is also defined as the sum of average fixed cost and average variable cost:
AC = AFC + AVC
Relationship between AC, AFC, AVC:
1) AVC and AFC are derived from AC as AC = AFC + AVC.
2) The plot for AFC is a rectangular hyperbola and falls continuously as the quantity of output increases.
3) The minimum point of AVC will always exist to the left of the minimum point of AC; i.e., point ‘Z’ will always lie left to point ‘M’.
4) AFC being a rectangular hyperbola falls throughout; this causes the difference between AC and AVC to keep decreasing at higher output levels. However, it should be noted that AVC and AC can never intersect each other. If they intersect at any point, it would imply that AC and AVC are equal at that point. However, this is not possible as AFC will never be zero because it is a rectangular hyperbola that never touches x-axis.
5) AC inherits shape from AVC’s shape and it is because of law of variable proportions that both the curves are U-shaped.
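The relationships among these cost concepts can be verified numerically. The sketch below (not part of the original solution) takes the total cost schedule used later in Q25 and derives TVC, AFC, AVC, AC and MC from it:

```python
# Derive TVC, AFC, AVC, AC and MC from a total cost (TC) schedule.
# The schedule below is the one used in Q25; TC at zero output equals TFC.

tc_schedule = [10, 30, 45, 55, 70, 90, 120]   # TC at Q = 0, 1, 2, ...
tfc = tc_schedule[0]

for q, tc in enumerate(tc_schedule):
    tvc = tc - tfc
    if q == 0:
        print(f"Q=0: TC={tc}, TFC={tfc}, TVC={tvc}")
        continue
    afc = tfc / q                      # AFC = TFC / Q
    avc = tvc / q                      # AVC = TVC / Q
    ac = afc + avc                     # AC = AFC + AVC = TC / Q
    mc = tc - tc_schedule[q - 1]       # MC = TCn - TC(n-1)
    print(f"Q={q}: TVC={tvc}, AFC={afc:.2f}, AVC={avc:.2f}, AC={ac:.2f}, MC={mc}")
```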
Q15 :Can there be some fixed cost in the long run? If not, why?
No, there cannot be any fixed cost in the long run. In the long run, a firm has enough time to modify factor ratio and can change the scale of production. There is no fixed factor as the firm can change quantity of all the factors of production and therefore there cannot be any fixed cost in the long-run.
Q16 :What does the average fixed cost curve look like? Why does it look so?
Average fixed cost curve looks like a rectangular hyperbola. It is defined as the ratio of TFC to output. We know that TFC remains constant throughout all the output levels and as output increases, with TFC being constant, AFC decreases.
When output level is close to zero, AFC is infinitely large and by contrast when output level is very large, AFC tends to zero but never becomes zero. AFC can never be zero because it is a rectangular hyperbola and it never intersects the x-axis and thereby can never be equal to zero.
Q17 :What do the short run marginal cost, average variable cost and short run average cost curves look like?
The short run marginal cost (SMC), average variable cost (AVC) and short run average cost (SAC) curves are all U-shaped curves. The reason behind the curves being U-shaped is the law of variable proportion. In the initial stages of production in the short run, due to increasing returns to labour, all the costs (average and marginal) fall. In addition to this in the short run MP of labour also increases, which implies that more output can be produced by per additional unit of labour, leading all the costs curves to fall. Subsequently with the advent of constant returns to labour, the cost curves become constant and reach their minimum point (representing the optimum combination of capital and labour). Beyond this optimum combination, additional units of labour increase the cost, and as MP of labour starts falling, the cost curve starts rising due to decreasing returns to labour.
Q18 :Why does the SMC curve cut the AVC curve at the minimum point of the AVC curve?
SMC curve always intersect the AVC curve at its minimum point. This is because to the left of the minimum point of AVC, SMC is below AVC. SMC and AVC both fall but the former falls at a faster rate. At the minimum point K, AVC is equal to SMC. Beyond K, AVC and SMC both rise but the latter rises at a faster rate than the former and also SMC lies above AVC. Therefore, the only point where SMC and AVC are equal is where SMC intersects AVC, i.e., at the minimum point of the AVC curve.
Q19 :At which point does the SMC curve intersect SAC curve? Give reason in support of your answer.
SMC curve intersects SAC curve at its minimum point. This is because as long as SAC is falling, SMC remains below SAC and when SAC starts rising, SMC remains above SAC. SMC intersects SAC at its minimum point P, where SMC = SAC.
Q20 :Why is the short run marginal cost curve ‘U’-shaped?
The SMC curve is a U-shaped curve due to the law of variable proportions. In order to understand the reason behind the U-shape of SMC, let us divide the SMC curve (UAB) into three different parts according to the law of variable proportions:
UA part corresponds to increasing returns to factor.
Minimum point A corresponds to constant returns to factor.
AB part corresponds to decreasing returns to factor.
In the initial production stages, the falling part of SMC (UA) is due to application of increasing returns to factor. Then the SMC stops falling and reaches its minimum point ‘A’ due to the existence of constant returns to a factor.
After the minimum point A, SMC starts rising (i.e. ‘AB’ part of SMC) due to the onset of decreasing returns of variable factor. This trend of SMC curve (initially falling, then becoming constant at its minimum point and then rising) makes it look like the English alphabet – ‘U’.
Q21 :What do the long run marginal cost and the average cost curves look like?
The long run marginal cost (LMC) and long run average cost (LAC) are U shaped curves. The reason behind them being U-shaped is due to the law of returns to scale. It is argued that a firm generally experiences IRS during the initial period of production followed by CRS, and lastly by DRS. Consequently, both LAC and LMC are U-shaped curves. Due to IRS, as the output increases, LAC falls due to economies of scale. Then falling LAC experiences CRS at Q1 level of output which is also called the optimum capacity. Beyond Q1 level of output, the firm experiences diseconomies of scale and if the firm continues to produce beyond Q1 level, the cost of production will rise.
Q22 :The following table gives the total product schedule of labour. Find the corresponding average product and marginal product schedules of labour.
Q23 :The following table gives the average product schedule of labour. Find the total product and marginal product schedules. It is given that the total product is zero at zero level of labour employment.
| L | APL | TPL = AP × L | MPL = TPn – TPn–1 |
| --- | --- | --- | --- |
| 1 | 2 | 2 × 1 = 2 | 2 |
| 2 | 3 | 3 × 2 = 6 | 6 – 2 = 4 |
| 3 | 4 | 4 × 3 = 12 | 12 – 6 = 6 |
| 4 | 4.25 | 4.25 × 4 = 17 | 17 – 12 = 5 |
| 5 | 4 | 4 × 5 = 20 | 20 – 17 = 3 |
| 6 | 3.5 | 3.5 × 6 = 21 | 21 – 20 = 1 |
Q24 :The following table gives the marginal product schedule of labour. It is also given that total product of labour is zero at zero level of employment. Calculate the total and average product schedules of labour.
| L | MPL | TPL = ∑MPL | APL = TPL / L |
| --- | --- | --- | --- |
| 1 | 3 | 3 | 3 |
| 2 | 5 | 3 + 5 = 8 | 4 |
| 3 | 7 | 8 + 7 = 15 | 5 |
| 4 | 5 | 15 + 5 = 20 | 5 |
| 5 | 3 | 20 + 3 = 23 | 4.6 |
| 6 | 1 | 23 + 1 = 24 | 4 |
Q25 :The following table shows the total cost schedule of a firm. What is the total fixed cost schedule of this firm?
Calculate the TVC, AFC, AVC, SAC and SMC schedules of the firm.
TFC is the total cost at zero output: at Q = 0, TVC = 0, so TFC = TC = Rs 10 at every level of output.
| Q | TC | TFC | TVC = TC – TFC | AFC = TFC/Q | AVC = TVC/Q | SAC = AFC + AVC | SMC = TCn – TCn–1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 10 | 10 | 0 | – | – | – | – |
| 1 | 30 | 10 | 20 | 10 | 20 | 30 | 20 |
| 2 | 45 | 10 | 35 | 5 | 17.5 | 22.5 | 15 |
| 3 | 55 | 10 | 45 | 3.33 | 15 | 18.33 | 10 |
| 4 | 70 | 10 | 60 | 2.5 | 15 | 17.5 | 15 |
| 5 | 90 | 10 | 80 | 2 | 16 | 18 | 20 |
| 6 | 120 | 10 | 110 | 1.67 | 18.33 | 20 | 30 |
Q26 :The following table gives the total cost schedule of a firm. It is also given that the average fixed cost at four units of output is Rs 5/-. Find the TVC, TFC, AVC, AFC, SAC and SMC schedules of the firm for the corresponding values of output.
Since AFC at 4 units of output is Rs 5, TFC = AFC × Q = 5 × 4 = Rs 20 at every level of output. For the first unit, SMC = TC(1) – TC(0) = 50 – 20 = 30, because TC at zero output equals TFC.
| Q | TC | TFC | TVC = TC – TFC | AFC = TFC/Q | AVC = TVC/Q | SAC = AFC + AVC | SMC = TCn – TCn–1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 50 | 20 | 30 | 20 | 30 | 50 | 30 |
| 2 | 65 | 20 | 45 | 10 | 22.5 | 32.5 | 15 |
| 3 | 75 | 20 | 55 | 6.67 | 18.33 | 25 | 10 |
| 4 | 95 | 20 | 75 | 5 | 18.75 | 23.75 | 20 |
| 5 | 130 | 20 | 110 | 4 | 22 | 26 | 35 |
| 6 | 185 | 20 | 165 | 3.33 | 27.5 | 30.83 | 55 |
Q27 : A firm’s SMC schedule is shown in the following table. The total fixed cost of the firm is Rs 100/-. Find the TVC, TC, AVC and SAC schedules of the firm.
TVC is obtained by accumulating SMC, and TC = TVC + TFC with TFC = Rs 100.
| Q | SMC | TFC | TVC = ∑SMC | TC = TVC + TFC | AVC = TVC/Q | SAC = TC/Q |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 500 | 100 | 500 | 600 | 500 | 600 |
| 2 | 300 | 100 | 500 + 300 = 800 | 900 | 400 | 450 |
| 3 | 200 | 100 | 800 + 200 = 1000 | 1100 | 333.33 | 366.67 |
| 4 | 300 | 100 | 1000 + 300 = 1300 | 1400 | 325 | 350 |
| 5 | 500 | 100 | 1300 + 500 = 1800 | 1900 | 360 | 380 |
| 6 | 800 | 100 | 1800 + 800 = 2600 | 2700 | 433.33 | 450 |
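A short sketch of the same construction in code (illustrative, not part of the original solution): accumulating SMC into TVC and TC and then deriving AVC and SAC.

```python
# Build TVC, TC, AVC and SAC by accumulating the SMC schedule, as in Q27.

smc_schedule = [500, 300, 200, 300, 500, 800]   # SMC of the 1st, 2nd, ... 6th unit
tfc = 100

tvc = 0
for q, smc in enumerate(smc_schedule, start=1):
    tvc += smc                 # TVC is the running sum of marginal costs
    tc = tvc + tfc             # TC = TVC + TFC
    avc = tvc / q
    sac = tc / q
    print(f"Q={q}: TVC={tvc}, TC={tc}, AVC={avc:.2f}, SAC={sac:.2f}")
```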
Q28 : Let the production function of a firm be Q = 5L^(1/2)K^(1/2).
Find out the maximum possible output that the firm can produce with 100 units of L and 100 units of K.
Q = 5L^(1/2)K^(1/2) … Equation (1)
L = 100 units of labour
K = 100 units of capital
Putting these values in equation (1):
Q = 5(100)^(1/2)(100)^(1/2) = 5 × 10 × 10 = 500
Thus, the maximum possible output that the firm can produce is 500 units.
Q29 :Let the production function of a firm be Q = 2L^2K^2.
Find out the maximum possible output that the firm can produce with 5 units of L and 2 units of K. What is the maximum possible output that the firm can produce with zero unit of L and 10 units of K?
a) Q = 2L^2K^2 … Equation (1)
L = 5 units of labour
K = 2 units of capital
Putting these values in equation (1)
Q = 2(5)^2(2)^2
= 2 (25) (4)
Q = 200 units
b) If L = 0 units and K = 10 units
Putting these values in equation (1)
Q = 2(0)^2(10)^2
Q = 0 units
Q30 :Find out the maximum possible output for a firm with zero unit of L and 10 units of K when its production function is Q = 5L + 2K.
Q = 5L + 2K (1)
If L = 0 and K = 10, then putting these values in equation (1)
Q = 5 (0) + 2 (10)
= 20 units of output
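To close, a brief sketch evaluating the production functions of Q28–Q30 numerically (illustrative only; the Cobb-Douglas form for Q28 is as given above):

```python
# Evaluate the production functions used in Q28-Q30.

def q28(L, K):
    return 5 * (L ** 0.5) * (K ** 0.5)   # Q = 5L^(1/2)K^(1/2)

def q29(L, K):
    return 2 * (L ** 2) * (K ** 2)       # Q = 2L^2K^2

def q30(L, K):
    return 5 * L + 2 * K                 # Q = 5L + 2K

print(q28(100, 100))   # 500.0
print(q29(5, 2))       # 200
print(q29(0, 10))      # 0
print(q30(0, 10))      # 20
```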
Almost a year ago, USGBC announced that it was developing a new building standard in cooperation with ASHRAE and IESNA. Standard 189P for the Design of High-Performance Green Buildings Except Low-Rise Residential Buildings remained open for public comment through the end of last July; the standard was supposed to be complete by the close of 2007 but it’s unclear exactly where those efforts stand. Though modeled on it, 189P is not the same thing as LEED. It’s intended to contain a series of performance-related criteria- including targets for energy and water efficiency- that buildings must satisfy in order for municipalities to issue a certificate of occupancy for new buildings or major renovation projects.
Two weeks ago, the Green Building Initiative announced that, it too, is in the process of developing a similar standard based on its Green Globes tool. In similar cooperation with a number of high-level stakeholders, the GBI spent nearly two years to develop its Proposed American National Standard 01-2008P: Green Building Assessment Protocol for Commercial Buildings, which is now open for public comment through June 9. 01-2008P includes life cycle credit and water consumption calculation requirements, as well as an energy requirement that’s based on a building’s carbon dioxide emissions rather than a projection of BTUs per square foot.
From a legislative perspective, it will be extremely instructive to see how municipalities respond to these efforts. Even as they work through the various iterations of LEED when drafting local legislation, USGBC continues to assemble its Version 3.0 LEED system (which is still supposed to be released at this year’s Greenbuild in Boston, though we’re still waiting for details). Both of the two new standards are intended to function within a local building code as a performance-driven requirement. Whether municipalities embrace them or continue to create local versions of LEED that are enforced by an arm of the building department will obviously go a long way towards shaping how legislators choose to implement green building policies moving forward.
DUBAI 2002-2012/ Free Zone and Urban Transformation
“Dubai’s location between the developed economies to the west and the emerging economies to the south and east make it a natural centre point for trade, global financial markets and transportation links.”
— Report: Middle East Free Zones of the Future
Launched in 2004 under Federal Law, the Free Zone Policy has had a profound influence on Dubai's development. Several free zones have been set up throughout the city so as to attract foreign investment and promote the globalization of Dubai's economy. The setup of the free zones has changed not only the economic structure but also the urban structure of Dubai.
Dubai is located on the Persian Gulf, right next to a large area of desert, which triggered coastal development around Dubai Creek. In the early 20th century, several industries developed around the creek, such as fishing, pearling and shipping. Moreover, in the previous master plan of Dubai, dense urban development was mainly planned around Dubai Creek (Figure 1). Therefore, Dubai Creek became the city center at that time. With rapid industrial development, in the 1980s the new Port of Jebel Ali and the main artery, Sheikh Zayed Highway, were constructed. This infrastructure development formed the framework for further urban expansion in Dubai. As Figure 2 shows, from the 1980s to 2005 the urban form of Dubai gradually developed from a compact settlement into a long coastal strip. As the urban form changed and the urban area increased greatly, the old city center around Dubai Creek could no longer support the needs of the whole city. So the setup of free zones throughout the city played a key role in transforming Dubai's urban structure from a monocentric city to a polycentric city.
According to the Free Zone Authority, the aim of setting up free zones is to develop a "city in a city" in different fields, including finance, business, manufacturing, internet and media. In the free zones, companies are offered many benefits, including tax exemption and 100% land ownership. Amid the trend of globalization, foreign investment and companies in different fields have been attracted to the free zones in Dubai. Supported by this global capital, the free zones have developed into new city centers where dense development activities take place.
The site location of the free zones is planned carefully because it has a great influence on the urban fabric. Most of the free zones are set up on vacant land in the new urban fabric. Rather than clustering together, the free zones are situated throughout the new urban strip, from the edge of the old city center to the new Port of Jebel Ali (Figure 3). Developed with global capital and foreign investment, they become new city centers, so that the new urban fabric with multiple centers can be highly activated. Moreover, the free zones are all distributed along highway infrastructure. Through the highways, they are connected to one another and form a new network of multiple centers throughout the urban fabric. For example, the Dubai International Financial Centre (DIFC) is one of the largest free zones in Dubai. It is situated on a block of desert south of the old city center, bounded by the multi-lane highway running through the whole city. Within five years, it was developed from vacant land into one of the most important city centers in Dubai. Moreover, the surrounding areas were also activated by DIFC and developed into the large-scale recreational zone Downtown Dubai and several residential zones.
Therefore, mainly driven by the globalization trend, Free Zone development is also a response to the urban expansion in Dubai and helps transform Dubai from a monocentric city to a polycentric city.
In our study, we chose three free zones as examples, for the reasons below.
- a new business center based at the extension of the creek
- To develop a mini Manhattan to redefine the position of Dubai in the Middle East.
- Real estate project which is residential oriented and mixed-use to attract the investment.
- Multiple small centers connected by roads and water.
- DIFC is an important symbol of globalization in Dubai.
- Architecture firms, developers and contractors from different backgrounds are involved in the DIFC development.
- It is interesting to see the relationship between the architecture in DIFC and the trend of globalization as well as vernacular factors.
- High-tech-based industrial development
- Economic transformation: from real estate/finance to service industry
- Coastal development – the beginning of inland development
- Dubai smart city's testing field
Elsheshtawy, Yasser. Dubai: Behind an urban spectacle. routledge, 2009.
Matly, Michael, and Laura Dillon. “Dubai strategy: past, present, future.” Harvard Business School (2007): 1-20.
Pacione, Michael. “Dubai: City Profile.” Cities 22, no. 3 (2005): 255-65.
Strong, Michael, and Robert Himber. “THE LEGAL AUTONOMY OF THE DUBAI INTERNATIONAL FINANCIAL CENTRE: A SCALABLE STRATEGY FOR GLOBAL FREE‐MARKET REFORMS.” Economic Affairs 29, no. 2 (2009): 36-41.
Walls, Jacqueline. "Middle East Free Zones of the Future." 2013. http://www.fdiintelligence.com/Rankings/Middle-East-Free-Zones-of-the-Future
For decades, economists have struggled to understand all the factors that contribute to the wealth gap between richer and poorer countries.
They have assumed that two sources of wealth—physical capital and human capital—are important factors. The first refers to physical objects such as factories, machines, minerals, and fossil fuels. The second includes collective attributes like education level and health that help to make a nation’s population more productive.
But when economists have added up the components known to make up physical and human capital, they haven’t been able to explain as much of the wealth gap as one might expect. “It seemed like [these factors] should matter more,” says Nancy Qian, a professor of managerial economics and decision sciences at Kellogg.
So could one of those elements be missing a crucial factor? In a recent study, Qian and her collaborators investigated a factor in human capital that hadn’t received much attention: on-the-job learning. The researchers found that the rate at which people acquire skills at work seemed to be substantially different in rich versus poor countries.
“In poor countries, workers are not learning nearly as much on the job as in rich countries,” Qian says. And because on-the-job learning is the primary way people gain new skills after their formal schooling ends, this can have dramatic consequences for a nation’s economic development.
The finding suggests that education is not the panacea to global inequality that many have long believed it to be. Rather, policymakers interested in narrowing the wealth gap should investigate why people in poor countries acquire fewer skills at work and improve training accordingly. The goal is to “make poor countries less poor,” Qian says. “The question is how.”
Tackling Global Wealth Disparity
Decades ago, researchers tried to measure human capital simply by accounting for population size and education level.
When their estimates of human capital—combined with estimates of physical capital—didn’t explain enough of the GDP differences between countries, “economists spent 20 years trying to do the accounting better,” Qian says. But even after they added in factors such as health, life expectancy, and education quality, “there was still a lot of difference that was unexplained.”
The goal is to “make poor countries less poor. The question is how.”
— Nancy Qian
That left a rather unsatisfying explanation: attributing remaining wealth differences to a set of vague intangibles called “total factor productivity.” This category might include, for instance, transportation infrastructure or legal institutions for enforcing contracts. Or it might not.
“Nobody knows what the hell this thing is,” says coauthor David Lagakos, an associate professor of economics at the University of California, San Diego. “It’s a big egg in the face of economists.”
To figure out if there was a missing element in human capital estimates, Qian and Lagakos collaborated with Benjamin Moll of Princeton University, Tommaso Porzio of the University of California, San Diego, and Todd Schoellman of the Federal Reserve Bank of Minneapolis.
Instead of just considering differences in the amount of schooling, they decided to consider differences in the amount of learning on the job. Previous literature on human capital had assumed that on-the-job learning happened at roughly the same pace in all countries. But Qian and her coauthors were skeptical that this was correct.
An earlier study by the team suggested they might be on the right track. That study found that people in rich countries increased their income much more over the course of their careers than people in poor countries. And only part of that could be explained by differences in education level, and the impact that has on career trajectories. In other words, a college graduate’s salary in, say, Canada rose more steeply over time than that of a college graduate in Vietnam.
But that research still left unanswered questions. After all, the labor markets in poor and rich countries are different, making it difficult to isolate the effects of on-the-job learning.
Stark Differences in On-the-Job-Learning
In order to isolate that one factor, the researchers focused on a single labor market: the United States. Specifically, they looked at immigrants to the U.S. who came from poor and rich countries, and who had varying levels of work experience in their home countries prior to immigrating.
That allowed the team to determine if the skills acquired on the job in rich versus poor home countries affected immigrants’ U.S. earning power differently.
The team looked at census data from 1980 to 2000 as well as data from the American Community Surveys from 2005 to 2013. Their data set did not indicate which immigrants were working in the country legally, but Qian speculates that it likely included most legal and some illegal immigrants. The researchers categorized immigrants based on how long they had worked in their home country before moving. Then, for each home country, the team compared the U.S. wages of those who had acquired a lot of work experience versus those who had not worked much before arriving in the U.S. Finally, the researchers compared those wage patterns across home countries.
They found that, overall, the higher the home country’s GDP per capita, the more the immigrants from that nation tended to be rewarded for foreign work experience. For instance, among immigrants from the UK and Canada, those with 20–24 years of experience earned 125–200 percent more than similarly educated immigrants from the same country with only 0–4 years of experience. But among immigrants from Mexico and Guatemala, highly experienced workers earned only 10–30 percent more than their inexperienced compatriots.
Perhaps jobs in poorer countries don’t offer as many opportunities to bolster soft skills or engage in professional development as jobs in richer countries do.
The researchers continued their analysis, looking at different subsets of the data. For instance, they analyzed new immigrants while controlling for their English-language skills and the U.S. state they lived in. These analyses yielded similar patterns.
“The richer the country of origin, the more valued their home country experience is in the U.S. labor market,” Lagakos says.
Exploring Possible Explanations for Wealth Inequality
The researchers came up with three possible explanations for their findings.
First, perhaps immigrants from poor countries represented a different slice of their population than those from rich countries. In other words, maybe people who choose to emigrate from, say, Ghana are worse at learning new skills than the average Ghanaian, while those from Germany tend to be better at learning new skills than the average German.
This hypothesis did not hold up. The data showed that immigrants from both poor and rich countries typically averaged about 12 years of schooling. All countries, regardless of wealth, seemed to be sending highly educated people who were likely to be adept at learning new skills.
The second possibility was that immigrants from poor countries—but not rich ones—were performing jobs in the U.S. that didn’t match their skill level. For example, perhaps engineers from Mexico were not getting hired for engineering jobs, so they became taxi drivers instead—whereas British engineers had no problem finding work in their chosen field.
“Maybe they have the same ability,” Qian says, “but the ones who come from the poor countries take a hit because of labor market discrimination in the U.S.”
To test that hypothesis, the researchers categorized all college-educated immigrants’ jobs in the U.S. as high- or low-skilled. Then they compared those people’s jobs to those of college-educated nonmigrants in their home countries.
Among all immigrants, the likelihood of getting high-skilled work in the U.S. was slightly lower than in their home countries. However, the drop wasn’t substantially larger among people from poor nations.
“Everyone seems less likely to be in a skilled job once they come to the U.S., but it doesn’t seem to disproportionately be about the poor countries,” Lagakos says.
The researchers also looked at the possibility that immigrants from poorer countries were able to get jobs in their chosen fields, but were paid less for their years of experience because U.S. employers didn’t value the experience gained in those home countries. But, again, the data did not bear this out.
Increasing Soft Skills
That suggested a third option—that people accumulated different levels of human capital in their home countries before arriving in the U.S., and that this was why their work experience was treated as more or less valuable by the U.S. labor market.
In other words, immigrants from poor nations hadn’t learned as much at work as immigrants coming from wealthier nations. Perhaps jobs in poorer countries don’t offer as many opportunities to bolster soft skills or engage in professional development as jobs in richer countries do.
The next step for researchers who want to narrow the wage gap between rich and poor countries is to understand why that’s the case, so steps can be taken to fix the problem.
The most important takeaway from this research, Lagakos says, is that education alone cannot close the income gap.
“When you’re thinking about cross-country differences in human capital,” he says, “you can’t just stop at schooling.”
- Where is salary expense on balance sheet?
- Why is salary expense a debit?
- Is salary expense an equity?
- Is rent expense an asset?
- Is rent expense on the balance sheet?
- What type of account is salary expense?
- Is salary an expense or liability?
- Is salary A expense?
- Is Accounts Payable a debit or credit?
- Is salaries debit or credit?
- What 5 items are included in cost of goods sold?
Where is salary expense on balance sheet?
Salaries, wages and expenses don’t appear directly on your balance sheet.
However, they affect the numbers on your balance sheet because you’ll have more available in assets if your expenditures are lower.
Why is salary expense a debit?
As noted earlier, expenses are almost always debited, so we debit Wages Expense, increasing its account balance. Since your company did not yet pay its employees, the Cash account is not credited, instead, the credit is recorded in the liability account Wages Payable.
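As an illustrative sketch only (the account names and amount below are assumed, not taken from the answer), the accrual just described can be represented as a balanced journal entry:

```python
# Illustrative journal entry for accruing unpaid wages (amounts assumed).
# In double-entry bookkeeping, total debits must equal total credits.

entry = [
    {"account": "Wages Expense", "debit": 5000, "credit": 0},  # expense rises with a debit
    {"account": "Wages Payable", "debit": 0, "credit": 5000},  # liability rises with a credit
]

total_debits = sum(line["debit"] for line in entry)
total_credits = sum(line["credit"] for line in entry)
assert total_debits == total_credits, "journal entry does not balance"
print(f"Debits = {total_debits}, Credits = {total_credits}: entry balances")
```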
Is salary expense an equity?
Affect on Owner’s Equity Payroll expense accounts include salaries and wages, payroll tax expense and fringe benefit expense accounts. All expense accounts are recorded as a decrease to owner’s equity in the accounting equation presented.
Is rent expense an asset?
Rent expense management pertains to a physical asset, such as real property and equipment. A company may lease (the other name for rent) a resource from another business and remit cash on a periodic basis.
Is rent expense on the balance sheet?
(Rent that has been paid in advance is shown on the balance sheet in the current asset account Prepaid Rent.) … Depending upon the use of the space, Rent Expense could appear on the income statement as part of administrative expenses or selling expenses.
What type of account is salary expense?
Account Types:
| Account | Type | Effect of a Debit |
| --- | --- | --- |
| SALARIES EXPENSE | Expense | Increase |
| SALARIES PAYABLE | Liability | Decrease |
| SALES | Revenue | Decrease |
| SALES DISCOUNTS | Contra Revenue | Increase |
Is salary an expense or liability?
Since Salaries are an expense, the Salary Expense is debited. Correspondingly, Salaries Payable are a Liability and is credited on the books of the company.
Is salary A expense?
Salaries Expense will usually be an operating expense (as opposed to a nonoperating expense). Depending on the function performed by the salaried employee, Salaries Expense could be classified as an administrative expense or as a selling expense.
Is Accounts Payable a debit or credit?
Since liabilities are increased by credits, you will credit the accounts payable. And, you need to offset the entry by debiting another account. When you pay off the invoice, the amount of money you owe decreases (accounts payable). Since liabilities are decreased by debits, you will debit the accounts payable.
Is salaries debit or credit?
If you receive your salary, it is income to you, and so the salary is said to be credited (into your bank account). From the bank’s perspective, a credit records an increase (money coming into your account) and a debit records a decrease (for example, when you pay with your debit card).
What 5 items are included in cost of goods sold?
The items that make up cost of goods sold include:
- Cost of items intended for resale
- Cost of raw materials
- Cost of parts used to make a product
- Direct labor costs
- Supplies used in either making or selling the product
- Overhead costs, like utilities for the manufacturing site
- Shipping or freight-in costs
A start-up cost is an amount required for a business idea to become reality. In simpler terms, it is the money spent in order for any business to start operating, meaning these are expenses that create a business. Start-up costs fall into two distinct subcategories: pre-start-up costs and post-start-up costs.
What are pre-start-up costs?
Pre-start-up costs can be defined as the expenses required to create the base for any business. They act as the foundation on which any business operates. The following list names and explains the importance of common pre-start-up costs:
- Business Plan: This acts as the base factor for any business to properly function. A successful and well-planned business idea plays a vital role in a successful business itself. A business plan consists of everything required, considered, and decided for the business. This can be permit costs to outsourcing costs, everything should be included in the business plan.
- Research: Marketing research is an important task for any business to do before they start operating, as it gives insight into consumer demands, how saturated a market is, etc.
- Borrowing Costs: These are the costs acquired in order to start a business. All businesses need capital in order to start operating, the most common practice to gain capital is through small business loans from banks. The loans come with interest payments that have to be paid back fully, which is why business loans are categorized as borrowing costs.
- Technological Costs: These include phone lines, websites, information systems and customer services that need to be established at the start of a business. They also include payroll software, benefit granting software as well as promotional software in terms of employees, many small businesses tend to outsource these to other companies in order to save expenses in hiring specific people to plan these. For example, small businesses in larger cities tend to outsource the calculations of their finances before they start business operations, as many accountants in Toronto work as freelancers that calculate business finances.
What are post-start-up costs?
Post-start-up costs are expenses incurred after all planning and decisions on how to start the business have been made and the business plan is complete; these costs are made in order to start business operations. The following are common post-start-up costs:
- Advertisement: Small businesses need to promote and advertise their businesses in order to fall on the customers and consumers radar. Marketing falls under the category of advertisement as well.
- Packaging: The packaging of products (if physical products) is also crucial for small businesses to settle a brand identity.
- Employees: Small businesses tend to hire a small, specific set of employees rather than a large workforce. These usually include accounting, management and operational personnel. For example, tax accountants are a vital start-up cost, as they handle the financial side of tax compliance and determine how much tax the business should pay.
- Equipment: Supplies are required for a business to start manufacturing and operating. Equipment can be bought or leased depending on the financial plans drafted.
This course provides an overview of how managers, investors, creditors, donors, foundations, and other parties use financial information to make decisions in organizations. It includes tools of both financial and managerial accounting.
Unit 1 focuses on the financial reports and analysis of the financial position and the operating results of business organizations. You will learn several techniques such as common sizing, trend analysis and ratio analysis to help you analyze an organization for which you have no information except the annual published financial report. Exam #1 will be given at the end of Unit 1.
The second part of the course, Unit 2, looks at the financial reports of organizations in the private (non-government) not-for-profit sector. We will compare and contrast the reporting and analysis of these NPOs with that of businesses, and discuss the needs for different information and performance measures in the private not-for-profits. Exam #2 will be given at the end of Unit 2.
In the last unit, Unit 3, we look at organizations not as an outsider (investor, donor, creditor), but instead as a manager inside the organization. You will learn how to use several financial tools available to managers such as budgets, decision making tools, and break even analysis, and how to predict or measure the financial impact of different alternatives. This is often referred to as managerial accounting. The final exam covers the material in Unit 3.
At the end of this course, you should be able to:
- Prepare a Balance Sheet and Income Statement for a simple organization
- Given the Balance Sheet, Income Statement, Statement of Cash Flows and other information in an actual organization's audited financial statements, compute relevant ratios and analyze the Financial Condition and Operating Position of the company or NPO
- Locate financial and other related information for companies in their Annual Report and 10-K
- Locate financial, governance and accountability information for NPOs in their Form 990 and AFS
- Prepare a simple formula-driven budget in Excel and revise the budget to meet the specific organizational and board requirements
- Apply decision making, discounted cash flow, breakeven and other tools to determine the financial effect of operating and capital investment decisions
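As a small taste of the tools listed above, here is a minimal sketch in Python of a break-even calculation and a few common financial ratios. The figures are invented and this is an illustration, not course material.

```python
# 1) Break-even analysis: units needed to cover fixed costs.
fixed_costs = 50_000            # per period
price_per_unit = 25.0
variable_cost_per_unit = 15.0
contribution_margin = price_per_unit - variable_cost_per_unit
breakeven_units = fixed_costs / contribution_margin
print(f"Break-even volume: {breakeven_units:,.0f} units")

# 2) A few common ratios from a hypothetical balance sheet / income statement.
current_assets, current_liabilities = 120_000, 80_000
total_liabilities, total_equity = 300_000, 200_000
net_income, revenue = 45_000, 500_000

print(f"Current ratio:  {current_assets / current_liabilities:.2f}")
print(f"Debt-to-equity: {total_liabilities / total_equity:.2f}")
print(f"Profit margin:  {net_income / revenue:.1%}")
```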
|
- Renewables supplied more than 40% of electricity during first quarter of 2020, with output overtaking fossil fuels for the first time in February.
- Factories and supermarkets reduced electricity usage at key times to help keep the grid stable when power from wind farms fell and margins tightened.
- Electricity demand on weekdays down 13% to lowest level since 1982 due to Covid-19 lockdown – with carbon emissions also falling.
Independent analysis, conducted via Imperial Consultants, by academics from Imperial College London for Drax Electric Insights shows how volatile the country’s electricity system was in the first quarter of 2020 – but how a variety of energy technologies rose to the challenge.
Output from wind farms soared, up by 40% compared to Q1 2019, as severe storms meant Britain experienced its wettest and windiest February since records began – but it was flexible power stations and action from businesses, able to reduce their electricity usage in January, which helped prevent blackouts during cold, calm spells.
Dr Iain Staffell of Imperial College London and lead author of the quarterly Electric Insights reports said:
“Britain’s electricity system is under pressure like never before, with both the country’s weather getting more extreme and a global pandemic testing its resolve.
“So far in 2020 we’ve seen companies reducing their demand for electricity to help keep the grid stable when supply from wind power rapidly decreased, and then the Covid-19 lockdown caused many businesses to shut up shop, reducing electricity demand and creating new challenges with oversupply of power.
“Having flexibility within the power system at these critical moments is crucial to keeping Britain’s lights on.”
The report shows that:
- When output from wind power fell sharply on cold, calm days, the stress on the system increased; in one incident the chance of blackouts rose, with just 0.2GW of spare capacity available compared to over 4GW the following day.
- Flexible technologies like biomass, pumped storage and gas were able to increase their output to fill the void on some occasions when wind power reduced.
- An evening peak in demand was also managed with factories and supermarkets reducing their electricity usage, helping to maintain normal day-ahead power prices.
- After lockdown measures were introduced to contain the spread of Covid-19, weekday demand for electricity fell by 13% to levels not seen since the early 1980s.
Will Gardiner, Drax Group CEO said:
“So far in 2020, our lives, as well as the power system, have been affected like never before. To overcome the challenges we’re facing, we must keep sight of the importance of building a sustainable recovery for both our communities and our climate.
“By embracing flexible, low carbon technologies we will enable the UK’s power system to evolve and provide the secure and sustainable electricity supplies a post-Covid, zero carbon economy needs.”
A record-breaking quarter in Britain’s power system:
- Wind power supplied an average of 12.3GW through February, beating the previous monthly record of 9.3GW set in December 2019.
- Biomass supplied more than a tenth of electricity over a day for the first time on March 27, 2020.
- Supply from all renewable sources accounted for 40% of electricity consumed during Q1 2020, overtaking output from fossil fuels for the first time.
Visualising the lockdown:
- You can see the fall in electricity demand compared to previous years in our animated GIF by clicking here.
About Electric Insights
- Electric Insights is commissioned by Drax and delivered by a team of independent academics from Imperial College London, facilitated by the college’s consultancy company – Imperial Consultants. The quarterly report analyses raw data made publicly available by National Grid and Elexon, which run the electricity and balancing market respectively, and Sheffield Solar.
- Electric Insights Quarterly focuses on supply and demand, prices, emissions, the performance of the various generation technologies and the network that connects them.
- The quarterly reports are backed by an interactive website electricinsights.co.uk which provides data from 2009 until the present.
- Uniquely, Electric Insights provides real time data about the UK’s transmission grid as well as embedded wind and solar generation which is not available from other sources.
Drax Group’s purpose is to enable a zero carbon, lower cost energy future and in 2019 announced a world-leading ambition to be carbon negative by 2030.
Its 2,900-strong employees operate across three principal areas of activity – electricity generation, electricity sales to business customers and compressed wood pellet production.
Drax owns and operates a portfolio of flexible, low carbon and renewable electricity generation assets across Britain. The assets include the UK’s largest power station, based at Selby, North Yorkshire, which supplies five percent of the country’s electricity needs.
Having converted two thirds of Drax Power Station to use sustainable biomass instead of coal it has become the UK’s biggest renewable power generator and the largest decarbonisation project in Europe.
Its pumped storage, hydro and energy from waste assets in Scotland include Cruachan Power Station – a flexible pumped storage facility within the hollowed-out mountain Ben Cruachan. It also owns and operates four gas power stations in England.
Through its two B2B energy supply brands, Haven Power and Opus Energy, Drax supplies energy to 250,000 businesses across England, Scotland and Wales.
Drax owns and operates three pellet mills in the US South which manufacture compressed wood pellets (biomass) produced from sustainably managed working forests. These pellet mills supply around 20% of the biomass used by Drax Power Station in North Yorkshire to generate flexible, renewable power for the UK’s homes and businesses.
For more information visit www.drax.com
|
That poverty in India has declined between 2004-05 and 2009-10 is indisputable. Poverty estimates based on the Tendulkar poverty line released last year indicated that poverty headcount ratio declined by 8%, 4.8% and 5.7% in rural, urban and all-India, respectively, during this period. This worked out to an annual decline of 1.64% and 0.92% in rural and urban India, respectively. Given that the average growth rate of GDP during this period was about 8.5%, exceeding 9% in three of the five years, and that the Eleventh Plan aimed to reduce poverty by 2 percentage points a year, this pace of poverty reduction is indeed disappointing. If economic growth was the only factor that mattered for poverty reduction, we should have witnessed greater poverty reduction. Moreover, states with the highest growth rate should have performed the best in terms of poverty reduction. But state-wise poverty estimates indicate that this is not the case. For instance, Bihar and Chhattisgarh witnessed average growth rates of about 10% during this period, yet poverty declined by less than 1%.
While growth is unquestionably necessary for substantial poverty reduction, it appears that growth is becoming only weakly linked with poverty reduction. In other words, the growth elasticity of poverty (GEP) is not high enough. GEP gives the percentage change in a chosen poverty measure in response to a 1 per cent change in GDP or mean income, and can be interpreted as the poverty-reducing impact of growth. In the poverty literature, GEP is found to be a function of the initial income distribution, and it has been shown that rising levels of inequality lower GEP. The rationale is that the higher the initial inequality, the less the poor will share in the gains from growth. Martin Ravallion explains this succinctly: "Unless there is a sufficient change in the distribution, people who have a larger initial share of the pie will tend to gain a larger share in the pie's expansion."
The National Sample Survey (NSS) data point in the direction of rising inequality in India. The Gini coefficient for rural India increased from 0.27 to 0.28 between 2004-05 and 2009-10, with rural inequality rising in 11 states. The Gini coefficient for urban India increased from 0.35 to 0.37, with urban inequality increasing in 18 states. Moreover, the ratio of per capita income between the top 15% and bottom 15% of the population has risen from 3.9 to 5.8 in rural areas and from 6.4 to 7.8 in urban areas during this period. This indicates that not only is inequality between the two groups on the rise, but also that the benefits of economic growth have not trickled down to those at the bottom of the distribution. Importantly, this rising inequality has reduced GEP.
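For reference, the Gini coefficient quoted in these comparisons can be computed directly from a list of incomes or consumption levels. The sketch below uses made-up data purely to show the calculation; it does not reproduce the NSS figures.

```python
# Minimal Gini coefficient computation on made-up consumption data.
# 0 = perfect equality, 1 = one person has everything.

def gini(values):
    xs = sorted(values)
    n = len(xs)
    weighted_sum = sum(i * x for i, x in enumerate(xs, start=1))
    total = sum(xs)
    return (2.0 * weighted_sum) / (n * total) - (n + 1.0) / n

rural = [900, 1100, 1200, 1500, 2100, 3400, 5200]  # hypothetical per-capita consumption
print(f"Gini: {gini(rural):.2f}")
```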
Moreover, these inequality measures need to be interpreted with caution as India measures inequality based on consumption rather than incomes, and consumption inequality tends to be lower than income inequality because of consumption smoothing by households. Also, the NSS estimates of consumption expenditure fail to capture the top income groups, thereby resulting in underestimation of inequality. Therefore, inequality in India is higher than what we believe by looking at these estimates.
Importantly, inequality of consumption is about inequality of results and not inequality of opportunities, which may be more important but are much harder to measure. Such inequalities are associated with gender or caste, access to key social services, particularly healthcare and schooling and access to credit markets; and these tend to undermine productivity, retard growth and consequently impede the task of poverty reduction. To achieve a higher rate of poverty reduction and make the growth process more inclusive, India will need to address these inequalities in opportunities that impede poor people from participating in the growth process. This will require increased spending on education and health, and creation of quality jobs and social safety nets for the poor and vulnerable. Conditional cash transfers (CCTs), which reinforce focus on schooling and health, if designed and targeted appropriately, can also go a long way in addressing such inequalities of opportunity. Allowing children to move faster and higher up the education ladder than previous generations will enable them to enjoy better prospects in the workforce than their parents. Research at the International Poverty Centre has found that CCT programmes such as Bolsa Familia and Oportunidades were responsible for about 21% of the fall in the Brazilian and Mexican Gini coefficient, each of which fell by approximately 2.7 points between mid-1990s and 2000s.
Over the last few decades, India has lifted people out of poverty at an unprecedented rate, but the pace of poverty reduction is being seriously challenged by rising inequality, which hurts GEP.
This makes a strong case for prioritising distribution and making income distribution more equal before embarking on a high growth path. Moreover, increasing inequality could undermine the basis of growth itself by reducing social cohesion and undermining the quality of governance by increasing pressure for inefficient populist policies. That myopic political responses to growing inequality to assuage voters can have disastrous consequences for the economy is well explained in Raghuram Rajan's book, Fault Lines: How Hidden Fractures Still Threaten the World Economy. It was to address rising income inequality in the US that credit, in particular housing credit, was pushed on low-income households, fuelling the crisis. It is therefore imperative that in the quest for higher economic growth we do not ignore the perils of rising inequality, one of the most pressing problems we are likely to face in the coming decade.
The author is an economist with a keen interest in the field of poverty and inequality in developing countries
|
Our figure of the month 09/2019: Regional importance of the mining sector in Chile
On 18 September, Chile celebrates 209 years of independence from Spain. Chile was the first South American country to join the OECD, in 2010, and is at the same time the economically strongest country in the region in terms of nominal GDP per capita (2018). Exports amount to about 30% of GDP (2018), of which about 45% are copper exports.
Mining is the third-largest sector in the Chilean economy with a share of about 10% of nominal GDP (2018), of which copper production accounts for 90%. There are copper mines in seven regions, mainly in the north of Chile; almost 50% of the copper is extracted in the largest open-pit copper mine, "Chuquicamata", in Antofagasta. The manufacturing industry and the energy sector are mainly located in the region around Santiago de Chile and are economically important, with shares of around 40% and 24% of GDP respectively (2017, nominal). The capital region is by far the economically strongest region and generates around 46% of Chile's GDP (2018, nominal).
Mining is economically most closely linked to manufacturing, which generates around 11% of Chile's GDP (2018, nominal): more than half of all intermediate inputs used by the mining sector (55%; 2013) come from the manufacturing industry, and almost half of the mining sector's output (48%; 2013) is supplied to that industry. Due to the energy-intensive nature of extraction, the energy sector is also an important supplier to mining, with an 11% share.
The following figures show the regional sales structures between the mining sector and the manufacturing industry as well as the regional purchasers’ structures between the energy and mining sectors. The mining sector has the closest sales structures with the regions Santiago de Chile (about 42%, left figure) and Valparaíso (about 13%). The main interdependencies between the energy and mining sectors are between Santiago de Chile with around 26% and the Biobío region with around 19% (right figure).
The effects of a change in copper demand would therefore become visible not only in the Antofagasta mining region but also in central Chile, in particular the regions around Santiago and Valparaíso due to the regional mining value chains.
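Linkage shares of the kind quoted above are derived from an input-output table. The sketch below shows the calculation on an invented three-sector table; the numbers are not the Chilean I-O data.

```python
# Toy input-output table, purely to show how linkage shares are computed.
# flows[i][j] = deliveries from sector i to sector j (same units throughout).

sectors = ["mining", "manufacturing", "energy"]
flows = [
    # to: mining  manufacturing  energy
    [      0.0,          40.0,     8.0],   # from mining
    [     30.0,          25.0,    12.0],   # from manufacturing
    [      9.0,           7.0,     4.0],   # from energy
]

m = sectors.index("mining")

# Where does mining buy its intermediate inputs from?
inputs_to_mining = [row[m] for row in flows]
total_inputs = sum(inputs_to_mining)
for name, value in zip(sectors, inputs_to_mining):
    print(f"{name:>13} supplies {value / total_inputs:.0%} of mining's inputs")

# Where does mining sell its intermediate output to?
sales_by_mining = flows[m]
total_sales = sum(sales_by_mining)
for name, value in zip(sectors, sales_by_mining):
    print(f"mining sells {value / total_sales:.0%} of its intermediate output to {name}")
```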
Together with the Universidad Adolfo Ibañez in Viña del Mar, Chile, GWS is working on a BMBF research contract on the sustainability of copper mining in Chile, in which these economic dependencies are addressed. Further information can be found on the project webpage www.coforce.cl and at https://www.gws-os.com/de/index.php/wirtschaft-soziales/projekte/projektdetailseite/cu_cl.html.
Other figures can be found here.
|
It can be easy to dismiss regenerative agriculture as the latest buzzword deployed to attract capital and consumer dollars, but be warned: it is not to be ignored.
Regenerative agriculture is a farming approach that contributes to generating and rebuilding soil health and soil fertility, it increases biodiversity and ecosystem health and resiliency, improves watersheds and the water holding capacity of soil, and sequesters carbon.
There are many benefits to the environment and human society, and it can be more profitable than conventional agriculture systems. As countries globally move towards decarbonising our planet, regenerative agriculture will play a major role in enabling our economies to become carbon neutral.
As interest in regenerative food systems and agricultural value chains grow, an increasing number of investors are beginning to explore high-impact investing opportunities in food and fibre assets.
Based on this, new investors are looking for investments in real assets, they see agriculture as a new asset class to add to their portfolio, a long-term investment.
The impacts of regenerative agriculture that contribute to its success include:
A productive farm will always be worth more: the more productive the land, the more valuable the asset.
“The nation that destroys its soil destroys itself.”
– F.D. Roosevelt
An asset managed under conventional farming can, with careful change management toward regenerative practices, have a real impact for future generations.
The production of food and fibre should not come at the cost of the environment or of people globally. Agriculture needs to take the lead on emissions and be on the front foot: we need to be proactive in reviewing our production systems and act to reduce emissions and sequester more carbon, and the rewards will come.
At Impact Ag Partners, we work on delivering great economic results with improved environmental outcomes for the investor and society at large.
|
Notes and Coins
The Euro is divided into 100 cents, commonly referred to as euro cents. Coins are issued in denominations of €2, €1, 50c, 20c, 10c, 5c, 2c and 1c; a few countries round cash prices to the nearest 5 cents in order to reduce the use of the smallest coins. Notes are issued in €500, €200, €100, €50, €20, €10 and €5 denominations, each with its own colour and design. Although frowned upon by the European Commission, some shops refuse to accept high-value notes.
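As an aside, the cash rounding mentioned above can be expressed in a few lines of Python. The rule shown (round to the nearest 5 cents) is a common convention; the exact treatment varies by country and applies only to cash payments.

```python
# Round a cash total to the nearest 5 cents (common cash-rounding convention).

def round_to_5_cents(amount_eur: float) -> float:
    cents = round(amount_eur * 100)   # work in integer cents
    rounded = 5 * round(cents / 5)    # nearest multiple of 5
    return rounded / 100

for total in [10.22, 10.23, 10.26, 10.28]:
    print(f"€{total:.2f} -> €{round_to_5_cents(total):.2f}")
```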
|
A CRITICAL APPROACH AND FURTHER ELABORATION OF KEYNES' THEORY OF INFLATION
Keywords: inflation, price, demand, wage, income, consumption, savings, investments.
Among the numerous authorities in the field of economics who have discussed inflation, Keynes holds one of the top positions. However, apart from Keynes' undoubtedly significant contribution to the theory of inflation, it is important to point to certain imperfections of his theory. These imperfections led some theorists to elaborate Keynes' concept of inflation further, aiming to eliminate its shortcomings and, at the same time, to complement the theory and give it an appropriately dynamic character. T. Koopmans, A. Smithies, B. Hansen and R. Turvey were among them, as is concisely presented in the paper.
Eremić, M., Ekonomski sistem J. M. Keynesa: od Treatise on Money do General Theory of Employment, Interest and Money, (I i II dio), čas. Ekonomski anali br. 144 i 145, Ekonomski fakultet, Beograd, 2000.
Glišević, N., Kejnsov koncept inflacije i antiinflacione politike i njegova aktuelnost, "Ekonomski anali", br. 134, Beograd, 1997.
Hansen, B., A Study in the Theory of Inflation, London, 1951.
Jović, S., Analiza inflacije u Jugoslaviji, "Tanjug", Beograd, 1976.
Keynes, J.M., Ekonomski eseji, "Matica srpska", Novi Sad, 1987.
Keynes, J.M., How to Pay for the War: A Radical Plan for the Chancellor of the Exchequer, London, 1940. Republished in: Inflation, Ed. by R.J. Ball and P. Doyle, Penguin Modern Economics, Harmondsworth, England, 1970.
Koopmans, T., The Dynamics of Inflation, The Review of Economic Statistics, Vol. 24/1942.
Smithies, A., The Behavior of Money National Income under Inflationary Conditions, The Quarterly Journal of Economics, Vol. 57/1943.
Šoški, B., Ekonomska misao, "Savremena administracija", Beograd, 1995.
Turvey, R., Some Aspect of the Theory of Inflation in Clased Economy, The Economic Journal, Sept. 1951.
Vučković, M., Savremeni problemi monetarne teorije i politike, "Naučna knjiga", Beograd, 1960.
|
Why does scarcity exist?
Scarcity is the basic economic problem: human wants for goods and services are unlimited, while the resources available to satisfy them (time, money, labor, tools, land and raw materials) exist in limited supply. Because not every want can be met, individuals, families, businesses and governments must choose how to allocate what they have, and every choice carries an opportunity cost: the value of the best alternative given up. Scarcity affects the poorest and the richest alike, and it exists at the level of individuals, societies and countries; it is a permanent feature of economic life, although societies can choose less scarce substitutes for the most scarce resources. It also shows up in very concrete forms, such as water scarcity, the deficiency of adequate water resources to meet a region's demand, which already leaves some 700 million people in 43 countries without a safe and clean water supply.
Scarcity necessitates trade-offs, which economists illustrate with a production possibilities curve. Suppose a nation has a total of 12 units of labor and can produce either guns or butter, where one gun takes 6 units of labor and one unit of butter takes 2. The maximum output is then 2 guns (with no butter) or 6 units of butter (with no guns). The nation cannot produce both 3 guns and 4 butters, because that combination would require more labor than it has; and it should not settle for 1 gun and 2 butters, because that combination leaves labor idle and wastes resources. The opportunity cost of an extra gun is the butter given up, which corresponds to the slope of the production possibilities curve.
Because resources are scarce, every economy must answer three basic questions: what will be produced and in what quantities, how it will be produced, and for whom it will be produced. Prices normally do the rationing; where prices are prevented from working, some other rationing device has to take their place, as ticket scalpers at a rock concert, a baseball game or an opera illustrate. Relatedly, people tend to place a higher value on items that are scarce and a lower value on items that are plentiful, a tendency sometimes called the scarcity principle.
Scarcity also frames how markets respond to change. Suppose the U.S. government reduces the tariff on imported coffee: supply increases, so the equilibrium price falls and the equilibrium quantity rises. Suppose instead that a reputable study is published indicating that coffee drinkers have lower rates of colon cancer: demand increases, so both the equilibrium price and quantity rise. If both happen together, the equilibrium quantity unambiguously rises, while the effect on the price depends on which shift is larger.
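The guns-and-butter arithmetic above can be checked with a few lines of code; the labor requirements are the ones given in the text.

```python
# Sketch of the guns-and-butter example: 12 units of labor,
# 6 units per gun, 2 units per unit of butter.

LABOR = 12
LABOR_PER_GUN = 6
LABOR_PER_BUTTER = 2

print("Efficient combinations on the frontier:")
for guns in range(LABOR // LABOR_PER_GUN + 1):
    leftover = LABOR - guns * LABOR_PER_GUN
    butter = leftover // LABOR_PER_BUTTER
    print(f"  {guns} gun(s) and {butter} butter")

def is_feasible(guns, butter):
    return guns * LABOR_PER_GUN + butter * LABOR_PER_BUTTER <= LABOR

print("3 guns and 4 butters feasible?", is_feasible(3, 4))  # False: would need 26 units
print("1 gun and 2 butters feasible? ", is_feasible(1, 2))  # True, but leaves 2 units idle
```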
|
What You Need to Know About Accountants
Position Description: Analyze financial information and prepare financial reports to determine or maintain record of assets, liabilities, profit and loss, tax liability, or other financial activities within an organization.
Daily Life Of an Accountant
- Advise management about issues such as resource utilization, tax strategies, and the assumptions underlying budget forecasts.
- Appraise, evaluate, and inventory real property and equipment, recording information such as the description, value, and location of property.
- Develop, implement, modify, and document recordkeeping and accounting systems, making use of current computer technology.
- Advise clients in areas such as compensation, employee health care benefits, the design of accounting or data processing systems, or long-range tax or estate plans.
- Compute taxes owed and prepare tax returns, ensuring compliance with payment, reporting, or other tax requirements.
- Review accounts for discrepancies and reconcile differences.
Below is a list of the skills most Accountants say are important on the job.
Reading Comprehension: Understanding written sentences and paragraphs in work related documents.
Active Listening: Giving full attention to what other people are saying, taking time to understand the points being made, asking questions as appropriate, and not interrupting at inappropriate times.
Mathematics: Using mathematics to solve problems.
Critical Thinking: Using logic and reasoning to identify the strengths and weaknesses of alternative solutions, conclusions or approaches to problems.
Speaking: Talking to others to convey information effectively.
Writing: Communicating effectively in writing as appropriate for the needs of the audience.
Related Job Titles
- Cost Accounting Manager
- General Accountant
- Revenue Accountant
- Forensic Accountant
Job Demand for Accountants
In the United States, there were 1,397,700 jobs for Accountants in 2016. New jobs are being created at a rate of 10%, which is above the national average. The Bureau of Labor Statistics predicts 139,900 new jobs for Accountants by 2026, and estimates 141,800 yearly job openings in this field.
The states with the most job growth for Accountant are Utah, Colorado, and Tennessee. Watch out if you plan on working in Maine, Alaska, or Ohio. These states have the worst job growth for this type of profession.
Average Accountant Salary
The average yearly salary of an Accountant ranges between $43,650 and $122,840.
Accountants who work in the District of Columbia, New York, or New Jersey make the highest salaries.
How much do Accountants make in each U.S. state?
| State | Annual Mean Salary |
|---|---|
| District of Columbia | $98,130 |
Tools & Technologies Used by Accountants
Although they’re not necessarily needed for all jobs, the following technologies are used by many Accountants:
- Microsoft Excel
- Microsoft Word
- Microsoft Office
- Microsoft PowerPoint
- Microsoft Outlook
- Microsoft Access
- Data entry software
- Microsoft Windows
- Microsoft Project
- Microsoft SharePoint
- Structured query language SQL
- Microsoft Dynamics
- IBM Notes
- Microsoft Visual Basic
- Microsoft Publisher
- Google Docs
- FileMaker Pro
- Microsoft SQL Server
- Intuit QuickBooks
How do I Become an Accountant?
Education needed to be an Accountant:
How many years of work experience do I need?
Where do Accountants Work?
Accountants work in the following industries:
Other Jobs You May be Interested In
Career changers with experience as an Accountant sometimes find work in one of the following fields:
More about our data sources and methodologies.
|
The “war for talent” is accelerating in the U.S. job market, as private-sector payrolls recently posted their 77th consecutive month of growth. Notably, today’s economy is demanding professionals with higher levels of education, as evidenced by the very low 2.5% unemployment rate for adults with a bachelor’s degree or above. Yet, during a critical presidential election cycle, it is surprising that the economic role of education beyond the bachelor’s degree is so little examined, especially at a time when higher education’s relevance is being critiqued in so many corners.
Respected economists such as MIT’s David Autor have illustrated that for a number of years, the job market has been favoring more highly educated individuals as technological evolution demands higher levels of skill. A recent analysis from Georgetown University highlighted that since the beginning of the economic recovery, holders of graduate degrees have gained nearly as many jobs as bachelor’s degree holders—despite the fact that undergraduate students outnumber graduate students by 2-to-1. Looking forward, the U.S. Bureau of Labor Statistics (BLS) projects that occupations that typically require a master’s degree for entry—such as statisticians, nurse practitioners and so on—will grow the fastest over the next 10 years, outpacing growth at other educational levels.
Post-baccalaureate education has thus emerged as one of the fastest growing segments of higher education. Over the past decade, master’s degree enrollment in the U.S. has grown 35%—and the share of adults that hold a master’s degree has gone from less than 7% to nearly 9% of the population. Keeping the supply and demand dynamics of basic economic theory in mind, it is noteworthy that despite this substantial increase in supply—5 million more individuals with a master’s degree in the workforce—the wage premium for master’s degree holders has grown significantly, while it has stayed flat or in some cases declined for those with lower levels of education.
Post-baccalaureate education is becoming substantially more accessible to professionals, a development that has been accelerating over the past decade. Online and hybrid education now accounts for fully 33% of all graduate-level college enrollment in the U.S., according to U.S. Department of Education data. Hundreds of online graduate and post-baccalaureate programs are now offered by some of the country’s most reputable institutions, a marked change in just the last few years, with institutions ranging from the University of North Carolina-Chapel Hill and Harvard University to Columbia University and the University of California-Berkeley launching and aggressively growing a host of new online programs since 2012. Post-baccalaureate education is also being made more accessible by universities’ growing experimentation with competency-based education models that focus on demonstration of what professionals know and can do, irrespective of classroom time. A large number of universities are also innovating in making post-baccalaureate education available in shorter, bite-sized and novel forms, ranging from certificates to new types of “microcredential” programs short of a degree. The higher education field’s interest in these models is a direct response to demand for quality post-baccalaureate learning.
Critics of the higher education system point to unbridled credential inflation and mounting student debt—yet advanced degree earners continue to be paid 20% more, on average, according to the BLS, indicating that employers value and students are rewarded for advanced educational attainment. Student debt is indeed a critical issue for our higher education system to address, but the student loan default rate for graduate degree holders is very low due to the increased earnings power of additional education, as acknowledged in a recent White House Council of Economic Advisors analysis. Moreover, employers are seeking and preferring employees with higher levels of education. A recent national survey of more than 2,000 hiring and HR managers by job search firm CareerBuilder found that last year, 27% of employers reported hiring employees with master’s degrees for positions previously at the bachelor’s level, reportedly due to an evolution of skill demands. Among employers who had raised their educational requirements, 57% attributed higher quality work and 43% increased productivity as a direct result of doing so.
In the post-baccalaureate economy, it is not just master’s degrees that are emerging as useful tools and signals of knowledge and ability: An entirely new sector is responding to this market demand. New business models and firms are now emerging that focus on learning, development, and credentialing beyond the bachelor’s degree level. Companies in this space—such as massively open online course (MOOC) providers Udacity and Coursera, and coding “bootcamps” such as General Assembly and Galvanize—are often positioned as competitors, substitutes or disrupters to the bachelor’s degree. MOOCs have enrolled tens of millions of participants, and the coding bootcamp market has reached $200 million in revenue in just a few years and will grow 74% in 2016, according to CourseReport, a resource on coding bootcamps.
Yet, various studies have found that the vast majority of these upstarts’ customers—80% for both MOOCs and coding bootcamps—already hold a bachelor’s degree. Essentially, these firms are addressing a similar market opportunity as professional graduate degrees: meeting the tremendous demand for career-focused, post-baccalaureate learning. This market opportunity is fueling billions of dollars in investment into start-ups focused on professional training and educational technology. Critically, this expanding range of options from “non-institutional providers” typically does not have recognized academic credentials attached. However, this is beginning to change as universities and companies enter into mutually beneficial partnerships; new accreditation and recognition options emerge; and technology introduces the opportunity for market-driven quality assurance.
In this era of lifelong learning, business leaders are key influencers in determining how the market evolves, based on their policies and actions with respect to how educational credentials factor into hiring and promotion, and the extent to which major employers will invest in or recognize various forms of learning and development. Human capital strategy (talent acquisition, development and retention) is increasingly at the top of corporations' strategic agendas. According to PWC's 2016 Global CEO survey, more than 70% of CEOs are concerned about the availability of key skills and rate a skilled, educated and adaptable workforce as an absolute top priority for business. Expert Josh Bersin of Deloitte notes that corporate learning is being truly revolutionized by the availability of content and services via the internet as corporate human resources functions ramp up their adoption of technology and increasingly deploy analytics.
It has never been clearer that higher education is a central driver of economic opportunity. Given this reality, it is notable that social and political fault lines are increasingly being drawn along educational lines. In the U.K.’s June “Brexit” referendum, for example, educational level was one of the single largest indications of how individuals voted: 71% of those with university degrees voted for the U.K. to “remain” in the E.U., while 66% of individuals with a high school diploma voted to “leave.” Based on polling, similar demographic dynamics may also factor in the 2016 U.S. presidential election.
A failure by higher education institutions, businesses and government to recognize and support the demand for post-baccalaureate learning risks widening the gap between haves and have-nots and reinforcing divisions in our economy. It is critical that these groups consider the economic data and evidence, and work together to ensure that both foundational levels of formal education and more advanced, job market-relevant lifelong learning are an opportunity accessible to all—a challenge that will require business model innovation, appropriate quality assurance, evolved policies and regulations and, most significantly, a change in culture and mindset among both employers and institutions.
Sean Gallagher is chief strategy officer at Northeastern University, He is author of “The Future of University Credentials: New Developments at the Intersection of Higher Education and Hiring” available in September 2016 from Harvard Education Press.
|
Teaching kids how to invest goes beyond dropping pennies into a piggy bank.
Trying to navigate stocks, bonds and other investments can be tough enough for parents, much less their children.
And whether they like it or not, parents appear to bear most of the responsibility for teaching their children about money. Research by the National Endowment for Financial Education shows that parents exert the greatest influence on their children’s financial knowledge — more than work experience and high school financial education combined.
“Kids spend more time dissecting a frog in school than they do learning about money,” says David Bianchi, author of “Blue Chip Kids: What Every Child (and Parent) Should Know About Money, Investing, and the Stock Market.”
You don’t need to be a stock guru to help kids learn how to make their money grow. Here are 5 tips for raising investment-savvy kids.
|
Insights into Editorial: Will Budget 2020 work in getting the Indian economy back on track?
Union Budget of India is the country’s comprehensive Annual Financial Statement.
The Union Budget consists of a detailed account of the government’s finances, its revenues from various sources and expenditures to be incurred on different activities that it will incur.
As mentioned in the Article 112 of the Indian Constitution, the Union Government lays a statement of its estimated receipts and expenditure for that year, From April 1 to March 31, before both the Houses of Parliament.
The Finance Minister has an unenviable task ahead as she rises to present the Union Budget for the financial year 2020-21 (FY21).
That is because the Indian economy has been decelerating fast: the government revised GDP growth for 2018-19 down from 6.8% to 6.1%, and growth in the current year is already expected to be at a six-year low.
What is the problem with the economy?
Broadly, there are four engines that provide the power to drive GDP growth in an economy.
These are: Consumption of private individuals (C), Demand for goods from the government (G), Investments from businesses (I) and the net demand from exports and imports (NX).
GDP = C + G + I + NX
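As a rough illustration of how this identity works, the sketch below uses the component shares cited later in the article (consumption about 57 per cent, investment about 32 per cent); the remaining shares and the growth rates are purely hypothetical, chosen only to show how a slowdown in the largest engine drags headline growth.

```python
# Hypothetical, illustrative numbers only -- not official GDP data.
# GDP = C + G + I + NX, expressed here as shares of a 100-unit economy.
components = {"C": 57.0, "I": 32.0, "G": 13.0, "NX": -2.0}

gdp = sum(components.values())  # 100.0 units

# Suppose private consumption (C) grows only 2% while the other engines grow 6%.
growth = {"C": 0.02, "I": 0.06, "G": 0.06, "NX": 0.06}

new_gdp = sum(v * (1 + growth[k]) for k, v in components.items())
print(f"Overall GDP growth: {(new_gdp / gdp - 1):.2%}")
# Because C is the largest engine, weak consumption pulls overall growth
# well below 6% even though every other engine grows at 6%.
```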
With each passing year, the Indian economy has been losing its engines of growth.
The corporate investments (I) engine has been slowing sharply since 2011.
New businesses found that the economy’s financiers, that is the banks (especially the public sector banks, which accounted for 70 to 80 per cent of all lending), were themselves struggling with non-performing assets (NPAs).
- Many of these NPAs were the same loans that they had extended to the big businesses who were now hamstrung.
- Private consumption demand was first hurt in the rural areas with poor commodity prices.
- While this meant that retail inflation was under control, the purchasing power of farmers declined.
- This weakness in rural demand was compounded by a collapse in urban demand after credit flow from the non-banking financial sector companies stopped following the meltdown in IL&FS.
- This is being witnessed in the sales slump across the board — from cars to shampoo sachets.
- Government demand carried the day for a considerable time. But with a sharp fall in revenues, thanks to a slowing growth, there is no way the government can spend without massively flouting the Fiscal Responsibility and Budget Management (FRBM) Act targets.
What were the options before the government?
- In the Indian context, C or private consumption demand accounts for roughly 57 per cent of total GDP.
- Investments (I) are the next big chunk, accounting for 32 per cent. Government spending (G) is the smallest contributor, with net exports (NX) being negative for India.
- Under normal circumstances, it would have been natural for the government to increase its expenditure and thereby provide a strong growth impulse to the economy. That is because what the government spends turns into someone’s personal income.
- This income when spent again, say on buying a car or a bar of soap, generates more economic activity, and further incomes.
- In the run-up to the Budget, many had argued that this is what the government should do. But these are not “normal” circumstances.
- A slowing economy has upset the government’s tax collections. Because nominal GDP grew by just 7.5 per cent in 2019-20 instead of the budgeted 12 per cent, the gross tax revenues of the government fell from Rs 24,61,195 crore to Rs 21,63,423 crore, a shortfall of roughly Rs 3 lakh crore.
- The government could have still gone ahead and borrowed more money from the market, but here too there was a problem of supply.
- In other words, there weren’t enough savings in the market to fuel government demand.
- As such, the total expenditure of the government is slated to go up by just over 9 per cent over FY20’s budgeted figure.
- The other option was to boost investments.
- To a great extent, the government had already tried to do this outside the Budget, when it announced a sharp cut in corporate income tax last year.
- The tax cut cost the government over Rs 1.5 lakh crore in 2019-20, with little to show in terms of new investment activity.
Private Consumption demand requires more Investments:
- To be sure, investment decisions are not taken in a hurry and even though the corporate tax cut was a welcome decision, and one that is likely to benefit the Indian economy in the medium to long term, at present, in the immediate term, it has been ineffectual.
- That is because investments follow demand, and consumer demand has been declining sharply. This has resulted in high unsold inventories, and is reflected in capacity utilisation falling to an all-time low late last year.
- Still, the Finance Minister announced that there will soon be a scheme to encourage investments for the manufacturing of mobile phones, electronic equipment, and semi-conductor packaging.
- Similarly, she has allowed the electricity generating companies to benefit from the corporate tax cut.
- That left the biggest driver, private consumption demand, and by the looks of it, the government has tried its best to nudge people to consume more and, by that route, kickstart a virtuous cycle.
- The government has tried to do this by providing people with some options that enhance their disposable income. However, in the process, it has disincentivised savings.
- The best example of this is the option of a new Income Tax regime, which removes all exemptions and deductions, but also cuts the tax rates.
- The government likely hopes that taxpayers will be enthused to opt for this structure because it is likely to leave them with more money in hand.
- This is likely to be especially true for those taxpayers who are young and lie towards the lower end of the income brackets.
- That is because in that age and income brackets, the so-called marginal propensity to consume is higher. The richer and higher-salaried workers tend to save most of their income.
How does the government expect this strategy to work?
The government’s strategy, or hope at least, is that leaving people with more money will help boost their consumption levels, which are at present quite subdued, as witnessed in the slump in sales of goods and services across the board.
Higher consumption will bring down the inventories in the economy and incentivise businesses to invest again.
The ground has been prepared to make investments attractive for businesses as the government has already cut the corporate tax rate last year.
Once the business activity recovers, the government would have more taxes coming to it and would be in a better position in the coming years to spend more prolifically.
What are five things to watch out for in the Union Budget 2020?
- Nominal GDP growth:
This is the most important number in a Budget and it forms the base of all other variables.
In the last full Budget that was presented in July 2019, the government expected nominal GDP to grow by 12% in 2019-20. As it turns out, the actual number is likely to be 7.5% or even lower.
This dip completely alters the likely real GDP for 2019-20; real GDP is derived after subtracting the annual inflation (roughly 4% for the year) from nominal GDP.
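A back-of-the-envelope sketch of that relationship, using the article’s own round numbers (roughly 7.5% nominal growth and roughly 4% inflation), is shown below; these are illustrative figures, not official estimates.

```python
# Approximate decomposition: real growth is roughly nominal growth minus inflation.
nominal_growth = 0.075   # ~7.5% nominal GDP growth cited for 2019-20
inflation = 0.04         # ~4% annual inflation assumed in the article

approx_real_growth = nominal_growth - inflation
print(f"Approximate real GDP growth: {approx_real_growth:.1%}")   # ~3.5%

# The exact relationship divides rather than subtracts:
exact_real_growth = (1 + nominal_growth) / (1 + inflation) - 1
print(f"Exact real GDP growth:       {exact_real_growth:.1%}")    # ~3.4%
```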
2 and 3: Fiscal and Revenue Deficit
Given that there are no engines of growth left in the economy, many have argued that the government must not sit back under pressure from the fiscal hawks, and should instead spend more to boost the overall demand and rekindle the animal spirits in the economy.
However, if the government decides to relax or postpone fiscal responsibility norms, a crucial question is whether it also refocuses on the revenue deficit.
In 2018, the government dropped targeting the revenue deficit. This meant that India increasingly borrowed money to finance its everyday consumption at the cost of funding capital expenditure.
Typically, Rs 100 spent on capital expenditure by the government results in Rs 250 being added to the overall economy.
If the government spends on revenue such as salaries the overall impact on the economy is less than Rs 100.
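A minimal sketch of this multiplier logic follows. The capital-expenditure multiplier of 2.5 is the article’s rough figure; the revenue-expenditure multiplier of 0.9 is an assumed stand-in for “less than Rs 100,” and the spending amounts are hypothetical.

```python
# Rough multiplier arithmetic, using the article's illustrative figures.
CAPEX_MULTIPLIER = 2.5     # Rs 100 of capital spending -> ~Rs 250 of GDP
REVEX_MULTIPLIER = 0.9     # assumed value for "less than Rs 100" per Rs 100 spent

def gdp_impact(capex: float, revex: float) -> float:
    """Very rough estimate of GDP added by a given spending mix (same units as inputs)."""
    return capex * CAPEX_MULTIPLIER + revex * REVEX_MULTIPLIER

# Two hypothetical ways of spending Rs 1,000 crore:
print(gdp_impact(capex=800, revex=200))   # 2180.0 -> capex-heavy mix adds more to GDP
print(gdp_impact(capex=200, revex=800))   # 1220.0 -> revenue-heavy mix adds less
```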
So the crucial thing is not whether the fiscal deficit target is flouted; the crucial thing is how large the revenue deficit is and whether the government intends to reduce it to 0% over the next few years.
4. An income tax cut:
There are two reasons why the government may want to cut the personal income tax rates or at least rejig its slabs.
- For one, corporate income tax rates were cut sharply last year. It makes sense to offer comparable relief to individual taxpayers in the economy.
- Two, people have been hoping for an income tax cut for long, and it may be one way to allay the concerns of the middle class in India.
The Prime Minister has been reiterating that the country cannot go forward without people looking at “wealth creators” with respect.
The Economic Survey has already outlined the policies that need to be tweaked.
A good way for the government to get out of the way of businesses in the country, and raise significant resources of its own in the process, is by divesting its stake in many public sector enterprises.
The Union Budget also empowers the government to carry out its constitutional duties, such as providing social justice and equality for all.
It guides resource allocation in the best interests of society and the country, aiming to allocate resources optimally for public welfare.
The Union Budget also needs to take steps to control inflation, deflation and economic fluctuations, thus ensuring economic stability. The Union Budget of any country is crucial, as it has widespread implications for that country’s economic stability and general life.
The capital market is the market for securities, where companies and the government can raise long-term funds. It includes the stock market and the bond market. Financial regulators, such as the U.S. Securities and Exchange Commission, oversee the capital markets in their respective countries to ensure that investors are protected against fraud. The capital markets consist of the primary market, where new issues are distributed to investors, and the secondary market, where existing securities are traded. (n.d.).
Corporate Finance: Corporate finance is concerned with the financing and investment decisions made by the management of companies in pursuit of corporate goals. As a subject, corporate finance has a theoretical base which has evolved over many years and which continues to evolve. It has a practical side too, concerned with the study of how companies actually make financing and investment decisions, and it is often the case that theory and practice disagree. The fundamental problem that faces financial managers is how to secure the greatest possible return in exchange for accepting the smallest amount of risk. This necessarily requires that financial managers have available to them (and are able to use) a range of appropriate tools and techniques.
Naspers Limited Project. 1. The three main users of financial statements include: Prospective investors, who use financial statements to assess whether or not to invest in a company; they predict future dividends by looking at disclosed profit and can judge how risky a business is by how much its profits fluctuate. Lenders and other creditors (institutions like banks and other lending institutions), who use financial statements to decide whether to help the company with working capital or to subscribe to a debt security it issues.
In the world of business, managerial accounting plays a major role in controlling a business effectively. The management accountants of an organization focus on forecasting and decision making for that business. The accountants also help with business planning and with reviewing and analyzing the performance of the business. Written from the perspective of a consulting management accountant, the report focuses on issues such as cost control, quality control of the products, the efficiency of the budget and the in-depth costs incurred by the business. The report not only identifies the problems but also advises the business on how to address them by using product costing methods to achieve effective and efficient cost control.
The paper will calculate the company’s financial ratios and interpret their implications. Moreover, the paper will describe the indicators of fraudulent reporting. Discussion: Purpose of the Income Statement. The income statement is also called the profit and loss statement or the income and expense statement. Its main purpose is to indicate to managers and investors whether the organisation operated cost-effectively.
It consists of all the income items that cause changes in stockholders’ equity, e.g. unrealized gains or losses, retirement investments or pension schemes, and foreign currency adjustments. This statement helps in the future planning of the organization. The Statement of Cash Flows is a statement that provides information regarding the cash inflows and outflows of a business. Cash generated is categorized under three headings in the Statement of Cash Flows, namely Operating Cash Flows, Investing Cash Flows and Financing Cash Flows. It identifies the liquidity position of an entity and helps managers take relevant measures.
The basic functions like legal and tax issues, benefits, EDI, credit and collection, and financial control systems were administered from this centralized corporate office. Exhibit 8 shows the company’s organization chart as of October 1998. Board of directors chairman W.P. Sovey, followed by vice chairman & CEO J.J. McDonough and president & COO T.A. Ferguson, represent the very top corporate leadership. Under them, top financial responsibilities were divided between two corporate executives: the Vice President-Finance, who managed outside assets and liabilities, and the Senior Vice President-Corporate Controller, who focused on internal operations. Both reported directly to the company president, and the president reported to the CEO.
One of the journals I have chosen to explain the trade-off theory of capital structure is “A survey of the trade-off theory of corporate financing,” written by Chikashi TSUJI. In this journal, the author presents a study of the trade-off theory of capital structure and surveys the empirical evidence supporting the theory for the US capital market and other countries. The trade-off theory of capital structure is the theory that a company uses to balance its costs and benefits by determining the amounts of debt finance and equity finance. The company also balances the tax-saving benefits of debt against the dead-weight costs of bankruptcy.
It focuses on the sources and uses of cash through operating, investing and financing activities. Activities that result in the receipt of cash are cash inflows, and activities that result from the spending of cash are cash outflows. SEE APPENDIX III. The Statement of Financial Position, also known as the balance sheet, presents the financial position of an entity at a given date. It is comprised of three main components: Assets, Liabilities, and Equity. The statement of financial position helps users of financial statements to assess the financial soundness of an entity in terms of liquidity risk, financial risk, credit risk and business risk.
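To make the three-part structure of the balance sheet concrete, here is a small sketch (the figures are invented, purely for illustration) that checks the basic accounting identity Assets = Liabilities + Equity:

```python
# Invented figures, for illustration only.
balance_sheet = {
    "assets":      {"cash": 50_000, "receivables": 30_000, "equipment": 120_000},
    "liabilities": {"payables": 40_000, "bank_loan": 60_000},
    "equity":      {"share_capital": 70_000, "retained_earnings": 30_000},
}

assets = sum(balance_sheet["assets"].values())
liabilities = sum(balance_sheet["liabilities"].values())
equity = sum(balance_sheet["equity"].values())

# The statement of financial position must always balance.
assert assets == liabilities + equity, "Balance sheet does not balance"
print(assets, liabilities, equity)  # 200000 100000 100000
```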
Eight of the biggest food and beverage companies pay out USD $18 billion to shareholders as new epicentres of hunger emerge across the globe
As many as 12,000 people could die per day by the end of the year as a result of hunger linked to COVID-19, potentially more than could die from the disease, warned Oxfam in a new briefing published today. The global observed daily mortality rate for COVID-19 reached its highest recorded point in April 2020 at just over 10,000 deaths per day.
‘The Hunger Virus’ reveals how 121 million more people could be pushed to the brink of starvation this year as a result of the social and economic fallout from the pandemic including through mass unemployment, disruption to food production and supplies, and declining aid.
Oxfam’s Interim Executive Director Chema Vera said, “COVID-19 is the last straw for millions of people already struggling with the impacts of conflict, climate change, inequality and a broken food system that has impoverished millions of food producers and workers. Meanwhile, those at the top are continuing to make a profit: eight of the biggest food and drink companies paid out over $18 billion to shareholders since January even as the pandemic was spreading across the globe – 10 times more than the UN says is needed to stop people going hungry.”
The briefing reveals the world’s 10 worst hunger hotspots, places such as Venezuela and South Sudan where the food crisis is most severe and getting worse as a result of the pandemic. It also highlights emerging epicentres of hunger – middle income countries such as India, South Africa, and Brazil – where millions of people who were barely managing have been tipped over the edge by the pandemic. For example:
- Yemen: Remittances dropped by 80 per cent – or $253 million – in the first four months of 2020 as a result of mass job losses across the Gulf. Borders and supply route closures have led to food shortages and food price spikes in the country which imports 90 per cent of its food.
- Brazil: Millions of poor workers, with little in the way of savings or benefits to fall back on, lost their incomes as a result of lockdown. Only 10 per cent of the financial support promised by the federal government had been distributed by late June with big business favoured over workers and smaller more vulnerable companies.
- India: Travel restrictions left farmers without vital migrant labour at the peak of the harvest season, forcing many to leave their crops in the field to rot. Traders have also been unable to reach tribal communities during the peak harvest season for forest products, depriving up to 100 million people of their main source of income for the year.
- Sahel: Restrictions on movement have prevented herders from driving their livestock to greener pastures for feeding, threatening the livelihoods of millions of people. Just 26 per cent of the $2.8 billion needed to respond to COVID-19 in the region has been pledged.
Women, and women-headed households are more likely to go hungry despite the crucial role they play as food producers and workers. Women are already vulnerable because of systemic discrimination that sees them earn less and own fewer assets than men. They make up a large proportion of groups, such as informal workers, that have been hit hard by the economic fallout of the pandemic, and have also borne the brunt of a dramatic increase in unpaid care work as a result of school closures and family illness.
“Governments must contain the spread of this deadly disease but it is equally vital they take action to stop the pandemic killing as many – if not more – people from hunger,” said Vera.
“Governments can save lives now by fully funding the UN’s COVID-19 appeal, making sure aid gets to those who need it most, and cancelling the debts of developing countries to free up funding for social protection and healthcare. To end this hunger crisis, governments must also build fairer, more robust, and more sustainable food systems that put the interests of food producers and workers before the profits of big food and agribusiness.”
Since the pandemic began, Oxfam has reached 4.5 million of the world’s most vulnerable people with food aid and clean water, working together with over 344 partners across 62 countries. We aim to reach a total of 14 million people by raising a further $113 million to support our programs.
– 30 –
Notes to editor:
- The Hunger Virus: How the coronavirus is fuelling hunger in a hungry world is available to download.
- Stories, pictures, and video highlighting the impact of COVID-19 pandemic on hunger across the globe are available on request.
- The WFP estimates that the number of people in crisis level hunger − defined as IPC level 3 or above – will increase by approximately 121 million this year as a result of the socio-economic impacts of the pandemic. The estimated daily mortality rate for IPC level 3 and above is 0.5 − 0.99 per 10,000 people, equating to 6,000 − 12,000 deaths per day due to hunger as a result of the pandemic before the end of 2020 (a rough reproduction of this arithmetic is sketched after this list). The global observed daily mortality rate for COVID-19 reached its highest recorded point in April 2020 at just over 10,000 deaths per day and has ranged from approximately 5,000 to 7,000 deaths per day in the months since then according to data from Johns Hopkins University. While there can be no certainty about future projections, if there is no significant departure from these observed trends during the rest of the year, and if the WFP estimates for increasing numbers of people experiencing crisis level hunger hold, then it is likely that daily deaths from hunger as a result of the socio-economic impacts of the pandemic will be higher than those from the disease before the end of 2020. It is important to note that there is some overlap between these numbers given that some deaths due to COVID-19 could be linked to malnutrition.
- Oxfam gathered information on dividend payments of eight of the world’s biggest food and beverage companies up to the beginning of July 2020, using a mixture of company, NASDAQ, and Bloomberg websites. Numbers are rounded to the nearest million: Coca-Cola ($3,522M), Danone ($1,348M), General Mills ($594M), Kellogg ($391M), Mondelez ($408M), Nestlé ($8,248M for entire year), PepsiCo ($2,749M) and Unilever (estimated $1,180M). Many of these companies are pursuing efforts to address COVID-19 and/or global hunger.
- The 10 extreme hunger hotspots are: Yemen, Democratic Republic of Congo (DRC), Afghanistan, Venezuela, the West African Sahel, Ethiopia, Sudan, South Sudan, Syria, and Haiti.
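A rough, order-of-magnitude check of the mortality arithmetic in the WFP note above; the inputs are simply the figures quoted in the briefing.

```python
# Figures quoted in the briefing; this only reproduces the stated arithmetic.
people_at_crisis_hunger = 121_000_000      # additional people at IPC level 3+ in 2020
daily_mortality_low = 0.5 / 10_000         # deaths per person per day (low end)
daily_mortality_high = 0.99 / 10_000       # deaths per person per day (high end)

deaths_low = people_at_crisis_hunger * daily_mortality_low
deaths_high = people_at_crisis_hunger * daily_mortality_high
print(f"{deaths_low:,.0f} - {deaths_high:,.0f} deaths per day")
# ~6,050 - ~11,979 per day, i.e. roughly the 6,000-12,000 range cited.
```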
For more information or to arrange an interview please contact:
Submitted by Megan Wild
Business travel is necessary because it’s how many companies serve their clients. The ability to visit client locations represents the ability to show up, build relationships and deliver products on all accounts.
Yet, sustainability is a growing concern, as business travel itself correlates strongly to high degrees of greenhouse emissions. More corporations are tracking their environmental footprints. The Global Business Travel Association (GBTA) conducted a study surveying around 300 European and U.S. travel managers to understand how they perceive the importance of sustainability initiatives and their impact:
40 percent of European and American corporations have witnessed measurable benefits of sustainability initiatives with a better public image, more productive and efficient business processes and improved employee morale.
For companies that track their environmental footprint, almost all measure air travel activity, with 92 percent in the U.S. and 96 percent across Europe looking at these numbers.
44 percent of European companies believe the impact of rail suppliers and car rental to be important. Nearly half receive CO2 emissions reports.
Safety and security are priorities when it comes to investing in sustainability for about 70 percent of both U.S. and Europe-based companies. Additionally, 71 percent of European companies favor the long-term cost savings of sustainability, while 68 percent of American companies are focused on the contribution to society that sustainability provides.
Developing Green Travel Policies and Emissions Tips
How do companies develop a policy on travel and strategize need, frequency and mode of travel in relation to sustainability? A necessary part of any company’s strategy centers on the need to reduce the number of employees traveling and how they are commuting, with a focus on travel modes that have low-carbon emissions:
1. Raise awareness about low-carbon driving.
According to the U.S. Energy Information Administration (EIA), the “largest absolute increase in 2014 energy-related carbon dioxide emissions was from the transportation sector.” Higher fuel consumption occurred between 2013-2014 due to decreases in fuel prices.
To reduce your company’s reliance on fossil fuels for transportation, educate staff on available low-carbon driving options. If employees use company cars, strive to assign vehicles with low carbon impact. Additionally, if employees lease vehicles, offer incentives to drive cars with less impact, such as electric, hybrid and other low-carbon vehicles.
2. Fly wisely and only as necessary.
Airplane travel is responsible for 10% of all greenhouse gas emissions. Flying is unavoidable in certain circumstances, but it’s better to fly wisely when you do.
Consider these ideas on reducing your impact on emissions when flying:
Flying non-stop can cut a single flight’s emissions by as much as half, since you avoid extra landings, takeoffs and taxi time. Without layovers, you get to your client faster and more efficiently as well.
For daytime flights, choose economy. This class impacts the climate less because it allows more people on the plane, and that means less emissions per person.
Bring only one carry-on for your flight. One bag is easier to handle and is less hassle when boarding.
Rather than flying, take the train, if possible.
3. Support biking to work.
In the past decade, there has been a 60 percent increase in the number of employees who bike to work in the United States. More cities are developing bike-share programs, bike lanes and other supportive structures to encourage green commuting.
The development of a cycling culture at work encourages physical exercise, motivation and the building of community in the office. Like carpooling, many people develop bike trains that follow a regular commuting route to and from a destination. Bike trains are developed for leisure, exercise and commuting to and from work.
Give incentives to employees to encourage and reward work-related commuting. Such incentives may be bike racks, gift cards, lunch catering celebrations and the inclusion of support structures that encourage an easier cycling commute (lockers, showers or a bike repair area).
The health impacts of biking improve employee morale as well by reducing or avoiding the tiring, stressful drives that affect an employee’s ability to be productive and happy at work.
4. While in town, invest in using public and alternative transportation.
When visiting an unfamiliar town, choose a low impact rental car or share a cab. Use public transportation to cut down further on business costs and to get to know the area. Some hotels offer bike rentals as well.
When using a form of alternative or public transportation, you’re generally safer than trying to drive, especially when in an unfamiliar area. You’ll also ensure you don’t arrive to a meeting with your client stressed out by road rage, traffic jams or from getting lost.
5. Measure and track sustainability through policy and procedure.
Implement policies and procedures that measure and improve cost, efficiency and environmental impact. Measuring travel spending against the revenue that travel helps generate is important across the board.
Consider booking large trips or regional trips closely together in terms of time and route. Develop policies that reasonably encourage employees to do more within a single trip to conserve time and gas.
Consult with travel managers for CO2 emissions reports from vendors and tips on integrating sustainability into travel programs. Utilize this information to develop procedures to be put into practice to support sustainability initiatives.
Sustainability in business travel is an important issue as corporations analyze their place in global economics in relation to the bigger picture of environmental impact. Sustainability is measurable, and policies that support it do more than save the environment — they improve employee morale, cut unnecessary costs and nurture company culture. Additionally, sustainability builds a positive corporate influence in the world.
One of the most common types of trust we see used in day to day life and in will planning is the bare trust. This type of trust commonly arises naturally after a testator dies leaving assets to their minor children, or other minor beneficiaries who cannot legally inherit until they are 18. This week’s article will cover what a bare trust is and what their advantages and disadvantages are.
Background: Vested and Contingent Interests
Before discussing what bare trusts are, it is important to understand the difference between vested and contingent interests.
If a will makes a gift to a person, they will have a vested interest if they do not need to meet any conditions. If a beneficiary needs to meet a condition (for example reaching a certain age or surviving by a number of days) they will not have a vested interest and will instead have a contingent interest. Once this condition is met, they will receive a vested interest.
What is a Bare Trust?
A bare trust is a trust in which the beneficiary has an immediate and absolute interest in the trust income and capital. They will have a vested interest but the trust assets are held in the names of the trustees. The trustees have no real active duties and hold essentially as a nominee, except where the beneficiary is a minor in which case they will have some duties to perform. Once the beneficiary is 18, the trustees must follow the instructions of the beneficiary, including transferring assets to the beneficiary if that is requested.
A bare trust would commonly be created where assets are left via a will to a minor beneficiary without any age conditions. In such a case the beneficiary has a vested interest and therefore is entitled to the trust assets but they cannot benefit until they have reached 18 and the trustees would hold on bare trusts for them. This may not always be the case and it is possible to create other bare trusts.
Bare Trust Advantages
Bare trusts have a number of advantages: they are simple for testators and trustees to understand and are straightforward to administer. The beneficiary of the bare trust is treated as owning the trust assets for most tax purposes, including inheritance tax, and is treated largely as the absolute owner. This means that anniversary and exit charges do not apply to a bare trust. Similarly, if an interest in the client’s main residence passes via their will to a bare trust and the beneficiary is a descendant, the RNRB can apply.
Bare Trust Disadvantages
If the beneficiary of a bare trust dies before reaching 18, as they have a vested interest the trust assets are treated as owned by the beneficiary. This has two important consequences. Firstly, the trust assets are included in the beneficiary’s estate for IHT purposes and there may be an IHT charge due to this. Secondly, the trust assets will pass into the beneficiary’s estate. As most bare trusts will be benefiting minors, this would mean that the trust assets will pass via the beneficiary’s intestacy. This would not be the case if the beneficiary has a contingent interest in the estate and instead the trust assets would pass back into the testator’s estate and there would not be any IHT charge on the beneficiary’s death.
Beneficiaries of a bare trust are entitled to take control of their share of the trust assets at 18, and if they demand that the trustees release assets to them, the trustees cannot refuse. This is the case even if the bare trust is written so that the trustees would hold on trust until a later age. Bare trusts are therefore not suitable where a testator wishes to delay a beneficiary’s inheritance beyond 18. In exceptional cases, the trustees may be able to use a power of advancement (such as the one contained in S32 Trustee Act 1925) to prevent a beneficiary from taking control when they reach 18. Where a testator does wish to delay a beneficiary’s inheritance past the age of 18, other options such as age-contingent gifts or discretionary trusts should be considered.
Sometimes it *is* just about wind turbines and home retrofits.
In 2007, Glenn Johnson, a Surrey resident, founded Endurance Wind Power. Less than 10 years later, his wind turbine manufacturing and energy generation business employs 155 educated professionals – and the majority live in Surrey, where the company is based.
This company is just one component of the clean tech hub that the City of Surrey is hoping to establish in the next five years as a way to diversify the local economy and create local job opportunities for the half-million people living in Canada’s fastest-growing municipality. Simon Fraser University’s Surrey campus has partnered with the City and is investing heavily in clean tech research, bolstering this opportunity.
A burgeoning industry
Endurance Wind Power, along with other emerging Surrey clean tech businesses, is part of a larger trend. High tech jobs, and specifically those in clean technology, have been in the news a lot lately – and for good reason. They’re fast becoming one of BC’s core economic pillars.
According to a 2012 GLOBE report, BC’s clean economy – which the report defines as clean energy supply & storage, clean transportation, green building and energy efficiency – was responsible for over 123,000 jobs and $15b in GDP in 2011. This is about equal to the number of jobs created in tourism, and six times the number of BC jobs in oil, mining and gas.
The same report noted a few leading examples of areas where BC is leading the way – including lithium-ion batteries as back-up power for telecoms, ultra efficient fiberglass windows and doors, and intelligent transportation systems. It also acknowledged provincial policies like B.C.’s Greenhouse Gas Reduction Targets Act and the Carbon Tax for contributing to the sector’s growth.
A new 2014 report by Analytica Advisors echoed these findings – reporting that the more narrowly defined clean tech sector was responsible for 41,000 jobs in 2012, with revenues increasing nine percent over 2011. Compared to the oil, mining and gas sector, which grew less than one percent over the same time period, this is highly significant. One company alone – Ballard Power Systems – employs 420 people in its Burnaby offices.
Defining the clean tech sector
Because this sector is still new and growing, there is no definitive definition of a clean tech job. New innovations are being made every day, and the sector is evolving quickly. What’s clear is that clean tech companies are diverse, serving a wide range of needs from energy storage to cleaning up contaminated soil, and from renewable energy to green consumer products. They are also contributing significantly to BC’s exports.
According to the GLOBE Clean Economy report, right now the major clean tech areas in BC include:
- Biofuels and biochemicals
- Power generation
- Energy infrastructure
- Green building and energy efficiency
- Process efficiency and abatement
- Remediation and soil treatment
- Waste and recovery
- Water and wastewater
Analytica also includes green consumer products in their assessments of clean tech jobs.
Funding a clean tech future
Mark Betteridge, the CEO of Discovery Parks and a Director of the British Columbia Technology Industry Association recently gave an industry speech where he highlighted that high-tech is the biggest growing sector of BC’s economy & that BC is the start-up centre of Canada. According to Betteridge, who is himself an angel investor, 600 startups have received angel or venture capital over the past few years just in this province. He also noted that the high tech sector has near full employment in BC, a rarity.
However, clean economy jobs are still vulnerable. The sector is new and relatively risky. A 2011 KMPG Clean Tech report card noted BC’s clean tech sector is one of the most vibrant in North America, but that it could still benefit from more investment in R&D, demonstration projects, and early adoption incentives. And PitchBook’s VC Cleantech 2013 Report reported a significant drop in venture capital into the sector between 2012 and 2013. The GLOBE report echoed these findings, noting that the biggest barriers have been challenges finding skilled workers and lack of investment capital.
It’s clear that the clean tech sector is a strong job creator, and an important part of our export mix. It also has the potential to explode in size, given the right scale of investment. Considering this, it might be wise to make deeper public and private investments in promising local clean tech companies.
The name for the human species is Homo Sapiens, which means the wise man.
But over the last hundred years, psychological studies have found that we may not be that wise and rational.
In economics, man is assumed to be rational, selfish and consistent. However, we know that none of these assumptions is entirely true.
Psychology has also found out that we are not completely rational when it comes to investing. This has led to the branch of Behavioural Finance. However, my bet is that many of us want to be rational when we invest our money but are we capable of doing so?
System 1 and System 2 thinking
Before we go any further, let’s try this question out.
A bat and a ball cost $1.10 in total. The bat costs a dollar more than the ball. How much does the ball cost?
If your answer to this question is $0.10, you are unfortunately wrong, but you are certainly not alone in giving such an answer.
In fact, nearly 60% of students from MIT who sat through this experiment got it wrong as well! This means that even incredibly smart students have been wrong.
The correct answer to this question is $0.05. But the intuitive answer your mind most likely produced was $0.10.
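To see why: if the ball costs x, then the bat costs x + $1.00, so x + (x + 1.00) = 1.10, which gives x = $0.05. The tiny snippet below simply brute-forces the same answer, as an illustration.

```python
# Brute-force check of the bat-and-ball puzzle (prices in cents to avoid
# floating-point issues).
for ball in range(0, 111):          # try every ball price from $0.00 to $1.10
    bat = ball + 100                # the bat costs exactly $1.00 more than the ball
    if bat + ball == 110:           # together they cost $1.10
        print(f"ball = ${ball / 100:.2f}, bat = ${bat / 100:.2f}")
# Prints: ball = $0.05, bat = $1.05
```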
This is a result of the two systems of thinking that we possess. According to psychologists, System 1 is the part of us that makes decisions in a quick, intuitive and unconscious way. This is how many of our decisions are made. This serves us well mostly, but System 1 is also responsible for some behavioural biases, and irrational decisions.
Conversely, System 2 is the part that makes slow, calculated and effortful decisions. System 2 is often associated with rationality, and we use it to make complex decisions.
Our System 1 thinking conjures up the answer “$0.10” intuitively. But to get to the correct answer of $0.05, we need our System 2 to make the actual mental calculations, which is much slower and more effortful.
From this, we can see that behavioural biases do exist, so it would be important to find out how these affect our decision making while investing.
Sunk Cost Fallacy
“Boy don’t waste your food ah!”
Have you ever ordered too much food, regretted it, but still finished the food anyway because you didn’t want to waste it? I have, and I bet you have too.
This, unfortunately, may not be a rational decision, because we are committing the “sunk cost fallacy”. If you are already full from the meal, stuffing yourself with more food is only going to make you feel worse and unhealthy!
In this scenario, we should be making our decisions on possible future actions of how to deal with the extra food, rather than focusing on the fact that we had mistakenly ordered too much food to eat.
How can this be related to investing?
If we had a bad investment decision in the past, we could sometimes mull over it too much and choose to stick with it. We might say to ourselves, “I bought so much of X stock already, I might as well buy some more”.
Or we could say, “I have spent so much time studying this company that if I don’t buy their stocks, I might have wasted my time.”
Notice that the point of irrationality here is when the decision is based on what happened in the past, such as having already invested a lot of money, or time on an investment.
A more rational point of view could be, “I would also invest more in X stocks only if I think X company will perform well” or, “Even if I had spent so much time studying this company, I would only lose money if the company does badly”.
We, therefore, frame our decisions from what can happen in the future to avoid fixating on the past.
The Empathy Gap
When I first started on a diet, I found it hard to imagine myself eating unhealthy food.
But soon after, when I was hungry and a plate of delicious char kway teow was in front of me, I gave in to temptation. While devouring the noodles, I found it hard to imagine myself avoiding unhealthy food — the complete opposite of what I had experienced previously.
Maybe you have had a similar experience yourself, but this is a classic case of the empathy gap: a cognitive bias where people underestimate how differently they will feel and act when they are in a different emotional or mental state.
For example, we overestimate our ability to act properly in a stressful event when we are not stressed, or we struggle to understand why our spouse acted in a certain way when he or she was angry.
How is this related to investing?
We always hear that we should buy low and sell high. When calm and rational, we think that this can be easily done. However, in a recession, we may act in a different way, and find it much harder to follow this advice.
The advice is known by many, but followed by few. There is a gap between our emotional states when we are in and out of a recession.
Are you a below-average driver, an average driver, or an above-average driver?
This question had been posed as a survey to many people before. The strange fact is that the majority of the respondents indicated that they were above-average drivers. This is a classic case of how everyone thinks they are better than the rest, which means that most of these people are wrong in their beliefs.
This sense of over-optimism and over-confidence bias seems to play a huge role in our lives. Several experiments have shown that we tend to believe that we have much more control and that situations are much rosier than they actually are.
Experiments have shown that individuals with impairments to certain emotion-related parts of the brain were able to make much more rational decisions and were not over-confident in their own abilities. Others have concluded that depressed people were more realistic, as they were less over-confident and over-optimistic in assessing situations.
How does this relate to investing?
Who doesn’t like confident investors? The folly of the over-confidence bias should remind us that we may be overestimating our abilities, as well as the situation around us.
Perhaps we could be too over-confident on the performance of a particular stock or investment strategy, or the management of a company could be too over-confident in how their company might perform in the future.
We tend to associate confidence with superiority and good performance, which is why we find ourselves attracted to confident individuals. However, this bias for confidence may not be so helpful in the investing world.
Man may not be as rational as the economists might think.
Although we are biased, I suppose that it is just part of being human and that it’s alright to be biased. We can’t run away from our biases, but at least we can learn to identify them and find ways to cope with them.
I covered several biases in this article, but there are a few more biases which I think deserves our attention too. For investors, I think it would be crucial to know that we are under the spell of such biases. Stay tuned for the next part in this series!
Disclaimer: The information provided by Seedly serves as an educational piece and is not intended to be personalised investment advice. Readers should always do their own due diligence and consider their financial goals before investing in any stock.
Generally speaking, there are two different types of commodities in the world – those that are seasonally produced and those that are continually produced.
The differences between the two have major impacts on the way you analyze supply and demand to determine fair value. Likewise, the metrics used to discuss the fair value of seasonal and continuous production commodities are significantly different.
Seasonally Produced Commodities
As the name suggests, seasonally produced commodities are those that are produced seasonally and not on a consistent basis. These commodities are most commonly agricultural products such as grains and oilseeds. These crops are planted in the spring, allowed to grow during the summer, and harvested in the fall. Once the crop has been harvested the local supply is relatively fixed for a certain period of time – usually until the next crop is harvested.
A great example of this is US corn. In the US, corn is typically planted in the April to May timeframe, grown all summer, and harvested in the fall, around September/October. Once the corn is harvested in the fall, the market will have a fairly good idea of what the supply will be for the next 12 months, and the market will trade around those levels as demand changes.
These seasonal ebbs and flows in supply create a scenario where the price of the commodity both exhibits strong seasonal price trends and is highly dependent on the level of supply remaining before the next crop year, or ending stocks.
Seasonal Price Trends
Seasonally produced commodities tend to exhibit strong seasonal price trends that are caused by the reduction in supply as you move through time from one harvest to another. Logically, this makes perfect sense because around harvest time supply will be at its highest and prices will be at their lowest. As you move through time towards the next harvest the supply gets smaller and smaller causing prices to increase.
Seasonally produced commodities are also highly dependent on their ending stocks. Ending stocks are the amount of supply, or stocks, left over from one crop that can be carried into the next crop. Higher levels of ending stocks represent a scenario where supplies are more ample and prices will tend to be weaker. Likewise, lower levels of ending stocks represent a scenario where suppliers are tighter and prices tend to be more firm.
From an analyst perspective, most people will look at ending stocks in terms of the stocks to use ratio, or S/U. The stocks to use ratio is simply the amount of ending stocks divided by the total use for that year. The result is a percentage that tells you what amount of total use will be carried from one year to the next. The interpretation is very similar to ending stocks in the sense that a large number represents ample stocks and a tighter number represents tighter stocks, but it is a bit more telling. Since the S/U ratio also incorporates the total use, it equalizes the stocks for varying levels of demand. For example:
If corn ending stocks are at 1 billion bushels, it may seem as though supplies are ample, but what if we accounted for total use being at 15 billion bushels? That would mean that the S/U ratio is about 6.7% (1B/15B = 0.067), or that we are carrying only about 6.7% of a year's use into the new crop year, which paints a much tighter supply scenario.
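A minimal sketch of that calculation, using the corn numbers from the example above (the function name is our own):

```python
def stocks_to_use(ending_stocks: float, total_use: float) -> float:
    """Share of a year's total use carried into the next crop year."""
    return ending_stocks / total_use

# The corn example above: 1 billion bushels carried out, 15 billion bushels used.
ratio = stocks_to_use(ending_stocks=1.0, total_use=15.0)   # units: billion bushels
print(f"Stocks-to-use ratio: {ratio:.1%}")                  # ~6.7%
```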
Continuous Flow Commodities
Continuous flow commodities are those that are continuously produced throughout the year. Examples of continuous flow commodities are crude oil, soybean meal, milk, cattle, natural gas, etc. They are produced year round and are not bound by periods of growing and harvest seasons.
A great example of a continuous flow commodity is crude oil. Once a well has been drilled and tapped that well will produce oil continuously for the usable life of the well. It also means that since there are no longer growing periods to account for, or periods where decisions were made and cannot be changed, wells can be turned on and off relatively easily.
The ability to produce a commodity year round and turn the supply on and off relatively easily creates a scenario where ending stocks don’t matter as much and the seasonal trends are driven by demand more than supply. So, instead of looking for ending stocks as a guide to pricing, analysts will look more towards the short term imbalances between supply and demand.
Supply and Demand Imbalances
As I mentioned above, the price of continuous flow commodities relies heavily on short term imbalances between the supply and demand. This creates a scenario where continuous flow commodities tend to be much shorter sighted, and also more volatile, than seasonally produced commodities. For example, of the top 10 most volatile commodities, only 3 are seasonally produced – and the top 6 is entirely made up of continuous flow commodities.
To analyze continuous flow commodities, analysts look for periods where supply is greater than demand, or where demand is greater than supply. Generally speaking, the shorter timeframe data you can get to do this, the better.
Periods where supply is greater than demand are also called surplus periods and usually end in a decrease in price and a subsequent reduction in supply due to the market adjusting to that price change. The reduction in supply is typically caused by an increase, or a build, in storage or a reduction in capacity (plant slowdowns, well-head capping, etc.).
Likewise, periods where demand is greater than supply are called deficit periods and usually end in prices increasing and a subsequent increase in supply due to the market adjusting to the price change. The increase in supply is typically caused by a reduction, or a draw, on storage or an increase in capacity (plants running overtime, new wells being brought online, etc.)
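One simple way to express this surplus/deficit framing in code is sketched below; the weekly supply and demand figures are hypothetical, and real analysis would use much richer, shorter-timeframe data.

```python
# Hypothetical weekly supply/demand balances for a continuous flow commodity.
weeks = [
    {"week": 1, "supply": 102, "demand": 98},
    {"week": 2, "supply": 100, "demand": 103},
    {"week": 3, "supply": 97,  "demand": 101},
]

for w in weeks:
    balance = w["supply"] - w["demand"]
    if balance > 0:
        state = "surplus (downward price pressure)"
    elif balance < 0:
        state = "deficit (upward price pressure)"
    else:
        state = "balanced"
    print(f"Week {w['week']}: {balance:+d} -> {state}")
```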
India is an agriculture-based economy. The contribution of pure agricultural activity to Gross Value Added (GVA) was around 16.5% in the year 2019-20. Apart from this, there is a huge contribution from agri-based industries and trade sectors, such as the manufacture and sale of tractors, equipment, fertilizers and pesticides. For this reason, agriculture is called the backbone of the Indian economy.
For many decades, farmers have faced the problem of not getting a fair price for the products they grow. Lack of knowledge about final markets, exploitation by middlemen and lack of value addition are some of the reasons for lower prices. The government has continuously tried to remove these barriers through various measures such as the formation of APMCs and co-operative societies and the fixation of minimum prices, but the agriculture sector has not achieved the targeted success. In order to make the agriculture sector more competitive and strengthen the economy of farmers, the government has introduced Farmer Producer Companies.
A farmer producer organization is an entity formed by primary producers such as farmers, milk producers, fishermen and craftsmen. It is an organization established by primary producers for their own economic development.
There are many legal forms to establish a PO (Producer organization) such as:
Among these, the Farmer Producer Company and the Co-operative Society are the preferred forms, as they provide an opportunity to share profits with members in the form of dividends and are globally competitive.
The major benefit of forming an FPC is its global competitiveness. A corporate structure carries more credibility in the eyes of people, and farmers can now enjoy the same through FPCs.
Farmer producer companies will act as a bridge between the market and producers. Farmers have the ability and expertise to produce but lack marketing skills and support. Thus, FPCs can be formed to:
This is only an illustrative list, and various other activities that may help farmers develop themselves can be included.
In India, companies are governed by the Companies Act. All companies other than producer companies are governed by the new Companies Act, 2013, while Producer Companies are still governed by the old Companies Act, 1956. Section 581C of the Companies Act, 1956 lays down the conditions for incorporation of a Producer Company.
♣ Any 10 or more individuals (natural persons), each of whom must be a producer, or two or more producer institutions (including co-operative societies), or a combination of ten or more individuals and producer institutions, can form a Producer Company.
♣ There should be a minimum of 5 directors and no more than 15 (where an inter-state co-operative society has been converted into a Producer Company, the company may have more than 15 directors for a period of one year from the date of conversion).
♣ Each director must have a Director Identification Number (DIN), so directors need to file Form DIR-3 along with identity proof, address proof and a photograph.
♣ The desired name of the producer company must be checked for availability with the MCA by applying through the Spice+ facility (earlier the RUN service). The name will generally be approved in 2-5 days if it does not contain any undesirable words as per the Companies (Registration Offices and Fees) Rules, 2014.
♣ Each director who signs the memorandum and articles of association must have a digital signature to sign the documents digitally, so a Digital Signature Certificate (DSC) needs to be obtained.
♣ Once the name is approved, the following documents need to be prepared and submitted to the MCA:
> A Memorandum of Association (MoA) needs to be drafted. The MoA is the document stating the primary and secondary objects of forming the company (reference can be made to Section 581B of the Companies Act, 1956, which points out certain permissible objectives of an FPC).
> Articles of Association (AoA) are to be drafted. The AoA are like the by-laws of the company.
> A utility bill and an NOC from the landlord have to be obtained, along with the rent agreement, for the registered office.
> Directors' consent is to be filed in Forms DIR-2 and DIR-8.
♣ After proper verification of all the documents submitted, the ROC will approve the incorporation and grant the Certificate of Incorporation.
The minimum authorized capital of the company should be Rs. 5 lakh, whereas the minimum paid-up capital is Rs. 1 lakh. Authorized capital is the maximum limit up to which capital can be raised as per the Memorandum of Association of the company, whereas paid-up capital is the actual amount received by the company from its subscribers.
The FPC has to open a bank account within 180 days of its incorporation and deposit the paid-up capital.
The Union Finance Minister, in the Budget Speech for 2013-14, announced two major initiatives to support Farmer Producer Companies (FPCs) viz., support to the equity base of FPCs by providing matching equity grants and Credit Guarantee support for facilitating collateral free lending to FPCs.
♣ Equity grant Fund Scheme:
Equity grant support to eligible FPCs is provided by the SFAC on a matching basis, subject to a maximum of Rs. 10.00 lakh per FPC, provided the FPC has a minimum shareholder membership of 50 farmers.
♣ Credit Guarantee Fund Scheme
The main objective of the Credit Guarantee Fund Scheme is to provide credit guarantee cover to eligible lending institutions, enabling them to provide collateral-free credit to FPCs by minimising their lending risk on loans not exceeding Rs. 100.00 lakh. The maximum guarantee cover is restricted to 85% of the eligible sanctioned credit facility or Rs. 85 lakh, whichever is lower, as sketched below.
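The cap can be expressed as a simple calculation. The sketch below is illustrative only: the function name and the example figures are assumptions made here, and amounts are expressed in lakhs of rupees.

```python
def credit_guarantee_cover(sanctioned_credit_lakhs: float) -> float:
    """Maximum guarantee cover under the Credit Guarantee Fund Scheme:
    85% of the eligible sanctioned credit facility or Rs. 85 lakh,
    whichever is lower (the scheme applies to loans up to Rs. 100 lakh)."""
    if sanctioned_credit_lakhs > 100:
        raise ValueError("the scheme covers loans not exceeding Rs. 100 lakh")
    return round(min(0.85 * sanctioned_credit_lakhs, 85.0), 2)

print(credit_guarantee_cover(60))   # 51.0 lakh (85% of Rs. 60 lakh)
print(credit_guarantee_cover(100))  # 85.0 lakh (capped at Rs. 85 lakh)
```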
♣ Tax deductions:
With a view to encouraging an enabling environment for the aggregation of farmers into FPOs and taking advantage of economies of scale, the Government announced in the Union Budget 2018-19 a 100% tax deduction for FPOs with an annual turnover of up to Rs. 100 crore, for a period of five years from financial year 2018-19.
Prior to the amendment of the APMC Act, farmers needed to sell their produce through APMCs. In order to remove this barrier to trade, the APMC Act has been amended, and farmers can now sell their produce directly to consumers or in outside markets. This has removed many agency commissions and also helps farmers avoid middlemen.
Currently around 3,500 FPOs are operating in India and around 3,000 are under incorporation. The Government proposes to incorporate 10,000 FPOs across the nation to help farmers achieve economies of scale.
20/09/2019 - Taxing polluting sources of energy is an effective way to curb emissions that harm the planet and human health, and the income generated can be used to ease the low-carbon transition for vulnerable households. Yet 70% of energy-related CO2 emissions from advanced and emerging economies are entirely untaxed, offering little incentive to move to cleaner energy, according to a new OECD report.
As world leaders gather for a UN Summit on climate change amid mounting public pressure for action, a preview of Taxing Energy Use 2019 shows that for 44 countries accounting for over 80% of energy emissions, taxes on polluting sources of energy are not set anywhere near the levels needed to reduce the risks and impacts of climate change and air pollution.
Taxes on road fuel are relatively high yet rarely fully reflect the cost of environmental harm, especially with some road transport sectors offered preferential rates. Taxes on coal – which is behind almost half of CO2 emissions from energy – are zero or close to zero in most countries. Taxes are often higher on natural gas, which is cleaner. For international flights and shipping, fuel taxes are zero, meaning long-haul frequent flyers and cargo shipping firms are not paying their fair share.
“We know we need to burn less fossil fuel, but when taxes on the most polluting fuels are zero or close to zero, there is little incentive to change,” said OECD Secretary-General Angel Gurría. “Energy taxes are not the sole solution, but we can’t curb climate change without them. They should be applied fairly and used to improve well-being and ease the energy transition for vulnerable groups.”
Across the 44 countries studied, 97% of energy-related CO2 emissions outside of road transport are taxed far below levels that would reflect damage to the environment. Only four countries (Denmark, the Netherlands, Norway and Switzerland) tax non-road energy above EUR 30/t CO2, considered a low-end estimate of the costs to the climate of carbon emissions. Several countries have even lowered energy taxes in recent years.
Adjusting taxes, along with state subsidies and investment, is vital to encourage a shift to low-carbon energy, transport, industry and agriculture. Given the difficulties of making big changes without hurting industries or communities, a new strand of OECD work shows how factoring in potential synergies and trade-offs between emission reduction goals and broader societal objectives such as better health, jobs and affordability of services can increase the incentives for swift action to cut emissions.
New OECD analysis that will be presented at next week’s UN Summit, Accelerating Climate Action: Refocusing Policies through a Well-Being Lens, says focusing on goals like clean air, healthy eating, accessibility of services and employment and inclusive fiscal reform could make it easier to introduce changes that will end up accelerating the low-carbon transition while improving lives.
Mr Gurría urged governments in July to face up to growing anger, particularly among young people, at backsliding in some countries on decarbonising economies even as emissions from energy are at an all-time high. While energy taxes stagnate, the 2019 OECD Inventory of Support for Fossil Fuels finds that government support for fossil fuel production and use in the 44 countries studied (OECD and G20 plus Colombia) was USD 140 billion in 2017, with subsidies rising in some countries.
Taxing Energy Use 2019 says improving tax policy so it gives a fair chance to low-carbon technologies would help shift investment to greener options.
The report – which looks at three types of tax on energy (excise taxes on fuels, carbon taxes and taxes on electricity use) in areas like power and heat generation, industry and transport – says governments should ensure any tax rises resulting from tax reforms do not hurt vulnerable households, firms or workers. Extra tax revenues can be used for social purposes such as lowering income taxes, increasing spending on infrastructure or health, or funding direct transfers to households.
(The full report with country profiles will be available in October.)
For further information, journalists should contact Catherine Bremer in the OECD Media Office (+33 1 45 24 97 00).
Working with over 100 countries, the OECD is a global policy forum that promotes policies to improve the economic and social well-being of people around the world.
The fractional reserve banking industry differs from the several industries explored above in that it does not directly produce the environmentally problematic phenomena typically associated with those 'other' industries (for example, the fossil-fuel industry). The 'other' industries can be shown to be directly engaged in, for example, deforestation, which links directly to several other ecological issues such as the loss of topsoil and the defilement of fresh water sources. The fractional reserve banking industry, quite contrarily, deals with fiat currency, which is to say a debt-based currency that is largely digitised and therefore not tangible in the same way as, say, a bulldozed field. However, the fractional reserve banking system, i.e. the monetary system 'regulated' by reserve and commercial banks in all nations, fuels the need to pay back the loans, and the interest on those loans, inherent in the existence of fiat currency in the first place. The fractional reserve banking system, in this ecologically sensitive view, is a vicious circle in which constant economic growth is demanded by default, largely in the form of the continued growth of various industries traditionally associated with monetary growth, the fossil-fuel industry being the main hub of such growth historically.
An interesting quote from the ‘governor’ of the US Federal Reserve in the 1930s and 1940s, Marriner Stoddard Eccles1, draws attention to the counter-intuitive notion that money is debt – this is an important concept to begin with when considering where money comes from and how the fractional reserve monetary system works to create new money. Questioned by congressman Patman about a past Federal Reserve purchase of U.S. government bonds, Eccles made the point that if “there were no debts in our money system, there wouldn’t be any money.” (Robinson 2009: 184 – 185) This is due to two main reasons, both of which will be explained below before turning briefly to the ecologically problematic implications of the system.
The first reason involves government ‘securities’: according to businessdictionary.com2, government ‘securities’ are bonds, “notes, and other debt instruments sold by a government to finance its borrowings. These are generally long-term securities with the highest market ratings.” Note the use of the phrase debt instrument – securities are debt instruments issued by governments to other parties in order to achieve given ends. In the case of the sale of these debt instruments from a government to a reserve bank (via the government treasury and bond traders), the ‘given end’ is the creation of a monetary deposit in selected commercial banks, which the reserve bank ‘creates’: the reserve bank purchases the government securities, which is to say that it purchases debt-instruments from the government, and in turn the reserve bank will authorise credit in selected commercial banks with deposits of whatever amounts – more on this below. For now, it is important to consider this slightly different definition of ‘government security’ from a second source3: “A government debt obligation (local or national) backed by the credit and taxing power of a country; as a result, there is very little risk of default.” it is clear again in this second definition of government security that what a government trades with a reserve bank is an obligation rather than than anything immediately tangible. The counter-intuitive aspect of the above ‘transaction’ is that the reserve bank does not buy government securities with money it has itself accrued and stored in its own reserves. This is explained in Modern Money Mechanics4, a booklet published by the Federal Reserve Bank of Chicago detailing the workings of the modern monetary system. To quote the opening line of the online booklet, the “purpose of this booklet is to describe the basic process of money creation in a ‘fractional reserve’ banking system” (In the example below, the reserve bank is the American one, called the Federal Reserve; from now on, ‘reserve bank’, ‘Federal Reserve’, and ‘the Fed’, will all be used interchangeably):
“Suppose the Federal Reserve System, through its trading desk at the Federal Reserve Bank of New York, buys $10,000 of Treasury bills from a dealer in U. S. government securities. In today’s world of computerized financial transactions, the Federal Reserve Bank pays for the securities with an “electronic” check drawn on itself. Via its “Fedwire” transfer network, the Federal Reserve notifies the dealer’s designated bank (Bank A) that payment for the securities should be credited to (deposited in) the dealer’s account at Bank A. At the same time, Bank A’s reserve account at the Federal Reserve is credited for the amount of the securities purchase. The Federal Reserve System has added $10,000 of securities to its assets, which it has paid for, in effect, by creating a liability on itself in the form of bank reserve balances.”
The final words of the above paragraph need to be reiterated, as they summarise the first reason why money in the global fractional reserve system is debt: the Fed pays for government securities “by creating a liability on itself in the form of bank reserve balances”; in other words, the Fed ‘pays’ for the purchase of government securities via the creation (credit) of the reserve deposit in the bank – nothing is deducted from either the Fed nor the Bank, while government is left with a debt obligation. Neither the government nor the Fed, it can therefore be argued, work with anything tangible in the creation of money; instead, the government uses ‘obligations’ (in the commercial sectors these are called ‘securities’ or ‘bonds’ – debt instruments) while the Fed ‘purchases’ such obligations by creating ‘liabilities on itself’. This strange interaction between the Fed and the government is the first step in adding new money to the money supply. But the Fed partly makes its money from interest repaid on the original amount it draws on itself; this is stated at federalreserveeducation.org5: “The Federal Reserve’s income is derived primarily from the interest on U.S. government securities that it has acquired through open market operations.” It is not inaccurate to state that the Fed earns interest from creating money out of nothing – later on in this section, it will be seen that this amounted to $700 billion in the US by 2008, and grew to $2 trillion in 2013; in response to such a system, Wright Patman (the congressman mentioned above who questioned Eccles in the 1940s) said on September 29, 1941, as reported in the Congressional Record of the House of Representatives (pages 7582-7583), records which are quoted at the website michaeljournal.org6:
“When our Federal Government, that has the exclusive power to create money, creates that money and then goes into the open market and borrows it and pays interest for the use of its own money, it occurs to me that that is going too far. I have never yet had anyone who could, through the use of logic and reason, justify the Federal Government borrowing the use of its own money… I am saying to you in all sincerity, and with all the earnestness that I possess, it is absolutely wrong for the Government to issue interest-bearing obligations. It is not only wrong: it is extravagant. It is not only extravagant, it is wasteful. It is absolutely unnecessary.”
The same process is summarised at globalresearch.ca7:
“When the government is short of funds, the Treasury issues bonds and delivers them to bond dealers, which auction them off. When the Fed wants to “expand the money supply” (create money), it steps in and buys bonds from these dealers with newly-issued dollars acquired by the Fed for the cost of writing them into an account on a computer screen. These maneuvers are called “open market operations” because the Fed buys the bonds on the “open market” from the bond dealers. The bonds then become the “reserves” that the banking establishment uses to back its loans.”
This process indebts governments to reserve banks – in trading government securities for the computerised deposits of money created ‘from nothing’, so to speak, the fed is indeed attaining a ‘bond’ from the government, and a bond is a promise to repay; the following definition from investopedia.com8 draws attention to the fact that ‘security’ and ‘bond’ are synonymous, and that a promise to repay is inherent in the ‘transaction’:
“A bond (or debt obligation) issued by a government authority, with a promise of repayment upon maturity that is backed by said government. A government security may be issued by the government itself or by one of the government agencies. These securities are considered low-risk, since they are backed by the taxing power of the government.”
Further defining ‘treasury bonds’, the same website9 draws attention to the obligation to pay interest to the fed by the government: “A marketable, fixed-interest …government debt security with a maturity of more than 10 years. Treasury bonds make interest payments semi-annually and the income that holders receive is only taxed at the federal level.” The following definition of government security, already partly used in this section, highlights the fact that the interest paid to the fed by the government comes from general taxes of a country: “A government debt obligation (local or national) backed by the credit and taxing power of a country”. In short, the taxes collected from a citizenry pay for the interest owed to the Fed, as evident is the following comment from Patman10:
“We have what is known as the Federal Reserve Bank System. That system is not owned by the Government. Many people think that it is, because it says `Federal Reserve’. It belongs to the private banks, private corporations. So we have farmed out to the Federal Reserve Banking System that is owned exclusively, wholly, 100 percent by the private banks — we have farmed out to them the privilege of issuing the Government’s money. If we were to take this privilege back from them, we could save the amount of money that I have indicated in enormous interest charges [i.e. taxation].”
To repeat, as stated above, the US debt to the fed reached 2 trillion dollars in August 2013 – this is the contemporary “amount of money” that Patman would be referencing if he were commenting ‘today’.
The second reason why Eccles's counter-intuitive statement above – that if "there were no debts in our money system, there wouldn't be any money" – is true is that banks are only required to hold a 'reserve' of actual deposits of around ten per cent, as stated in Modern Money Mechanics11: "the reserve requirement against most transaction accounts is 10 percent". A footnote provides more specific information on this limit: "For each bank, the reserve requirement is 3 percent on a specified base amount of transaction accounts and 10 percent on the amount above this base". This is mentioned because, when looking for information on the South African Reserve Bank (SARB) for comparison, its minimum reserve ratio is identified as only 2.5 per cent in a document issued by the SARB12: "the Reserve Bank … introduced one reserve ratio of 2,5 per cent on the total liabilities of banks". Whether one is talking about 'base amount' reserve limits of 2.5 per cent or 3 per cent, or a 10 per cent 'above base' limit, does not really matter, because the fractional reserve monetary system entails the creation of money from debt regardless of minimum reserve ratios, and this is the case for all countries using fiat currency, which is to say every country on Earth. For the purposes of this study, the example amount used in the Modern Money Mechanics booklet – $10,000 (American dollars) – will be retained. Two graphs in the .pdf version of Modern Money Mechanics13 outline how money is loaned out – and indeed created – under a ten per cent reserve limit.
The graphs detail a process wherein, from an initial deposit of $10,000, new money to the value of $100,000 is created. It is explained in Modern Money Mechanics14 how this is possible:
“Of course, they [- the banks -] do not really pay out loans from the money they receive as deposits. If they did this, no additional money would be created. What they do when they make loans is to accept promissory notes in exchange for credits to the borrowers’ transaction accounts. Loans (assets) and deposits (liabilities) both rise by $9,000. Reserves are unchanged by the loan transactions. But the deposit credits constitute new additions to the total deposits of the banking system.”
As the first graph shows, after the amount of $10,000 has been deposited and $9,000 of it loaned out under the 10% reserve limit, a bank will (presumably) be paid back the $9,000 by whoever borrowed it, so the bank counts the $9,000 loan amount as part of its deposits and works with $19,000 as the new deposit amount. Ten per cent of that $19,000 becomes the new reserve amount of $1,900, this time leaving $8,100 as 'excess' on the original $10,000 deposit, and this $8,100 is loaned out alongside the $9,000 already loaned. The $8,100 will also presumably be paid back, so again it is considered part of the bank's deposits. Under the 10% reserve requirement, the gradually decreasing 'excess' will continue to be loaned against the original $10,000 amount until the excess value reaches zero, by which time new 'money' has grown to $100,000, i.e. ten times the original deposit. As the above quote from Modern Money Mechanics reveals, this would not be possible if a bank actually gave any of the original deposit to someone as the loan; rather, in participating in the fractional reserve process, someone who takes a loan creates the amount of money they borrow simply by 'transacting' with the bank. Of course, the person taking the loan also agrees to pay back interest on the loan amount when they sign for the 'loan', but as has been seen, money to pay the interest can come into existence 1) only when the government becomes further indebted to the Fed when the latter buys government securities to back the creation of new bank deposits (a digital action only, one that 'costs' the Fed nothing but guarantees it long-term interest repayments), and 2) when more loans are granted by banks. Astronomical levels of debt, including interest on loans, have therefore accrued since the fractional reserve system began – world debt is above 54 trillion US dollars and rising15. Bearing in mind the money-creation process described above, and considering the 54+ trillion dollar rising global debt, the following comment from Professor Antal E. Fekete, founder of the "New Austrian School", is provided with some context16:
“The world economy, sagging as it is under the weight of its debt tower and fast depreciating irredeemable currencies, is clearly on its way to self-destruction. The forcible elimination of, first, silver and then a hundred years later of gold, from the monetary system removed the only ultimate extinguishers of debt we have. In consequence, total debt can only grow, never contract. The process is hidden since the unpaid and unpayable debt is accumulating as sovereign debt of governments. The world is deluding itself that sovereign debt can increase indefinitely as governments can extend its maturity indefinitely. In 2008 we had the wake-up call that it cannot.”
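The arithmetic of the expansion process described above can be sketched in a few lines of Python. This is an illustrative sketch only: it assumes that every loan is redeposited in full and that the reserve requirement stays at ten per cent, and the function name is chosen here for illustration rather than taken from any of the sources cited.

```python
def deposit_expansion(initial_deposit: float, reserve_ratio: float, rounds: int = 200) -> float:
    """Sum the deposits created when each new deposit is split into required
    reserves and an 'excess' that is lent out and then redeposited."""
    total_deposits = 0.0
    new_deposit = initial_deposit
    for _ in range(rounds):
        total_deposits += new_deposit
        excess = new_deposit * (1 - reserve_ratio)  # portion available to lend
        new_deposit = excess                        # the loan comes back as a fresh deposit
    return total_deposits

print(round(deposit_expansion(10_000, 0.10)))  # ~100000
print(10_000 / 0.10)                           # 100000.0 -- the geometric-series limit
```

The closed-form limit is simply the initial deposit divided by the reserve ratio, which is why a $10,000 deposit supports roughly $100,000 of deposits under a ten per cent requirement.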
The Bank of England released two documents in 2014, one called ‘Money in the modern economy: an introduction’17, the other ‘Money creation in the modern economy’18, in which the above information about the creation of money is clarified and corroborated. The second document, for example, begins with the words, “This article explains how the majority of money in the modern economy is created by commercial banks making loans.” In the first, the following is found:
“Most money in the modern economy is in the form of bank deposits, which are created by commercial banks themselves… When a bank makes a loan to one of its customers it simply credits the customer’s account with a higher deposit balance. At that instant, new money is created…”
The second article provides further corroborative information, information that succinctly shatters the ‘common’ conception that when a bank loans money to a customer, it does so by lending out money that has been deposited by other customers: “rather than banks lending out deposits that are placed with them, the act of lending creates deposits — the reverse of the sequence typically described in textbooks.”
The scale of the sums involved is described at globalresearch.ca19:
“The website of the Federal Reserve Bank of New York explains that as money is redeposited and relent throughout the banking system, this 10% held in “reserve” can be fanned into ten times that sum in loans; that is, $10,000 in reserves becomes $100,000 in loans. Federal Reserve Statistical Release H.8 puts the total “loans and leases in bank credit” as of September 24, 2008 at $7,049 billion. Ten percent of that is $700 billion. That means we the taxpayers will be paying interest to the banks on at least $700 billion annually – this so that the banks can retain the reserves to accumulate interest on ten times that sum in loans.”
To reiterate: $700 billion owed to banks in interest in 2008, based on a ‘transaction’ between the US government and the US federal reserve where the former ‘traded’ debt-instruments for the latter’s electronic ‘creation’ of deposits in selected banks (by August 2013, this amount exceeded 2 trillion dollars20 – a consequence of the unprecedented rates at which the USA has been issuing new money via the processes described here since the financial crisis of 2008). The ‘deposit’ of computerised money by the fed into a commercial bank after the securitisation process, a process whereby government is indebted to the fed (reason one above supporting the view that money is debt), is part of the creation of the said bank’s reserves; such a deposit only exists electronically, based on a promise by government to honour its ‘debt’ to the fed via taxation, but the said commercial bank counts the electronic sum of money as part of its reserves. So part of the ten per cent ‘reserves’ that a commercial bank loans out in the fractional reserve system described in the above paragraphs is the ‘money’ ‘created from debt’ and is used to create more ‘money from debt’ through the counter-intuitive bank-lending fractional reserve process – indeed, the only way to make sense of how any of this works is to view money as debt, as Eccles did. Patman realised this, a realisation that sparked the kind of response from him already seen in this section, as well as the following one, ‘introduced’ by globalresearch.ca21 at the opening of this quote:
“In another bit of sleight of hand known as “fractional reserve” lending, the same reserves are lent many times over, further expanding the money supply, generating interest for the banks with each loan. It was this money-creating process that prompted Wright Patman, Chairman of the House Banking and Currency Committee in the 1960s, to call the Federal Reserve “a total money-making machine.” He wrote:
“When the Federal Reserve writes a check for a government bond it does exactly what any bank does, it creates money, it created money purely and simply by writing a check.”
What does any of the above information about the creation of the majority of money in circulation by commercial banks making loans have to do with the ecological crisis? An initial glimpse of an answer can first be seen with the World Bank22: one document issued by it contains the following: “The world economy needs ever-increasing amounts of energy to sustain economic growth”. Economic growth is measured in numbers that increase as the money supply does, but it has been shown above that as the money supply is increased, so is global debt (money is debt), inherent in which is an obligation to pay money back (which requires more money expansion/creation, entailing more debt), hence constant expansion of lucrative industrial activity that is historically dominant in global business. This kind of industrial activity comes in many forms, and it has been shown above that some of the largest ones (for example, the fossil-fuel industry) have devastating consequences for the ecology of the planet.
With world debt at over 54 trillion dollars23 in mid-August 2014, it is not unreasonable to state that recession is always in the background of economic discourse – indeed, various countries have been in, still are in, or are close to, a state of recession since the 2008 financial crisis. Positivemoney.org outlines the following ecological consequence of such a situation:
“One direct link between the current monetary system and the environment is the effect that recessions have on environmental regulation and investing in the long term. In a recession it is common to hear the argument that costs to businesses are too high due to regulations which are represented as onerous, and that the relaxation of these regulations would allow businesses to hire, resulting in reduced unemployment and increased output.”
“Although the validity of this argument is debatable, it is propagated by those who believe it to be true, by those who see the recession as an opportunity to lower their costs, and by those who did not believe the regulations were required in any case. While the benefits of environmental regulations accrue over the long-term, the government’s chances of re-election usually hinge on the short-term health of the economy. As such the long-term environmental benefits of regulation often lose out to short-term political and economic considerations.”
Furthermore, it is pointed out at Positivemoney.org quite simply that the current monetary system requires constant growth. Constant economic growth implies lucrative activity, much of which is again in the form of the expansion of existing industries of the type already commented upon above. The monetary system, according to the aforementioned source, is engaged in constant economic growth in four ways; to quote directly from the source:
Debt repayments: since loans have to be repaid in instalments on fixed dates people are incentivised to pursue activities that provide quick returns. People pay off debt by producing more goods and services. Higher levels of debt incentivise higher levels of growth.
Asset price bubbles occur as banks create money through lending into assets they can receive the largest returns on, e.g. housing. In order to maintain standards of living when faced with an increase in the cost of essentials e.g. rent, individuals must either work more in order to pay the higher prices, or borrow more to make up the difference. Both borrowing and working more increase economic growth.
Loan repayments: when loans are repaid, money is destroyed and the money supply shrinks. This generally results in a self-reinforcing recession. To avoid this, new loans need to be made simultaneously, increasing the need for growth as above.
Indebtedness in society is liable to increase economic activity, as individuals struggle to pay off the interest on their debt. In other words, debt drives growth.
The need for continued economic growth fuelled partly by the fractional reserve money system is commented on at wiki.mises.org24; the two "critics" are listed as David Korten and Henri Monibot: there "are also critics… who contend fractional reserve banking (by creating a necessity for indefinite economic growth) leads to environmental destruction and a sudden, catastrophic depletion of the earth's natural resources as the unsustainable, exponential consumption of the world's scarce natural resources reaches its inevitable limits."
Such information contextualises the following statement made by neweconomics.org25: “From climate change to the financial crisis it is clear the current economic system is not fit for purpose”. It is pointed out at the same site26 that “there are serious questions as to whether a relatively unregulated system dominated by private money creation in the form of interest bearing debt is best suited to the challenges facing modern humanity.”
[Note: Extensive information about the counter-intuitive process of money creation and the consequences of the process can be found at wiki.mises.org, a site that describes itself as follows: “The Mises Institute is the world’s largest, oldest, and most influential educational institution devoted to promoting Austrian economics, freedom, and peace in the tradition of classical liberalism. Since 1982, the Mises Institute has provided both scholars and laymen with resources to broaden their understanding of the economic school of thought known as Austrian economics. This school is most closely associated with our namesake, economist Ludwig von Mises. We are the worldwide epicenter of the Austrian movement.” This is not a conspiracy-based website; indeed, it is highly credible and details economic views from various credible sources. The research conducted in this section is fully corroborated at the site, and dismissing the subject of enquiry as a conspiracy is not academically or logically viable.]
1. http://www.federalreservehistory.org/People/DetailView/75 (accessed 2 August 2014)
2. http://www.businessdictionary.com/definition/government-securities.html#ixzz3A4L8qARY (accessed 12 August 2014)
3. http://financial-dictionary.thefreedictionary.com/Government+Security (accessed 12 August 2014)
5. http://www.federalreserveeducation.org/faq/topics/fed_basics.cfm (accessed 12 August 2014)
7. http://www.globalresearch.ca/who-owns-the-federal-reserve/10489 (accessed 12 August 2014)
8. http://www.investopedia.com/terms/g/governmentsecurity.asp (accessed 14 August 2014)
9. http://www.investopedia.com/terms/t/treasurybond.asp (accessed 14 August 2014)
13. http://liberty-tree.ca/research/Modern.Money.Mechanics (accessed 14 August 2014)
15. http://www.economist.com/content/global_debt_clock (accessed 16 August 2014)
16. http://wiki.mises.org/wiki/Criticism_of_fractional_reserve_banking (accessed 16 August 2014)
19. http://www.globalresearch.ca/who-owns-the-federal-reserve/10489 (accessed 14 August 2014)
21. http://www.globalresearch.ca/who-owns-the-federal-reserve/10489 (accessed 14 August 2014)
23. http://www.economist.com/content/global_debt_clock (accessed 15 August 2014)
24. http://wiki.mises.org/wiki/Criticism_of_fractional_reserve_banking#cite_ref-212 (accessed 16 August 2014)
Whether you’re an executive at a big company or a new investor, it’s important to know about dividends, and how to calculate the dividend payout ratio.
A dividend is a portion of the company’s profit that they give back to investors. Dividends can be reinvested, helping investors maximize their returns over time. Knowing what to expect from one’s investments when dividends are paid out can be helpful when devising a strategy.
Learning how to calculate dividend payout ratio is easier than investors might expect. Before learning to use and calculate the dividend payout ratio formula, it’s important to understand how dividends work and what ratios are.
What is the Dividend Payout Ratio?
Dividends are small cash payments made to a company’s shareholders. In effect, a company takes a portion of its quarterly profits, divides it up among its shareholders, and pays it out in the form of dividends. The remainder is reinvested back into the business.
If an investor owns a stock or fund that pays dividends, they can expect a small payment from that company quarterly, roughly once every three months. The actual dividend payout ratio varies widely and depends on the individual stock or company in question. But generally speaking, a 15% or 5% payout ratio, like those calculated in the examples below, would be considered fairly low. Average payout ratios tend to be closer to 40%.
A dividend payout ratio is a comparison between a company's dividend payout and its profits. When a company reports earnings, it also reports its total profits for the time period (usually a quarter). From those profits, a company will pay out dividends. If a company's total profits were $100 and its dividend payout were $1, the dividend payout ratio would be $1 to $100, or 1%.
In finance, ratios are pretty common. While the focus here is on dividend payout ratios, another common one is the price-to-earnings ratio, which is a comparison of a company’s share price to its earnings per share.
The Dividend Payout Ratio Formula
There are a few ways to calculate the dividend payout ratio. But probably the simplest method is to take the total dividends paid out over a year (the sum of four quarters) and divide that figure by the net income, or earnings, over the same period.
To calculate a dividend payout ratio, the equation looks like this:
Dividends paid / Net income
There are other calculations that can also determine the ratio. For example, an alternative formula uses dividends per share and earnings per share as variables:
Dividends per share / Earnings per share
A third formula loops in a retention ratio, which tells us how much of a company’s profits are being retained for reinvestment, rather than paid out in dividends. Here’s the formula for the retention ratio:
(Net income – Dividends paid) / Net income
Then, the dividend payout ratio can be calculated by subtracting the retention ratio from one:
Dividend payout ratio = 1 – Retention ratio
For simplicity’s sake, it’s probably easiest to stick to the first formula. That’s because the only variables needed to conduct the operation are the total dividends paid and a company’s net income. Both of those variables should be relatively easy to find in a company’s financial statements, like an earnings report or annual report.
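For readers who prefer to see the formulas as code, the short sketch below implements each of them in Python. It is illustrative only: the function names are chosen here for clarity rather than taken from any particular library, and the inputs are assumed to be figures pulled from a company's financial statements.

```python
def payout_ratio(dividends_paid: float, net_income: float) -> float:
    """Dividend payout ratio = dividends paid / net income."""
    return dividends_paid / net_income

def payout_ratio_per_share(dividends_per_share: float, earnings_per_share: float) -> float:
    """Equivalent per-share form: dividends per share / earnings per share."""
    return dividends_per_share / earnings_per_share

def retention_ratio(dividends_paid: float, net_income: float) -> float:
    """Retention ratio = (net income - dividends paid) / net income."""
    return (net_income - dividends_paid) / net_income

def payout_from_retention(dividends_paid: float, net_income: float) -> float:
    """Dividend payout ratio = 1 - retention ratio."""
    return 1 - retention_ratio(dividends_paid, net_income)
```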
How to Calculate the Dividend Payout Ratio
Here’s an example of how to calculate the dividend payout ratio: Company X releases its annual report that shows earnings of $100 for the year, and that it issued $15 to shareholders in the form of dividends. Those are the two variables needed to calculate the dividend payout ratio: simply divide the total dividends paid by the total earnings:
15 / 100 = 0.15, or 15%
The dividend payout ratio in this example is 15%. That means that 85% of Company X’s earnings for the year were retained and reinvested by the company, while 15% of its earnings were returned to shareholders in the form of dividends.
Here’s another example: Company Z reports that it generated $6 billion in earnings during the year 2019. Also during that year, the company paid out $300 million in dividends to shareholders. Here’s what the formula would look like:
300,000,000 / 6,000,000,000 = 0.05, or 5%
While both of these examples are hypothetical, they are both useful in understanding how the dividend payout ratio is calculated.
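Assuming the payout_ratio helper sketched above, both examples can be checked in a couple of lines:

```python
# Company X: $15 of dividends paid on $100 of earnings
print(payout_ratio(15, 100))                      # 0.15 -> 15%

# Company Z: $300 million of dividends paid on $6 billion of earnings
print(payout_ratio(300_000_000, 6_000_000_000))   # 0.05 -> 5%
```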
Using Dividends as a Part of an Investing Strategy
A dividend payout ratio gives investors more insight into the health of a company. And there are some insights to be learned depending on how high or low the ratio is.
For instance, if the ratio is high—a company pays out relatively high dividends—that may be a sign that a company is established, or not necessarily looking to expand in the near future. Lower ratios can mean the opposite; a company retaining a higher percentage of its earnings may need that money to invest and expand its operations.
For investors, though, dividends are one of the primary ways that their holdings earn them money. Some investors choose to invest in stocks or funds that traditionally pay out high dividends, and often (sometimes monthly). These are often called “dividend stocks,” for obvious reasons. Investors can often choose to automatically reinvest the dividends they do earn, increasing the size of their holdings, and therefore, potentially earning even more dividends over time.
That can be a good strategy for investors looking for ways to generate passive income, and to earn returns from their investments despite market conditions. But it's important to keep in mind that companies can and do cut or suspend their dividends, which can throw a wrench in an investor's strategy. There are also tax implications to consider, as investors do owe taxes on income generated via dividends.
The dividend payout ratio formula is fairly straightforward: divide the company’s net income by the dividends paid. That ratio can give an investor insight into a company, and also help them decide if or how dividends fit into their overall strategy.
As a part of a strategy, investing in dividend stocks may be a way to grow a portfolio with less risk. But those stocks aren’t generally likely to increase in value at the same rate as stocks with lower dividend ratios. Knowing the role dividends play in a particular stock can be helpful in making informed decisions.
Dividends can be particularly attractive bonuses for some investors. SoFi Invest® offers weekly dividends, which gives investors flexibility in folding those dividends into their overall strategy, potentially boosting their saving and investing habits.
The modern money system used in the world today is intricate and, in Western countries, highly integrated. Money is one of the basic units of civilization; without money, the modern development cherished around the globe would not have been achieved. Without trade and the exchange of commodities between people from different regions, the world's population would still be living in segregated units, strained by the lack of some resources and coping through the hard alternatives they once used (Robert, 2002).
With money, a buyer chooses what, where, how and when to acquire the commodities or services they require. All communities in the Western region, in their primitive settings, had their own means of exchange or special-value commodities. In some communities, the value of wealth was stored in herds of livestock, valuable stones or other rare and beautiful products. These valuables were presented during community ceremonies such as weddings. Wealth was envisaged as the possession of large amounts of such valuable commodities relative to other members of the same society. Barter trade was the only mode of exchange during this age.
One form of primitive exchange unit used widely during this age was the shell; it was used for trade in America, Africa and even Asia through the trans-Atlantic and triangle trades. Shells derived from certain islands in the Indian Ocean were in great demand in both India and China. Another commodity used for exchange during this age was wampum, which was valued among the Europeans.
One of the earliest forms of money used for commercial exchange was found in Mesopotamia and Egypt by the 3rd millennium BC. Gold was weighed to ascertain its value and the amount of commodities that would be exchanged for it.
Between the 7th and the 3rd century BC, coin casters in China started using bronze to mould valuable shapes that were used in exchange.
Particular shapes were replicated in most of the Chinese states; two of the most common were the knife blade and the spade with a handle. These shapes bore Chinese symbols to show the identity of their origin, and they were developed particularly during the Zhou dynasty. During the reign of Emperor Shi Huangdi, the first round coins were forged from bronze. They had a characteristic square hole at the centre, which was common to coins from different parts of Europe. The design of these coins remained unaltered for more than two thousand years (Mauss, 2002, p.36).
Banking systems started in Greece in the early 4th century BC, before other societies had developed any form of banking. Entrepreneurs, members of the public and temples were able to make deposits, withdraw from their savings and get loans from these banks. The government set precise standards for determining the quality, size and weight of the currencies. Book-keeping to control and track the banking history of various customers was also developed during this period (Mauss, 2002, p.37).
Book-keeping was developed after it emerged that some customers were forced to carry large loads of coins from one point of transaction to another. By the second century AD, banking had developed to the point where making deposits and taking loans was very easy, and usury became a common practice in the region. Prosperity, in terms of developing an elaborate banking system, continued until the aggressive church outgrew the governing emperors. This brought banking to a halt; people started shying away from the bankers as the church took sterner measures against usury and the interest rates charged on borrowings. The church discouraged usury and made it look like an offence (Davies, 1994, p.54). The banking systems that had been thriving for a few centuries were slowed and almost brought to an end.
Currencies used today in most parts of the world can be traced to Roman history. The coins developed by the Romans during the middle centuries were replicated in many parts of the world and are still in use to this day. The Byzantine Empire developed a golden coin, and through early trade and exchange the coin found its way to other regions. The first shillings were inspired by this coin (History of the World, 2002). Other coins, such as the dinar, had been developed by the year 690. In the next century, France, through King Pepin III, introduced the first silver penny, which was later used as a standard coin across the larger European region. Standardization of the penny with the shilling came later, when the kingdom felt that the currencies needed to be standardized to allow unified exchange in the region.
The standardized ratio was twelve pennies to the shilling and twenty shillings to the pound. The currencies were later changed during the decimalization that followed the French Revolution: the penny remained a currency, the golden shilling became a yardstick coin of value, and the pound became a unit for measuring weight. In other parts of Europe, however, the pound and shilling continued to be used as coins. The origin of the dollar is attributed to a coin known as the thaler in Bohemia (History of the World, 2002).
The first paper money was used in China around the year 920 AD, having developed during the Five Dynasties era. The greatest challenge facing the authorities was producing uniform, unique notes that could not be duplicated; they also had difficulty distinguishing genuine notes from fakes. The authorities in China encouraged the use of notes by making large quantities available in the market. At one point there were so many notes in circulation that their total face value could have bought all the treasures in the world. Soon the society was faced with an inflation problem. Something had to be done to save the economy, and the government banned the use of notes in China in the 15th century to cut out the consequences of having excess notes in circulation.
Paper currency was first used in Europe in the 17th century. Stockholm Banco was established in 1656 by Palmstruch; although the bank was private, it operated under considerable state influence. In 1661, customers were first issued with credit notes in exchange for the silver coins they had previously deposited. In 1666, the bank presented the population with the first regular notes, which were hand-signed by eight people as an authentication measure. Palmstruch printed enough notes to redeem the number of coins in circulation, and people could use the notes as long as they had confidence that they could re-exchange them for coins at the bank. The notes became so popular with the public that Palmstruch printed excess notes to serve the demand. He failed to balance the value of the coins against his notes, and the notes outweighed the coin value. He was convicted in 1666 on account of fraud (Davies, 1994, p.57). The notes were later introduced in many parts of Europe and regulated to avoid a repeat of the previous cases.
Gradually, public confidence in the notes grew, especially once governments held reserves backing the currencies. Notes were used comfortably for the exchange of commodities and services. The risk of the currency reserves becoming deflated receded as governments regulated the printing and supply of notes. The only problem during the initial stages of note development was inflation, as there were no developed mechanisms for controlling and regulating the amount of currency in circulation.
In conclusion, money and currencies have evolved for centuries to reach the point they are at today. Money is a vital tool in trade; with globalization and industrialization around the globe, trade between nations from all parts of the world is inevitable. To achieve the global millennium goals set by world leaders, as well as other individual targets, the world needs well-unified currencies. Over the years, the acceptability of currencies from other regions has improved. Inter-region trade has been made easier and faster by using a few globally accepted currencies: the euro, the American dollar and the British pound (William, 1994, p.30).
In anticipation of global currency exchange, other forms of payment and money transfer have developed. Checks and credit and debit cards are used for transfers without physical money changing hands. Although the exchange of goods and services can now be achieved with ease, new methods are being developed every day to make the process even more flexible and convenient.
Budgeting is a vital skill for essentially every adult. It can be difficult to keep track of your finances, especially once you become completely financially independent. If you are capable of creating a budget and following it, this can make it much easier to keep control of your spending, regardless of the occasion and cause. However, not all budgets are created equal. It is, therefore, especially important that you are able to come up with a budget that is reasonable and well-balanced in order to get the most out of the practice. This article covers the way in which one can come up with a balanced budget, as well as some of the uses for budgeting.
Do I Really Need a Budget?
You may be wondering if you can seriously benefit from budgeting. You almost certainly can. Unless you have unlimited income, then budgeting can greatly benefit you and your financial security. Budgeting helps you with several of the most important tasks you can do when it comes to managing your finances. Here are some of the most important:
- Keeping track of expendable/disposable income
- Understanding how much of your income you can save
- Planning and investing for your future
- Setting money aside for important expenses
A good budget also helps you to be prepared for the unexpected. It is not uncommon for people to experience unexpected hardships or emergencies and not have the necessary funds available to deal with the situation.
How Should I Shape a Budget?
There are many different methods that you can use in order to come up with a budget. These should change, depending on what works best for you. Something that these methods tend to have in common is that they emphasize breaking up your income or assets into separate parts, which then allows you to decide how much you have of each type of asset and how you can use them best.
Traditional Method of Budgeting
Very generally speaking, most types of budgeting have you break up your finances into several separate categories, which often include:
- Emergency savings: It is advised that you have at least 3 months worth of savings minimum in an account which you do not touch under any circumstances, unless it is absolutely necessary. The idea with this is that you have enough assets to tide you over for a few months, in case you find yourself unemployed or unable to work for a period of time. Having this emergency money set aside means that you are able to either wait out the situation or find a new job without becoming destitute.
- Monthly income: This is the money that you earn per month after taxes have been deducted. If you are creating a budget for the financial year instead of for a month, then you can simply multiply this number by 12 and include the taxes you pay for a whole year instead of for a month. This figure should be the basis for your budget, as it lays out the actual amount you have to work with.
- Monthly expenses: These expenses are usually more or less the same from month to month, varying only slightly depending on external factors. They include expenses like rent, public transport or gas money, groceries, money spent consistently on hobbies or interests, eating out, etc. Anything you spend a consistent amount of money on from month to month should be included into this part of the budget in order to have the most accurate picture of what you spend and what you need to set aside per month.
- Varying expenses: These are the things you spend money on monthly but that vary considerably and therefore cannot be made into a concrete figure or planned for in that way. Such expenses include eating out or going to the movies, for example, or gifts that you may buy for special occasions, birthdays, and the like.
- Special occasions: This concerns the money you set aside for occasions or events out of the ordinary, such as going on holiday or money you might spend if a friend comes to visit you. Depending on how often these types of events come up for you and how much money you tend to spend, the amount you might want to set aside can vary, and it may not be entirely necessary to put aside a concrete amount every month. Instead, you might choose to have a savings account where you set aside some money every month for such occasions.
Some Considerations Regarding This Method of Budgeting
This is a good method of budgeting for the average person who has a relatively good understanding of their monthly income and expenditure, but it may not be quite so suited to people whose income varies or who do not have as good of an understanding of their expenditures. For example, somebody who works in hospitality or entertainment may have an income that is considerably varied from month to month, and they may need to rely more heavily on their savings during quiet periods than somebody who works in a sector or industry where income is consistent. Somebody who is a student or in education may also experience such variation in their income, and they may also benefit from a less regimented style of budgeting.
50%, 30%, 20% Budgeting Method
Another method of budgeting that is somewhat more straightforward is breaking up your income into chunks of 50%, 30%, and 20%. The idea here is that 50% of your income goes to necessities and consistent monthly expenses, such as rent, groceries, bills, etc. 30% then goes to special occasions month to month, like holidays, eating out, going to the movies, and the like. Finally, the remaining 20% goes into your savings, and should only be used in emergencies or situations when it is completely necessary. This method of budgeting is perhaps more flexible than the aforementioned, but it also requires that the person doing the budgeting is in a position where they do not need to spend more than 50% of their income on their necessities from month to month. However, if this is not the case for you but you still want to try this method, you could adjust the proportions of which part of your income or expenditures go into savings and what can be used (for example, if 60% of your income is spent on necessities, then you can delegate the remaining 40% to savings and miscellaneous expenditures).
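As a rough illustration, the split can be computed mechanically. The sketch below is illustrative only: the function name and the example income figure are assumptions made here, and the shares can be adjusted as described above.

```python
def budget_split(monthly_income: float, needs: float = 0.50,
                 wants: float = 0.30, savings: float = 0.20) -> dict:
    """Split an after-tax monthly income using the 50/30/20 rule.
    The shares can be adjusted, e.g. needs=0.60 with the remaining
    0.40 divided between wants and savings."""
    assert abs(needs + wants + savings - 1.0) < 1e-9, "shares must sum to 100%"
    return {
        "needs": monthly_income * needs,
        "wants": monthly_income * wants,
        "savings": monthly_income * savings,
    }

print(budget_split(3_000))
# {'needs': 1500.0, 'wants': 900.0, 'savings': 600.0}
```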
How Should I Make the Most of Budgeting?
In order to make the most of a potential budget, some preparation may be required on your part. There is not much point in simply starting to try to budget your finances without first having a good understanding of how much you earn and spend. As such, it is recommended that you first spend at least a month keeping track of all your finances and that you try not to deviate in any way from how you would typically spend or use your money during this month. That way, you should end up with a thorough understanding of how you would normally spend your money. You can figure out what you spend money on, how often you spend it, how much you usually have left at the end of the month, and if you spend any of your money excessively or frivolously.
Some people find it confusing or difficult to keep track of spending and expenses, but there are a variety of ways to make it relatively straightforward and pain-free. Organization is key if you want to make this task as simple as possible. You can use a spreadsheet to keep track of all your major expenses and compare this to your monthly income, taxes, and bills, for example. This way, you can compare how much you earn to how much you spend in a way that is visual and concrete. It is also a good idea to keep receipts so that you have a resource to fall back on if your numbers do not add up entirely. There are also apps that you can use for these types of tasks, and some banks also offer services through apps or other technologies that can break down or categorize your expenditures and income from month to month. This is a great way to simplify both the process and understanding of your financial situation, which is vital if you want to develop a budget that best suits you and your needs.
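For readers who prefer a script to a spreadsheet, a minimal sketch of this kind of tracking might look like the following; the categories and amounts are invented for illustration, and a spreadsheet or banking app serves exactly the same purpose.

```python
from collections import defaultdict

# One month of hypothetical, manually recorded expenses: (category, amount)
expenses = [
    ("rent", 700.00),
    ("groceries", 85.40),
    ("groceries", 62.10),
    ("transport", 45.00),
    ("eating out", 28.50),
    ("eating out", 19.90),
]
monthly_income = 1800.00  # hypothetical after-tax income

totals = defaultdict(float)
for category, amount in expenses:
    totals[category] += amount

total_spent = sum(totals.values())
print("Spending by category:")
for category, amount in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"  {category:<12} {amount:8.2f}")
print(f"Total spent: {total_spent:.2f}")
print(f"Left over:   {monthly_income - total_spent:.2f}")
```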
Once you have done this, you can then start to experiment with different kinds of budgeting to see which works best for you. If you are not sure whether or not a budget is going to work for you or what exactly you should try, then you should give some thought to your financial goals and aims. Not everybody knows what they should be doing with their money and how they should try to both save and spend it. Maybe you don’t actually have any specific goals in mind, or you have never given any thought to how you use your money and what you want your future to look like financially. If this is the case, spend some time thinking about this, and where you want to be in 5, 10, and 15 years’ time, for example. That way, you can acquire a better understanding of what you want to be aiming towards when you are thinking about your budgeting and your financial goals.
How Do I Make My Budget Balance?
Once you have developed a budget that you feel suits your lifestyle and your financial goals, the next step is to put it into practice and see if you can get it to balance at the end of the month. This may not happen on the first attempt, but there is no need to worry if you go slightly into the red, as long as you can identify the reason for that and ensure that you are more mindful of your expenditures the month after. If you find that you are not able to get your budget to balance, then you should experiment with various methods of keeping track of your expenditures in order to ensure that you are not going over your budget in any area of your life or any arena that you might be putting your income towards.
If you find that you are consistently unable to balance your budget, then you may need to reconsider how you have designed it and whether or not you need to adjust it slightly so that it is better suited to your expenditures and lifestyle. This is not too difficult to do if you ensure that you keep track of your income and expenditures from month to month in order to see what exactly you are using your money on and in what quantities. It is relatively common that people spend more money on food than they realize, for example. If you begin to track your expenditures thoroughly and notice that this is the case, then you can reconsider whether or not you should adjust your spending or budget accordingly. If you find that you are using more money than you anticipated on something that is very important, then you might want to allocate less of your income to another part of your budget in order to have more money available for the thing that is important to you, while still ensuring that your budget balances.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9419461488723755,
"language": "en",
"url": "https://tawidnewsmag.com/possible-economic-impacts-of-falling-oil-prices-the-pandemic-and-the-looming-global-recession-onto-overseas-filipinos-and-their-remittances/",
"token_count": 1338,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.302734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:03c292f7-8e2a-4d4f-be09-490a52f006ae>"
}
|
Policy brief 2020-09
Ateneo Center for Economic Research and Development (ACERD)
The COVID-19 pandemic may well be the most challenging crisis facing the migration management system of the Philippines. The global dispersion of overseas Filipinos (estimated to be at least 10.3 million, in over 200 countries and territories) now intersects with the global spread of the viral disease. Both the countries receiving overseas migrants and migrant-origin countries like the Philippines are now trying to survive, unleashing stimulus packages or rescue funds to meet the needs of their citizens.
Remittance flows from abroad are literally a major economic lifeline. This lifeline will then backstop whatever public funds the Philippine government is now unloading to meet urgent survival and social protection needs of Filipinos. The scale of Filipino households who continually receive foreign remittances? Around 12 percent of all Filipino households “have or had an OFW [overseas Filipino worker] member,” says the 2018 National Migration Survey (NMS).
With much of the global economy in a lockdown, many OFWs are unable to report for work and, at the same time, are unable to send money back home more frequently. In addition, declining oil prices in the past few weeks are a corollary challenge, threatening the stability of OFWs in the Middle East.
OFWs sent about US$ 30.13 billion in cash remittances in 2019, higher than the US$ 28.94 billion sent in 2018. The 2018 Survey on Overseas Filipinos (SOF) says there are about 2.3 million Filipino migrant workers. Meanwhile, stock estimates on overseas Filipinos (latest: 2013) disaggregate Filipinos overseas as follows: 4.2 million as temporary migrants (migrant workers), 4.8 million as permanent migrants, and 1.2 million as irregular migrants.
During the 2008-09 global financial crisis, the presence of OFWs in many parts of the world spread the risk of a slowdown in total remittance inflows to the Philippines. However, COVID-19's spread has now reached literally the entire planet. As of April 2, over 940,000 people had been infected with COVID-19 (including some Filipino migrant workers, permanent residents and naturalized citizens).
Also, during the 2008-2009 crisis, oil prices did not go down to their present level of about US$ 22 per barrel.
The impacts of the 2008-2009 crisis on OFWs were not as severe as initially anticipated. Many OFWs remained in their host countries, adjusting their statuses there by deskilling (e.g., an engineer who was laid off continued to work as an electrician) and by coping with and riding through the short-term impacts of that crisis. Workers were still physically mobile at that time.
In the current scenario, many countries are on lockdown and all the oil producers in the Middle East (where nearly half of our OFWs are based) are at risk with falling oil prices. If this price trend continues, the Middle East might be forced to stop oil production and possibly lay off many workers —including Filipinos.
With the combined impacts of the global economic stoppage, lockdowns and declining oil prices, base-to-worst case scenarios could lead to:
a) Cash remittances potentially declining from US$ 30 billion in 2019 to US$ 27 billion (base case) or US$ 24 billion (worst case). That is roughly 10 to 20 percent, or US$ 3 to US$ 6 billion less, year on year, which would be the steepest drop of remittance inflows in Philippine migration history; and
b) About 300,000 to 400,000 OFWs being affected by lay-offs and pay cuts, not to mention that some of them may need to be repatriated.
Note also that in 2019, Filipinos in at least 121 countries and territories sent lower remittance amounts than in 2018. The combined shortfall from these 121 jurisdictions was US$ 1.36 billion.
These base-to-worst case scenarios are significant numbers hitting the economy externally and then internally. With overseas Filipinos’ remittances fueling national consumption, we can lose 20 to 40 percent of consumption due to the pass-through effect of remittances.
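As a back-of-the-envelope check (not part of the original brief), the base and worst cases correspond roughly to 10 and 20 percent declines from the 2019 level; the arithmetic can be sketched as follows.

```python
remittances_2019 = 30.13  # US$ billion in cash remittances, 2019

# Assumed year-on-year declines behind the base and worst cases
scenarios = {"base case": 0.10, "worst case": 0.20}

for name, decline in scenarios.items():
    projected = remittances_2019 * (1 - decline)
    shortfall = remittances_2019 - projected
    print(f"{name}: ~US${projected:.1f}bn "
          f"(a drop of ~US${shortfall:.1f}bn, or {decline:.0%}, year on year)")
```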
Some things that can be done now:
- Labor and foreign officials may have to start monitoring and informing the public how many overseas Filipinos will be displaced from their jobs —similar to efforts done during the 2008-2009 global economic crisis.
- Embassies and consulates have to anticipate and monitor expected job displacements affecting Filipinos. Diplomatic officials should also be given leeway to negotiate with host-country ministries of labor on possible steps to retain foreign workers and, to the extent possible, include them in those countries' social protection programs.
- Globally-mapped information on these arrangements must be tracked. That way, these resources from host countries will give overseas Filipinos and their families some wherewithal apart from what migration and non-migration government agencies back home will be giving (e.g. social amelioration program funds under the Bayanihan to Heal as One Act). Resources coming from host countries will buy relevant Philippine government agencies (e.g. Overseas Workers Welfare Administration [OWWA], Social Security System [SSS], Philippine Health Insurance Corp. [PhilHealth]) some time.
- Labor and foreign officials may have to initiate dialogues with the International Labor Organization (ILO) and the International Organization for Migration (IOM) on how to assist distressed migrant workers affected by the pandemic.
- OWWA may have to offer Metro Manila-based temporary shelters as 14-day quarantine facilities for displaced returning OFWs.
- Since PhilHealth will be covering hospitalization expenses of COVID-19 cases, this should also apply to COVID-19-infected returning overseas workers through PhilHealth’s Overseas Workers Program (OWP).
- The SSS and its OFW membership program should allow OFW members to avail of the benefits of membership at this critical juncture.
- Prior to going overseas, OFWs are compelled to pay accredited private insurance companies insurance premiums so as to cover repatriation expenses. This arrangement must now be activated by the private insurance companies concerned.
Since the SOF is just a rider to the October round of the quarterly Labor Force Survey (LFS), we are just taking samples of the bigger universe of overseas Filipino workers (OFWs).
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9753198623657227,
"language": "en",
"url": "https://www.bankrate.com/glossary/a/ability-to-pay/",
"token_count": 442,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0203857421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b99e7f4e-457c-487d-bead-fd3ad0a89cc2>"
}
|
What is ability to pay?
Ability to pay is a principle of taxation. Individuals who earn more income pay more tax, not because they use more government goods and services, but because taxpayers who earn more have the ability to pay more. The progressive tax, or higher tax rates for people with higher incomes, is based on this principle.
American tax code breaks taxpayers into tiers based on their annual income, whether individual or combined for couples that are married and filing jointly. Each tier is taxed at a different rate, based on a predetermined amount of what someone earning an income within each tier should theoretically be able to pay.
This tax system is designed to protect lower income earners who cannot afford to pay as much in taxes as those who earn more money. Conversely, higher earners must pay a greater percentage of their income to balance the system.
Ability to pay is not the same as straight income brackets. Rather, it is a designation of whether an individual taxpayer can pay his or her entire tax burden or not.
Lower income earners often get a tax discount that prevents them from needing to pay the whole percentage amount that they owe on their taxes, while higher income earners generally pay the full percentage amount.
Ability to pay is also known as a progressive tax, because it taxes different payers along a sliding scale according to income. Progressive taxation is a cornerstone of income redistribution, since lower earners generally require more government assistance through taxpayer dollars, even though they contribute proportionally less.
Critics of the ability to pay system believe that the practice discourages economic success since it burdens wealthier individuals with a disproportionate amount of taxation. However, because of increasing federal debt and government budgetary requirements, other solutions often are deemed even more painful for taxpayers.
Ability to pay example
If you earn $30,000 a year and your income falls into a bracket taxed at 15 percent, then, as a simplified flat-rate illustration of the ability-to-pay principle, your annual taxes owed are $4,500.
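The example above applies a single 15 percent rate to the whole income. In practice, brackets are marginal: only the portion of income within each bracket is taxed at that bracket's rate. A small sketch, using purely hypothetical brackets rather than any actual tax schedule, shows the difference:

```python
def flat_tax(income, rate):
    """Tax owed if the whole income is taxed at a single rate."""
    return income * rate

def marginal_tax(income, brackets):
    """Tax owed under progressive (marginal) brackets.

    `brackets` is a list of (upper_limit, rate) pairs in ascending order;
    only the income falling inside each band is taxed at that band's rate.
    """
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

# Purely illustrative brackets -- not the actual U.S. tax schedule
hypothetical_brackets = [(10_000, 0.10), (40_000, 0.15), (float("inf"), 0.25)]

print(flat_tax(30_000, 0.15))                       # 4500.0, as in the example above
print(marginal_tax(30_000, hypothetical_brackets))  # 4000.0 under marginal brackets
```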
Curious what you’ll owe next tax season? Use our tax calculator to estimate your future taxes.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9570890665054321,
"language": "en",
"url": "https://www.bankruptcyinfo.com/blog/2018/01/tackling-debt-7-steps-to-creating-a-budget-that-works/",
"token_count": 582,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0732421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:60be5663-56c2-4b8b-b4de-6bd131e9d85d>"
}
|
Debt often feels like a race that you can never win. Debtors struggle to make payments every month, sometimes without even touching the principal balance of their loans and bills. One of the most fundamental solutions to debt is to create a budget. While a budget is not a be-all and end-all answer alone, it is an essential part of the solution. However, it is important to create one that works for you.
A budget will give you a clear picture of how much money you make vs how much you spend. Furthermore, it will help you make small adjustments in your spending that can add up to big differences at the end of the month. Follow these seven steps to get started.
1. Write down your fixed expenses
Fixed expenses are bills that cost the same amount of money for every payment. These will likely include rent, mortgage payments, utilities, loans, vehicle insurance and other monthly expenses. These are more predictable and will likely not change from month-to-month.
2. Write down all other expenses
Next, sit down and read your bank or credit card statement to determine variable monthly expenses. Find out how much you typically spend on groceries, gas, eating at restaurants, buying clothes, toys, medical costs, and unplanned events. Do these costs vary widely from month-to-month?
3. Write down how much money you make
List how much money you receive per month from your job, but also include things such as spousal support, child support and other sources of income.
4. Determine the difference
Subtract your expenses from how much money you make. The amount left is how much you are saving. The goal will be to adjust your budget so you can increase the money leftover every month to pay down your debt. You should aim to pay more than the minimum monthly payments on loans and credit card bills.
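A minimal sketch of steps 1 through 4, with invented figures, might look like this:

```python
# Hypothetical monthly figures for steps 1-4
fixed_expenses = {"rent": 950, "utilities": 120, "car insurance": 90, "loan payment": 200}
variable_expenses = {"groceries": 340, "gas": 110, "eating out": 130, "clothes": 60}
monthly_income = 2600  # wages plus any support payments or other income

total_expenses = sum(fixed_expenses.values()) + sum(variable_expenses.values())
surplus = monthly_income - total_expenses

print(f"Total expenses: ${total_expenses}")   # $2000 in this example
print(f"Left over:      ${surplus}")          # $600 that can go toward debt or savings

# Paying more than the minimum shortens the payoff time on loans and cards
minimum_debt_payment = 200
print(f"Available beyond the ${minimum_debt_payment} minimum: ${max(surplus, 0)}")
```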
5. Find out where you can reduce variable spending
Variable spending is where a lot of money is lost every month. The little things add up. Buying lunch at work might be a small daily expense, but it can cost over $100 every month. By the same token, small savings every month add up just as quickly. Try cutting down on eating out, buying clothes and those little impulsive purchases every week.
6. Make a plan
Create a spending budget at the beginning of every month. Write down what you spend every day that month in a spreadsheet. This way you can determine if you are on track as the month progresses.
7. Do not give up
Budgeting takes a lot of self-discipline, so give yourself some leeway if you do not stay within your budget the first month. You may need help from an attorney who specializes in debt relief. An attorney can help you create a workable budget and provide a number of avenues for debt relief.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9541910290718079,
"language": "en",
"url": "https://www.bostonglobe.com/metro/2018/01/23/baker-launches-commission-study-transportation-needs/CY0c4R3QQoWx3kOMk1FjaL/story.html",
"token_count": 660,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.126953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a5628659-6cad-4815-95c9-89fcc2de6183>"
}
|
An 18-member panel appointed by Governor Charlie Baker will study the state’s future transportation needs, after a report last year suggested Massachusetts is not financially prepared for changes to the climate and how people get around.
The Commission on the Future of Transportation in the Commonwealth will report back by December, after looking into how an array of factors could affect transportation in the state between 2020 and 2040. The areas of study include:
■ How the state can decrease greenhouse gas emissions that come from transportation;
■ What the state needs to do to protect the transportation system from climate change effects;
■ Whether the state should — and how it can — increase the number of electric-powered vehicles in the state;
■ How self-driving cars and on-demand transportation services like Uber and Lyft will affect Massachusetts and its public transportation systems.
“This commission will advise our administration on the future of transportation in Massachusetts that sensibly accounts for impending disruptions due to changes in technology, climate, demographics, and more,” Baker said in a statement.
The panel will be chaired by Steven Kadish, Baker’s former chief of staff. Other members include transportation planning professionals, climate scientists, academics, and industry officials.
One member of the new commission, Eileen McAnneny, is the president of the Massachusetts Taxpayers Foundation, a business-backed watchdog group. Last fall, the group published a report that called on the state to incorporate climate change into its planning, warning that otherwise it would risk "exposing our transportation systems to potentially catastrophic damage or investing in obsolete assets." The report also said the state could see lowered transportation-related revenue if electric-powered vehicles and ride-hailing services lead to lower gas tax collections and fees associated with car ownership.
Shortly after the report was released, the Baker administration said it would form the commission. It was unveiled Tuesday when the Republican governor signed an executive order establishing the commission.
The executive order noted that the state “does not have a statewide, comprehensive transportation blueprint” and would not be able to create one without the information the commission will seek. It hinted the commission would address funding questions, noting the industry changes could “affect the types of capital investments Massachusetts will need . . . as well as the sources of revenue to support such future infrastructure investments.”
Rafael Mares, a climate and transit advocate with the Conservation Law Foundation, celebrated the launch of the commission. But it will only be valuable if it “ends up in a blueprint, a vision and a plan for the Commonwealth that can actually be implemented,” including funding sources, he said.
“Otherwise, we’re spinning our wheels,” he added. “But analyzing and seeking counsel are the first steps.”
Some of the commission’s work might replicate or align with ongoing efforts at the state Department of Transportation and the MBTA, such as committees studying driverless vehicles and ride-hailing services, and the T’s long-term investment plan, which is expected to be released this year.
Additionally, Transportation Secretary Stephanie Pollack has said in recent months that the state must find ways to lower transportation-related emissions.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9376429319381714,
"language": "en",
"url": "https://www.techeblog.com/18-creepy-pictures-of-the-japanese-earthquake-captured-by-google/",
"token_count": 152,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5c110a03-eaeb-40bf-9a11-8d589738807d>"
}
|
Google wanted to help survivors of the Japanese earthquake share their photographs and videos. So, they created a website, “Mirai e no kioku”, which means “Memories for the Future”. That’s not all, Google also captured thousands of miles of Street View imagery of the affected areas. Continue reading to see more.
Early estimates placed insured losses from the earthquake alone at US$14.5 to $34.6 billion. The Bank of Japan offered Y15 trillion (US$183 billion) to the banking system on 14 March in an effort to normalize market conditions. The World Bank’s estimated economic cost was US$235 billion, making it the costliest natural disaster in world history.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9352748394012451,
"language": "en",
"url": "https://www.wri.org/blog/2010/12/response-eeis-timeline-environmental-regulations-utility-industry",
"token_count": 2966,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.3125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d97118d7-3f8c-47f6-90ba-d9b26f4d3d0a>"
}
|
After years of delay, EPA gets back on track in issuing rules that provide a path to a cleaner power fleet.
- Download As Presentation
- Listen to the Presentation
- Download as a Fact Sheet
- More WRI Climate Fact Sheets
After years of delay, the Environmental Protection Agency (EPA) is working to reduce dangerous and toxic pollutants released to the air and water by electric power plants, as required by the Clean Air Act and other statutes. Four key points about EPA’s actions are clear:
EPA, Greenhouse Gases, and the U.S. Economy
As the U.S. Environmental Protection Agency uses its authority to limit greenhouse gases and other pollutants, members of Congress are wondering what these rules mean for the people and industries in their states. In this series, the non-partisan World Resources Institute examines pending actions and what they mean for the U.S. economy:
- What Are Limits on EPA? Clean Air Act Holds Answers
- EPA, The Clean Air Act, and U.S. Manufacturing
- For EPA Regulations, Cost Predictions Are Overstated
- Response to EEI's Timeline of Environmental Regulations For the Utility Industry
- EPA Regulations: Not a Moratorium on Industrial Construction
- Electric Reliability under New EPA Power Plant Regulations: A Field Guide
- Myths & Facts About U.S. EPA Standards
- Contrary to assertions by industry groups, EPA is pursuing a realistic timeline over the next decade to bring the electric power industry into compliance with the law.
- In most cases the electric power sector has been on notice for several years (in some cases several decades) that these pollutants would be regulated.
- Without new regulations, these pollutants will continue to impair America’s waterways, heat the planet, perpetuate acid rain, and lead to preventable hospital visits and premature deaths.
- In each of its rulemakings, EPA provides for an extensive, open public process based on evidence. This leads to more robust and fair rules for the electric power sector. As EPA finalizes each rule, it will establish an increasingly clear pathway for investments in an American electric generation fleet for the 21st century.
The Wall Street Journal and executives of major electric power corporations have frequently suggested that EPA's regulatory timeline is unworkable.[1] The largest industry trade group, the Edison Electric Institute (EEI), has produced a slide that purports to display an onslaught of new requirements for power plants.[2] EEI has been distributing this slide widely on Capitol Hill, where it presumably hopes to win lawmakers' support for additional delays in EPA regulation or even a stripping of EPA's authority.
The EPA regulatory process is far from a “train wreck.” EEI’s misleading timeline, reproduced in Figure 1, mostly consists of procedural events and activities that will not impose a direct compliance obligation on power plants. This serves only to spread confusion about EPA’s actual regulatory schedule.
WRI has identified four categories of EPA activities on the EEI timeline that are potentially misleading. When these activities are removed, only the timing of actual new compliance obligations is left. In Figure 2, “X”s (color coded for each filter in the screening process) have been applied to remove events from the figure that are not consequential from a compliance standpoint. The screening filters are as follows:
- (Blue X's) Rules that have been remanded or vacated by court decisions that do not impose compliance obligations.
- (Green X's) Rules that are already in effect representing compliance obligations that already exist; there are no new requirements imposed by these rules.
- (Purple X's) Public input through the rulemaking process (leads to more robust and fair rules for the electric power sector, and should not be conflated with new compliance obligations).
- (Red X's) National Ambient Air Quality Standard (NAAQS) rules for various pollutants that set standards for states to achieve. They do not establish new requirements for electric generation units.[3]
Figure 2: Environmental Regulatory Requirements For the Utility Industry, Removing All But New Compliance Obligations
Figure 3 shows a more accurate picture of the timeline for new requirements applicable to electric power plants.
Figure 3: Regulatory Compliance Obligations for the Utility Industry
EPA is carrying out the intent of Congress (through the passage of the bipartisan Clean Air Act and subsequent amendments) to clean the nation’s air and water. These rules can help the United States transition to cleaner and more efficient power plants, by establishing a clear pathway for investments in an electric generation fleet for the 21st century.
The CAA requires EPA and states to regulate and reduce harmful pollutants from major emissions sources including power plants. To date, this framework has delivered substantial improvements in air quality and significant public health benefits estimated between $77 and $519 billion annually.[4] Over the next decade, power plants will be subject to new rules under the CAA as well as the Clean Water Act (CWA) and the Resource Conservation and Recovery Act (RCRA) to control substances that cause serious health problems and substantial damage to America's natural resources. These rules will take effect after long lead times. In most cases industry has been on notice for years that these pollutants would be regulated.
The electric power sector has had substantial notice---in some cases for decades---that power plants would be subject to regulations to control dangerous pollutants.
Half of the regulations under consideration by EPA have been in the regulatory pipeline for over a decade. Due to administrative delays and litigation resulting in court decisions remanding or vacating previous rules, many of these rules have not been finalized or the final rules were reversed. In many cases Congress has set statutory deadlines for EPA to act, EPA has missed the deadlines, and courts have ordered EPA to act. Table 1 outlines the amount of time the electric sector has had to prepare for new regulations.
The case of mercury from power plants provides a good example of how much regulatory lag-time there has been for the electric power industry to prepare for new pollutant rules. The CAA required EPA to study mercury and other hazardous air pollutant (HAP) emissions from electric power plants and determine whether or not regulating these emissions would be necessary and appropriate. In 2000, EPA determined that regulations were appropriate, effectively putting the electric power industry on notice that controls on mercury would be required. EPA then proposed and finalized rules (including the Clean Air Mercury Rule) that were ultimately vacated by the courts, which found that EPA had not acted within the constraints of the CAA. EPA now intends to issue revised draft and final rules in accordance with CAA requirements in 2011. Compliance obligations would take effect in 2015.
Thus, the electric power industry has had 15 years to prepare, from the determination in 2000 to the expected date of compliance obligations in 2015.
Finalizing regulations provides certainty.
Finalizing regulation removes uncertainty that might otherwise stymie new investments. The ultimate stringency and compliance obligations for most of the regulations EPA is pursuing will remain uncertain until rules are final. The statutes – RCRA, CWA and the CAA – establish which pollutants will be subject to regulation and the relevant legal standards; the specifics are established during the EPA rulemakings. The longer it takes EPA to finalize new pollutant rules, the longer plant operators face uncertainty as to what will be required.
Not all EPA actions will create new regulatory regimes.
It is important to note that some EPA rules do not constitute new regulatory programs. For example, sulfur dioxide (SO2) emissions from power plants have been covered by cap-and-trade programs that began in 1995. Nitrogen oxides (NOx) emissions were the subject of a cap-and-trade program covering plants in the eastern half of the country since at least 2003. The Clean Air Interstate Rule and its successor, the Transport Rule, extend NOx cap-and-trade to new states and increase the stringency of requirements for units already subject to the cap-and-trade for NOx and SO2. Power plant operators are familiar with these regulatory frameworks and are familiar with their operation. While increasing the stringency of these rules may require additional investments in control strategies, there is no fundamentally new requirement in play.
| Pollutant | Notice that new or more stringent rules would be imposed[5] | Year in which compliance obligations will be imposed[6] | Regulatory lag-time | Comments |
| --- | --- | --- | --- | --- |
| Mercury | 2000 | 2015 | 15 years | After a study required by statute and subject to public review, EPA found in 2000 that it was "necessary and appropriate" to regulate mercury and other pollutants from power plants as HAPs |
| SO2 and NOx | 1990 for initial rules; 2003 for increased stringency of rules | Initially in 1995 for SO2, with increasing stringency beginning in 2010 (for SO2) and again in 2012. Technology standards for NOx were first imposed in 1995; the Northeast NOx cap started in 1999, with initial expansion in 2003 and again in 2009 | 5 years for initial rules; 6-7 years for more stringent rules | New rules for SO2 and NOx represent increasing stringency under existing frameworks |
| Greenhouse Gases (GHGs) | 2009 (December) | 2011 | 13 months | EPA found that GHGs endanger public health and welfare. EPA rules to regulate GHGs from light-duty vehicles take effect on January 2, 2011; the CAA requires BACT for a pollutant once it is subject to regulation under the Act |
| Coal Combustion Residuals (CCR, or coal ash) | 2007 EPA Notice of Data Availability solicited initial reactions to EPA data | No sooner than mid-2012, requirements phased in | At least 3 years | Initial requests for information were initiated in 2007, signaling the intention to regulate. Depending on EPA final rules, timetables for compliance will vary |
| Cooling water intake | 1972 | No sooner than 2014. Requirements are incorporated permit by permit, which could take up to 5 years | 38 years | The CWA amendments of 1977 require these regulations but no final rule has been implemented due to delay and court orders |
| Power plant effluent | 1982 CWA mandates periodic review of existing regulations for potential update | 2015. Final rule not expected before 2012; requirements are incorporated permit by permit, which could take up to 5 years | 23 years | Effluent guidelines are required to be reviewed periodically. The last update was in 1982 |

Note: Regulatory lag-time is calculated from the date that it was made clear under statutory requirements and court decisions that new or more stringent rules would be pursued, relative to the currently expected date that compliance will be required.
The EPA regulatory process provides opportunities for industry input.
There are few, if any surprises in the very public and largely transparent EPA regulatory process. Multiple events must take place before any actual compliance obligation is imposed on an electric power plant or any other regulated entity. The EPA must issue proposed rules and seek public comment. Some rulemakings are initiated with advanced notices of proposed rulemaking, so that the process has extra opportunities for industry and public comment, and some start with studies that are conducted with public input and comment. This process allows the electric power industry to have substantial input into the shape of new regulations and allows the industry to better understand what may be required of them by EPA when rules are finalized. Fears of agency overreach are misplaced given the built-in limitations on EPA’s authority contained in the CAA.
Often rules are litigated; one outcome can be to send the rule back to EPA for further work. Many of EPA’s rules are issued on schedules established by the federal courts – because EPA has already missed the statutory deadline for promulgation. Only the final rule imposes a direct compliance obligation – after which there are practical implications for power plant owners and operators as they make investments in their generation fleets.
Why is EPA regulating power plants at all?
EPA is responding to direction from Congress to reduce the human health and environmental effects of mercury (as well as other HAPs), SO2, NOx, greenhouse gases (GHGs), coal ash, cooling water intake and discharge, and industrial water effluent. Mercury is a neurotoxin that causes brain damage. SO2 and NOX cause acid rain, regional haze and can cause or worsen asthma and aggravate cardio-pulmonary disease leading to increased hospital visits and premature death. A recent example of the dangers of coal ash was the major spill at the Tennessee Valley Authority's Kingston plant in 2008, where irresponsible containment of coal ash caused waterways and communities to be inundated with waste. Electric power plants are major sources of many pollutants that EPA is regulating or intends to regulate.
Electric power plants are a major source of pollutants that substantially contribute to ongoing public health and environmental problems that impose real costs to the economy. When just air pollutants are considered, electric power plants represent the following shares of total U.S. emissions in 2005:
- 70 percent of SO2 emissions
- 50 percent of mercury emissions
- 34 percent of GHG emissions
- 18 percent of NOX emissions
By controlling these emissions using appropriate regulations under clear statutory authority EPA will go a long way towards meeting its mandate to protect public health and welfare. The electric power industry has had substantial time to prepare for regulations and once rules are final the industry will have a clear regulatory roadmap to guide investments. Misleading charts that exaggerate EPA actions such as those distributed by EEI cause confusion that will only increase uncertainty for the electric power industry and jeopardize important efforts to protect public health.
The Edison Electric Institute has circulated a chart that grossly misrepresents the EPA regulatory timeline for coal fired power plants. Through this article, WRI is countering this misleading chart.
If states believe that the only way to come into attainment of NAAQS standards is by obtaining additional reductions from electric generators, then the most likely way for states to effect those changes is through modification of the existing regulations that already control emissions of those same pollutants. EPA could undertake similar action through a future update to the Transport rule.
Based on statutory requirements and court rulings.
Assuming no additional delays in rulemaking due to administrative actions, litigation and/or court actions.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9097602367401123,
"language": "en",
"url": "https://nextbigwhat.com/national-solar-mission-india-details/",
"token_count": 529,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.03662109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:23ae807f-9bac-43b8-99e4-0fc26428e1b9>"
}
|
[In continuation with our coverage on Solar Energy in India, we look at the much awaited National Solar Mission from the government.]
The Union government is expected to launch the National Solar Mission on November 14, 2009. Here are the guidelines from the final draft:
- Make India a global leader in solar energy; the mission envisages an installed solar generation capacity of 20,000 MW by 2020, 100,000 MW by 2030 and 200,000 MW by 2050.
- The total expected funding from the government for the 30-year period will run to Rs. 85,000 crore to Rs. 105,000 crore.
- Between 2017 and 2020, the target is to achieve tariff parity with conventional grid power and achieve an installed capacity of 20 gigawatts (Gw) by 2020.
- 4-5GW of installed solar manufacturing capacity by 2017.
Implementation Phases of India’s National Solar Mission
Implementation will be in three phases – first phase (2009-12) will aim to achieve rapid scaling-up to drive down costs.
It will spur domestic manufacturing through the consolidation and expansion of on-going projects for urban, rural and off-grid applications. This will involve the promotion of commercial-scale solar utility plants, mandated installation of solar rooftop or on-site photo-voltaic applications in buildings and establishments of government and public sector undertakings. The target is 100 MW installed capacity here.
Second phase (2012-17) will focus on the commercial deployment of solar thermal power plants. This will involve storage options, and the promotion of solar lighting and heating systems on a large scale in market mode. This will be without subsidies but could include micro-financing options.
The third phase (2017-2020) aims to achieve tariff parity with conventional grid power and an installed capacity of 20 gigawatts (GW) by 2020.
The mission objective is to drive down the cost of solar energy to as low as Rs. 4-5/kWh by 2017-20, making solar energy competitive with fossil fuel based power sources.
Policy & Regulatory Framework of India’s National Solar Mission
The key design principle underlying the regulatory/incentive mechanisms are:
- Feed-in tariffs that will be set for various applications by the respective state regulators.
- 10 year tax holiday.
- Custom duty and excise duty exemption on capital equipment and critical materials.
- Use of market based price discovery mechanism
What’s your opinion on India’s national solar mission?
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9635425209999084,
"language": "en",
"url": "https://psmag.com/environment/maybe",
"token_count": 1214,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.240234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:872b4b3b-8bcb-40f7-8bfd-9a613f981624>"
}
|
The reasons why people don't have home Internet access generally fall into two main camps: They can't afford it, or they don't live where Internet is available. The latter is based on an infrastructure issue: Service providers don't have the incentive to run lines where potential customers are not. The former is a little simpler to tackle.
According to estimates from the Federal Communications Commission, 95 percent of households with incomes over $150,000 have high-speed Internet access; less than half of households with an income level under $25,000 have home Internet. Giving that second group extra dough for Internet usage, then, should help get them online. That's the idea behind the proposal recently circulated by FCC chairman Tom Wheeler.
It's elegant in its simplicity. For households eligible—and there's a substantial list of those that are—they'll be able to use a government subsidy of $9.25 a month for high-speed Internet purchases. In terms of whether or not it is better than not having an extra $9.25 a month to use, it certainly is. But is it enough?
It's best to have an understanding of where that $9.25 a month number comes from. It's not, as one might imagine, the result of an algorithm intended to find a sweet spot threshold to justify purchasing Internet. Rather, it's a relic.
In 1985, the FCC instituted the Lifeline program, which offered low-income families a subsidy to purchase landline phone service, the idea being that the true value of a national phone network would only be realized if as many people as possible had access. In 2008, the FCC began allowing funds from this program to be used on purchasing mobile service. The program under consideration now is the next, and biggest, tweak to the program.
"If they're making $12,000 a year, even $10 a month can be a real struggle. But if you only have to pay 75 cents a month [for Internet] that totally changes the calculus."
Where does the money come from? It's drawn from the Universal Service Fund, a pool created with money from the nation's various telecommunication services. The trick is they're allowed to pass on the fee to their customers. And they do: Search your latest phone bill for that small USF fee. In short, then, it's a fund created by all of us.
If this proposal is approved—voting takes place March 31—it will allow the fund to be used toward home Internet. It will also increase the money allocated to the project from $1.7 billion to $2.25 billion, which will allow up to five million more homes to utilize the service—although projections show that budget isn't going to be used immediately.
How big of a deal is this?
"There are no silver bullets, but it definitely is a game-changer," says Chike Aguh, the CEO of EveryoneOn, a non-profit focused on closing the digital divide. "Research tells us cost is the number one people why people are not online. If they're making $12,000 a year, even $10 a month can be a real struggle. But if you only have to pay 75 cents a month [for Internet] that totally changes the calculus."
Aguh gets that small figure for Internet service due to the fact that low-income families are dealing with a different set of available options than others. (An extra $10 a month would not bring my Internet cost to the same level.) The true impact of the subsidy, then, is only known when combining it with the low-cost broadband options that are increasingly becoming available.
"Comcast has one that's $10 a month, AT&T is going to have a very similar program," says Tom Koutsky, chief policy council for Connected Nation, a non-profit trying to bring affordable high-speed Internet to the country. "The market will meet this price point fairly aggressively and fairly quickly."
This is definitely a good thing then. But there are still a few issues with this modest amount.
The first is simple inflation. When the Lifeline was instituted in 1985, households were given a subsidy of $5.25 a month to use toward phone service. That has increased over the years, but not at the rate of inflation. If it had, the current subsidy would be worth $11.56 a month.
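A rough sketch of that adjustment is below; the cumulative inflation factor is an assumption chosen to roughly reproduce the article's $11.56 figure, not an official CPI series.

```python
subsidy_1985 = 5.25   # monthly Lifeline subsidy when the program began
subsidy_now = 9.25    # current monthly subsidy

# Assumed cumulative price-level increase between 1985 and the mid-2010s
cumulative_inflation = 2.2

inflation_adjusted = subsidy_1985 * cumulative_inflation
print(f"1985 subsidy in today's dollars: ${inflation_adjusted:.2f}")   # ~$11.55
print(f"Shortfall versus the current subsidy: ${inflation_adjusted - subsidy_now:.2f}")
```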
Second—and probably most importantly—is the quirk of how the $9.25 a month can be used. This isn't simply giving everyone who qualifies an extra $9.25 a month for Internet, but rather opening up what an existing $9.25 subsidy can be used for. If a household is already taking advantage of the subsidy—only 40 percent of those eligible actually take advantage of it—some or even all of the money is already put toward mobile and/or landline service. Once the proposal is approved, any portion of the available $9.25 can be funneled instead toward Internet service, forcing possibly tough decisions. Canceling a land line or a mobile phone plan in order to gain Internet access at home doesn't seem like the spirit of the plan's original intention.
This doesn't address the other problem when it comes to getting people online either. "Will it have an impact? I think it will," says Eric Frederick, vice president of community affairs for Connected Nation. “But for those customers that still live in rural areas where there isn't a great selection and costs are still very high, it might not mean much at all."
If this proposal passes, things will be better than they currently are. Having $9.25 a month to use toward Internet is better than having $0. Until the sum is raised, or split off from the previous telephonic uses of the subsidy, this seems more like a necessary first step than a giant leap forward.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9665194749832153,
"language": "en",
"url": "https://techibytes.com/the-non-fungible-tokens-in-nigeria-an-investment/",
"token_count": 1002,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.435546875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:30d18221-4fd2-4caa-b7df-1504488b76fb>"
}
|
The world as we knew it has really changed over time with the introduction of amazing technologies that aid life and make things a little better, and non-fungible tokens are no exception.
NFTs are not so new, but they have made some waves recently, which has put more eyes on them.
You may be asking what an NFT is. Well, the letters NFT stand for non-fungible token, which means a "one-of-a-kind token".
NFTs Explained
Non-fungible means unique: it's one of a kind and can't be replaced with anything else. Currency units, for example, are fungible; 1 naira in my pocket can be traded for 1 naira in your pocket (not 500 naira), and it's the same exact thing.
These NFTs are powered by blockchain technology, the same technology that powers the Ethereum cryptocurrency.
Now the juicy part is that this technology is being used to grow a new market. The digital tokens can be thought of as a kind of certificate of ownership for virtual or physical assets, recorded on a blockchain.
If you are an artist, or at least someone who creates digital works, non-fungible tokens (NFTs) can give you ways to monetize your work.
Yes, it is possible to copy a digital file but an NFT is designed to give you ownership of the original creation.
If you think about famous paintings like the Mona Lisa for example, you can always get prints of that painting but only one person can own the original painting by Leonardo da Vinci.
People who buy NFTs are supporting the artists they like as well as paying for the bragging rights to own these digital artworks (and a blockchain entry as proof).
Non Fungible Tokens (NFTs) in Nigeria
A few weeks back, a Nigerian artist named Niyi Okeowo created a digital painting of the group and posted it on his Twitter page on the 22nd of February 2021.
(Tweet has been deleted)
The painting then sold for 1 ETH (approximately $1,900) on Rarible.
This means a great deal for creative individuals in Nigeria, who can now monetize much of their art using NFTs.
As an investment vehicle, an NFT is a highly speculative asset; the only reason you would buy one is that you hope the price will go up in the future.
I think it’s quite interesting to see how technology (in this case blockchain) has created new markets (digital assets).
With this and the recent happenings around NFTs, including Jack Dorsey selling his first-ever tweet, more can be expected from this technology, especially in the Nigerian innovation ecosystem, where creative individuals can profit from and monetize their art.
How do NFTs work?
We are most likely looking at a new sector being born; there are people who believe billions will be pumped into this over the next few years.
This was only possible because blockchain can document the ownership of anything that exists online.
Just like conventional art investing, it's more about the bragging rights that come with owning the original piece. There's really no intrinsic value; the traditional art market is mostly based on the popularity of the artist.
If you bought a piece of art, you would hope that the artist grew in popularity, which would also increase the value of their work. The same goes for these NFTs.
A lot of people still find it hard to understand how NFTs work; a common question is "how is the property safe from piracy?" Again, with non-fungible tokens, your artwork can be "tokenised" to create a unique digital certificate of ownership that can be bought and sold. NFTs also contain smart contracts that may give the artist a cut of any future sale of the token, and every sale is stored on the blockchain, making it almost impossible for the records to be forged because the ledger is maintained by thousands of computers around the world.
So if you want to know how non-fungible tokens work, you should know that they operate through computer programs called smart contracts.
A Basic Grasp of Smart Contracts
"A smart contract does something very simple – ensure that both parties to a contract fulfil their obligations and no one gets cheated." (Techcabal)
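As a purely conceptual illustration of the two ideas described here (an append-only record of sales, plus an automatic artist cut on every resale), the following toy Python model may help. It is not real smart-contract code, it does not touch a blockchain, and the names and royalty rate are invented.

```python
class ToyNFT:
    """A toy, in-memory model of an NFT with a resale royalty.

    On a real platform this logic lives in a smart contract on a blockchain;
    here it only illustrates the ownership record and the artist's cut.
    """

    def __init__(self, artist, royalty=0.10):
        self.artist = artist
        self.owner = artist
        self.royalty = royalty   # share of every resale routed back to the artist
        self.history = []        # append-only record of sales

    def sell(self, buyer, price):
        artist_cut = price * self.royalty if self.owner != self.artist else 0.0
        seller_proceeds = price - artist_cut
        self.history.append((self.owner, buyer, price, artist_cut))
        self.owner = buyer
        return seller_proceeds, artist_cut

token = ToyNFT(artist="artist_a")
token.sell("collector_1", 1900)               # primary sale: full price to the artist
proceeds, cut = token.sell("collector_2", 5000)
print(proceeds, cut)                          # 4500.0 to collector_1, 500.0 to artist_a
print(token.history)                          # every transfer stays on record
```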
Non-fungible tokens are close to the decentralized cryptocurrencies we have today in the sense of being digital assets that are recorded on a blockchain, where it is almost impossible to alter records already stored in the ledger. You can also read about how decentralized finance (DeFi) can benefit you.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9461118578910828,
"language": "en",
"url": "https://www.cloudandheat.com/how-meaningful-is-pue-as-a-measure-of-energy-efficiency-increasing-efficiency-as-a-means-to-an-end/",
"token_count": 1897,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.005279541015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ea365076-7fb9-4f9d-a030-9292940ab9bc>"
}
|
4.01.2017
Power consumption is becoming increasingly important for information and communication technology (ICT), especially for data centers. While operators are under increasing cost pressure due to rising energy prices, the enormous consumption of resources for the required energy is also worrying. This is prompting politicians to prepare concrete regulations for data center operators. The EU Directive 2012/27/EU on energy efficiency already calls for measures to reduce energy consumption. Germany formulates this target in its 20-20-20 goals. Accordingly, primary energy consumption is to be reduced by 20 % by 2020.
In 2014, data centres in Germany were already responsible for the consumption of around 10 billion kWh, or 1.8 % of total German electricity consumption. The sheer size is only made clear by a comparison: This corresponds to the electricity consumption of around 3 million 3-person households. An incredible amount. On a global scale, the figures are becoming even more impressive. At 416 TWh, the electricity consumption of the data centers installed worldwide significantly exceeded the UK’s electricity requirement of around 300 TWh. And the demand for electricity is growing visibly, because the number of data centers is also growing continuously due to higher demand for computing power. In 2014, the number of physical servers in Germany had already risen to 1.7 million. The following diagram illustrates that this development is far from stagnating.
Figure 1: Power requirements for servers and data centers in Germany (Bitkom, 2014)
The increase in electricity consumption is not necessarily due to a lack of willingness to innovate. In 2014 alone, 800 million euros were invested in the modernization and new construction of data centers in Germany alone to limit this development.
An important milestone in the optimization of data centers was, in addition to technological developments in energy supply, cooling systems and waste heat utilization, the systematic recording of power consumption. Only the analysis of the actual situation made it possible to transparently evaluate measures to increase energy efficiency on the basis of energy performance indicators.
The development of an industry standard for determining the energy efficiency of data centers was promoted by The Green Grid. In 2007, the companies involved developed the key performance indicator of power usage effectiveness (PUE). It indicates the ratio of total energy consumption to the energy requirements of the IT components and is currently the only parameter that is used by different data centers to determine and compare their relative energy efficiency.
Ideally, all the energy is used for the IT infrastructure. In this case the PUE is 1.0, but since additional energy is always required for losses, UPS, lighting, control technology and the cooling system including the pumps, recirculation units and dry coolers, the value of 1.0 is only a theoretical target value. According to Bitkom, with a physical utilization of the available rack space of approx. 50%, a modern state-of-the-art data center should achieve a maximum PUE of 1.4 or lower.
The calculation of the PUE depends on the definition of the system boundaries within a data center, see Figure 2. The energy flows recorded at these boundaries serve as the basis for determining the PUE.
Figure 2 System boundaries of data centers for the calculation of the PUE (own representation)
However, there is scope for interpretation when it comes to defining the IT components to be included in the calculation. For this reason, a distinction is made between PUE maturity levels, which differ in the measuring methods, measuring intervals and measuring points used, see Figure 3.
Figure 3:PUE Maturities (Bitkom, 2015)
An important criterion for the comparability of the PUE of different data centers is the consideration of the same period. The power requirement for cooling the IT infrastructure is considerably higher in summer than in winter. This should be taken into account in the calculation, which is why the collection of twelve-month averages in particular is a guarantee for meaningful PUE measured values. If a PUE is specified for an observation period of less than one year, it is referred to as interim PUE (iPUE). This is of particular advantage when determining a real-time PUE, which can be used for live monitoring.
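Since the PUE is simply total facility energy divided by the IT equipment energy, the annual figure and an interim PUE can be sketched from hypothetical monthly readings as follows; the numbers are invented only to show the seasonal effect described above.

```python
# Hypothetical monthly energy readings for one data center, in MWh
it_energy    = [100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100]
total_energy = [128, 126, 125, 130, 138, 145, 150, 148, 140, 132, 128, 127]

def pue(total, it):
    """PUE = total facility energy / IT equipment energy."""
    return sum(total) / sum(it)

annual_pue = pue(total_energy, it_energy)              # twelve-month average
summer_ipue = pue(total_energy[5:8], it_energy[5:8])   # interim PUE, June-August only
winter_ipue = pue(total_energy[0:2], it_energy[0:2])   # interim PUE, January-February

print(f"annual PUE:  {annual_pue:.2f}")                                   # ~1.35
print(f"summer iPUE: {summer_ipue:.2f}, winter iPUE: {winter_ipue:.2f}")  # ~1.48 vs ~1.27
```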
The PUE has become the industry standard for calculating energy efficiency in the course of standardization of data centers, in particular the DIN EN 50600 series of standards. This European standard not only addresses topics such as construction, electricity and climate, but also focuses holistically on all issues relevant to data centres, including management and operation.
Conventional air-cooled data centers make up a large part of the installed server systems to date. However, their biggest flaw is that cooling is often responsible for around 40 % of energy consumption due to air conditioning technology. Modern air-cooled data centers therefore achieve an average PUE of around 1.5. As an alternative concept, the use of water as a cooling medium has established itself. Water has significant physical advantages when absorbing heat compared to air, because not only is the heat capacity 3300 times higher, but the thermal conductivity is also 20 times higher than that of air. But water cooling is not just water cooling. A distinction is made between direct hot water cooling and indirect water cooling via sidecoolers. With sidecoolers, the heated air inside the servers is blown outwards by fans, analogous to air cooling. Air-water heat exchangers are then located here to cool the air. In direct hot water cooling, on the other hand, the water is conducted directly along the heat-emitting components in order to absorb the heat energy emitted. The efficiency of this cooling technology means that additional fans and air conditioning technology can be dispensed with and significant efficiency gains can be achieved.
A direct comparison of both cooling systems was carried out in a study by the U.S. Department of Energy. In an existing high-performance computing (HPC) data center, the air cooling system was replaced by one with direct water cooling. Based on the Linpac benchmark, the PUE could thus be compared for the same performance. Figure 4 clearly shows that the PUE of the system with water cooling is not only significantly lower than that of the air-cooled alternative, it is also associated with significantly less fluctuations.
Figure 4: PUE comparison of HPC cooling systems with direct water and air cooling for Linpac benchmark test (based on US Department of Energy, 2014)
The world’s lowest PUE is currently 1.014. This record value was achieved by Cloud&Heat Technologies and was possible primarily due to direct hot water cooling as well as demand control and preheating of the required cooling air through an underground car park. By comparison, Google’s PUE reaches 1.14 across data centers, with Facebook reporting a PUE of 1.09 for its largest data center in Prineville, Oregon.
On the way to ever lower PUEs, some companies are also trying out completely new concepts. The Internet service provider IGN from Munich has implemented a cooling system in cooperation with Rittal that uses groundwater as coolant. The cool groundwater is pumped from a well into a water circuit cooled by heat exchangers, which supplies eight redundant recirculation air conditioning systems. In combination with an optimized cold air duct, a PUE of 1.2 could be achieved. Microsoft’s research experiment “Project Natick” goes even further. In order to minimize the energy required for cooling, the low temperature of the sea is taken advantage of. A container data center was sunk in the Pacific Ocean. The cooling requirement was thus covered by the low ambient temperature. However, whether this is a model for the future remains to be seen.
However, an isolated view of the PUE can lead to false conclusions about the actual efficiency of a data center, as the following example illustrates. Assume that in a data center with 1,000 servers, 700 servers run at idle because of low overall utilization. To save energy, these 700 servers are switched off. Although this reduces energy consumption, it increases the PUE, because the denominator of the formula, the power consumption of the IT equipment (now only the remaining 300 servers), decreases much more than the numerator, the total energy requirement. But there is also room for improvement in the remaining servers. Utilization can be increased further by consolidating workloads onto fewer servers through virtualization, for example with OpenStack, so that others can be shut down. This saves energy and ultimately also costs.
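A back-of-the-envelope calculation, with per-server and overhead figures assumed purely for illustration, shows the effect:

```python
# Assumed figures for illustration only.
servers_before, servers_after = 1000, 300
it_power_per_server_kw = 0.3      # average draw per running server (assumed)
overhead_power_kw = 150.0         # cooling, UPS, lighting - assumed roughly constant

def pue_and_total(active_servers):
    it_power = active_servers * it_power_per_server_kw
    total_power = it_power + overhead_power_kw
    return total_power / it_power, total_power

pue_before, total_before = pue_and_total(servers_before)
pue_after, total_after = pue_and_total(servers_after)

print(f"before: PUE {pue_before:.2f} at {total_before:.0f} kW")
print(f"after:  PUE {pue_after:.2f} at {total_after:.0f} kW")
# Total consumption drops (450 kW -> 240 kW), yet the PUE worsens (1.50 -> 2.67),
# because the IT share in the denominator shrinks faster than the overhead.
```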
In order to increase the informative value of energy performance indicators, the inclusion of further performance indicators in DIN EN 50600 is currently being discussed. A combination of these values should provide a more comprehensive picture of the efficiency of a data center. Energy recovery, for example, offers great potential for increasing the energy efficiency of data centers. If, for example, the heat energy emitted is fed into a heating system, energy consumption can be reduced here. In this case, however, the waste heat reduces the energy requirement at points outside the system boundaries of the data center and is therefore not included in the PUE analysis. One indicator that is more suitable for assessing a data center with regard to its energy recovery is “Energy Reuse Effectiveness” (ERE).
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9250881671905518,
"language": "en",
"url": "https://www.gavstech.com/can-enterprises-gain-from-cognitive-automation/",
"token_count": 1047,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0849609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d237183a-178f-4947-806e-f74ba4df894a>"
}
|
What is cognitive automation (CA)?
“There is no reason and no way that a human mind can keep up with an artificial intelligence machine by 2035,” stated Gray Scott. Cognitive automation is a subcategory of artificial intelligence (AI) technologies that imitates human behavior. The combined efforts of robotic process automation (RPA) and cognitive technologies such as natural language processing, image processing, pattern recognition and speech recognition have eased the automation process, replacing humans. The best part of CA solutions is that they are pre-trained to automate certain business processes, so they don't need the intervention of data scientists or purpose-built models to operate. In fact, a cognitive system can make more connections in a system without supervision, using new structured and unstructured data.
Future of CA
CA is evolving rapidly, with increasing investments in cognitive applications and software platforms. Market research indicates that approximately $2.5 billion has been invested in cognitive-related IT and business services, and a 70% rise in such investments is expected by 2023. The focus areas where CA has gained momentum are:
- Quality checks and system recommendations
- Diagnosis and treatment recommendations
- Customer service automation
- Automated threat detection and prevention
- Fraud analysis and investigation
Difference between normal automation and CA
There is a basic difference between normal IT automation and CA technologies. Let's try to understand it with a use case where a customer, while filling in an e-form to open an account at a bank, leaves a few sections blank. Normal IT automation will detect this, flag it red and reject the form as incomplete; human intervention is then needed to fix the issue. CA, in a similar situation, will auto-correct the issue without any human intervention. This increases operational efficiency, reduces the time and effort of the process and improves customer satisfaction.
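As a rough sketch of that difference (the field names and the imputation logic below are invented for illustration and not taken from any particular CA product), conventional validation simply rejects the form, while a cognitive step first tries to fill the gaps from data it already holds:

```python
REQUIRED_FIELDS = ["name", "date_of_birth", "address", "id_number"]

def conventional_validation(form: dict) -> str:
    """Classic IT automation: flag incomplete forms and reject them."""
    missing = [f for f in REQUIRED_FIELDS if not form.get(f)]
    if missing:
        return "rejected: missing " + ", ".join(missing)
    return "accepted"

def cognitive_validation(form: dict, customer_profile: dict) -> str:
    """Cognitive step: auto-correct gaps from known customer data before deciding."""
    for field in REQUIRED_FIELDS:
        if not form.get(field) and customer_profile.get(field):
            form[field] = customer_profile[field]   # imputed without human intervention
    return conventional_validation(form)

form = {"name": "A. Customer", "date_of_birth": "", "address": "12 Main St", "id_number": "X123"}
profile = {"date_of_birth": "1985-04-02"}
print(conventional_validation(dict(form)))          # rejected: missing date_of_birth
print(cognitive_validation(dict(form), profile))    # accepted
```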
Enterprises’ need for CA
As rightly mentioned by McKinsey, 45% of human intervention in IT enterprises can be replaced by automation. Tasks with high volumes of data require more time to complete. CA can prove worthy in such situations and reshape processes in an efficient way. Businesses are becoming more complex with time, and enterprises face many challenges daily, such as ensuring customer satisfaction, guaranteeing compliance, staying competitive, increasing efficiency and improving decision making. CA helps to address those challenges in an all-encompassing manner. CA can improve efficiency to the extent of 30–60% in email management and quote processing. It ensures an overall improvement in operational scalability, compliance and quality of business. It reduces TAT and error rates, thus impacting enterprises positively.
Benefits of CA in general
A collaboration between RPA and CA has multiplied the scope of enterprises to operate successfully and reap benefits, to the extent that research reveals enterprises are able to achieve an ROI of up to 300% within a few months. The benefits enterprises can enjoy by adopting CA are:
- It improves quality by reducing downtime and improving smart insights.
- It improves work efficiency and enhances productivity with pattern identification and automation.
- Cognitive computing and autonomous learning can reduce operational cost.
- A faster processing speed improves business performance and boosts employee satisfaction and engagement, resulting in better employee retention.
- It increases business agility and innovation with provisioning of automation.
- As a part of CA, Natural Language Processor (NLP) is a tool used in cognitive computing. It has the capacity to communicate more effectively and resolve critical incidents. This increases customer satisfaction to a great extent.
Enterprises using CA for their benefit:
- A leading IT giant combined a cloud automation service with cognition to reduce server downtime by 50% over the last two years. It also reduced TAT through auto-resolution of more than 1,500 server tickets every month. Critical incidents fell by 89% within six months of the cognitive collaboration.
- An American technology giant introduced a virtual assistant as one of its cognitive tools. It could understand twenty-two languages and handle service requests without human intervention. It eased the process of examining insurance policies for clients, helped customers open bank accounts, and helped employees learn company policies and guidelines.
- A leading train service in UK used virtual assistant starting from refund process to handling their customer queries and complaints.
- A software company in USA uses cognitive computing technology to provide real-time investment recommendations.
- Cognitive computing technology used in media and entertainment industries can extract information related to user’s age, gender, company logo, certain personalities and locate profile and additional information using Media Asset Management Systems. This helps in answering queries, adding a hint of emotion and understanding while dealing with a customer.
Secondary research reveals that the Cognitive Robotic Process Automation (CRPA) market will witness a CAGR of 60.9% during 2017 – 2026. The impact CA has on enterprises is remarkable and it is an important step towards the cognitive journey. CA can continuously learn and initiate optimization in a managed, secured and reliable way to leverage operational data and fetch actionable insights. Hence, we can conclude that enterprises are best poised to gain considerably from cognitive automation.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9757640957832336,
"language": "en",
"url": "https://www.liveinsurancenews.com/new-yorkers-struggle-to-find-health-insurance-coverage/",
"token_count": 393,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.32421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f060be73-e645-4f04-88bd-9d494de67ffc>"
}
|
Federal law has helped make insurance more affordable, but not for everyone
Many of those living in New York are still unable to afford health insurance coverage, despite the provisions of the Affordable Care Act. The federal law provides many people with subsidies that can offset the cost of coverage, but even with subsidies, New Yorkers are finding it difficult to afford the policies they need. A recent survey from the Robert Wood Johnson Foundation found that cost is the greatest barrier preventing people from purchasing health insurance coverage.
Majority of consumers cannot afford health insurance coverage
The survey found that some 79% of those who shopped for insurance coverage simply could not afford it. Of these, 58% noted that they had $100 or less left after paying their bills, which means that they cannot justify the high cost of health insurance coverage. Wages have not grown at the same pace as insurance premiums, which has limited the options that consumers have when it comes to purchasing policies they are interested in.
Medicaid expansion makes coverage more accessible
While many people are struggling to afford health insurance, others have found great benefit from the Affordable Care Act. According to the survey, 74% of the 900,000 New York City residents that have used the state’s exchange to find policies receive Medicaid coverage. This is due to the state’s work to expand its Medicaid program, which has made health insurance coverage more accessible for a wide range of consumers.
Premiums continue to grow as medical costs rise
Finding ways to make insurance coverage less expensive is a difficult problem to solve. Subsidies have helped make coverage more accessible, but the growing cost of medical care is placing insurers under more financial pressure. In order to recover from financial losses, insurers have had to raise premiums, which is becoming burdensome for consumers. Those without health insurance scarcely have the funds needed to cover the cost of medical care, as well, and this continues to be a significant problem for many people.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8917369246482849,
"language": "en",
"url": "https://xplaind.com/554580/sales",
"token_count": 276,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0654296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:87a4d005-9034-46eb-82ae-37f66710788f>"
}
|
The sales budget is the first and basic component of the master budget; it shows the expected number of sales units for a period and the expected price per unit. It also shows total sales, which are simply the product of expected sales units and expected price per unit.
The sales budget influences many of the other components of the master budget, either directly or indirectly, because the total sales figure it provides is used as a base figure in other component budgets, for example the schedule of receipts from customers, the production budget and the pro forma income statement.
Due to the fact that many components of master budget rely on sales budget, the estimated sales volume and price must be forecasted with sufficient care and only reliable forecast techniques should be employed. Otherwise the master budget will be rendered ineffective for planning and control.
Format and Example
Where the price per unit is expected to remain constant during the period for all units in sales, the sales budget format will be simple as shown below.
| For the Year Ending December 30, 2010 | | | | |
|---|---|---|---|---|
| × Price per Unit | $91 | $92 | $97 | $112 |
However if a business sells more than one product having different prices or the price per unit is expected to change during the period, its sales budget will be detailed.
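The calculation behind such a detailed budget is straightforward; the sketch below uses illustrative quarterly figures for two hypothetical products:

```python
# Expected unit sales and prices per quarter (illustrative figures).
budget = {
    "Product A": {"units": [1000, 1200, 1500, 2000], "price": [91, 92, 97, 112]},
    "Product B": {"units": [400, 450, 500, 550],     "price": [150, 150, 155, 155]},
}

quarterly_totals = [0.0, 0.0, 0.0, 0.0]
for product, data in budget.items():
    # Budgeted sales per quarter = expected units x expected price per unit.
    line = [units * price for units, price in zip(data["units"], data["price"])]
    quarterly_totals = [total + sales for total, sales in zip(quarterly_totals, line)]
    print(product, line, "year:", sum(line))

print("Total budgeted sales per quarter:", quarterly_totals)
print("Total budgeted sales for the year:", sum(quarterly_totals))
```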
by Irfanullah Jan, ACCA and last modified on
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8987353444099426,
"language": "en",
"url": "http://www.aolevel.org.cn/article/4562",
"token_count": 1842,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:654c3e5b-7933-4f2c-a22d-af63d1806fe7>"
}
|
Actual GDP Growth:Actual GDP growth is the growth in real GDP that currently occurs.
Aggregate Demand:Aggregate demand (AD) is the total amount of expenditure on goods and services in an economy.
Aggregate Supply:Aggregate supply is the total amount of supply of goods and services in an economy.
Balance of Payments:The balance of payments (BoP) is a record of all external financial transactions between one economy and the rest of the world.
Boom:A boom occurs if there is a major and rapid increase in real GDP.
Business Cycles:Business cycles are the pattern of booms and recessions in an economy over a period of time. Business cycles are the fluctuation of real GDP around the long-term trend growth rate.
Circular Flow of Income:A model of the economy that shows how households sell their labour to firms for an income and then use this income to buy goods and services produced by firms.
Claimant Count:A measure of unemployment, anyone claiming unemployment benefit is defined as unemployed by the claimant count.
Consumer Price Index (CPI):The CPI is a measure of inflation. The CPI is a price index of a weighted basket of goods and services that the average household buys.
Consumption:Consumption is total consumer expenditure on durables, non-durables and services.
Contractionary Fiscal Policy:A contractionary fiscal policy means government expenditure falls and/or taxation rises, so AD falls. Multiplier effects make AD fall further. AD shifts left so inflation falls and real GDP falls.
Cost-Push Inflation:Cost-push inflation occurs when LRAS shifts left because resource prices rise or wages rise, firms’ costs rise and their prices rise.
Credit Crunch:A situation where banks and other financial institutions decrease their lending or stop lending altogether.
Crowding In:An increase in government spending causes an increase in private investment (maybe the government invests in the infrastructure which encourages private firms to invest).
Crowding Out:An increase in government spending causes a decrease in private investment (maybe the government uses resources that private firms would have used).
Current Account:A record of an economy’s international trade in goods, services, investment income and transfers.
Deflation:Deflation is a fall in the average price level over a given time period.
Demand-Deficient Unemployment:AD is insufficient for all workers to be employed.
Demand-Pull Inflation:Demand-pull inflation occurs when AD rises, spare capacity falls, resources begin to run out so firms’ costs rise and prices rise.
Direct Tax:Taxes on consumers’ income (income tax) or firms’ profits (corporation tax).
Economic Growth:A percentage change in real GDP over a given time period.
Employment:Employment is the amount of workers with a job.
Exports:Exports are domestic goods and services sold to foreign agents.
Export-Led Growth:Export-led growth means an economy’s AD and real GDP is rising mainly because its exports are rising rapidly, this could be because the government are promoting exports.
Exchange Rate:An exchange rate (XR) is the price of one currency in terms of another.
Expansionary Fiscal Policy:An expansionary fiscal policy means government expenditure rises and/or taxation falls, so AD rises. Multiplier effects make AD rise further. AD shifts right so inflation rises and real GDP rises.
Fiscal Policy:Fiscal policy is the manipulation of government expenditure (G) and taxation (T) by the government to influence macroeconomic variables.
Frictional Unemployment:Frictional unemployment occurs when workers are moving between jobs. Workers are unemployed but searching for a new job.
Full Employment:An economy is at full employment if all resources are fully employed; no more can be produced.
Government Expenditure:Government expenditure is total expenditure by the government on goods and services.
Gross Domestic Product:Gross Domestic Product (GDP) measures the monetary value of output produced by an economy during a given time period.
Human Development Index (HDI):The HDI is a multidimensional measure of the economic development of an economy. The HDI measures a mix of income, health and education.
Hyperinflation:A period of rapid inflation.
ILO Unemployment:A measure of unemployment. The ONS carry out the Labour Force Survey. A survey of 60,000 working age people are interviewed four times per year by phone. A person is defined as unemployed if they have been looking for work in the last four weeks and if they are ready to work within the next two weeks.
Indirect Tax:Taxes on expenditure (Ad valorem or specific taxes).
Inflation:Inflation is a rise in the average price level over a given time period.
Injection:An injection into the circular flow is money coming into the economy (investment, government spending and exports).
Interest Elasticity of Investment:The responsiveness of investment to a change in interest rates.
Interest Rate:The interest rate is the additional money a saver receives for saving and the additional money a borrower pays for taking out a loan.
Investment:Investment is total investment expenditure by firms on buildings, machinery and the change in inventories.
Leakage:A leakage from the circular flow is money leaving the economy (saving, taxes and imports).
Loose Monetary Policy:A loose monetary policy causes interest rates to fall and AD to rise. Multiplier effects make AD rise further. Inflation rises and real GDP rises.
Long-Term Trend Growth Rate:The long-term trend growth rate is potential real GDP growth, the GDP growth that will occur if all resources are fully and efficiently employed. This increases if technology and/or knowledge improve.
Macroeconomic Objectives:The government’s main macroeconomic objectives are 1) High economic growth, 2) Low unemployment, 3) Low and stable inflation and 4) A current account surplus or low deficit.
Marginal Propensity to Consume:Measures how much of each additional pound of income is used for consumption. If the MPC is 0.9: as income rises by £1, consumption rises by £0.90.
Marginal Propensity to Save:Measures how much of each additional pound of income is saved. If the MPS is 0.1: as income rises by £1, savings rise by £0.10.
Menu Costs:As prices change, firms must change their prices and reprint menus, catalogues, websites and shop signs; this is costly for firms.
Monetary Policy:Monetary policy is the manipulation of monetary variables (interest rate and money supply) by the MPC to influence AD and inflation.
Multiplier:Any AD fluctuations are amplified by the multiplier through knock-on AD effects. An initial change in AD has a larger final impact on real GDP due to the multiplier.
Negative Output Gap:Occurs when real GDP is below the trend growth rate.
Net Exports:Net exports are exports minus imports (X-M).
Output Gap:The difference between actual or real GDP and the trend growth rate.
Positive Output Gap:Occurs when real GDP is above the trend growth rate.
Productive Capacity:Productive capacity refers to how much output an economy can produce.
Productivity:Productivity is output per worker.
Public Sector Net Cash Requirement:Public sector net cash requirement (PSNCR) is government borrowing over a period of time, the difference between government expenditure and tax revenue.
Quantitative Easing:Quantitative easing is the control of the money supply by the MPC to influence AD and inflation.
Real GDP:Real GDP is GDP adjusted for inflation.
Real Wage Unemployment:Real wage unemployment occurs when real wages are above the market-clearing level, there is excess labour supply, more people are willing and able to work at the going market wage than firms will employ.
Recession:A recession occurs if real GDP falls for two consecutive quarters.
Seasonal Unemployment:Seasonal unemployment occurs when workers are unemployed during the off-season.
Search Costs:As prices change, consumers incur search costs because they must keep up to date with all the new prices that firms charge.
Spare Capacity:An economy has spare capacity if some resources are unemployed. More resources can be employed and more can be produced.
Stagflation:A period of rising inflation and rising unemployment.
Structural Unemployment:Structural unemployment exists when there is a mismatch between labour’s skills and the skills required by employers.
Supply-Side Policies:Supply-side policies are designed to increase productivity and shift LRAS right.
Sustainable Growth:Economic growth is sustainable if the needs of future generations are not compromised by current consumption/production.
Unemployment:Unemployment is the amount of people willing and able to work at the market wage but without a job.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9324689507484436,
"language": "en",
"url": "https://blog.projectpiglet.com/2018/01/limits-of-granger-causality/",
"token_count": 2096,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.00010156631469726562,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d1fbb49a-ed47-466a-b728-82e6133b66c0>"
}
|
In this article: We discuss Granger Causality works and some of the common issues, drawbacks, and potential ways to improve the method(s). Building primarily off our previous article Pitfalls of Backtesting and insights gained from building ProjectPiglet.com.
One of the most common forms of analysis on the stock market is Granger Causality, which is a method for indicating that one signal possibly causes another signal. This type of causality is often called “predictive causality”, as it does not determine causality for certain – it simply determines correlations at various time intervals.
Why Granger Causality? If you search “causality in the stock market“, you’ll be greeted with a list of links all mentioning “granger causality”:
In other words, it’s popular and Clive Granger won a Nobel on the matter. That being said, there are quite a few limitations. In this article, we’ll be covering a brief example of Granger Causality, as well as some of the common pitfalls and how brittle it can be.
What is Granger Causality?
Granger Causality (from Wikipedia) is defined as:
A time series X is said to Granger-cause Y if it can be shown, usually through a series of t-tests and F-tests on lagged values of X (and with lagged values of Y also included), that those X values provide statistically significant information about future values of Y.
In other words, Granger Causality is the analysis of trying to find out if one signal impacts another signal (such that it’s statistically significant). Pretty straightforward, and is even clearer with an image:
In a sense, it’s just one spike in a graph causing another spike at a later time. The real challenge with this is that this needs to be consistent. It has to repeatedly do this over the source of the entire dataset. This brings us to the next part: one of the fragile aspects of this method is that it often doesn’t account for seasonality.
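In practice, the standard linear test takes only a few lines with the statsmodels package. The data below is synthetic so the example is self-contained; with real market data you would pass in your two aligned series instead:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic example: y is driven by x lagged two steps, plus noise.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.roll(x, 2) * 0.8 + rng.normal(scale=0.5, size=500)

# statsmodels tests whether the SECOND column Granger-causes the FIRST.
data = pd.DataFrame({"y": y, "x": x}).iloc[2:]   # drop the wrap-around rows
results = grangercausalitytests(data[["y", "x"]], maxlag=4, verbose=False)

for lag, res in results.items():
    p_value = res[0]["ssr_ftest"][1]
    print(f"lag {lag}: p-value {p_value:.4f}")
# A small p-value (< 0.05) at some lag indicates probable (Granger) causality.
```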
Granger Causality and Seasonality
One common aspect of markets is that they are seasonal. Commodities (as it relates to the futures market) related to food are extremely impacted by seasonality. For instance, if there is a drought across Illinois and Indiana during the summer (killing the corn crop), then corn prices from Iowa would likely rise (i.e. the corn from Iowa would be worth more).
In the example, there may be decades where some pattern in the market holds and Granger Causality is relevant. For instance, during summer heat waves in Illinois, corn prices in Iowa increase. On the other hand, with the advent of irrigation methods that deliver water underground, heat waves may no longer impact crops. Thus, the causality of heat waves in Illinois may no longer impact the corn prices in Iowa.
If we then attempt to search for Granger Causality on the entire time range (a) pre-irrigation and (b) post irrigation, we will find there is no causality!
However, during the pre-irrigation time range we will find probable causality, and for post-irrigation time range we likely won’t find probable causality. Any time you combine two timeframes like this, the default is no Granger Causality (unless it’s a very small portion of the dataset). Bringing us to the conclusion, that:
Granger Causality is very sensitive to timeframe(s)
Just a few data points in either direction can break the analysis. This makes sense, as it is a way to evaluate if two time series are related. However, it does lead one to note how brittle this method can be.
Granger Causality and Sparse Datasets
Yet another potential issue with Granger Causality is sparse datasets. Let's say we have dataset X and dataset Y: if dataset X has 200 data points and dataset Y has 150 data points, how do you merge them? Assuming they are in (datetime, value) format, if we do an inner join on “datetime”, we get something that looks like the following:
Then we will have 150 data points in a combined X and Y dataset, i.e.: (datetime, x, y). Unfortunately, this also means that if the data is continuous (as most time series data is), then we have completely broken our Granger Causality analysis. In other words, we are just skipping over days, which would break any causality analysis.
In contrast, we could do an outer join:
We will have 200 data points in a combined X and Y dataset. Again, there’s an issue – it means we probably have empty values (Null, NULL, None, NaN, etc. ) where the Y data set didn’t have data (recall Y only had 150 data points). The dataset would then have various entries that look as such: (datetime, x, NULL).
To fix the empty values, we can attempt to use a forward or back fill technique. A forward/back fill technique is where you fill all the empty values with the previous or following location(s) real value.
This code could look like the following:
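A minimal pandas sketch; the file and column names are placeholders:

```python
import pandas as pd

# Outer-join the two series on their datetime index (placeholder file/column names).
x = pd.read_csv("dataset_x.csv", parse_dates=["datetime"], index_col="datetime")
y = pd.read_csv("dataset_y.csv", parse_dates=["datetime"], index_col="datetime")
merged = x.join(y, how="outer", lsuffix="_x", rsuffix="_y")

# Forward fill the gaps with the previous real value,
# then back fill anything still empty at the start of the series.
filled = merged.ffill().bfill()
```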
From the sound of it, this method sounds promising! You’ll end up with something that’s continuous with all real values. You’ll actually get a graph like this:
As you can see, there are large sections of time where the data is flat. Recall the seasonality issue with Granger Causality? This method of outer joins + forward / back filling will definitely cause issues, and lead to minimal to no meaningful correlations.
Sparse datasets make it very difficult (or impossible) to identify probable causality.
Granger Causality and Resampling
There is another option for us, and that is “resampling”. Where instead of just filling the empty values (Nulls / NaNs) with the previous or following real values, we actually resample the whole series. Resampling is a technique where we fill the holes in the data with what amounts to a guess of what we think the data could be.
Although there are quite a few techniques, in this example we’ll use the python package Scipy, with the Signal module.
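A sketch of that resampling step, using placeholder values for dataset Y and an assumed target length of 200 points:

```python
import numpy as np
from scipy import signal

# 150 known data points of dataset Y (placeholder values, not real prices).
rng = np.random.default_rng(1)
y_values = np.linspace(100.0, 160.0, 150) + rng.normal(scale=2.0, size=150)

# FFT-based resampling up to the 200 points of dataset X.
y_resampled = signal.resample(y_values, 200)
print(len(y_resampled))  # 200
```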
At first glance, this appears to have solved some of the issues:
However, in reality it does not work; especially if the dataset starts or ends with NaN’s (at least when using the Scipy package):
If you notice, prior to roughly the 110th data point, the values just oscillate up and down. The resampling method Scipy is using does not appear to be functional / practical with so few data points. This is because I selected a data set for Bitcoin Cash (BCH) and the date range is prior to Bitcoin Cash (BCH) becoming a currency (i.e. there is no price information).
In a sense, this indicates it’s not possible (at least given the data provided) to attempt Granger Causality on the given date ranges. Small gaps in time can have dramatic impacts on whether or not “probable causality” exists.
When determining Granger Causality it is extremely important to have two complete overlapping datasets.
Without two complete datasets, it’s impossible to identify whether or not there are correlations over various time ranges.
Resampling can cause artifacts that impact the Granger Causality method(s).
In fact, the most recent example was actually positive for Granger Causality (p-value < 0.05)… That is the worst scenario, as it is a false positive. In the example, the false positive occurs because when both datasets are resampled they had a matching oscillation… it wouldn’t have even been noticed if the raw data sets weren’t being reviewed.
This is probably the largest issue with Granger Causality: every dataset needs to be reviewed to see if it makes sense. Sometimes what at first appears to make sense turns out to rest on underlying data that has been altered in some way (such as by resampling).
Granger Causality and Non-Linear Regression
Changing gears a bit (before we get to a real-world ProjectPiglet.com example), it’s important to note that most Granger Causality uses linear regression. In other words, the method is searching for linear correlations between datasets:
However, in many cases – especially in the case of markets – correlations are highly likely to be non-linear. This is because markets are anti-inductive. In other words, every pattern discovered in a market creates a new pattern as people exploit that inefficiency. This is called the Efficient Market Hypothesis.
Ultimately, this means most implementations of Granger Causality are overly simplistic, as most correlations are certainly non-linear in nature. There are a large number of non-linear regression models; below is an example of Gaussian Process Regression:
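For instance, scikit-learn's GaussianProcessRegressor fits such non-linear relationships in a few lines. This is only a sketch on toy data, not a trading model:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy non-linear relationship between a lagged signal and a target series.
rng = np.random.default_rng(42)
X = np.sort(rng.uniform(-3, 3, size=(80, 1)), axis=0)
y = np.sin(X).ravel() + 0.3 * X.ravel() ** 2 + rng.normal(scale=0.1, size=80)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = np.linspace(-3, 3, 10).reshape(-1, 1)
mean, std = gpr.predict(X_new, return_std=True)  # prediction with uncertainty estimate
```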
Similar non-linear regression techniques do appear to improve Granger Causality. This is probably because most linear correlations are already priced into the market, so the non-linear correlations are where the potential profits lie. It remains to be seen how effective this can be, as most research in this area is kept private (increasing profits of trading firms). What we can say is that non-linear methods do improve predictions on ProjectPiglet.com. They also require a larger dataset than their linear regression counterparts.
Overall, Granger Causality has quite a few potential pitfalls. It is useful for indicating a potential correlation, but is only a probable correlation. It can help to identify market inefficiencies and open the opportunity to make money, but will probably require more finesse than simple linear regression.
All that being said, hope you’ve found some of the insights useful!
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9239750504493713,
"language": "en",
"url": "https://blog.sysfore.com/hadoop-and-big-data-analytics/",
"token_count": 1441,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.03564453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1a402a1b-3b39-4fec-a73c-8873963d3c5f>"
}
|
Gartner defines Big Data as “high volume, velocity and variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making”.
Big data is data that, by virtue of its velocity, volume, or variety (the three Vs), cannot be easily stored or analyzed with traditional methods.
The term covers each and every piece of data your organization has stored till now. It includes all the data stored both on-premises or in the cloud. It could be papers, digital, structured and non-structured data within your company.
There is a deluge of structured and unstructured data, that is generated every second. This is known as Big Data, which can be analyzed to help customers turn that data into insights. AWS provides a broad platform of managed services, infrastructure and tools to tackle your next Big Data project. It enables you to collect, store, process, analyze and visualize Big Data on the cloud. It provides all the hardware, software, infrastructure to maintain and scale, so that you focus on building your application.
Some of the common Big Data Customer scenarios include Web & E-Tailing, Telecommunications, Government, Healthcare & Life Science, Bank & Financial Services and Retail, where Big Data is continuously generated.
How Big Data is consumed by Businesses
Businesses can gain a lot of insight into how their product is being consumed, by analyzing the huge Big Data generated. Big Data analytics is an area of rapidly growing diversity. Analyzing large data sets requires significant compute capacity that can vary in size based on the amount of input data and the analysis required. This characteristic of big data workloads is ideally suited to the pay-as-you-go cloud computing model, where applications can easily scale up and down based on demand.
Using Big Data analytics will give you a clear picture about how your data is being generated and consumed by the customers. It can be used for predictive marketing and plans to increase your business. It provides:
- Early key indicators, gives insights into business trends resulting in business fortunes.
- Analytics results in business advantage.
- Get more precise analysis and results with more data.
Limitations of using the traditional analytics methods:
The advancements in technologies has resulted in huge volume of data being generated every second. Storing, processing, analyzing and getting quality results is time consuming, costly and ineffective in the current scenario.
- Only a limited amount of high fidelity raw data is available for analysis.
- Storage is limited by the high volume of raw data that is continuously generated.
- Moving data for computation doesn’t scale accordingly.
- Data is archived regularly to conserve space. This limits the amount of data that is available for the analytical tools.
- The perception that traditional data warehousing processes are too slow and limited in scalability.
- The ability to converge data from multiple data sources, both structured and unstructured.
- The realization that time to information is critical to extract value from data sources that include mobile devices, RFID, the web and a growing list of automated sensory technologies.
As requirements change, you can easily resize your environment (horizontally or vertically) on AWS to meet your needs.
In addition, there are at least four major developmental segments that underline the diversity to be found within Big Data analytics. These segments are MapReduce, scalable database, real-time stream processing and Big Data appliance.
Using Hadoop for Big Data Analytics
There is a big difference between Big Data and Hadoop. The former is an asset, often a complex and ambiguous one, while the latter is a program that accomplishes a set of goals and objectives for dealing with that asset.
Hadoop is an open-source software framework for storing data and running applications on clusters of commodity hardware. It provides massive storage for any kind of data, enormous processing power and the ability to handle virtually limitless concurrent tasks or jobs.
Hadoop is a framework which allows processing of large data sets. It completes tasks in minutes that would take hours using a traditional RDBMS.
Hadoop has 2 main components:
- HDFS – Hadoop Distributed File System (for Storage)
- MapReduce (for Processing)
Hadoop Distributed File System works
The Hadoop Distributed File System (HDFS) is the primary storage system used by Hadoop applications. It consists of HDFS clusters, which each contain one or more data nodes. Incoming data is split into segments and distributed across data nodes to support parallel processing. Each segment is then replicated on multiple data nodes to enable processing to continue in the event of a node failure.
While HDFS protects against some types of failure, it is not entirely fault tolerant. A single NameNode located on a single server is required. If this server fails, the entire file system shuts down. A secondary NameNode periodically backs up the primary. The backup data is used to restart the primary but cannot be used to maintain operation.
HDFS is typically used in a Hadoop installation, yet other distributed file systems are also supported. The Amazon S3 file system can be used but does not maintain information on the location of data segments, reducing the ability of Hadoop to survive server or rack failures. Other file systems, such as the open source CloudStore and the MapR file system, can be used to maintain location information.
Distributed processing is handled by MapReduce
The idea behind MapReduce is that Hadoop can first map a large data set, and then perform a reduction on that content for specific results. A reduce function can be thought of as a kind of filter for raw data. The HDFS system then acts to distribute data across a network or migrate it as necessary.
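The canonical illustration is a word count split into map and reduce steps. The sketch below follows the Hadoop Streaming convention (read lines on stdin, emit tab-separated key/value pairs) but runs locally, so it only indicates how a real job would be structured:

```python
import sys
from itertools import groupby

def mapper(lines):
    """Map step: emit (word, 1) for every word in the input."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce step: sum the counts for each word (input must be sorted by key)."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Local stand-in for what Hadoop distributes across many nodes,
    # e.g. run as: cat input.txt | python wordcount.py
    for word, total in reducer(mapper(sys.stdin)):
        print(f"{word}\t{total}")
```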
The MapReduce feature consists of one JobTracker and multiple TaskTrackers. Client applications submit jobs to the JobTracker, which assigns each job to a TaskTracker node. When HDFS or another location-aware file system is in use, JobTracker takes advantage of knowing the location of each data segment. It attempts to assign processing to the same node on which the required data has been placed.
Apache Hadoop users typically build their own parallelized computing clusters from commodity servers, each with dedicated storage in the form of a small disk array or solid-state drive (SSD) for performance. These are commonly referred to as “shared-nothing” architectures.
Big Data is getting Big and more important
As more and more data are collected, the analysis of these data requires scalable, flexible, and high-performing tools to provide analysis and insight in a timely fashion. Big Data analytics is a growing field, with the need to parse large data sets from multiple sources, and to produce information in real-time or near-real-time gaining importance. IT organizations are exploring various analytics technologies to parse web-based data sources and extract value from the social networking boom.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9490824341773987,
"language": "en",
"url": "https://blog.zolve.com/understanding-basic-banking-and-credit-terms-in-the-us/",
"token_count": 1912,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.09375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:7bc7321a-4ccf-4e1f-af00-2ca2e018fa90>"
}
|
Banking terms in the US can seem confusing, especially for a new immigrant. Here’s a quick guide to help you stay informed.
You’ve moved to a new country. You’re dealing with a new culture, new language, new people, and new rules - and it is totally normal if you feel completely lost. But you headed there with a lot of expectations and a bag of dreams, so you would be much better prepared if you understood some important basics, especially when it comes to the country's financial language. It is very important to be aware of the system, and the financial and banking terms, especially when you’ve landed in a new place all alone. Don’t worry, it’s not difficult, and you’ll soon get the hang of it. In this post, we introduce some of the frequently used banking and credit terms in the US.
What type of banking system does the US have? The US banking system is one of the largest in the world and is often considered to be quite complicated, especially by immigrants. The US follows a dual banking system, where national banks are regulated at the federal level and state banks are regulated by their respective state laws. We will do a detailed dive into the American Banking System in a separate post. For now, let’s make a couple of quick lists of banking and credit terminology.
Know Your Basic Banking Terms
A type of deposit account into which you can deposit and withdraw funds at your will. Generally, there would not be any limit on these transactions, and there is nominal or no interest earned on the balance in your checking account. Checking accounts are mostly used for paying bills and receiving a salary, and for everyday transactions - which differentiates it from a savings account.
This type of deposit account is not intended for daily use, and is tailored towards saving. The money in this account earns a higher interest rate than that in a checking account, though these rates can vary according to the bank. It’s usually a good idea to let larger sums of money “sit” in a savings account and accumulate interest over time. One may deposit money into this account at any time, but certain types of withdrawals are limited to six per month.
Certificate of Deposit (CD)
This is a type of account that allows you to make deposits that are payable at the end of a specified time. It generally pays higher interest rates than a checking or savings account. In case of an early withdrawal, it will attract a penalty.
This nine-digit code acts as an identity of your bank based on its geographic location. It is at the bottom left side of your check. When you make an online payment and are asked for a ‘checking account routing number’ or RTN (Routing Transit Number) or an ABA routing number, this nine-digit code is what you need to use.
An overdraft fee is incurred when your checking account does not have enough funds to cover a payment that is requested. The financial institution will pay what your account lacks, after which your account may have a negative balance. Overdraft fees can be quite hefty (an average of $35 per overdraft transaction).
Federal Deposit Insurance Corporation (FDIC)
The FDIC is an independent agency backed by the United States Government. When an FDIC-insured bank or savings association fails, FDIC insures the deposit account holders for their losses up to $250,000 per depositor, per insured bank, for each account ownership.
You may remember this from school, but to refresh your memory: this is the interest that applies to the principal amount and also on the newly earned interest.
Annual Percentage Rate (APR)
This is the rate of interest that the bank pays on your deposit account during a year. It does not include compounded interest.
Annual Percentage Yield (APY)
It is the amount of interest the bank pays on your deposits in the account during a year. It includes both the interest paid on the amount deposited in the account and compounded interest of the year.
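To see the difference in numbers, here is a small calculation; the 2% rate and monthly compounding are assumptions chosen for the example:

```python
def apy(apr: float, periods_per_year: int) -> float:
    """Annual Percentage Yield from a nominal APR with compounding."""
    return (1 + apr / periods_per_year) ** periods_per_year - 1

apr = 0.02                    # 2% nominal annual rate (assumed)
monthly_apy = apy(apr, 12)    # compounded monthly

balance = 10_000
print(f"APY: {monthly_apy:.4%}")                                  # ~2.0184%
print(f"Interest after one year: ${balance * monthly_apy:.2f}")   # ~$201.84
```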
A check that gets returned to the depositor if there is an insufficient amount in the account. The depositor must pay a returned item fee in this case.
Now you know some of the essential banking terms you may come across in your daily life. But this isn’t all there is to it. There’s a lot to learn about credit.
If you are a migrant, from whichever country, you may feel that the emphasis the US gives on credit is high. When you arrive in the US, you will need a phone, a house, a credit card, a loan - the list goes on and on. For all of this, you need to show your credit history. But wait, what is credit history? Let’s go over some seemingly complex but actually simple credit terms used in the US.
Basic Credit Terms in the US
A FICO score is a three-digit number ranging between 300 and 850 that assesses a borrower's credit risk. If the FICO score is high, it indicates that the credit is more likely to be repaid by the borrower. If it is low, the reverse is true, and a higher rate of interest may be charged on any credit or loan to compensate for the added risk. The FICO score is calculated from components such as your payment history, the amount of debt you hold, the length of your credit history, new credit, your credit mix, etc, all of which are combined in specific ratios to constitute your FICO Score. (In case you’re wondering, FICO stands for Fair Isaac Corporation. It’s a major analytics software company that changed its name to FICO in 2009. Its consumer credit scores are the most widely used; financial institutions refer to the FICO score to decide whether to lend money or issue credit.)
It's the record of the use of debt by any individual. In the US, it is maintained by three major credit bureaus: Experian, TransUnion, and Equifax. They track every individual's financial history and compile it to make a credit report. This helps lenders to define the terms of providing credit.
Secured Credit Card
Secured credit cards are designed for people with no or poor credit history. These credit cards are “secured” in the form of a refundable cash deposit. The issuer of the credit card holds the deposit, and the cardholder can use the card just like any other credit card in the market.
Secured credit cards generally carry high fees and complicated terms, taking advantage of the borrower's situation. For example, you can get a $1,000 secured credit card by depositing $1,000. But they are useful for establishing or rebuilding your credit history.
Unsecured Credit Card
These are the most common types of credit cards and do not require any collateral. Your credit history, financial strength, and earning potential will generally help you in getting these cards.
This is the total cost of borrowing, and includes interest and fees. These are applicable only when you carry over a balance from one billing cycle to the next. Your provider might have a minimum finance charge, which is levied if your finance charges are below this minimum amount. If, for example, the minimum finance charge is $0.40, and your finance charge is $0.25, you will still have to pay $0.40.
Credit line is also known as the credit limit. For your credit card, this is the maximum amount that you can use. Your credit score may have a substantial impact on determining this amount and it varies between providers.
This is the lowest amount to be paid on your monthly credit card statement. The industry standard of minimum payment includes interests, fees, and 1% of the total principal amount, but the details depend on your provider.
This is the duration when you are allowed to pay your credit card bill without any interest. It has to be at least 21 days.
When you try to make a transaction that will take your borrowing higher than the approved credit limit, this fee is charged. A customer can choose to opt in or opt out of this particular fee. Also, this fee cannot be higher than the amount by which you exceed your credit limit.
A credit card chargeback happens when a transaction is reversed, which transfers credit from a merchant to a credit card customer. A return transaction goes through the customer's bank, the credit card payment exchange, and the merchant's bank, thus involving multiple parties. When customers challenge a transaction made from a merchant, they may initiate a chargeback.
A Step Closer to Understanding the American Banking System
Feeling more in control of the situation now? You have taken a big step toward understanding the American banking system after reading this post, and are on your way to understanding the A-Z of banking in the US. Keep educating yourself on the specifics of each term, and always make informed decisions when it comes to your money. Zolve can help you open a bank account in the US, so contact us when you’re ready for the move.
Follow this space for more posts like this. We hope they will help make your life more comfortable in the US.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9547470211982727,
"language": "en",
"url": "https://cleantechnica.com/2015/04/06/brazil-announces-huge-350-mw-floating-solar-power-plant/",
"token_count": 429,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.263671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9259a321-cdb9-4281-a3cd-184c8c7b0402>"
}
|
Brazil’s energy ministry has ranked the country’s various sources of energy as per their abundance, cheapness, renewability, and availability of the necessary technology. Among the available options, hydropower comes top, followed by wind power and biomass (mostly bagasse).
However the country has been reeling under its worst drought in 80 years. The Cantareira reservoir system, which serves more than nine million people in the state, is only 5% full. At the Alto Tietê reservoir network, which supplies three million people in greater Sao Paulo (South America’s largest city), water levels are below 15%.
A number of cities have taken to water rationing. With reservoir levels falling too low to generate electricity, an energy crisis could be next in line given the country's dependence on hydropower for up to 80% of its energy.
For sometime now, Brazil has been warming up to solar energy. Last year, Brazil’s National Electric Energy Agency (ANEEL), concluded its first exclusive solar power auction, providing 20-year PPAs to companies that will invest over $1.66 billion in 1,048 MW of solar power spread over 31 solar parks. Power production is expected to start by 2017. The country has now decided to further push solar energy.
According to reports, Brazil’s energy minister Eduardo Braga recently announced his government’s intentions to begin a series of pilot tests of floating solar power plants on hydroelectric dam reservoirs within a period of four months.
A 350 MW pilot project is being planned at the Balbina hydroelectric plant in the Amazon. The electricity thus generated is expected to cost between approximately $69 and $77 per MWh.
Ironically, the host for the project, the 250 MW Balbina hydroelectric plant, has long been a controversial project. In addition to the loss of habitat that occurred with its construction, it is claimed that methane released from the dam reservoir spread over 2,360 square kilometres, causing the facility to emit more greenhouse gases than most coal plants.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9369034767150879,
"language": "en",
"url": "https://www.imarcgroup.com/global-h1n1-vaccines-market",
"token_count": 671,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.15625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:29de5060-b753-4f43-aa80-e6a039ac2974>"
}
|
According to the latest report by IMARC Group, titled “H1N1 Vaccines Market: Global Industry Trends, Share, Size, Growth, Opportunity and Forecast 2019-2024”, the global H1N1 vaccines market is currently exhibiting strong growth. H1N1 influenza is an extremely infectious respiratory disease caused by influenza viruses commonly found in pigs. It is transmitted through the saliva of the infected person that gets expelled into the air while coughing or sneezing. The symptoms may include high fever, persistent cough, reduced appetite, nasal secretions, body ache, red-watery eyes and headaches. Over the past few years, the flu has resulted in several deaths across the globe. As a result, H1N1 vaccination is being widely adopted as a preventive measure that aids the body to fight against the disease. The vaccine contains a small dose of the virus that is injected in the body and builds immunity against the same.
The increasing number of H1N1 flu cases and deteriorating immunity of the majority of the population have bolstered the sales of H1N1 vaccines worldwide. This can be accredited to sedentary lifestyles, hectic schedules and decreasing intake of healthy food items, especially among the working population. Moreover, significant growth in the geriatric population also acts as a major growth-inducing factor. Since the geriatric population is more likely to develop respiratory tract diseases and immunodeficiency disorders, the risk of acquiring H1N1 flu infection is higher for this population group. Apart from this, initiatives taken by several national and international organizations worldwide are also positively impacting the growth of the market. For instance, the World Health Organization (WHO) is constantly investing in the development of competent vaccines and their effective distribution.
- On the basis of vaccine type, the market has been divided into intramuscular, intranasal and intradermal. Amongst these, intramuscular represents the most preferred vaccine type.
- On the basis of the market type, the market has been bifurcated into public and private segments. At present, the public segment accounts for the majority of the total market share.
- Based on the key brand, the market has been categorized into Agripal, Fiuarix, Influgen, Influvac, Nasovac, Vaxigrip and others.
- On the geographical front, North America holds the leading position in the market. Other major regions include Europe, Asia Pacific, Middle East and Africa, and Latin America.
- The competitive landscape of the market has also been studied with the detailed profiles of the key players operating in the market.
IMARC Group is a leading market research company that offers management strategy and market research worldwide. We partner with clients in all sectors and regions to identify their highest-value opportunities, address their most critical challenges, and transform their businesses.
IMARC’s information products include major market, scientific, economic and technological developments for business leaders in pharmaceutical, industrial, and high technology organizations. Market forecasts and industry analysis for biotechnology, advanced materials, pharmaceuticals, food and beverage, travel and tourism, nanotechnology and novel processing methods are at the top of the company’s expertise.
Follow us on twitter : @imarcglobal
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9938300848007202,
"language": "en",
"url": "https://www.richardcleaver.com/2006/05/15/financial-planning/",
"token_count": 249,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0162353515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:27e8e4f9-2962-4141-b73a-26ebe7812466>"
}
|
I spoke to another individual last week about the importance of financial planning. She is married, both her and her husband work. However, they have no idea about the state of their money and they have accumulated little, if any, savings. They are 40.
40 is not too late to start. However, the statistics are quite disturbing. The U.S. Department of Health, Education and Welfare prepared a study which tracked a representative sample of people from 20 to 65 years of age. And they uncovered that by age 65, for every 100 people:
- 1 was wealthy
- 4 were well off
- 5 were still working because they had to
- 36 were dead
- 54 were dead broke, barely surviving off family or the government
According to Merrill Lynch, today’s average 50 year-old has only $2,300 saved toward retirement.
After I spoke with this person about financial planning, she thanked me although she admitted that she was quite concerned. I told her not to worry too much. The best time to start saving is now. The most advantaged savers are those who start young and keep with it for life.
$2,300 at age 50? Hard to believe.
- Is a limited number or are a limited number?
- Can one person own a limited company?
- What are the disadvantages of private limited company?
- What are the pros and cons of a private limited company?
- What are the benefits of private limited company?
- Which is better LLP or Pvt Ltd company?
- How do you use limited in a sentence?
- Which company is best limited or private limited?
- What are the limiting words?
- What is limiting factor in English?
- What is difference between limited and private limited?
- Who controls a private limited company?
- Are limited to means?
- What type of word is limited?
- Why do companies put limited in their name?
- What is the full meaning of limited?
- What does limiting mean?
- What is a limited number?
- What is a limiting factor *?
Is a limited number or are a limited number?
"A limited number of" is treated as plural, so "are" is correct, and it is by far the more common phrasing.
Can one person own a limited company?
A limited company can be set up by a single individual who will be the sole shareholder and company director, or by multiple shareholders. Advantages of forming a limited company include: Liabilities such as debts or legal action are limited to the company.
What are the disadvantages of private limited company?
One of the main disadvantages of a private limited company is that it restricts the transfer ability of shares by its articles. In a private limited company the number of members in any case cannot exceed 200. Another disadvantage of private limited company is that it cannot issue prospectus to public.
What are the pros and cons of a private limited company?
The pros and cons of a private limited company include: limited liability; ease in ownership and share transfer; the ability to attract investors; strict regulations; difficulty in liquidation; complex accounting and auditing requirements; and necessary employees. (Feb 20, 2020)
What are the benefits of private limited company?
Advantages of a private limited company: no minimum capital (no minimum capital is required to form a private limited company); separate legal entity; limited liability; fund raising; free and easy transfer of shares; uninterrupted existence; FDI allowed; and builds credibility.
Which is better LLP or Pvt Ltd company?
LLPs combine the operational advantages of a Company as well as the flexibility of Partnership Firms. The fee for incorporation of an LLP firm is very nominal as compared to that for Private Limited Company. The compliance requirements for an LLP are significantly lower than those for a private limited company.
How do you use limited in a sentence?
Example sentences using "limited": "After a few hours of limited rest, they were back in the saddle again." "Their time was limited if they were to visit the disputed property." "He must sense you're limited to observing what happens."
Which company is best limited or private limited?
Here is a list of features that differentiate a public limited company from a private limited company (public vs. private):
- Minimum members: 7 vs. 2
- Minimum directors: 3 vs. 2
- Maximum members: unlimited vs. 200
- Minimum capital: 5,00,000 vs. 1,00,000 (Jun 18, 2019)
What are the limiting words?
Limiting words are usually those words in an assignment question which help you focus your discussion on the topic. They limit and define the essay and usually on specific areas.
What is limiting factor in English?
A limiting factor is anything that constrains a population’s size and slows or stops it from growing. Some examples of limiting factors are biotic, like food, mates, and competition with other organisms for resources.
What is difference between limited and private limited?
Ltd refers to Public Limited company and Pvt Ltd refers to private limited company. A company is called private limited when all its shares are in private hands. … On the other hand, the minimum number of shareholders in a Public Ltd Company is seven and there is no limit to the maximum number of shareholders.
Who controls a private limited company?
Who runs limited companies? Directors – known as company officers – manage limited companies and they can be shareholders as well. A limited company must have at least one director and most company owners are directors – meaning you can own and manage a limited company yourself or with others.
Are limited to means?
From the Longman Dictionary of Contemporary English: "be limited to something" means to exist or happen only in a particular place, group, or area of activity. Example: "The damage was limited to the roof."
What type of word is limited?
adjective. confined within limits; restricted or circumscribed: a limited space; limited resources. Government. restricted with reference to governing powers by limitations prescribed in laws and in a constitution, as in limited monarchy; limited government.
Why do companies put limited in their name?
Because a limited company has separate finances and is legally distinct from its owners, shareholders have limited liability – meaning that owners and shareholders are not personally liable for any losses or debits incurred by their business.
What is the full meaning of limited?
Ltd. is a standard abbreviation for "limited," a form of corporate structure available in countries including the U.K., Ireland, and Canada. The term appears as a suffix that follows the company name, indicating that it is a private limited company.
What does limiting mean?
adjective. serving to restrict or restrain; restrictive; confining. … of the nature of a limiting adjective or a restrictive clause.
What is a limited number?
(a) limited (number): (a) restricted (number) adjective. Even though our resources are limited, we have still improved the product. We didn’t have a lot of money but it wasn’t a problem.
What is a limiting factor *?
Limiting factors are resources or other factors in the environment that can lower the population growth rate. Limiting factors include a low food supply and lack of space.
Who is a Wilful Defaulter?
As per RBI Guidelines, a wilful defaulter is an entity or a person (borrower) who has not met the payment or repayment obligations to the lender despite having the ability to do so.
What do you mean by Wilful Default?
The word ‘default’ here refers to a failure to repay the loan taken. A wilful default, therefore, occurs when there is non-payment of a loan availed by a borrower from a bank or any other financial institution despite having the ability to pay it off. Additionally, diversion of funds for purposes other than the ones stated is also grounds to be declared a wilful defaulter.
Read Also – All about Section 185 of the Companies Act, 2013
Also, RBI has stated that a ‘default’ is said to be categorized as ‘wilful default’ only if it comes across as intentional, deliberate, and calculated.
Origin of the Concept of Wilful Default
The concept of Wilful Default originated in the year 1999 when the Central Vigilance Commission and the RBI issued instructions to the banks and other financial institutions to gather information on wilful defaulters of Rs. 25 lakhs and above.
Notably, with a rise in non-performing assets, the RBI was forced to strengthen its regulation of wilful defaulters, and various alterations were made by the RBI in its policies to identify them.
Additionally, the RBI also issued guidelines on the procedures that should be initiated against them. However, in 2015, RBI finally issued a master circular stating clear instructions and directions to the banks and other financial institutions on identifying and dealing with wilful defaulters.
Scope of Wilful Default
As per RBI Guidelines, the scope of wilful default is wide and it covers the following areas:
- Deliberate Non-Payment of the Loan despite having Adequate Cash Flow
- Diversion or Siphoning of Funds harming the Health of the Lender Entity
- Misuse of Assets and Proceeds
- Misrepresentation / Falsification of Records
- Disposal of Securities without the Bank’s Knowledge
- Fraudulent transactions by the borrower
Read Also – Top 7 Current Legal Issues In India in 2020
Diversion of Funds
Diversion of funds refers to the utilization of funds by a borrower that is in deviation of the sanctioned terms of the lender. Additionally, it includes:
- Short Term Working Capital Funds utilized for Long Term Purposes
- Loan Granted for Purchase of a Specific Assets utilized for Other Purposes/ Activities
- Transferring Funds to Subsidiaries or Group Companies or Other Corporates by whatever modalities
- Routing Funds through any bank other than the Lender Bank without the permission of the Lender Bank
- Investing Loan Amount in Other Companies and Acquiring Equity/ Debt Instruments without the permission of the Lender Bank
- A shortfall in the Deployed Funds and the Difference Amount is Unaccounted for.
Siphoning of Funds
Siphoning of Funds refers to any funds borrowed from a bank or any other financial Institution, utilized for purposes that are not related to the operations of the borrower which harms the financial health of the entity or of the lender. However, the decision as to an instance amounting to Siphoning of Funds lies with the Lender. The decision is based on objective facts and circumstances of the case.
Cut-off Limits that Attract Penal Measures
As per RBI Guidelines, the penal measures would normally be attracted by all the borrowers or promoters who are involved in Diversion or Siphoning of Funds. However, the Central Vigilance Commission has fixed the present limit at Rs. 25 Lakhs. Under this, cases of wilful default by a bank or any Financial Institution amounting to Rs. 25 Lakhs and above should be reported to the RBI as mentioned in the circular. The same limit is applicable for Diversion or Siphoning of Funds
Measures to Check Correct Use of Funds
In the case of Project Financing in Banks or any Financial Institutions, it is important to check the end use of the fund. This refers to checking the way in which the borrower uses the funds granted by the lender. Notably, the borrower must use the loan amount for the reason specified in the application. Needless to say, as part of their loan policy, banks and financial institutions need to ensure that the funds allotted are used correctly. To this end, appropriate measures should be put in place. The following are some of the measures that the lenders can take for monitoring and ensuring end-use of funds:
-Scrutiny of Quarterly Progress Reports
-Scrutiny of Operating Statements and Balance sheets of the borrowers
-Regular Inspection of the Borrowers’ Assets
-Scrutiny of the Books of Accounts of the Borrowers maintained with other banks
-Regular in-person visits to the assisted units
-Periodical Stock Audit to check the working capital finance
-Periodical Audit of the Credit function of the lenders
Penal Action Against a Wilful Defaulter
There are Four Legislations to conduct action against a defaulter:
Securitization and Reconstruction of Financial Assets and Enforcement of Security Interest (SARFESI) Act– Gives Lender Bank the power to take control of the management of the business of the wilful defaulter
The Companies Act – For Punishment of Fraud. It also allows for prosecution of the directors, jail term and fines
Insolvency and Bankruptcy Code (IBC)– Includes mechanism for resolution of the default loan. It also has penal provisions and disqualifications of defaulters
Indian Penal Code (IPC)– For Punishment for fraud mainly in terms of misappropriation of property
How to Identify a Wilful Defaulter?
The RBI Master Circular dated July 01, 2015 helps identify a wilful defaulter. The mandates of the circular are as follows-
- When a unit has defaulted in meeting the payment or repayment obligations to the lender despite having the ability to do so.
- If the unit has defaulted in meeting the payment or repayment obligations to the lender and not utilizing the funds for the stated purposes and diverted funds for other purposes
- If the funds become untraceable
- When a Unit has defaulted in meeting the payment or repayment obligations to the lender and have disposed or removed the movable fixed assets or immovable property given for the purpose of securing the loan without the prior permission of the lender
Top 10 Wilful Defaulters in India
The wilful defaulter data released by the RBI comes from the centralized banking system database called the Central Repository of Information on Large Credits and includes wilful defaulters of Rs. 5 crore and above.
The following are the Top 10 Wilful Defaulters in India:
- Gitanjali Gems– Wilful default of Rs. 5,044 crore
- REI Agro– Wilful default of Rs. 4,197 crore
- Winsome Diamonds– Wilful default of Rs. 3,386 crore
- Ruchi Soya– Wilful default of Rs. 3,225 crore
- Rotomac Global– Wilful default of Rs. 2,844 crore
- Kingfisher Airlines– Wilful default of Rs. 2,488 crore
- Kudos Chemie– Wilful default of Rs. 2,326 crore
- Zoom Developers– Wilful default of Rs. 2,024 crore
- Deccan Chronicle– Wilful default of Rs. 1,951 crore
- ABG Shipyard– Wilful default of Rs. 1,875 crore
What happens to a Wilful Defaulter?
The following are the consequences which an entity or person will have to face if they are declared a Wilful Defaulter:
- No Bank or Financial Institution will provide any further finance to wilful defaulters
- Any Entrepreneurs/ Promoters who have been declared a Wilful Defaulter for Diversion of Funds, Siphoning of Funds, Misappropriation of Funds, etc will be debarred from institutional finance and floating new ventures for a period of five years from when they are declared a Wilful Defaulter in the list published by the RBI
- The lender can initiate criminal proceedings against wilful defaulters, depending on the case and amount of wilful default
- Further, the Banks/ Financial Institutions may adopt a pro-active approach for a change of management of the wilfully defaulting borrower unit.
- As per section 29A – IBC 2016, a wilful defaulter cannot be a resolution applicant.
Read Also – Grounds & Application for Condonation of delay
A tremendous increase in Non-Performing Assets is crippling the Banks and other Financial Institutions. Therefore, taking appropriate and swift action against wilful defaulters is the need of the hour. Additionally, RBI must do the needful to lay down strict and comprehensive policies by issuing circulars addressing the current issues which will put a better system in place that will not only identify but also clips the wings of these wilful defaulters. However, an increase in the level of scrutiny will guarantee that further finance will not be made available to the wilful defaulters that are draining the funds of the Banks/Financial Institutions.
Quality or state of being not employed; -- used esp. in economics, of the condition of various social classes when temporarily thrown out of employment, as those engaged for short periods, those whose trade is decaying, and those least competent.
the state of being unemployed; lack of employment.
a condition that exists when a labor force has workers able, ready and willing to accept employment and are without jobs; a statistical measure defined as a percentage of a labor force. A tight labor market is generally indicated by a statistical reporting of less than 3% unemployed, because there is a bottom layer of unemployed considered to be unemployable. Events such as recent closings or layoffs, or announcements of possible closings or layoffs, may not show up in statistics or labor information available from indirect sources. For this reason it is always best to investigate a labor market, or to gather data at the target location, to learn more about how the unemployed fit into the local workforce.
Renewables to Dominate Electricity Generation Mix
“For the first time ever, in April 2019, renewable energy outpaced coal by providing 23 percent of US power generation, compared to coal’s 20 percent share.” With more people, governments, and companies realizing the benefits of renewable energy, this trend will likely continue. Clean, abundant energy can power almost anything with the right combination of technology and expertise.
In 2018, two-thirds of our electricity came from fossil fuels; by 2050, two-thirds of it will come from zero-carbon energy. The renewable energy industry is rapidly advancing and constantly introducing new, innovative ideas. As we learn more, we apply new designs to current systems and develop safer, more efficient, and more streamlined technology. Material costs are falling, and more governments and organizations are committing to greater use of alternative energy sources; these trends will drive the expansion of clean energy in the US and around the world.
New Energy Outlook 2019 found that more than two-thirds of the global population today live in countries where solar or wind, if not both, are the cheapest source of new electricity generation. PV module costs have decreased 89% since 2010, and the report expects another 34% decline from today to 2030. Turbine costs are down 40% since 2010, and the cost of wind energy is expected to drop 36% by 2030. Additionally, battery storage is becoming more prevalent in solar and wind systems to offset the intermittent nature of these resources. Fortunately, lithium-ion battery costs declined by 35% in the first half of 2019, enabling these systems to be competitive with traditional generation sources.
Simultaneously, local interest in renewable energy has boomed and communities put pressure on city and state governments to step up their sustainability activities. 42 states and territories, including Washington D.C., have adopted either a renewable portfolio standard or have set renewable energy goals . They mandate that a certain percentage of electricity sold by utilities must come from renewable sources, thus helping diversify the states’ and territories’ energy resources, promote domestic energy production, and encourage economic development. This provides utilities an external push for clean energy improvements and holds individual governments accountable to their commitments.
With these initiatives and decreasing costs of components, others will likely follow suit and invest in renewable energy systems as well. Companies and individuals will begin to purchase smaller systems for their immediate use and, over time, we can expect the United States’ electricity mix to become more diversified. Likewise, greater interest in renewables encourages investment into research for more efficient products. This in turn helps scale the industry and makes renewable energy more accessible to groups that didn’t consider it previously due to lack of knowledge or prohibitive costs and legislation. The joint efforts of governments, utilities, companies, and individuals will lead to substantial gains in the renewable energy sector. Hence, the US Energy Information Administration predicts 38% of electricity will be generated from renewables by 2050.
Source: U.S. Energy Information Administration, Annual Energy Outlook 2020 (AEO2020) Reference case
We are already witnessing renewable energy’s potential to revolutionize the way in which the world generates electricity. Particularly with the advancement of battery storage technology, intermittent natural resources such as solar and wind are becoming significantly more competitive with traditional sources. It can make an electric grid more resilient and flexible, providing reliable power to users for generations to come. Moreover, renewable energy does not produce carbon dioxide emissions like coal or natural gas and can help mitigate the effects of climate change on a wide scale. We will undoubtedly continue to use electricity; why not get it from a cleaner source?
Wireless connectivity is the new currency in today’s digital age. The proliferation of smart, portable devices has fuelled the rise of digital natives who expect to be always connected everywhere they go, every single day.
Mobile workers, who are projected to make up 42% of the global workforce by 2020, rely more on connectivity than ever before to stay productive beyond their own cubicle. At the same time, connected travellers are increasingly demanding to stay plugged in so they can keep in touch with loved ones back home—and more importantly, access online maps and navigation apps to get around safely in a foreign land.
On a macro level, connectivity continues to transform the way we live. Policymakers are turning to smart city technologies, such as intelligent transportation systems, to overcome the challenges of rapid urbanisation. It’s no surprise why smart city technology investments in Asia-Pacific, home to some of the world’s largest and fastest growing urban areas, are expected to reach US$63.4 billion in seven years’ time. In 2012, Guangdong officials collaborated with local telco operators to send 30 million warning texts about the incoming typhoon Vicente, potentially saving the lives of many.
These point to a conclusion: that connectivity empowers people and nations to do more, and is key to economic development.
When networks buckle under bandwidth stress
However, as more people and devices get connected, mobile networks will start to feel the strain of a spectrum crunch, which is felt more strongly in densely populated locations and during special events.
Take the Longines Hong Kong International Races for instance, the country’s largest race day event. In 2015, it saw a record turnout of 80,000 spectators within the Sha Tin Racecourse. Poor reception and slow data throughput are a common occurrence during such large-scale events, as massive phone usage overwhelms mobile networks. In some cases, users are even unable to make calls—possibly preventing those in need from reaching emergency services.
Service providers also face difficulties staying on track when it comes to keeping public transit passengers on subway systems seamlessly connected, especially during rush hours. Weak mobile handover capabilities, coupled with poor interference resistance and bandwidth that spreads too thin, spell slow network speeds and patchy signals, a source of frustration for daily commuters.
So how can service providers ensure no dip in network quality and reliability? Here are three ways they can ease congestion for bandwidth-hungry users and meet their expectations of an uninterrupted wireless experience.
Wireless Access Networks
A wireless access network comprises multiple small cells that are deployed across a location or within a building and act like a Wi-Fi access point, providing excellent quality of service but only over a limited footprint. Easy and relatively inexpensive to deploy, small cells are perfect for service providers and telco operators looking to boost connectivity in small, concentrated spaces.
As such, when it comes to providing multiple band coverage with multiple carriers, small cells fall short, since they typically support only one or two bands with a single operator at one time, and their capacity is also limited to small hotspots. During large-scale events and in bigger spaces, it is probably more economical to deploy a distributed antenna system (DAS) than a large network of small cells, as a DAS offers significantly more network capacity than the latter with fewer resources.
Distributed Antenna Systems
Think of a DAS as a cell tower within a building or a large area, without the space requirements and associated cost. Through many small antennas, it provides multiple band coverage and greater cellular capacity than wireless access networks—ideal for congested, high-volume locations. However, DAS deployments can be costly and complex. They require a lot of testing, customisations and installation of special cabling which result in longer time-to-market.
Thankfully, new innovations are set to change this tune. For example, leading wireless service provider Comba Telecom, based in Hong Kong, has developed the most flexible active DAS solution, called ComFlex, which takes away the complexity in deployment to easily support multiple operators and multiple mobile technologies, whether 2G, 3G or LTE. Its compact and modular design further facilitates faster deployment and future upgrades to reduce product lead time and ensure optimum network performance. Comba’s ComFlex solution recently won the 2016 Grand Award at the Hong Kong Awards for Industries, topping all other innovations in the territory.
DAS with Access Networks
There are some use cases whereby service providers tap the combined strengths of wireless access networks and DAS solutions for seamless connectivity. A DAS can help overcome small cell frequency limitations. If additional capacity is needed throughout a building, service providers can easily install small cells at the head-end of a DAS to cope with the growing bandwidth demand instead of overprovisioning.
The caveat is that service providers need to carefully navigate infrastructure and cost constraints, as well as potential issues stemming from the coexistence of wireless systems before deployment to ensure that the high-tech system is a profitable venture.
Whether it is to enable telecommuting or regulate traffic through intelligent traffic lights, connectivity is poised to transform the way we live and work in years to come. Governments throughout Asia are looking to develop smart cities to benefit economic growth and quality of life, which can only be realised with robust connectivity in place. This opens up opportunities for service providers to step up and explore new methods of overcoming congested networks, ensuring a seamless mobile experience for all.
Strategy Analytics, Global Mobile Workforce Forecast 2015-2020, 2015
Navigant Research, Smart Cities: Asia Pacific, 2014
Monnanda Appaiah, Managing Director, Wienerberger India, which was set up in 2006 as a wholly owned subsidiary of Wienerberger AG, a leading producer of clay building materials and the largest producer of bricks, explains the use of Thermo Bricks as an energy-efficient, cost-effective and environment-friendly building solution for modern day homes.
The construction industry is fast evolving with the advent of new technology. A number of products offering Green Building solutions are now available in the marketplace and the real estate industry is gradually adopting these latest innovations. A good number of individual house owners are also opting for green constructions or sustainable building solutions, mainly because of the insulation they provide from climatic extremes, among other benefits.
Governments are also waking up to this need, making concepts like rainwater harvesting mandatory, among other measures. The effects of global warming are adverse: drastic temperature variations, droughts and floods, to name a few. The need of the hour is to mandate the usage of eco-friendly building materials that could unburden the atmosphere of harmful gases.
Energy-efficient architecture or green building is a concept which is not new to the world but has grown in popularity in recent years. The perfect example to support this is the historic ‘Hawa Mahal’ in Jaipur. Although construction of the structure dates back to 1799, it is a ‘Green Building’, as the temperature inside the monument is noticeably lower than in the surroundings outside. The cooling effect created by the presence of small perforations in the facade of the building is an example of clever architecture in a hot and humid country like India.
Green building materials are mainly made from natural products incorporating latest technology. The real estate and construction industries have taken a huge leap in terms of being eco-friendly as they are now focusing on efficiency of buildings with respect to the use of water, energy and materials while reducing the negative influences of the possible impact on an individual’s health and the environment by effecting better design, construction, operation and maintenance.
Many cities in India are witnessing erratic climate conditions, thanks to this rising global phenomenon. Fact of the matter is that air-conditioning sales are growing 20 per cent a year in China and India; as middle classes grow, units become more affordable and temperatures rise with climate change. But this is only leading to the alarming situation where scientists have warned about the adverse impact of high electricity consumption leading to a deficit of natural resources. Adding to this disadvantage are the power cuts due to scarcity and the mounting bills that drastically go up with high consumption of electricity to run an air-conditioner.
A quick and innovative solution to keep your house cool without harming the environment is the ‘Thermo Brick’.
Thermo Bricks are used for external walls to achieve thermal comfort indoors. These bricks have excellent thermal insulation, with a ‘U’ value of 0.6 W/m2K. This is the best thermal insulation currently available for any walling material in the country. The bricks provide a comfortable and natural indoor climate, therefore avoiding the use of air-conditioners most of the time. Even when an air-conditioner is used, it runs for less time, which translates into savings in energy and hence in money spent on the electricity bill. In addition to this advantage, the bricks are built such that in winters they insulate the house naturally.
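To put that U value in context, steady-state heat flow through a wall can be estimated as Q = U x A x deltaT. The wall area, the temperature difference and the comparison U value below are assumptions for illustration only, not figures from Wienerberger.

```python
def wall_heat_flow_watts(u_value, area_m2, delta_t_kelvin):
    """Steady-state heat flow through a wall: Q = U * A * deltaT."""
    return u_value * area_m2 * delta_t_kelvin

AREA = 100.0     # m^2 of external wall (assumed)
DELTA_T = 10.0   # K difference between outdoor and indoor temperature (assumed)

thermo_brick = wall_heat_flow_watts(0.6, AREA, DELTA_T)  # U = 0.6 W/m2K, from the article
other_wall   = wall_heat_flow_watts(2.0, AREA, DELTA_T)  # assumed higher-U wall for comparison

print(f"Thermo Brick wall: {thermo_brick:.0f} W")  # 600 W
print(f"Higher-U wall:     {other_wall:.0f} W")    # 2000 W
```

A lower U value means proportionally less heat entering in summer and escaping in winter, which is where the air-conditioning savings come from.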
These bricks also eliminate the need for dual or thicker walls, thus increasing the carpet area of an individual house or an apartment. An individual can save a lot on labour cost as these bricks weigh only 12 kg, which is 60 per cent less than the conventional cement blocks or solid clay bricks used in construction of a house or a building. The light weight of the bricks also enables savings in structural costs.
These bricks are purely built from natural clay, which is sourced by desilting water tanks. Such tanks are available in rural areas around the Kunigal-Tumkur district, where Wienerberger's manufacturing plant is located.
Financial Markets and Institutions
In a global market, when one country goes through an economic recession it can have drastic effects on the rest of the world. When the United States had a financial meltdown, it affected every country around the world. The financial crisis hurt the value of the dollar and affected the way we trade and do business around the world. The financial institutions having the greatest effect on our economic condition are the Investment Banks. Investment Banks work on the commercial side and are involved in underwriting issues of debt and equity, mergers and acquisitions, and corporate restructuring or advisement. Investment Banks directly affect how businesses are created and expand. They help businesses grow and be profitable in the global market: they give out loans for businesses to get started, and they also help merge businesses together to become more profitable. When Investment Banks don't lend money to start businesses or invest in other markets, it potentially puts the entire economy and global business on hold. At the same time, Investment Banks can help the US and Europe get out of this financial crisis. When an Investment Bank lends money and invests in global markets, it creates trade and new jobs for people. The more trading that is going on, the more money that is being made, and global markets thrive.
I believe that the financial markets that will expand in the near future are Money Markets. In Money Markets, short-term debt instruments are issued by economic units that require short-term funds and are purchased by economic units that have excess short-term funds. Once issued, money market instruments trade in secondary markets. Money Markets are needed because the immediate cash needs of individuals, corporations and governments do not coincide with their receipts of cash. I think they will expand mainly because of their high rate of interest (return), they are very liquid, have low...
Sometimes the sheer magnitude of electrical data out there can lead to a staggering feeling of information overload. Given that the EIA just released their Electric Power Annual, it’s important to at least interpret a broad overview of the existing trends in electricity throughout the country.
To name a few data points from the Electric Power Annual, the EIA shows that total sales of electricity rose steadily from 2008 through 2018. Full-service providers' sales have risen annually as well, but energy-only providers prove to be more volatile with their sales numbers; they are down considerably this calendar year.
Revenue has been up every year for the last decade in residential and commercial electricity. And as suspected, residential takes the lion’s share of sales when comparing both commercial and residential spaces.
Texas is the largest user of residential electricity in the country, with Florida and California the next largest, respectively. Texas alone rose by 13,000 thousand megawatthours in a year's time. This also means that Texas had the highest revenue derived from electricity sales, at $17,610 million, according to the available data. This is up considerably since just 2017, and California is close behind.
Comparing with Global Energy Consumption
Contrasting the predominantly increasing figures from the Power Annual with the Global Energy Consumption report that the EIA just released makes a clear case that energy consumption is simply rising across the board. This isn't that surprising given our increasing population, but it's important to take note of.
Additionally, the EIA thinks that as standards of living are rising throughout the world, so too electricity demand will rise commensurately.
China, Russia, and India will lead the pack for non-Organization for Economic Cooperation and Development (OECD) countries with a monumental 2.5% increase in building electricity usage per year from now until 2050. Conversely, OECD countries (like America, Australia, and Canada) will only rise by 0.6% every year because of more efficient technology and buildings.
By the time 2050 rolls around, OECD countries will decrease energy consumption by 3% and non-OECD countries will almost double their consumption. These are highly contingent on socioeconomic development and higher population densities.
Perhaps we all intuitively know that with an increase in population marks an increase in energy consumption. But as technology gets better and the world more climate conscious, we will have to check in with EIA projections in years to come to see if estimates continue along the same trajectory.
Environmental Business Advisor Claire Scott explains what a carbon footprint is, how they are calculated and why it’s something every business should consider.
Within a few short years, knowing your carbon footprint has gone from a niche ‘nice to have’ to a relatively common occurrence in business. Many large UK companies and larger energy users have had to report on their carbon emissions by law for over 10 years, and many are beginning to make carbon a key part of their procurement process. In the public sector, buyers are increasingly asking for evidence of reducing carbon emissions as part of their social value commitments.
It’s not surprising, then, that we’re seeing more and more forward-thinking SMEs in Greater Manchester ask us about carbon footprinting and how to get ahead.
2021 is certainly the right year to get started. Being seen as a ‘green’ business has already become more important in the wake of COVID-19, and this year we’ll see it become centre stage as the Net Zero agenda takes off. Companies of all sizes from around the world are committing to net zero emissions – and you can’t set a target without measuring your carbon footprint first.
On a more practical level, measuring your carbon footprint provides many direct benefits. You can’t manage what you don’t measure, and by collecting the data you’ll need for a carbon footprint you’ll be able to identify and prioritise where you can make the biggest improvements to the way you use energy, fuel and other resources.
What exactly is a carbon footprint?
So what is a carbon footprint? Put simply, it’s a measure of your contribution to climate change. There are generally two types – organisational carbon footprints and product carbon footprints. This blog focuses on the former.
Your organisational carbon footprint stacks up all the greenhouse gas emissions you emit over a 12-month period and gives you a total figure expressed in tonnes of ‘carbon’, or to be more precise, carbon dioxide equivalent (CO2e). There are six key greenhouse gases emitted by human activities that contribute to global warming, but to make things easier we measure everything in relation to CO2 because it’s the most common.
There are many possible sources of greenhouse gas emissions from a business. To make managing them easier, we split them into three ‘scopes’:
Scope 1: These are the emissions from sources you own and control and are therefore directly responsible for. For most businesses, this will be any gas heating or fuel oil you burn on-site, and the fuel you use in your company vehicles. If you use industrial refrigeration or air conditioning, refrigerant losses would also be included here, along with any emissions that may be released directly during a manufacturing process.
Scope 2: These are the emissions you indirectly produce through the energy you purchase, which for most businesses is solely electricity. By using electricity, you are indirectly responsible for the greenhouse gases generated at source by the energy producer.
Scope 3: These are any other emissions you’re indirectly responsible for from sources outside your direct control, e.g. the goods and services you purchase, the distribution and use of your own goods and services by customers, the disposal of your waste, employee commuting or business travel, and so on.
For most emissions sources, there is a specific ‘emissions conversion factor’ to calculate the total carbon from that activity. For example, to measure the carbon emitted by a van that runs on diesel, you take the litres of diesel consumed by the van and multiply it by the corresponding emissions factor for diesel.
Data x Emissions Factor = Greenhouse gas emissions
The UK emissions factors are publicly available so you can do this yourself on a spreadsheet (which we can help you with), or you can use one of many tools available online that do it for you.
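If a spreadsheet feels unwieldy, the same calculation is easy to script: multiply each activity figure by its conversion factor and sum the results. The activity quantities and emission factors below are placeholder values to show the structure only; for a real footprint, use the published UK conversion factors for your reporting year.

```python
# Annual activity data (placeholder values)
activity_data = {
    "natural_gas_kwh": 25_000,   # scope 1: gas heating
    "diesel_litres":    3_000,   # scope 1: company vehicles
    "electricity_kwh": 40_000,   # scope 2: purchased electricity
}

# Placeholder emission factors in kg CO2e per unit of activity
emission_factors = {
    "natural_gas_kwh": 0.18,
    "diesel_litres":   2.51,
    "electricity_kwh": 0.21,
}

emissions_kg = {a: qty * emission_factors[a] for a, qty in activity_data.items()}

for activity, kg in emissions_kg.items():
    print(f"{activity:<17} {kg / 1000:5.1f} tCO2e")
print(f"{'Total':<17} {sum(emissions_kg.values()) / 1000:5.1f} tCO2e")
```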
There are also a number of formal routes to verify your carbon footprint to a recognised standard, such as ISO 14064 or the Carbon Trust Standard, to name just two. But as I said above, if you’re just starting out, with the right data to hand you can do it yourself and achieve a good internal benchmark for future improvement.
A quick step-by-step guide
Decide what’s in scope
Start by setting the boundaries for your footprint. The best approach for you will depend on what your major emissions sources are, which sources you have most influence over and how much data is available to you. Carbon footprints should include scope 1 and 2 emissions as a minimum. Scope 3 emissions are more difficult to measure, so there is flexibility here in how much or little of these you include. Some of the more commonly measured scope 3 activities include emissions from waste going to landfill, water consumption and business travel.
For most small businesses measuring their footprint for the first time, the emissions from your heating, electricity consumption and vehicle use are a good start.
Collect the data
Once you’ve identified all the activities you want to measure, begin collecting data for each using a relevant metric, e.g. litres of fuel or mileage for vehicles, kWh of gas or electricity from your energy bill/meter, cubic metres of water from your water bill/meter, and so on. Track them in a spreadsheet, separating them out into the different scopes.
Calculate your emissions
To calculate your footprint, convert the data in your spreadsheet using the relevant CO2e conversion factor for each of your emissions, or use an online tool such as the Carbon Trust’s SME Carbon Footprint Calculator (others are available). It’s normal practice to calculate your carbon footprint on an annual basis. You may wish to align it with your accounting period.
Use it to identify improvements
Once you have your carbon footprint, use the data to identify the most suitable actions to reduce your emissions and make cost savings. If electricity use is by far your biggest contributor to your carbon footprint, for example, prioritise measures that reduce your electricity use.
Use your first carbon footprint as a baseline to set targets. The Science Based Targets initiative, which helps companies to set targets based on what climate science tells us we need to achieve, currently recommends an absolute reduction of 2.5 - 4.2% year-on-year as a minimum – although many companies are moving far faster.
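As a quick illustration of what a fixed year-on-year cut means over time, the sketch below compounds an annual reduction from an assumed 100-tonne baseline; the baseline is an example figure, not a recommendation.

```python
def trajectory(baseline_tco2e, annual_cut, years):
    """Footprint after each year when emissions fall by a fixed share annually."""
    path = [baseline_tco2e]
    for _ in range(years):
        path.append(path[-1] * (1 - annual_cut))
    return path

baseline = 100.0  # tCO2e in the baseline year (assumed)
for cut in (0.025, 0.042):
    final = trajectory(baseline, cut, 10)[-1]
    print(f"{cut:.1%}/year -> {final:.1f} tCO2e after 10 years "
          f"({1 - final / baseline:.0%} below baseline)")
```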
Share your progress
Communicating your progress to stakeholders – both internal and external – is a great opportunity to demonstrate your commitment to improving environmental performance and tackling climate change.
You have probably come across the term carbon offsetting, where you purchase ‘credits’ from schemes that remove carbon from the atmosphere (often tree planting) to cancel out your own emissions. It can be tempting to jump for this option straightaway, but to get the most out of your carbon reduction journey it should be the last resort, used only for the emissions you cannot eliminate or reduce yourself.
Offsetting your entire carbon footprint is a huge missed opportunity to make efficiency improvements in your business and benefit your bottom line. As a rule of thumb – focus on efficiency first, then look at indirect measures such as securing a 100% renewable electricity supply, and only explore offsetting once all other avenues have been exhausted.
You should also be cautious about the claims you make when offsetting your emissions. Not all offsetting schemes are equal. Be wary of the quality of what you’re purchasing, the wider impacts of the project you’re investing in and whether the credits are verified/guaranteed.
We can help
Our Resource Efficiency service is perfectly placed to start you off on your carbon reduction journey. Our specialist advisors can audit your business to identify the most effective efficiency measures, provide advice and support on data collection, and guide you through your carbon footprint calculation. We can also offer funding for eligible improvements through our Energy Efficiency Grant.
Look out for more blogs throughout 2021 to help you on your journey to net zero.
Posted under General Interest on 27 January 2021
Weakest link to EV growth is the material supply chain
There may not be enough minerals and metals in the world to achieve the planned EV growth
By Ronald Stein
Ambassador for Energy & Infrastructure, Irvine, California
The worldwide plans for EV domination of the vehicle population are like having the plans to build a large house without sufficient materials being available to ever finish the house.
The pressure to go green is increasing as countries are announcing plans to phase out petrol and diesel cars. Germany will stop the sale of all new petrol and diesel cars from 2030, Scotland from 2032, and France and the UK from 2040.
Even California, the current leader in America with 50 percent of the country's EV's, has jumped onto the EV train: Democratic Governor Gavin Newsom, who will be on the 2021 Recall ballot, issued an Executive Order in 2020 to ban the sale of gas-powered vehicles in California by 2035.
A Tesla lithium EV battery weighs more than 1,000 pounds. While there are dozens of variations, such an EV battery typically contains about:
- 25 pounds of lithium,
- 30 pounds of cobalt,
- 60 pounds of nickel,
- 110 pounds of graphite,
- 90 pounds of copper,
Looking upstream at the ore grades, one can estimate the typical quantity of rock that must be extracted from the earth and processed to yield the pure minerals needed to fabricate that single battery:
- Lithium brines typically contain less than 0.1% lithium, so that entails some 25,000 pounds of brines to get the 25 pounds of pure lithium.
- Cobalt ore grades average about 0.1%, thus nearly 30,000 pounds of ore to get 30 pounds of cobalt.
- Nickel ore grades average about 1%, thus about 6,000 pounds of ore to get 60 pounds of nickel.
- Graphite ore is typically 10%, thus about 1,000 pounds per battery to get 100 pounds of graphite.
- Copper at about 0.6% in the ore, thus about 25,000 pounds of ore per battery to get 90 pounds of copper.
In total then, acquiring just these five elements to produce the 1,000-pound EV battery requires mining about 90,000 pounds of ore. To properly account for all the earth moved though—which is relevant to the overall environmental footprint, and mining machinery energy use—one needs to estimate the overburden, or the materials first dug up to get to the ore. Depending on ore type and location, overburden ranges from about 3 to 20 tons of earth removed to access each ton of ore.
This means that accessing about 90,000 pounds of ore requires digging and moving between 200,000 and over 1,500,000 pounds of earth—a rough average of more than 500,000 pounds of earth moved per battery.
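The arithmetic behind those totals is straightforward, and the short sketch below reproduces it from the per-material ore figures listed above. The overburden multipliers are the 3x-20x range quoted in the text; the article's exact 200,000-1,500,000 pound range presumably reflects additional rounding that is not spelled out.

```python
# Pounds of ore or brine processed per battery, as listed above
ore_lbs = {
    "lithium brine": 25_000,
    "cobalt ore":    30_000,
    "nickel ore":     6_000,
    "graphite ore":   1_000,
    "copper ore":    25_000,
}

total_ore = sum(ore_lbs.values())  # about 90,000 lbs per battery

# Overburden: roughly 3 to 20 tons of earth moved per ton of ore
earth_low  = total_ore * (1 + 3)    # ore plus 3x overburden
earth_high = total_ore * (1 + 20)   # ore plus 20x overburden

print(f"Ore per battery:            {total_ore:,.0f} lbs")
print(f"Earth moved, low estimate:  {earth_low:,.0f} lbs")
print(f"Earth moved, high estimate: {earth_high:,.0f} lbs")
```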
According to Cambridge University Emeritus Professor of Technology Michael Kelly, replacing all the United Kingdom’s 32 million light duty vehicles with next-generation EVs would require huge quantities of materials to manufacture 32 million EV batteries:
- more than 50 percent of the world’s annual production of copper.
- 200 percent of its annual cobalt.
- 75 percent of its yearly lithium carbonate output; and
- nearly 100 percent of its entire annual production of neodymium.
One can easily see that the world may not have enough minerals and metals for the EV batteries to support the EV growth projections roadmap when you consider that today:
- Combined worldwide car sales in 2019 were more than 65 million vehicles annually.
- There are 1.2 billion vehicles on the world’s roads with projections of 2 billion by 2035.
Today, there are less than 8 million EV’s operating on the world’s highways. If EV projections come to reality by 2035, 5 to 7 percent of the 2 billion vehicles would equate to 125 million EV’s on the world’s roads, and potentially double that number if governments step up the pace of legislative change. However, looking at the UK study of the materials required for only 32 million EV batteries, there may not be enough materials in the world to finish the EV conversion plans.ADVERTISING
Further bad news is that a single digit penetration into the worlds projected 2 billion vehicles would also represent more than 125 BILLION pounds of lithium-ion batteries, just from those 125 million EV’s that will need to be disposed of in the decades ahead.
Zero- and low-emission vehicles generally belong to hybrid and electric car owners, who are a scholarly bunch; over 70 percent of EV owners have a four-year college or post-graduate degree. This likely explains why the average household income of EV purchasers is upwards of $200,000. If you are not in that higher-educated echelon and high-income range of society, and not a homeowner or a resident of a NEW apartment with charging access, there may not be an appetite for an EV.
A recent 2021 California study shows that EV’s are driven half as much as internal combustion engine vehicles which further illustrates that EV’s are generally 2nd vehicles and not the primary workhorse vehicle for those few elites that can afford them.
Getting back to those plans to build a large house with an insufficient supply of materials to ever complete the house, maybe we should learn from the UK study of the materials required for only 32 million EV batteries (less than 7 percent of 2 billion vehicles in 2035) and set our sights on achieving an EV population that the world’s supply of the minerals and metals can support.
Ronald Stein, P.E.
Ambassador for Energy & Infrastructure
Today, the Lunar New Year is widely celebrated in Australia each year.
Lion dances and Asian cultural performances on city streets are a common sight here in the weeks leading up to start of the brand new lunar calendar. Various profit/not-for-profit bodies and everyday people from all walks of life frequently and tirelessly pitch in to organise and partake in these festivities.
However, interestingly enough, the Lunar New Year, also known as the Spring Festival, holds contrasting meanings for different groups Down Under.
The Lunar New Year, governments and corporate industries
The Lunar New Year is marked by extended public holidays in several parts of Asia. Many living in those countries often take advantage of their mandatory time off from work, choosing to travel abroad. Australia is becoming a very popular destination with them given the country’s relatively friendly geographical distance.
The country’s viability as a tourist destination is constantly seen as an opportunity by the Australian government and the local tourism industry as a means to generate state revenue.
This year, hotels and luxury retail sectors here have been “pulling out all the stops” – for example putting up Chinese New Year decorations in hotel lobbies and offering Chinese food – to accommodate roughly 80,000 visitors from China and Hong Kong to Australia at the beginning of the year, cashing in on an ever increasing influx of Lunar New Year tourists. Although such generous hospitality seeks to recreate the authentic New Year atmosphere that these tourists are accustomed to in their homelands in Australia, ultimately this sector’s main aim is to reap financial profits.
In addition, Chinese tourists are spending more money than ever during the festive season here and Tourism Australia has urged Australian retailers to be more aware of this target market for their financial gains.
As such, Asian tourists are encouraged to spend during the Lunar New Year in Australia, spurred by profit-making local government sectors. They are enticed to exhibit frivolous material behaviour during this time of the year – a prime time of the year for respecting and reflecting on traditions – for the benefit of the local economy.
Moreover, established local companies often incorporate a “New Year” theme within promotional offers in order to attract the attention of Asian clients to utilise their services and products, exacerbating the commodification of this season.
For instance, the Big 4 Australian banks frequently pay homage to the season to their “valued Asian customers” in an attempt to make the festival “a lot of fun” in their branches. Auspicious Lunar New Year elements that tend to be close to hearts of many Asians are often prominently integrated within their banking advertisements and deals at this time of the year.
Ultimately, such strategic Lunar New Year marketing not only aims to attract Asians as clients but also functions as means for these companies to purport a much esteemed multicultural image in the eyes of every member of the public – potential clients.
Low-key Spring Festival celebrations among Australians
Unlike many countries in Asia, the Spring Festival is not heralded by public holidays in Australia. Rushing around and trooping from house to house collecting ang pows, boisterously catching-up with extended relatives and having too much to drink during this period seems somewhat toned down here for those who celebrate the Spring Festival, most likely due to the fact that many of their family members reside in their homelands.
Instead, it is common for them to mark the start of the Lunar New Year with a simple yum cha lunch or a trip down to the CBD’s Chinatown to watch the Lion Dances.
For David Kong, a descendent of Chinese philosopher Confucius who lives in Alice Springs, Chinese New Year celebrations in Australia are modest in scale. David is fond of painstakingly cooking authentic hand-made dumplings and sharing them with Chinese friends when this festival rolls around. Similarly, this year Kelly Cao and Eric Fan’s children headed down to the Riverland in Adelaide to learn more about their Chinese heritage.
Such relatively quiet activities may sound dull and lack the energetic liveliness of the celebrations held in parts of Asia. However, these small scale Lunar New Year celebrations in Australia are essentially about memory, namely the scrupulous recreation of homeland cultures.
More importantly, for those celebrating the Lunar New Year in Australia, these low-key festivities provide them with the opportunity to considerably acquaint and meticulously preserve their ethnic values and traditions, sans over-indulging on festive nibbles (or perhaps just a little bit), listening to family gossip and forgetting the true meaning of the Spring Festival.
Happy Lunar New Year in Australia
Enjoyment can definitely be derived from the cosy Lunar New Year celebrations in Australia. What's not to like about trying your very best to cook a New Year dish or organising a Chinatown trip on the first day of the New Year and having everything go well?
Although there are capitalist ideals beneath government and corporate private organisations’ Lunar New Year celebratory initiatives, there is no denying that they do aim to encourage both tourists and locals to have a joyous time or at the very least get into the harmonious spirit of a brand new lunar year.
It is fair to say that the Lunar New Year is not only celebrated but actually recognised and accepted in Australia as a yearly event today.
Picturesque Lunar New Year festivities Down Under often depict Australians of various backgrounds and races celebrating the festival together and cheerfully partaking in or admiring Asian cultures and performances, united as one. Happy multicultural moments.
And sometimes, perhaps most of the time, this is simply just that.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9294716715812683,
"language": "en",
"url": "https://www.americanprogress.org/issues/women/reports/2017/09/27/439527/paid-family-medical-leave-numbers/",
"token_count": 2440,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.06689453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:cc09a211-986e-4f23-b76b-b062b16d6d5f>"
}
|
Paid family and medical leave—a program that allows workers to take paid time off for the birth or adoption of a new child or to care for their own illness or injury or that of a loved one—is now being widely debated among policymakers and the public. This fact sheet details some of the key data on the need for and support of a comprehensive national paid leave program.
Who has access to paid family and medical leave?
- Only 13 percent of private sector workers have paid family leave.1
- Only 41 percent of private sector workers have medical, or short-term disability, leave.2
- Three states—California, New Jersey, and Rhode Island—have implemented paid family and medical leave programs; three others—New York; Washington, D.C.; and Washington—have passed them.3
Women are the majority of caregivers, and women’s work is critical for family economic stability
- More than 64 percent of mothers were either primary or co-breadwinners in 2015—the year for which the most recent data are available.4
- The share of breadwinning mothers has increased from previous years, continuing a long-running trend of women’s earnings and economic contributions becoming increasingly important to their families.7
While women are a majority of caregivers, both women and men have caregiving responsibilities—and not just for the birth or adoption of a new child.8 Out of the 20 million people who take unpaid leave through the Family and Medical Leave Act (FMLA) each year:9
- 21 percent use it for the birth or adoption of a new child.
- 73 percent use it to care for their own injury or illness or that of a loved one.
- 55 percent is used for an employee’s own medical condition.
- 18 percent is used to care for the health conditions of a child, spouse, or parent.10
What are the costs and economic impacts of not having paid leave?
- Working families in the United States lose at least $20.6 billion in wages every year due to a lack of access to paid family and medical leave.11
- Lack of access to paid family and medical leave also has long-term economic effects, including lower labor force participation and reduced lifetime earnings as a result of time taken out of the labor force.
- In California, implementation of the state’s paid family and medical leave program led to an 8 percent increase in labor force participation of family caregivers in the short run and a 14 percent increase in labor force participation of family caregivers in the long run.12
Experience of business with paid leave
Nearly 10 years after the implementation of paid family and medical leave in California, a study found that:
- Nine out of 10 employers reported that paid family leave had a “positive effect” or “no noticeable effect” on productivity, performance, and profitability.
- On turnover and employee morale, 96 percent and 99 percent of employers, respectively, reported either “a positive effect” or “no noticeable effect.”
- 87 percent of employers reported no added costs due to the paid family leave program, and 9 percent of employers reported cost savings.13
Small businesses support paid leave
- Recent polling shows that 70 percent of small-business owners think it is important to establish a gender-neutral federal paid family and medical leave program that workers can use to care for themselves or a family member. Forty-two percent believe that establishing such a program is “very important.”14
- This support among small-business owners is growing. In 2013, a similar poll found that 45 percent of small-business owners supported proposals to create a publicly administered paid family and medical leave program.15 In just four years, that number has grown to 70 percent.16
- 61 percent of small-business owners support using a combination of employer and employee contributions to administer paid family and medical leave.17
Broad public support for paid family and medical leave
Almost 70 percent of those who voted for President Donald Trump support a national social insurance program for paid family and medical leave, alongside 89 percent of those who voted for Hillary Clinton.18
Working families need comprehensive paid family and medical leave
These data clearly show that working families need a comprehensive paid family and medical leave plan that addresses all of their caregiving needs. Here are the key features of a paid leave plan that meets the needs of working families:19
1. Available to all workers
Paid family and medical leave should be available to people regardless of the size or industry of their employers and whether they work full time, part time, or are self-employed. Workers should also have the ability to relocate or switch jobs without losing access to leave, and it must be gender neutral in the amount of leave offered.
2. Comprehensive and specific in addressing serious family and medical needs
This includes addressing a serious health condition of oneself or a family member; caring for a new baby, a newly adopted child, or a newly placed foster child; or addressing the needs that may arise from a family member’s deployment in the military.
3. Affordable and cost-effective
A paid leave plan should replace enough of a worker's usual wages to support the time they need for care without jeopardizing their ability to afford basic necessities. It should also coordinate with existing benefits offered by employers and state and federal programs and be affordable for employers.
4. Inclusive
To ensure that a paid family leave plan recognizes today's diverse families and care responsibilities, it should be inclusive in its definition of "family," including covering care for elders and recognizing same-sex families.
5. Available without adverse employment consequences
Any paid leave plan must include provisions to protect workers against discrimination or retaliation for needing or taking leave. It should not force employees to give up important workplace rights or labor protections in order to use paid leave.
There is currently a legislative proposal, the Family and Medical Insurance Leave Act (FAMILY Act), that meets these key features by creating a comprehensive federal paid family and medical leave insurance program.
Working families have a clear, demonstrated need for and support of a comprehensive national paid family and medical leave program. Policymakers must act to address this pressing issue so that American workers and families can meet their caregiving needs without sacrificing their economic security.
CAP’s Women’s Initiative is a comprehensive effort to marshal CAP’s broad expertise and promote public policies that enable women to participate fully in our economy and our society.
- U.S. Bureau of Labor Statistics, National Compensation Survey: Employee Benefits in the United States, March 2017 (U.S. Department of Labor, 2017), available at https://www.bls.gov/ncs/ebs/benefits/2017/ebbl0061.pdf. ↩
- Ibid. ↩
- National Partnership for Women & Families, “State Paid Family Leave Insurance Laws” (2017), available at http://www.nationalpartnership.org/research-library/work-family/paid-leave/state-paid-family-leave-laws.pdf. ↩
- Sarah Jane Glynn, “Breadwinning Mothers Are Increasingly the U.S. Norm” (Washington: Center for American Progress, 2016), available at https://www.americanprogress.org/issues/women/reports/2016/12/19/295203/breadwinning-mothers-are-increasingly-the-u-s-norm/. ↩
- Ibid. ↩
- Ibid. ↩
- Ibid. ↩
- National Alliance for Caregiving and the AARP Public Policy Institute, “Caregiving in the U.S.: 2015 Research Report” (2015), available at http://www.aarp.org/content/dam/aarp/ppi/2015/caregiving-in-the-united-states-2015-report-revised.pdf; U.S. Department of Labor, Bureau of Labor Statistics, “Table A-1, Time spent in detailed primary activities and percent of the civilian population engaging in each activity, averages per day by sex, 2015 annual averages,” available at http://www.bls.gov/tus/tables/a1_2015.pdf (last accessed August 2017). ↩
- Jacob Alex Klerman, Kelly Daley, and Alyssa Pozniak, “Family and Medical Leave in 2012: Technical Report” (Cambridge, MA: Abt Associates, 2012), available at https://www.dol.gov/asp/evaluation/fmla/FMLA-2012-Technical-Report.pdf. ↩
- Ibid. ↩
- Sarah Jane Glynn and Danielle Corley, “The Cost of Work-Family Policy Inaction: Quantifying the Costs Families Currently Face as a Result of Lacking U.S. Work-Family Policies” (Washington: Center for American Progress, 2016), available at https://www.americanprogress.org/issues/women/reports/2016/09/22/143877/the-cost-of-inaction/ . ↩
- Joelle Saad-Lessler and Kate Bahn, “The Importance of Paid Leave for Caregivers: Labor Force Participation Effects of California’s Paid Family and Medical Leave” (Center for American Progress, 2017), available at https://www.americanprogress.org/?p=439684. ↩
- Eileen Appelbaum and Ruth Milkman, “Leaves That Pay: Employer and Worker Experiences with Paid Family Leave in California” (Washington: Center for Economic Policy Research, 2011), available at cepr.net/index.php/publications/reports/leaves-that-pay. ↩
- Shilpa Phadke and Danielle Corley, “New Polling Shows that Small Businesses Strongly Support Paid Family and Medical Leave,” Center for American Progress, March 30, 2017, available at https://www.americanprogress.org/issues/economy/news/2017/03/30/429527/new-polling-shows-small-businesses-strongly-support-paid-family-medical-leave/. ↩
- Small Business Majority, “Small Businesses Support Family Medical Leave,” Press release, September 27, 2013, available at http://www.smallbusinessmajority.org/our-research/workforce/small-businesses-support-family-medical-leave. ↩
- Phadke and Corley, “New Polling Shows that Small Businesses Strongly Support Paid Family and Medical Leave.” ↩
- Ibid. ↩
- John Halpin and Karl Agne, “American Voters Did Not Endorse Trump’s Extremist Policy Agenda in 2016 Election” (Washington: Gerstein Bocian Agne Strategies and Center for American Progress, 2016), available at https://cdn.americanprogress.org/content/uploads/2016/11/21072333/CAP-2016PostelectionPollingMemo-Final.pdf ↩
- Center for American Progress and the National Partnership for Women & Families, “Key Features of a Paid Family and Medical Leave Program that Meets the Needs of Working Families” (2014), available at https://www.americanprogress.org/issues/economy/reports/2014/12/01/102244/key-features-of-a-paid-family-and-medical-leave-program-that-meets-the-needs-of-working-families/. ↩
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9491392970085144,
"language": "en",
"url": "https://www.arnoldporter.com/en/perspectives/publications/2018/01/overview-of-the-tax-cuts-and-jobs-act",
"token_count": 6141,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.056884765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1ac50be0-dbe0-4460-907d-f7625bd00eb4>"
}
|
Overview of the Tax Cuts and Jobs Act
On December 22, 2017, President Trump signed a new tax bill into law, informally referred to as the Tax Cuts and Jobs Act (Tax Act). As the biggest legislative tax overhaul in 30 years, the Tax Act will significantly impact both individuals and corporations. The Tax Act modifies the individual tax brackets and marginal rates, limits many individual deductions that were previously permitted (such as that for state and local taxes) and significantly changes the tax treatment of certain business income earned by individuals through a "pass-through" entity. The Tax Act also reduces the corporate income tax rate and materially changes the taxation of non-US earnings of multinational groups and investors. Although most of the individual income tax changes are scheduled to expire in eight years, the corporate income tax changes generally are permanent.
Individual Income Tax Provisions
Individual Tax Rates
The Tax Act maintains seven individual income tax brackets, but, for taxable years beginning on or after January 1, 2018, the top individual income tax rate is reduced to 37 percent (from 39.6 percent) for single filers with taxable income in excess of $500,000 (in excess of $600,000 for joint filers). Under the Tax Act, the income tax rates applicable to individuals are 10 percent, 12 percent, 22 percent, 24 percent, 32 percent, 35 percent and 37 percent.
The Tax Act does not make changes to the 20 percent tax rate for long-term capital gains and qualified dividend income, the 3.8 percent Medicare tax on certain levels of net investment income or the 0.9 percent additional Medicare tax.
The Tax Act doubles the standard deduction (to $12,000 for single filers and $24,000 for joint filers).
State and Local Tax Deductions
Under the Tax Act, as a general matter, an individual may only deduct state, local and non-US property taxes, and state and local sales taxes, when such taxes are paid or accrued in carrying on a trade or business. Moreover, under the Tax Act, an individual generally is disallowed a deduction for state and local income taxes. As an exception to the general rule, an individual may deduct up to $10,000 of state and local income and/or property taxes per year. Non-US real property taxes may not be deducted under this exception.
Mortgage Interest Deduction
The Tax Act provides that no more than $750,000 ($375,000 in the case of married taxpayers filing separately) may be treated as acquisition indebtedness for purposes of the mortgage interest deduction. For acquisition indebtedness incurred before December 15, 2017 (or refinancing of such debt in an amount not in excess of the prior debt), this limitation is $1,000,000 ($500,000 in the case of married taxpayers filing separately). The Tax Act suspends the interest deduction on home equity indebtedness.
20 Percent Deduction for Certain Business Income and REIT Dividends
The Tax Act provides new rules permitting certain individuals, trusts and estates to deduct up to 20 percent of their domestic "qualified business income" and 20 percent of their aggregate "qualified REIT dividends."
Business Income Deduction
Under the Tax Act, an individual, trust or estate is permitted to deduct, for a taxable year, 20 percent of the taxpayer's share of domestic "qualified business income" with respect to a "qualified trade or business" from a partnership (or other entity treated as a partnership), S corporation, or sole proprietorship (subject to two important limitations discussed below) (Business Income Deduction). This deduction generally results in an effective tax rate of 29.6 percent (assuming the new maximum individual income tax rate of 37 percent applies) on such income. In the case of a partnership or S corporation, the deduction amount is calculated at the partner or shareholder level.
"Qualified business income" generally means income, gain, deductions and loss with respect to any qualified trade or business (generally defined as any trade or business other than the trade or business of performing services as an employee) that are effectively connected with the conduct of a US trade or business. However, the Tax Act specifically excludes from qualified business income numerous items, including investment income (e.g., capital gains and losses, dividends, certain interest income), certain annuity payments, and guaranteed payments or reasonable compensation paid for services rendered.
The Business Income Deduction is subject to two important limitations for taxpayers generally with taxable income (i.e., income from all sources) in excess of a threshold of $157,500 for single filers and $315,000 for joint filers (indexed for inflation). (Taxpayers with taxable income below these threshold amounts generally are not subject to these limitations.) These limitations generally are phased in for taxpayers with taxable income above these threshold amounts.
First, the Business Income Deduction generally is limited to the greater of:
- 50 percent of the taxpayer's share of the W-2 wages paid by the qualified trade or business, or
- 25 percent of the taxpayer's share of the W-2 wages paid by the qualified trade or business, plus 2.5 percent of the taxpayer's share of the unadjusted basis (immediately after acquisition) of all tangible depreciable assets used in the trade or business.
Second, taxpayers in specified service trades or businesses (e.g., law, accounting, consulting, financial services, investment management) are, unless their taxable income is not in excess of the above-noted threshold amounts, precluded altogether from claiming the Business Income Deduction on income derived from such trade or business.
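As a rough illustration of how the pieces above fit together, the following sketch (with hypothetical figures and a hypothetical function name; it is not tax advice and omits the phase-in rules, the specified-service exclusion and many other special rules) computes the deduction for a taxpayer whose taxable income is fully above the threshold, so the wage/basis limitation applies in full.

```python
def business_income_deduction(qualified_business_income, w2_wages, unadjusted_basis):
    """Simplified sketch of the 20% qualified business income deduction.

    Assumes taxable income is fully above the threshold, so the W-2 wage /
    asset basis limitation applies in full; ignores the specified-service
    exclusion and other special rules.
    """
    tentative = 0.20 * qualified_business_income
    wage_limit = max(
        0.50 * w2_wages,                              # 50% of W-2 wages, or
        0.25 * w2_wages + 0.025 * unadjusted_basis,   # 25% of wages + 2.5% of basis
    )
    return min(tentative, wage_limit)

# Hypothetical example: $1,000,000 of qualified business income,
# $300,000 of W-2 wages paid, $2,000,000 of depreciable asset basis.
print(business_income_deduction(1_000_000, 300_000, 2_000_000))
# 150000.0 -> the wage limit binds (50% of $300,000), not the 20% tentative amount
print(0.37 * (1 - 0.20))   # roughly 0.296, the 29.6% effective rate noted above
```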
REIT Dividend Deduction
Under the Tax Act, an individual, trust or estate is permitted to deduct, for a taxable year, 20 percent of the taxpayer's aggregate "qualified REIT dividends" (defined as any REIT dividend other than a REIT capital gain dividend or a dividend attributable to the REIT's receipt of a "qualified dividend" otherwise taxable at preferential long-term capital gain rates) and "qualified publicly traded partnership income" (generally including domestic business income allocated from a publicly traded partnership (PTP) not taxed as a corporation, but excluding investment-related items from PTPs) (REIT Dividend Deduction).
The two limitations to the Business Income Deduction do not apply to the REIT Dividend Deduction. Thus, individuals, trusts and estates who derive income that is not qualified business income or that is subject to the W-2 and basis limitations may, where possible, seek to transfer appropriate REIT-eligible assets to a private REIT. By doing so, such taxpayers may be able to claim the REIT Dividend Deduction.
Charitable Contribution Deductions
The Tax Act (1) increases the percentage limit for charitable contributions of cash to public charities from 50 percent to 60 percent of an individual's adjusted gross income; (2) permanently denies a charitable deduction for payments made in exchange for college athletic event seating rights; and (3) permanently repeals the exception to the contemporaneous written acknowledgment requirement for contributions of $250 or more when the donee organization files the required return.
Other Individual Deductions and Exclusions
The Tax Act also makes a number of other important changes to the taxation of individuals, including the following:
- Suspends the overall limitation on itemized deductions (the so-called "Pease limitation").
- Suspends all miscellaneous itemized deductions (including the deduction for tax preparation expenses).
- Generally suspends the deduction for moving expenses and generally suspends the exclusion from gross income for qualified moving expense reimbursements.
- Limits the personal casualty loss deduction to losses incurred as a result of a federally-declared disaster.
- Reduces the threshold for deducting medical expenses to 7.5 percent of adjusted gross income for all taxpayers for taxable years beginning after December 31, 2016 and ending before January 1, 2019.
- Permanently modifies section 529 plans to allow such plans to fund up to $10,000 in annual expenses for elementary or secondary school tuition.
- Permanently repeals both the deductibility of alimony payments and the inclusion of alimony payments in the payee's gross income (effective for divorce decrees and separation agreements, and certain modifications thereof, entered into after 2018).
Individual Alternative Minimum Tax
The alternative minimum tax (AMT) for individuals is retained, except that the Tax Act provides higher exemption amounts of $70,300 for single taxpayers and $109,400 for joint filers. The phase-out thresholds are increased to $500,000 and $1 million for single and joint filers, respectively.
Effective Date and Expiration
Unless otherwise specifically indicated, the individual income tax changes made by the Tax Act, discussed above, are effective for taxable years beginning on or after January 1, 2018, and are scheduled to expire for taxable years beginning on or after January 1, 2026.
Estate, Gift and Generation-Skipping Transfer Taxes
The Tax Act doubles the tax exemption amounts applicable to the estate, gift and generation-skipping transfer taxes for the next eight years. From 2018 through 2025, the estate and gift tax unified exemption will be $10 million (instead of the current $5 million), adjusted for inflation retroactively from a base year of 2010. Accordingly, in 2018, the estate and gift tax unified exemption will be $11.2 million per individual. Because the estate and gift tax credit remains unified, any gifts made during a person's lifetime pursuant to this exemption will reduce, correspondingly, the amount of the tax exemption available at the time of such person's death. The Tax Act also doubles the exemption amount for generation-skipping transfers made in 2018 through 2025 to $10 million (from the current $5 million), adjusted for inflation retroactively from a base year of 2010. The Tax Act does not repeal the estate, gift or generation-skipping transfer taxes at any point.
Business Tax Provisions
Corporate Tax Rate
The Tax Act replaces the graduated corporate tax rates under the prior law, which imposed a maximum tax rate on corporations of 35 percent, with a reduced 21 percent flat rate.
Under prior law, corporations that received dividends from other corporations generally were allowed a dividends-received deduction (DRD) equal to 80 percent of the dividends received if the receiving corporation owned at least 20 percent of the distributing corporation, and a 70 percent deduction in all other cases. In light of the reduced corporate tax rates, to maintain similar effective tax rates on dividends paid between corporations, the Tax Act reduces the 80 percent DRD for dividends received from a 20 percent-owned corporation to 65 percent, and reduces the 70 percent DRD for dividends received from all other corporations to 50 percent.
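The rate pairing can be checked with back-of-the-envelope arithmetic. The sketch below is purely illustrative and assumes a single level of corporate tax on the non-deducted portion of a dividend from a 20 percent-owned corporation.

```python
# Illustrative arithmetic only: effective corporate tax on a dividend from a
# 20%-owned corporation before and after the Tax Act.
old_rate, new_rate = 0.35, 0.21
old_drd, new_drd = 0.80, 0.65
print(old_rate * (1 - old_drd))   # ~0.07   -> 7.0% effective under prior law
print(new_rate * (1 - new_drd))   # ~0.0735 -> 7.35% effective under the Tax Act
```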
Corporate Alternative Minimum Tax
Under prior law, the corporate AMT subjected US corporations to tax on an alternative taxable income (calculated with limitations on certain tax benefits allowed under the regular formula), at a tax rate of 20 percent, if this yielded a higher tax than the normal rates applied to the taxable income as generally computed. The Tax Act repeals the corporate AMT.
Net Operating Losses
Under prior law, net operating losses (NOLs) could be carried back two years and carried forward 20 years. The Tax Act eliminates, with limited exceptions, the ability to carry back NOLs but allows NOLs to be carried forward indefinitely. However, the annual deduction for NOL carryforwards is now limited to 80 percent of taxable income.
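A minimal sketch of the new carryforward limit, with hypothetical figures; it ignores pre-2018 NOLs, which remain subject to the old rules, and the limited carryback exceptions.

```python
def nol_deduction(taxable_income_before_nol, nol_carryforward):
    """Sketch of the post-2017 limit: the NOL deduction for a year cannot
    exceed 80% of taxable income (computed before the NOL deduction)."""
    return min(nol_carryforward, 0.80 * taxable_income_before_nol)

# Hypothetical: $1,000,000 of taxable income, $2,000,000 of carryforwards.
used = nol_deduction(1_000_000, 2_000_000)
print(used)                   # roughly 800,000 deducted this year
print(2_000_000 - used)       # roughly 1,200,000 carried forward indefinitely
```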
Limitation on Interest Expense Deduction
The Tax Act contains a new limitation on the deduction of interest incurred in connection with a trade or business. This provision replaces prior law, which only limited business interest paid by US corporations to related non-US corporations where such interest was not subject to full 30 percent withholding tax. The new interest expense limitation established by the Tax Act is broader and applies to interest expense of corporate and non-corporate taxpayers paid both to related and unrelated parties. This new rule generally limits the annual deduction of business interest expense to the sum of (1) business interest income and (2) 30 percent of "adjusted taxable income," defined similarly to earnings before interest, taxes, depreciation, and amortization (EBITDA) for 2018 through 2021, and to earnings before interest and taxes (EBIT) thereafter. Interest disallowed in a year as a result of this limitation can be carried forward indefinitely.
This limitation does not apply to certain regulated public utilities and electing real property trades or businesses. Electing real property trades or businesses will, however, be required to use the alternative depreciation system to depreciate non-residential real property, residential rental property, and qualified improvement property. As a result, such electing trades or businesses are not eligible for bonus depreciation with respect to qualified improvement property (discussed below). As an additional exception, businesses with average gross receipts of $25 million or less for the preceding three-year period will not be subject to this limitation. Special rules apply for determining the limitation on interest expense of partnerships and S corporations.
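To make the cap concrete, here is a simplified sketch with hypothetical figures; it ignores carryforwards of previously disallowed interest, the small-business and real property exceptions, and the partnership- and S corporation-level rules.

```python
def deductible_business_interest(business_interest_expense,
                                 business_interest_income,
                                 adjusted_taxable_income):
    """Sketch of the new cap: deductible business interest is limited to
    business interest income plus 30% of adjusted taxable income; any
    excess is carried forward indefinitely."""
    cap = business_interest_income + 0.30 * adjusted_taxable_income
    allowed = min(business_interest_expense, cap)
    carryforward = business_interest_expense - allowed
    return allowed, carryforward

# Hypothetical: $500,000 of interest expense, $20,000 of interest income,
# $1,000,000 of adjusted taxable income (EBITDA-like through 2021).
print(deductible_business_interest(500_000, 20_000, 1_000_000))
# roughly (320000, 180000) -> $320,000 deductible, $180,000 carried forward
```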
Business Asset Expensing
The Tax Act expands the prior-law "bonus" depreciation by allowing taxpayers immediately to expense the entire cost (instead of only 50 percent, as under prior law) of certain depreciable tangible property and real property improvements acquired and placed in service after September 27, 2017 and before January 1, 2023 (with an additional year for property with longer recovery periods). This 100 percent bonus depreciation is phased down proportionately for qualifying property placed in service on or after January 1, 2023 and before January 1, 2027 (with an additional year for property with longer recovery periods). The Tax Act also expands the definition of property eligible for this bonus depreciation to include the first use by the taxpayer of property previously used by another party. This immediate expensing is not available for certain regulated public utilities and businesses using floor plan financing (i.e., car dealerships).
Like-Kind Exchanges Limited to Real Property
In general, certain assets with built-in gain can be exchanged for a similar replacement asset without recognizing the built-in gain for US federal income tax purposes in the exchanged asset (referred to as a "like-kind exchange"). In a qualifying like-kind exchange, the built-in gain in the exchanged asset is deferred and "carries over" to the replacement asset. Under prior law, this tax deferral was available for both personal and real property held for productive use in a trade or business, or held for investment purposes, and exchanged for property of a like-kind that also was held for productive use in a trade or business or for investment. The Tax Act restricts the tax deferral of a like-kind exchange solely to real property.
Three-Year Holding Period Requirement For Certain Partnership Interests
Individuals that provide financial and investment management services to investment funds often receive a partnership "profits interest" (or carried interest) in the fund as a part of their compensation. Although compensation generally is taxed at individual ordinary income tax rates (maximum 39.6 percent under prior law), allocations of fund profits with respect to such profits interests that relate to underlying long-term capital gain of the fund were, under prior law, taxable at preferential long-term capital gain rates (maximum 20 percent). Long-term capital gain rates generally apply to gain on the sale of a capital asset held for at least one year.
The Tax Act establishes an extended three-year holding period requirement in order for individuals to benefit from the long-term capital gain rates on gain recognized with respect to certain partnership profits interests (applicable partnership interest) that are received in exchange for substantial services provided in connection with an "applicable trade or business." An "applicable trade or business" generally is defined to mean (1) raising and returning capital and (2) either investment or development activities with respect to "specified assets." "Specified assets" are defined to include (1) securities, commodities, real estate held for rental or investment and cash or cash equivalents and (2) options or derivative contracts with respect to, and interests in partnerships relating to, any of these assets. This provision generally will apply to carried interests issued to fund and investment managers, but should not apply to partnership interests in partnerships engaged in operating businesses.
In the case of an applicable partnership interest, this new rule appears to impose a three-year holding period requirement both on (1) the assets of the partnership in the case of the allocation of partnership gain with respect to such assets and (2) the partnership interest in the case of gain on the sale of a relevant partnership interest by the partner. Treasury regulations or other guidance from the U.S. Internal Revenue Service (IRS) are necessary to confirm this.
Other Business Tax Provisions
The Tax Act makes other notable changes to the taxation of businesses, including, among other things, the following:
- Expands taxpayers eligible to use the cash method of accounting to include taxpayers with average annual gross receipts of $25 million or less during a preceding three-year period and beneficial treatment for such taxpayers in accounting for inventory and long-term contracts.
- Requires taxpayers to recognize an item of income no later than the taxable year in which such item is taken into account on financial statements.
- Requires research expenses to be capitalized and amortized ratably over a five-year period.
- Repeals the tax-free treatment for any contribution in aid of construction or any other contribution as a customer or potential customer, and any contribution by any governmental entity.
- Repeals the technical termination rules for partnerships.
- Repeals the domestic production activity deduction.
- Repeals the deduction for lobbying expenses with respect to legislation before local government bodies.
- Disallows deductions for business-related entertainment activities, employee transportation fringe benefits and, beginning in 2026, the cost of certain meals provided to employees for the convenience of the employer.
- Modifies a number of business credits provided under prior law.
International Tax Provisions
In general, US corporate and non-corporate taxpayers are subject to US federal income tax on their worldwide income, regardless of source (i.e., a "worldwide tax regime"). Under prior law, this worldwide taxation included the taxation of dividends received by US corporations from non-US subsidiaries. Additionally, anti-deferral rules imposed current US federal income tax on certain types of passive and related-party income (Subpart F income) of certain non-US corporations (controlled foreign corporations or CFCs), even if the earnings of these corporations were not actually distributed to shareholders that are US persons. Generally, a CFC was defined as a non-US corporation that is greater than 50 percent owned (by vote or value) in the aggregate by "US shareholders." A "US shareholder" was defined as a US person that owned at least 10 percent of the vote of the non-US corporation at issue.
Modification of CFC Rules
The CFC rules described above remain largely intact but have been expanded in certain key areas by the Tax Act. Significantly, the definition of a "US shareholder" is expanded by the Tax Act to include US persons that own at least 10 percent of the vote or value of the non-US corporation at issue (i.e., not only vote, as under prior law). Additionally, effective for the last taxable year of non-US corporations beginning before January 1, 2018, the Tax Act expands the constructive ownership rules for purposes of determining whether a US person is a US shareholder of a non-US corporation and whether a non-US corporation is a CFC. These changes will result in more US persons being treated as US shareholders and more non-US corporations being treated as CFCs. The Tax Act makes a number of other changes to the CFC rules, including, among other things, the repeal of the requirement that a non-US corporation be a CFC for an uninterrupted period of 30 days or more during any taxable year for the consequences of the CFC rules to apply.
DRD for Dividends From Non-US Subsidiaries
In a move towards a "territorial" approach, the Tax Act established a 100 percent DRD for the non-US source portion of dividends distributed by a non-US corporation (except a so-called "passive foreign investment company" (PFIC) that is not also a CFC) to a US corporate shareholder owning at least 10 percent (of vote or value) of the distributing corporation. A one-year holding period of the non-US corporate stock is required to be eligible for this DRD. No foreign tax credit or deduction is allowed for any taxes paid or accrued with respect to a dividend that qualifies for this DRD. Additionally, US corporations will no longer receive indirect foreign tax credits along with dividends from a non-US subsidiary. A DRD is not available with respect to dividends paid by a CFC that received a deduction for the distribution (i.e., a "hybrid dividend"). Significantly, however, non-US income earned directly by a US corporation still is subject to full US federal income taxation.
Mandatory One-Time Deemed Repatriation
To transition to the DRD regime, the Tax Act establishes a mandatory one-time deemed repatriation of certain accumulated earnings of non-US subsidiaries to certain US shareholders. Specifically, for a non-US corporation's last tax year beginning before January 1, 2018, a US shareholder (whether corporate or not) owning at least 10 percent (of the vote) of a "deferred foreign income corporation" generally must include in income, as Subpart F income, the shareholder's pro-rata share of the untaxed accumulated post-1986 earnings and profits (E&P) of the non-US corporation. A "deferred foreign income corporation" generally is defined as (1) either (a) a CFC or (b) a non-US corporation with at least one US-corporate shareholder owning 10 percent (of the vote) of such corporation and (2) that has untaxed post-1986 E&P. The specific amount of earnings of a deferred foreign income corporation to be included is determined based on the greater amount as of November 2, 2017 or December 31, 2017. The shareholder's portion of earnings of the non-US corporation held in cash and cash equivalents is taxed at a 15.5 percent rate and the shareholder's portion of all other earnings is taxed at an 8 percent rate. A taxpayer may elect to pay the tax due under this provision in installments over an eight-year period. Additional rules for deferral of this tax liability apply for S corporations.
In the case of real estate investment trusts (REITs), this one-time inclusion of untaxed earnings is excluded for purposes of the REIT gross income test. In addition, REITs are permitted to elect to meet the annual distribution requirement to REIT shareholders with respect to this inclusion of earnings over an eight-year period.
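As a simplified numerical illustration of the two-rate structure described above (hypothetical figures; the sketch ignores foreign tax credits, E&P measurement dates and the installment and S corporation elections):

```python
def transition_tax(cash_portion_of_eandp, other_eandp):
    """Sketch of the one-time deemed repatriation: a shareholder's share of
    untaxed post-1986 E&P held in cash and cash equivalents is taxed at
    15.5% and the remainder at 8%."""
    return 0.155 * cash_portion_of_eandp + 0.08 * other_eandp

# Hypothetical: $10,000,000 of deferred E&P, $4,000,000 of it held in cash.
print(transition_tax(4_000_000, 6_000_000))   # about 1,100,000 of transition tax
```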
Global Intangible Low-Taxed Income
The Tax Act creates a new category of income, so-called "global intangible low-taxed income" (GILTI), that will be currently includable pro rata in the income of the US shareholders of a CFC, similar to Subpart F income. This provision is viewed as an expansion of the CFC rules of the prior law because it subjects a portion of the active income generated by a CFC (and not just passive and related-party income) to current US federal income tax.
Generally, GILTI is equal to the amount that a U.S. shareholder's "net CFC tested income" exceeds the shareholder's "net deemed tangible income return." A shareholder's "net CFC tested income" generally is the shareholder's pro-rata share of the net income of a CFC, excluding Subpart F income, income that would otherwise be Subpart F income but for the application of the so-called "high tax kick-out" exception and income subject to US federal income tax. A shareholder's "net deemed tangible income return" generally is an amount equal to the excess of 10 percent of the shareholder's pro-rata share of the "qualified business asset investment" of the CFC (generally the CFC's adjusted basis of the depreciable property generating the CFC tested income) over the net interest expense taken into account in determining the shareholder's net CFC tested income. A US corporation, but not an individual, will benefit from a 50 percent deduction (reduced to 37.5 percent beginning in 2026) of GILTI included in income, resulting in a 10.5 percent effective tax rate on GILTI (13.125 percent beginning in 2026). The US federal income tax imposed on GILTI of a US corporation, but not an individual, will be further mitigated through the allowance of an indirect foreign tax credit in the amount of 80 percent of the foreign tax credits paid by the CFC with respect to the GILTI included by the US corporation. Individual shareholders, by contrast, pay full tax on GILTI and receive no foreign tax credits.
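The formula above can be illustrated with a simplified sketch for a US corporate shareholder; the figures are hypothetical, and the sketch ignores foreign tax credits, expense apportionment and other complications.

```python
def gilti_inclusion(net_cfc_tested_income, qbai, interest_expense):
    """Sketch of the GILTI amount: net CFC tested income less a deemed 10%
    routine return on tangible assets (QBAI), reduced by certain interest
    expense taken into account in tested income."""
    deemed_tangible_return = max(0.10 * qbai - interest_expense, 0)
    return max(net_cfc_tested_income - deemed_tangible_return, 0)

# Hypothetical US corporate shareholder: $5,000,000 of tested income,
# $20,000,000 of QBAI, $500,000 of allocable interest expense.
print(gilti_inclusion(5_000_000, 20_000_000, 500_000))
# about 3,500,000 of GILTI included in income
print(0.21 * (1 - 0.50))   # 0.105 -> the 10.5% effective corporate rate noted above
```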
In a related provision, the Tax Act creates a new rule that imposes a reduced effective tax rate on so-called "foreign derived intangible income" generated by a US corporation, generally income generated from sales to, and services provided to, non-US persons outside of the United States.
Base Erosion Minimum Tax
Under the Tax Act, corporations (other than REITs, regulated investment companies, or RICs, and S corporations) meeting certain threshold requirements are required to pay a tax equal to the "base erosion minimum tax amount" for the tax year. The "base erosion minimum tax amount" generally is equal to the excess of 10 percent (5 percent in 2018 and 12.5 percent beginning in 2026) of the taxpayer's modified taxable income, which is determined with "base erosion payments" added back, over the corporation's regular income tax liability (reduced by certain tax credits). A "base erosion payment" generally is an amount that is paid to a related non-US party that is deductible to the taxpayer, but does not include payments included in the cost of goods sold. Deductible payments that would otherwise be treated as "base erosion payments" but are subject to US withholding tax at a rate of 30 percent will not be added back in determining modified taxable income. The base erosion minimum tax only applies to a corporation that has average annual gross receipts of at least $500 million for the preceding three-year period and generally at least three percent of the corporation's deductible payments in the year are made to related parties.
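The mechanics can be sketched with hypothetical figures as follows; the sketch assumes the 10 percent rate, ignores the treatment of credits and the three percent base erosion percentage test, and is illustrative only.

```python
def base_erosion_minimum_tax(modified_taxable_income, regular_tax_liability,
                             beat_rate=0.10):
    """Sketch of the base erosion minimum tax amount: the excess (if any) of
    the BEAT rate times modified taxable income (i.e., with base erosion
    payments added back) over the regular tax liability."""
    return max(beat_rate * modified_taxable_income - regular_tax_liability, 0)

# Hypothetical: $100,000,000 of taxable income plus $30,000,000 of deductible
# related-party payments added back; regular tax of 21% on $100,000,000.
print(base_erosion_minimum_tax(130_000_000, 21_000_000))   # 0 -> no BEAT owed
print(base_erosion_minimum_tax(300_000_000, 21_000_000))   # about 9,000,000 of BEAT
```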
Non-US Investors Subject to Tax On Sale of Partnership Interests
In a recent case, Grecian Magnesite Mining, Industrial & Shipping Co., SA v. Commissioner, the Tax Court declined to follow the longstanding position of the IRS in Revenue Ruling 91-32 that a non-US partner is subject to U.S. federal income tax on gain from the sale of a partnership interest to the extent the partnership was engaged in a US trade or business. This case is currently being appealed by the IRS. The Tax Act includes a rule, effective for transfers of partnership interests on or after November 27, 2017, that essentially overrides the Tax Court decision and codifies Revenue Ruling 91-32. Under this new rule, gain or loss from the sale or exchange of a partnership interest will be treated as effectively connected with a US trade or business to the extent that the transferor would have had effectively connected gain or loss had the partnership sold all of its assets at fair market value as of the date of the sale or exchange. A 10 percent withholding tax is imposed, effective for transfers of partnership interests after December 31, 2017, on the gross purchase price upon a sale of a relevant partnership interest by a non-US person. This withholding requirement has been suspended, pending further IRS guidance, with respect to certain publicly traded partnership interests. Additionally, the Tax Act grants authority to the Treasury to issue regulations establishing the extent to which non-recognition provisions will apply in the case of transfers of relevant partnership interests.
Other International Tax Provisions
The Tax Act makes other notable changes to the taxation of international transactions, including, among other things, the following:
- Creates a new rule that denies deductions for payments of interest or royalties to non-US related parties where either such item of income or the recipient entity is characterized differently for US and non-US tax purposes and such item of income is not subject to non-US tax in the hands of the recipient. This rule will significantly limit the benefits of many cross-border "repo" transactions.
- Amends the definition of "intangible property" relevant for outbound restructurings and transfer pricing purposes to include workforce in place, goodwill, going concern value and any other item of intangible value.
- Eliminates the exception for transfers of certain property by a US person to a non-US corporation for use in the active conduct of a non-US trade or business.
The Tax Act was assembled very quickly, and there is little in the way of legislative history available to be used in interpreting certain provisions. The IRS has already issued notices intended to clarify the operation of certain aspects of the Act (such as the mandatory one-time transition tax on US shareholders with respect to their share of accumulated earnings of non-US subsidiaries). Further guidance, including regulations, as well as technical corrections legislation, should be forthcoming. We will, of course, keep our clients apprised of relevant developments as they emerge.
© Arnold & Porter Kaye Scholer LLP 2018 All Rights Reserved. NOTICE: ADVERTISING MATERIAL. Results depend upon a variety of factors unique to each matter. Prior results do not guarantee or predict a similar result in any future matter undertaken by the lawyer.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9698241353034973,
"language": "en",
"url": "https://www.edweek.org/policy-politics/sequestration-and-education-frequently-asked-questions/2013/03",
"token_count": 1241,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.45703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5cc7fa52-6636-4cc2-a0a2-df867db1dc5e>"
}
|
The arrival of the March 1 deadline for automatic federal spending cuts known as sequestration had policymakers and administrators from Washington to local school districts bracing for the possible effects. The outcome—political and financial—was unclear as of press time. Here, though, is a primer on what educators should know about "the sequester." For up-to-the-minute developments on the federal budget crisis, follow Education Week's ongoing coverage.
What exactly is “sequestration”? Sequestration is a series of across-the-board cuts to a broad range of federal programs, including those in the U.S. Department of Education, that was designed to hit on March 1, unless a last-ditch effort by Congress and the Obama administration stopped them. Programs in the Education Department would be cut by about 5.3 percent, according to the Government Accountability Office. The cuts wouldn’t be just for this year, either. They’re aimed at chopping $1.2 trillion out of the federal deficit over the next decade.
Where did these cuts come from? The threat of cuts was put in place as part of a deal to raise the federal debt ceiling back in August 2011. The cuts would affect both military spending, typically favored by Republicans, and domestic programs, typically favored by Democrats. The cuts were supposed to be so dire and distasteful to lawmakers on both sides of the aisle that Congress and the administration would be forced to work together on a long-term deficit-reduction deal to avert them. But that hadn’t happened as of last week, and it looked as though the cuts would become a reality, at least for a while. (Congress did delay the cuts once, as part of a deal to avert the “fiscal cliff” at the start of the year.)
When would school districts be affected? Most school districts wouldn’t get squeezed right away because key formula-funding programs—including Title I grants for districts and special education—are what’s called “forward funded.” Schools wouldn’t feel the pinch until the start of the 2013-14 school year. Still, many districts are already in the process of crafting their budgets for the coming school year, and they’d like to know what their funding will look like. The looming cuts had already made planning tough.
Would any school districts be affected right away? Some districts would get hit fairly soon under sequestration—some of them substantially. Among the hardest hit would be those in the Impact Aid program, which helps some 1,200 districts nationwide. Most impact-aid districts have a lot of Native American students or students whose parents work on military bases, or they may have federal land in or near the district. Their next federal payment, likely due out in April, would probably be smaller. But that would be unlikely to translate into widespread layoffs, according to John Forkenbrock, the president of the National Association of Federally Impacted Schools. Districts have known about the possible cuts for a long time and have prepared, he said, by doing things like delaying technology purchases. The big problem for impact-aid districts may come next year.
What about all those numbers the Obama administration is throwing out when it comes to job losses? It's true that U.S. Secretary of Education Arne Duncan has warned that large numbers of education jobs could be lost if the cuts are carried out. Those estimates are pretty scary—the Obama administration has clearly been trying to get the public riled up against the cuts. But it's tough to say how accurate the administration's estimates are at this point.
So what will the cuts actually mean in districts? School districts spend a majority of their funding on personnel, so federal cuts could very well translate into layoffs. But a lot would depend on how states and districts decide to implement the cuts. The American Association of School Administrators conducted a survey of its members back in July. Superintendents told the group they anticipated reducing professional development, cutting programs, and laying off some staff members. The bottom line? Schools have had time to prepare, but in many cases, the cuts would come on top of state and local reductions. Hard and fast figures on potential job losses aren't available yet.
Are any U.S. Department of Education programs exempt from the cuts? Yes. Student loans and Pell Grants, which help needy students cover the cost of postsecondary education, are exempt.
What about early-childhood-education programs? The Head Start program could face a cut right away, but it’s unclear just how individual grantees would be affected. Under sequestration, Head Start programs that do not offer summer services either would end their current school year earlier than planned or delay the start of the next school year, the U.S. Department of Health and Human Services says. Year-round programs likely would decide not to fill openings after children age out. And grantees could also cut transportation services.
Are any other federal programs for children safe from the sequester? Many are. Temporary Assistance for Needy Families would not get cut, and neither would most school nutrition programs and child-health programs.
What about Education Department employees? Will they be furloughed? Secretary Duncan says it’s a possibility.
What about “maintenance of effort” and other technical issues, such as the state school improvement set-aside? Advocates have been asking about such implications for months, but the administration has yet to give them a good answer.
What happens now? As of Education Week‘s deadline, congressional leaders hadn’t put forth a serious, bipartisan bill that would actually avert or reverse sequestration. Congress still has another looming fiscal deadline: March 27. That’s when a temporary measure funding most of the federal government expires. Lawmakers may figure out a way to deal with sequestration by then.
A version of this article appeared in the March 06, 2013 edition of Education Week as Sequestration and Education: Frequently Asked Questions
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9201299548149109,
"language": "en",
"url": "https://www.gfoa.org/materials/estimated-useful-lives-capital-assets",
"token_count": 110,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0380859375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8f39f011-996e-46fd-b167-73ab3b4fae06>"
}
|
Generally accepted accounting principles (GAAP) require, in most cases, that capital assets be depreciated. Depreciation is the systematic and rational allocation of the historical cost of a capital asset over its useful life. The estimated useful life assigned to a capital asset will directly affect the amount of depreciation expense reported each period in an accrual-based operating statement. Therefore, it is important to the quality of financial reporting that governments establish reasonable estimates of the useful lives of all of their depreciable capital assets.
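As a minimal illustration of why the estimated useful life directly drives the reported expense, here is a sketch of the straight-line method; the asset figures are hypothetical and other depreciation methods exist.

```python
def straight_line_depreciation(historical_cost, salvage_value, useful_life_years):
    """Annual depreciation expense under the straight-line method: the
    depreciable cost is allocated evenly over the estimated useful life."""
    return (historical_cost - salvage_value) / useful_life_years

# Hypothetical vehicle: $60,000 historical cost, $5,000 estimated salvage value.
print(straight_line_depreciation(60_000, 5_000, 5))    # 11000.0 per year over 5 years
print(straight_line_depreciation(60_000, 5_000, 10))   # 5500.0 per year over 10 years
```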
- Publication date: January 2019
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.915760338306427,
"language": "en",
"url": "https://www.sampletemplate.net/ledger-template.html",
"token_count": 284,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.026611328125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d1ed4822-c8a6-4fb5-bc01-c2662d241562>"
}
|
A good set of accounts is vital to any business today; it is proof of the business’ health. A ledger is a simple accounts document that records every transaction of the business that involves cash. This recording activity helps the company to track its revenue and expenses to identify the status of the company.
Every cent in cash or check is recorded as an input or output to the business. An inflow of revenue is called a ‘Credit’ while an outflow of funds is termed ‘Debit’. There must be a positive outcome from the Credit and Debit columns for the company to survive or progress in the business environment.
A ledger should be updated on a daily basis, especially where the volume of transactions is high. A monthly or yearly report can be generated for an overview of the credits and debits of the company. A ledger is usually handled by an accounts clerk who is familiar with at least the basic accounting structure, so that transactions are recorded correctly.
A ledger template can be in manual or electronic form. The manual form is more tedious, since balances must be computed accurately by hand, while an electronic version computes balances automatically after the credit or debit values are entered.
The ledger template may contain:
* Company Name
* Transaction Item
* Check Number
* Check Amount
* Credit Column
* Debit Column
Consider the attached sample template for your convenience.
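For illustration only, here is a minimal sketch of how an electronic ledger might compute the running balance automatically after each credit or debit is entered; all items, amounts and check numbers are made up.

```python
# Illustrative only: a tiny electronic ledger that records credits (inflows)
# and debits (outflows) and keeps a running balance.
ledger = []

def record(item, credit=0.0, debit=0.0, check_number=""):
    previous = ledger[-1]["balance"] if ledger else 0.0
    ledger.append({
        "item": item,
        "check": check_number,
        "credit": credit,
        "debit": debit,
        "balance": previous + credit - debit,   # running balance after this entry
    })

record("Customer payment", credit=1200.00)
record("Office supplies", debit=150.00, check_number="1043")
record("Monthly rent", debit=800.00, check_number="1044")

for row in ledger:
    print(row)
print("Closing balance:", ledger[-1]["balance"])   # 250.0
```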
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9681381583213806,
"language": "en",
"url": "http://mypaystreams.com/the-importance-of-small-business-training-courses/",
"token_count": 284,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0361328125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b1bfc3ce-08cf-4386-bb8e-9c570186b8af>"
}
|
What Is The Definition Of A Small Business?
The Government refers to SMEs, i.e. Small and Medium-sized Enterprises, which can cover any sized business with 1 to 250 employees. Obviously, at the larger end of the scale, this is a sizeable business. For the purpose of this article we define a small business as any company employing between 10 and 100 staff, with the term micro business being used for companies employing fewer than 10 people.
Barriers To Small Business Training
There can be many reasons why Small Businesses fail to fully embrace training in their organisations, often despite the obvious benefits, but here are some of the most common ones:
Despite all of these problems every small business invests in training in some shape or form. This may include on the job training, or local free training events, often sponsored by business support agencies, but the training is carried out nonetheless.
Overcoming The Barriers To Small Business Training
Small businesses need to be more strategic in their approach to training and include it as part of their overall strategy for growth. This can include:
Although it can be harder for Small Businesses to justify the investment in training, the returns on investment can be much more easily quantifiable. In a difficult market small businesses can move more quickly and respond to opportunities as they arise. If the team are all trained to work to their maximum efficiency, this can result in a rapid return on investment.
|