| meta (dict) | text (string, length 224–571k) |
|---|---|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9370574355125427,
"language": "en",
"url": "https://www.cmcmarkets.com/en-gb/learn-spread-betting/stochastics-and-rsi",
"token_count": 862,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0546875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:7978fb13-7e70-41fb-b6e3-059c14b982e6>"
}
|
Join us as we analyse how and when to use oscillators, with a specific focus on RSI (relative strength index) and stochastic oscillators. RSI and stochastic oscillators are among the most popular oscillators, yet many traders use them incorrectly. We cover the differences between RSI and stochastic oscillators and review best practices on how and when to use them.
Oscillators are a type of technical indicator that determines whether an asset is overbought or oversold. Identifying trends is very important when trading, and oscillators are used when a clear trend is not defined, along with other indicators such as moving averages. Therefore, if a market is experiencing a bull or bear market, oscillators may not be necessary. However, they are useful when a market is trading sideways or is particularly volatile with no clear trend.
Overbought and oversold levels are judged by trading volume. For example, if many investors are buying an asset, it will move towards overbought levels as the number of buyers slows down. This works in the same way for selling: an asset could enter an oversold situation if a large number of investors sell their stock and the selling then slowly diminishes over a specified time period.
Both the RSI and stochastic oscillators are price momentum oscillators that are popular amongst traders for forecasting market trends. Both operate to determine whether an asset is overbought or oversold, but they use different methods to calculate their readings.
Both RSI and stochastic oscillators analyse overbought and oversold levels by measuring price momentum. A stochastic oscillator is based on the assumption that an asset's current price will be closer to the highest price of its recent price range. In other words, stochastic oscillators use closing prices but also include the highs and lows of a recent range, whereas the RSI uses just the closing prices of a recent trading period.
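The difference in inputs can be sketched with the standard textbook formulas. This is a minimal illustration, not the article's own code: a 14-period window is assumed as the conventional default, and simple averages are used where production implementations typically apply Wilder smoothing to the RSI.

```python
def rsi(closes, period=14):
    """Relative Strength Index: uses closing prices only.

    Simple averages of gains/losses are used here; real implementations
    usually smooth them (Wilder's method).
    """
    gains, losses = [], []
    for prev, curr in zip(closes, closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0  # no losses in the window: reads as fully overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)


def stochastic_k(closes, highs, lows, period=14):
    """Stochastic %K: locates the latest close inside the recent high-low range."""
    highest = max(highs[-period:])
    lowest = min(lows[-period:])
    return 100.0 * (closes[-1] - lowest) / (highest - lowest)
```

Note that `rsi` never looks at the highs and lows, while `stochastic_k` needs all three series — exactly the distinction drawn above. Readings near 100 suggest overbought conditions, readings near 0 oversold.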
The RSI is the tool preferred by analysts over the stochastic oscillator, but both are popular technical indicators that suit certain situations. As a general rule, the RSI indicator can prove more useful when markets are trending, while the stochastic indicator can be more insightful than the RSI in flat or choppy markets, where there is no clear trend.
The aim of both the RSI and stochastic indicators is similar, but they were designed differently and therefore have slightly different uses. As above, the RSI helps to distinguish when an asset's price has moved too far, such as in a trending market, whereas stochastics are used to indicate when an asset's price has reached the top or bottom of a trading range. As stochastics use closing prices along with the top and bottom of a recent range, they are best suited to sideways markets with no clear bull or bear trend.
The RSI and stochastic oscillators are both momentum indicators that can be useful in different situations. They are both types of oscillators that measure the acceleration of an asset's price, indicating market entry and exit points based on overbought and oversold levels. However, an oscillator should not be used on its own to decide when to enter or exit a trade and is best used as a secondary indicator.
Disclaimer: CMC Markets is an execution-only service provider. The material (whether or not it states any opinions) is for general information purposes only, and does not take into account your personal circumstances or objectives. Nothing in this material is (or should be considered to be) financial, investment or other advice on which reliance should be placed. No opinion given in the material constitutes a recommendation by CMC Markets or the author that any particular investment, security, transaction or investment strategy is suitable for any specific person. CMC Markets does not endorse or offer opinion on the trading strategies used by the author. Their trading strategies do not guarantee any return and CMC Markets shall not be held responsible for any loss that you may incur, either directly or indirectly, arising from any investment based on any information contained herein.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.948336660861969,
"language": "en",
"url": "https://www.detsad106.ru/on-liquidating-a-5341.html",
"token_count": 436,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.248046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:77c4c347-5178-4dc0-b678-b7985168d700>"
}
|
Any transaction that offsets or closes out a long or short position. Liquidation also refers to a situation in which a company ceases operations and sells as many assets as it can; the company uses the cash to repay debt and, if possible, shareholders.
Liquidation often has a negative connotation for this reason.
Case Study
If eliminating dividends, laying off employees, selling subsidiaries, restructuring debt, and, finally, reorganization under Chapter 11 bankruptcy fail to resuscitate a business, the likely outcome is liquidation.
Depending upon statute, liquidation can precede or follow dissolution.
When a corporation undergoes liquidation, the money received by stockholders in lieu of their stock is usually treated as a sale or exchange of the stock resulting in its treatment as a capital gain or loss for Income Tax purposes.
It was expected the asset liquidation would result in creditors being paid only a portion of their claims while stockholders of the company would receive nothing.
Following a three-year attempt at reorganization under Chapter 11 bankruptcy, the firm announced it would close all 216 stores and liquidate its inventories and real estate.
The settlement of the financial affairs of a business or individual through the sale of all assets and the distribution of the proceeds to creditors, heirs, or other parties with a legal claim. The liquidation of a corporation is not the same as its dissolution (the termination of its existence as a legal entity). The proceeds of the sale are used to discharge any outstanding liabilities to the creditors of the company. If there are insufficient funds to pay all creditors (INSOLVENCY), preferential creditors are paid first (for example the INLAND REVENUE for tax due), then ordinary creditors pro rata.
The function of a liquidator is to convert the assets of the company into cash, which is then distributed among the creditors to pay off (so far as possible) the debts of the company.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.7780781984329224,
"language": "en",
"url": "https://www.efinancialmodels.com/knowledge-base/excel-google-sheets-co/excel-functions-and-formulas/test-for-errors-except-na-with-excel-iserr-function/",
"token_count": 898,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.08349609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d6dc8cbe-60e8-48b2-8bf3-522969772476>"
}
|
Performing error checks in an Excel financial model is an important step. Various functions are available to check and handle errors: functions that check for all errors, functions that check exclusively for one error type, and functions that ignore a certain error type.
In this article, we will discuss the last category, specifically the ISERR Excel function.
What is the ISERR Function in Excel? How do you use the Excel ISERR formula? Why does the ISERR Excel function exclude the #N/A error? This article will answer these questions and provide examples of how the ISERR Function works.
What is The ISERR Function in Excel
The ISERR Function in Excel falls under the Information functions; it tests for any error except the #N/A error and returns TRUE/FALSE values. The Excel ISERR formula works with errors such as #REF!, #VALUE!, #DIV/0!, #NUM!, etc.
ISERR Excel Formula Syntax
=ISERR(value)
value – Cell reference, formula, or expression to be tested for any error. The ISERR Excel function excludes the #N/A error. (Required argument)
– When an error other than the #N/A error is identified, the ISERR Function returns TRUE.
– When no error, or the #N/A error, is identified, the ISERR Function returns FALSE.
ISERR Function in Excel Examples
Why Does the ISERR Function Exclude the #N/A Error?
The #N/A error is an abbreviation for Not Available or No Value Available and mainly occurs when a formula or function cannot find the value it was instructed to search for and retrieve.
The #N/A error commonly appears when a lookup value used with VLOOKUP, HLOOKUP, MATCH, INDEX, or other lookup functions and combinations cannot be found. This does not necessarily mean there is a real error or that something is wrong with the function (a false positive). It could be that the lookup value is simply not in the list and is correctly reported as missing.
When a financial model contains lookup values, performing an error check with the ISERR Function is a suitable approach. Since the ISERR Excel formula ignores the #N/A error, the financial model provides a more realistic error check. Using a stricter error check function such as ISERROR or IFERROR would incorrectly flag and suppress #N/A results in the financial model.
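The distinction can be expressed outside Excel as well. The sketch below is a Python analogy, not Excel code, and all of its names (`NOT_FOUND`, `lookup`, `is_err`, `safe_divide`) are invented for illustration: a "not found" result plays the role of #N/A and is deliberately not treated as an error, while a genuine calculation failure is flagged.

```python
NOT_FOUND = object()  # stands in for Excel's #N/A


def lookup(table, key):
    """Like a lookup function: a missing key yields NOT_FOUND, not an error."""
    return table.get(key, NOT_FOUND)


def is_err(result):
    """Like ISERR: flags real errors but ignores 'not found'."""
    return isinstance(result, Exception)


def safe_divide(a, b):
    """A calculation that can genuinely fail (the #DIV/0! analogue)."""
    try:
        return a / b
    except ZeroDivisionError as exc:
        return exc


prices = {"AAA": 10.0, "BBB": 12.5}
assert not is_err(lookup(prices, "ZZZ"))  # missing ticker: expected, not an error
assert is_err(safe_divide(1, 0))          # division by zero: a real error
```

A stricter check that also flagged `NOT_FOUND` would, like ISERROR, hide the useful fact that the value is simply absent.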
IF and ISERR Function in Excel Combined
The Excel ISERR formula is a helpful tool in testing and identifying error values. The ISERR Function then provides a simple and short result by returning TRUE/FALSE values. However, this can be limiting as it may not necessarily give more information to the end-user. A clearer, specific, and customized error message in a financial model is highly preferred.
Recall the IF function is a simple Logical function that:
- Evaluates or tests data, formula, or expression based on given criteria (Logical test)
- Depending on the outcome of the test, it performs a specific course of action otherwise an alternative course of action (Value if TRUE and Value if FALSE)
Applying the IF and ISERR Excel functions combined with the previous error examples provides a clearer error message.
The flow of the IF and ISERR Excel functions combined.
The combination of IF and ISERR can also be applied to test the #N/A error. In that case, the formula returns the value in cell B6, the #N/A error, rather than an error message. Why? Since ISERR disregards the #N/A error (FALSE), the IF function returns the Value_if_false argument, cell B6.
Major Points to Remember Using Excel’s ISERR Function
- Incorrectly handling a true error in a financial model can lead to wrong decisions, so it is wise to use Excel's error-checking tools or to create error checks specific to the financial model.
- The ISERR Function in Excel tests for errors except the #N/A error and returns TRUE/FALSE values.
- Combine the IF and ISERR Excel functions to return error messages other than TRUE/FALSE.
- Since the ISERR Function ignores the #N/A error, use the ISERROR or IFERROR functions instead to check all types of errors.
- To test for the #N/A error only and ignore other errors, use the ISNA or IFNA functions in Excel.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9631737470626831,
"language": "en",
"url": "https://www.genpaysdebitche.net/how-should-miners-prepare-for-ethereum-fork/",
"token_count": 1086,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.23828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:eba7c11a-3a1a-4f6c-bb51-45d1104812f7>"
}
|
How Should Miners Prepare For Ethereum Fork – The term “Ethereum cryptocurrency” is a fairly new term in the world of finance and relates to digital currency itself. Well, it is a type of currency that is built on the Ethereum platform.
Put simply, the project wants to reinvent how money is sent all over the world. Right now, digital currencies are really just digital transactions between individuals. If you want to send money abroad, all you do is convert the currency you're using into whatever currency the recipient is using. This can be a very slow and expensive process, particularly when you need to use different exchange rates to make your transaction worth your while.
What is needed is a way for people to make transactions without having to deal with any currency at all. Essentially, this means you can take your money and make a transaction that involves no currency at all. In order to achieve this, you would need to use something called “cryptocoins”. These are small smart contracts that run on the “blockchain”. They are responsible for making the whole transaction as secure and safe as possible. Unfortunately, many people still aren't quite sure what the “blockchain” is, so this becomes their big question.
Basically, the “blockchain” is like the Internet with money. Think of it as a ledger where anything that's been done is logged. Any new transactions are then added to the ledger. Just like the Internet, there's a lot of potential for abuse with the ledger, which is why there's always someone trying to get a piece of it. That's why we need cryptography in order to make sure that the ledger remains safe.
The problem with many digital currencies is that they have a lot of similarities with standard currencies. All of the major economies print their own currency, which makes them very easy to track. Even if you knew how to locate all of the different governments' currency logs, you still wouldn't be able to figure out their interest rates, their political activities, or even their latest economic reports. With this information, you could easily manipulate the value of the money and take advantage of its weaknesses.
By using a digital currency based on cryptography, you'll be able to make safe transactions that will be difficult to foil. You'll also be able to make certain that you aren't spending more than you should, since there won't be any paper trails left behind. As you know, governments worldwide are worried about terrorism, which is why they keep a close eye on any kind of transaction made online.
There are some companies out there working on developing new types of cryptography for use on the Internet. In the meantime, there are several well-known cryptosystems that you can use. Some popular examples of these include Zcash, Vitalik, Prypto, and ECDSA.
Since the Internet is used around the world, you want to make sure that there isn't going to be a problem when sending private messages between your computers. That's what it's really all about.
It's very similar to what you would use for an ATM, only it's much more advanced and private. Most of the time, you can get this kind of cryptography for free, but if you're prepared to pay for it, you'll be able to get more security than ever before.
Even though there are plenty of places to buy this technology, you should make sure that you're dealing with a legitimate company that has a good reputation. You don't want to put your financial information at risk.
What's great about it is that it's been proven to be secure, so it shouldn't be hard to make the change from using codes and passwords to making this kind of personal identification system mandatory. There's nothing worse than having all of your information stolen, is there? It's definitely not a good feeling when someone gets hold of your social security number or other personal details.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9434216022491455,
"language": "en",
"url": "https://www.infinitydecking.com.au/what-is-the-total-inventory-cost-the-inventory/",
"token_count": 1647,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0260009765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:df54d4a7-661e-48a8-b635-492ecf808182>"
}
|
So, the income statement reports the cost of goods sold at $1 and the balance sheet retains the remaining inventory at $5.50. These methods are used to manage assumptions of cost flows related to inventory, stock repurchases (if purchased at different prices), and various other accounting purposes.
Since the cost of goods sold figure affects the company's net income, it also affects the balance of retained earnings on the statement of retained earnings. On the balance sheet, incorrect inventory amounts affect both the reported ending inventory and retained earnings. Inventories appear on the balance sheet under the heading “Current Assets,” which reports current assets in descending order of liquidity. Because inventories are consumed or converted into cash within a year or one operating cycle, whichever is longer, inventories usually follow cash and receivables on the balance sheet.
Different accounting methods produce different results, because their flows of costs are based upon different assumptions. The FIFO method bases its cost flow on the chronological order in which purchases are made, while the LIFO method bases its cost flow on a reverse chronological order. The average cost method produces a cost flow based on a weighted average of unit costs. LIFO and weighted average cost flow assumptions may yield different ending inventories and COGS in a perpetual inventory system than in a periodic inventory system due to the timing of the calculations. In the perpetual system, some of the oldest units counted in the periodic units-on-hand ending inventory may be expensed during an individual sale that nearly exhausts the inventory.
Such items as fresh dairy products, fruits, and vegetables should be sold on a FIFO basis. In these cases, an assumed first-in, first-out flow corresponds with the actual physical flow of goods. Inventory is generally valued at its cost and it is likely to be the largest component of the company’s current assets.
When Should A Company Use Last In, First Out (Lifo)?
In the LIFO system, the weighted average system, and the perpetual system, each sale moves the weighted average, so it is a moving weighted average for each sale. Inventory cost flow assumptions are necessary to determine the cost of goods sold and ending inventory.
Since the unit cost of inventory items will change over time, a company must select a cost flow assumption (FIFO, LIFO, average) for removing the costs from inventory and sending them to the cost of goods sold. In fact, an incorrect inventory valuation will cause two income statements to be incorrect. The reason is the ending inventory of one accounting period will automatically become the beginning inventory in the subsequent accounting period.
During periods of inflation, LIFO shows the largest cost of goods sold of any of the costing methods because the newest costs charged to cost of goods sold are also the highest costs. The larger the cost of goods sold, the smaller the net income. Those who favor LIFO argue that its use leads to a better matching of costs and revenues than the other methods.
This statement is true for some one-of-a-kind items, such as autos or real estate. For these items, use of any other method would seem illogical. However, one disadvantage of the specific identification method is that it permits the manipulation of income. When a company uses the Weighted-Average Method and prices are rising, its cost of goods sold is less than that obtained under LIFO, but more than that obtained under FIFO.
Incremental And Opportunity Costs—
So under FIFO, the cost of goods sold (COGS) for the first sales is $10. That $2 difference would significantly impact the company's financial statements and tax filing. Choosing FIFO would have the impact of making its profit appear larger for investors. Conversely, choosing LIFO would have the impact of making its profit appear smaller to the tax authorities. The money invested in inventory forms a very large part of the total costs involved in conducting the business.
Inventory turns is an indicator of how efficiently the inventory is managed. In simple terms, it is the number of times the inventory is sold in a given time period. It can be arrived at by dividing the cost of goods sold by the average inventory cost for the given period. FIFO is one of the most common methods of inventory valuation used by businesses, as it is simple and easy to understand. During inflation, the FIFO method yields a higher value of the ending inventory, a lower cost of goods sold, and a higher gross profit.
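The effect of the cost flow assumption on cost of goods sold can be sketched with a small example. The purchase quantities and prices below are hypothetical, not taken from this article; they simply model rising prices, as in the inflation scenario discussed above.

```python
# Hypothetical purchase history, oldest first: (units, unit cost).
purchases = [(10, 10.0), (10, 12.0)]
units_sold = 12


def cogs_fifo(purchases, units_sold):
    """First-in, first-out: expense the oldest cost layers first."""
    remaining, cost = units_sold, 0.0
    for units, price in purchases:
        take = min(units, remaining)
        cost += take * price
        remaining -= take
        if remaining == 0:
            break
    return cost


def cogs_lifo(purchases, units_sold):
    """Last-in, first-out: expense the newest cost layers first."""
    return cogs_fifo(list(reversed(purchases)), units_sold)


def cogs_weighted_average(purchases, units_sold):
    """Weighted average: one blended unit cost applied to every unit sold."""
    total_units = sum(u for u, _ in purchases)
    total_cost = sum(u * p for u, p in purchases)
    return units_sold * total_cost / total_units


print(cogs_fifo(purchases, units_sold))              # 124.0
print(cogs_lifo(purchases, units_sold))              # 140.0
print(cogs_weighted_average(purchases, units_sold))  # 132.0
```

With rising prices, LIFO produces the largest COGS (and so the smallest reported profit), FIFO the smallest, and the weighted average falls in between, matching the ordering described above.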
Types Of Inventory
LIFO, on the other hand, leads us to believe that companies want to sell their newest inventory, even if they still have old stock sitting around. LIFO’s a very American answer to the problem of inventory valuation, because in times of rising prices, it can lower a firm’s taxes. LIFO users will report higher cost of goods sold, and hence, less taxable income than if they used FIFO in inflationary times.
Inventory is also not as badly understated as under LIFO, but it is not as up-to-date as under FIFO. Weighted-average costing takes a middle-of-the-road approach. The Weighted-Average Method of inventory costing is a means of costing ending inventory using a weighted-average unit cost. Companies most often use the Weighted-Average Method to determine a cost for units that are basically the same, such as identical games in a toy store or identical electrical tools in a hardware store. Since the units are alike, firms can assign the same unit cost to them.
Since the costs of products may change during an accounting year, a company must select a cost flow assumption that it will use consistently. For instance, should the oldest cost be removed from inventory when an item is sold?
Do I Have To Keep Track Of Inventory?
Inventory valuation allows you to evaluate your Cost of Goods Sold (COGS) and, ultimately, your profitability. The most widely used methods for valuation are FIFO (first-in, first-out), LIFO (last-in, first-out) and WAC (weighted average cost).
Therefore, periodic and perpetual inventory procedures produce the same results for the specific identification method. The FIFO (first-in, first-out) method of inventory costing assumes that the costs of the first goods purchased are those charged to cost of goods sold when the company actually sells goods. This method assumes the first goods purchased are the first goods sold. In some companies, the first units in (bought) must be the first units out (sold) to avoid large losses from spoilage.
Inventory refers to the goods meant for sale or unsold goods. In manufacturing, it includes raw materials, semi-finished and finished goods. Inventory valuation is done at the end of every financial year to calculate the cost of goods sold and the cost of the unsold inventory.
This decision is critical and will affect a company’s gross margin, net income, and taxes, as well as future inventory valuations. Under FIFO, the first unit of inventory is recognized as the first sold off the shelves.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9353910684585571,
"language": "en",
"url": "https://www.investinwhatsnext.org/About",
"token_count": 327,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0002651214599609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:94901f2d-e2cc-42fb-ae50-00764e158d5d>"
}
|
“Invest in What’s Next: Life After High School” is an online mini-course developed by the Federal Reserve Banks of Richmond and San Francisco to help students navigate their first major financial decision: what path to pursue after high school. The course encourages students to explore multiple post-secondary education paths and job options and to think about how investing in their knowledge and skills may contribute to their future well-being.
The course’s primary objective is to provide reliable economics-based information and tools to help high school students make informed decisions about post-secondary education. The course helps students begin planning their post-high school strategy by exploring and evaluating the costs and benefits of various education paths, while taking into account their job interests and desired lifestyle. Along the way, students develop personal finance and numeracy skills to help implement their strategy in the real world.
The course consists of three lessons, each requiring approximately 45 to 60 minutes of sit-down time, plus optional homework assignments. A dashboard directs and charts student progress in the course, allowing teachers and facilitators to tailor their own level of involvement.
Course Content and Treatments
The lessons present content in a highly interactive format, featuring data-driven treatments. These interactive treatments combine at the end of the course to build a plan that students can reference and update for life after high school.
For more information about the course, please visit our FAQ page.
To learn more about other education resources provided by the Federal Reserve Banks of Richmond and San Francisco, as well as other Reserve Banks across the Federal Reserve System, please visit our websites at:
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9747666120529175,
"language": "en",
"url": "https://www.newbondstreetpawnbrokers.com/blog/poor-harvest-leaves-italian-wine-industry-trailing/",
"token_count": 630,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:624535a7-7297-4a62-830f-fd21ca8b930b>"
}
|
Poor harvest leaves Italian wine industry trailing
September 16, 2014
Italy is a country well known for producing fine wines, and has been doing so for over 2,000 years, its balmy climate and low rainfall making it an ideal location for vineyards. The country has consistently been the world's largest producer of wine for some time, with estimates suggesting that it produces between a quarter and a third of all wine globally. The country is home to 200,000 wineries of varying sizes, and over one million vineyards, which all contribute to the extensive industry.
However, the production of Italian wine is on course to take a 15% hit this year, due to poor harvest conditions caused by unusually wet weather. This means that last year’s harvest of 48 million hectolitres will not be matched; instead, it is projected that Italy will produce 41 million hectolitres, making 2014 the worst year for wine production since 1950. The projections, made by the national farmers’ association Coldiretti in a preliminary report released last week, ran alongside claims that Italy will lose their place as the world’s biggest wine producer to France.
Wine is big business in Italy, its production generating one of the largest sources of income to the national economy. In an average year the industry generates around €9.5 billion, half of which is made from sales overseas. Italy has been responsible for the highest amount of wine exported consistently since 2009, but thanks to this year’s poor harvest, this accolade could be taken by France or Spain. As well as impacting the livelihood of those working in Italy, it will do nothing to help the unemployment issues the country is facing, with only 58% of its residents in work. For comparison, the UK’s employment rate is 71%, and the average rate of employment in the EU is 65%.
Wine is also strongly wedded to Italian culture; in terms of global consumption, Italy comes third behind the United States and France. Numbers released by the Italian National Institute of Statistics say that 40% of males report that they drink at least one glass of wine daily, while only 7.7% of males drink a glass of beer daily. Italians are proud of their wine; around half of their total production is consumed domestically.
In reality, this is unlikely to affect the availability of Italian wine in British stores too much, but it might be worth holding onto that pricey Sicilian red from 2014. In years to come, it could be incredibly rare.
If you’re looking to pawn fine wines, get in touch with us today. Our Blenheim Street pawn shop is based in the heart of Mayfair. Appointments can be made, but are not 100% necessary; we’re always happy to take walk-ins. We look forward to seeing you – and your fine wines – very soon. Some of the wine we loan against includes Chateau Petrus, Chateau Margaux, Chateau Lafite and Chateau Mouton to name just a few.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9765624403953552,
"language": "en",
"url": "https://bakkencpapc.com/are-your-children-being-educated-about-money/",
"token_count": 495,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.08349609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:63a1023b-1df0-47b5-88c0-7d43d1261b36>"
}
|
Three out of ten parents don’t talk to their children about money or have had just one major talk with their children on the subject, according to a U.S. survey conducted for the AICPA. Children tend to be over the age of ten by the time their parents first talk to them about money.
Compared with talking to children about finances, parents are more likely to talk to them about other important topics, such as:
- The importance of good manners
- The benefits of good eating habits
- The importance of getting good grades
- The dangers of drugs and alcohol
- The risks of smoking
It is important to teach children the right lessons about financial responsibility and help them to be prepared for a sound financial future.
Some tips for how to get these ideas across to your children:
- Start Early. Your children learn at a young age to want items such as toys, clothes, or games; this is the time to start teaching them about saving. Have them practice saving by putting away some of their birthday or allowance money to purchase the item they want. Give them a goal to meet and, once it has been met, let them buy the item. This will teach them the basics of delayed gratification and budgeting for a goal.
- Speak in Their Terms. Your child may have no interest in learning about the compounding interest on their college savings account; they are more likely to care about money to spend with friends or to buy a toy. Take this opportunity to teach about saving by relating it to something they care about enough now to listen.
- Repeat Often. The more often you talk to your children about good financial decisions, the more likely it is to stick with them in their future. At meal times, talk about saving for big purchases, like vacations, and how it might affect budgets.
- Walk the Talk. As they say, actions speak louder than words. By giving in easily to your children if they make a fuss over a toy at store, then you will have a hard time convincing them that delayed gratification and sticking to a budget is effective.
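The savings-goal exercise in the first tip reduces to simple arithmetic. The sketch below computes how many weeks of saving a goal requires; the price and allowance figures are invented purely for illustration:

```python
import math

def weeks_to_goal(price, weekly_saving):
    """Number of whole weeks of saving needed to afford an item."""
    if weekly_saving <= 0:
        raise ValueError("weekly saving must be positive")
    return math.ceil(price / weekly_saving)

# Illustrative figures: a $30 toy, saving $4 of allowance per week.
print(weeks_to_goal(30, 4))  # 8 weeks
```

Working through this with a child makes the goal concrete: they can count down the weeks themselves.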
Teaching your children now about the benefits of saving and budgeting is just as important as teaching them to be polite. These are basic skills that your children will need to know to be a well-rounded adult. For more information on what other financial knowledge you should be passing on to your children, contact your accountant today.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.963098406791687,
"language": "en",
"url": "https://devinit.org/blog/growth-leave-no-one-behind/",
"token_count": 1259,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.037109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:15b81880-6ff8-4642-b6b5-395e82e3b148>"
}
|
Can the world deliver on growth that leaves no one behind?
Amy Dodd argues that we need a common definition and metrics for measuring inclusive growth to ensure that we meet our commitment to leave no one behind
As key actors met in Washington DC last week for the World Bank and IMF Annual Meetings, growth was on the agenda. African countries are seeking fast-paced growth that will provide jobs for their young people, bring roads and services to rural areas, and encourage private sector development that enables small and medium-sized enterprises to flourish. All very reasonable and sensible expectations. However, growth focused on increasing GDP, or other financial and economic measures, does not always bring equality, shared prosperity and opportunity for all. The debate on the relationship between growth, poverty reduction and inequality has been going on for decades. While the language changes – from ‘pro-poor growth’ to, more recently, ‘inclusive growth’ or even ‘green growth’ – it remains on the agenda as a priority. But it often fails to materialise into policies for promoting growth at the country level. Agenda 2030 has 45 references to inclusion, 10 of them specifically about ‘inclusive economic growth’. Five of the Sustainable Development Goals demand ‘inclusive’ progress (4, 8, 9, 11 and 16) and the commitment to leave no one behind clearly demands that everyone is included.
Economic growth has helped lift many millions of people over the income poverty line since 2000. However, the impact was largely felt in China; other parts of the world did not fare so well. Evidence from many countries, particularly in Sub-Saharan Africa, shows that sustained national economic growth does not necessarily translate into poverty reduction. Despite all the talk about the benefits of growth being shared, the data is clear: the poorest people are being left behind as the gap between the poorest 20% of people and everyone else – globally and in many countries – has grown. And this inequality manifests in people’s lives beyond their incomes – the poorest regions, for example, are likely to experience the worst health and education outcomes but also to receive the least funding (from their own governments and donors). Inclusion in the benefits of growth is not a given.
Figure 1: the income gap between the poorest 20% of people and everyone else has been growing
So what does it mean to ensure growth can be and is more inclusive?
How major institutions understand, operationalise and measure inclusive growth (including what metrics they use to test whether an investment is inclusive or not) is an important question and one that we have begun to explore by reviewing literature and frameworks from key institutions. The basic questions we are seeking to address are relatively simple ones – what do we mean by inclusive growth, and are there credible means of assessing who is being included in that growth?
A common definition of ‘inclusive growth’ and shared language are missing
Looking at publicly available information, there appears to be no mainstreamed definition of inclusive growth among key institutions – most do not provide a clear definition at all.
Yet, it is broadly understood as the movement of people out of poverty, tied to the reduction of inequality, through the prospect of “the benefits of a growing economy extending to all segments of society” (Mastercard Centre for Inclusive Growth).
The most common term used by organisations to refer to inclusive growth is the term itself, as noted by the RSA Commission in their 2017 Inclusive Growth Report: “Terminology may vary, but the underlying sense is the same, whether this is about ‘more and better jobs’, ‘quality jobs’, ‘closing the gap’, ‘an economy that works for everyone’ or ‘inclusive growth’”. The concept of inclusive growth is also clearly linked to concepts of economic prosperity and financial inclusion.
Better, more rigorous and comparable metrics are needed to measure progress
There is substantial variation in the ways in which institutions measure progress but the majority focus on some or all of the following: inequality, economic growth, and poverty. Metrics used to measure these were not always clear but, where they were, they focused on GDP, wages, measures of inequality, levels of investments, income, quality of life and poverty. Some also used specific tools or indices, such as the Multi-dimensional Poverty Index. But many were not explicit in how they were measuring the impact of investments and interventions aimed at delivering ‘inclusive growth’ or did not have appropriate metrics to assess it. The lack of common frameworks and metrics makes comparison, and learning, more challenging.
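One concrete metric behind this discussion, the income share held by the poorest 20% of people, is mechanical to compute from a list of incomes. The sketch below is illustrative only: the income figures are invented, and a simple quintile share is just one of the many possible metrics mentioned above.

```python
def bottom_quintile_share(incomes):
    """Share of total income held by the poorest 20% of people."""
    ordered = sorted(incomes)
    cutoff = max(1, len(ordered) // 5)  # size of the bottom 20%
    return sum(ordered[:cutoff]) / sum(ordered)

# Invented incomes for ten people: the poorest two hold 3/100 of the total.
incomes = [1, 2, 5, 7, 10, 10, 12, 15, 18, 20]
print(round(bottom_quintile_share(incomes), 2))  # 0.03
```

Tracking a share like this over time is one simple way to test whether growth is reaching the poorest, rather than relying on GDP alone.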
So while many institutions may have policies on inclusive growth, this does not seem to be reflected in clear definitions and metrics for measurement – and, if we are truly seeking to leave no one behind, this is a serious gap.
There is still some way to go in progressing the inclusive growth agenda from commitment to action – as the poorest people in the world are not reaping the benefits of broad-based economic growth. While this is also true for Agenda 2030 as a whole, a shared definition could help kickstart dialogue among institutions regarding which metrics would most effectively assess progress and encourage a more unified approach. Inclusive growth is possible but it requires global and national institutions to start from the same place, with a clear and agreed definition from which they can move forward, and share measures for tracking the progress of the poorest people.
Photo credit: Jonathan Ernst/World Bank
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9347026944160461,
"language": "en",
"url": "https://m.ebrary.net/101997/communication/three_drivers_connectivity_data_attention",
"token_count": 4037,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.035400390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:30f95e19-9800-4ac5-a89b-6c7ada209c52>"
}
|
The Three Drivers: Connectivity, Data and Attention
Digital technology fires on three cylinders to power the network economy: connectivity, the creation and sharing of data, and scarcity of attention.
Abstract DCT has unleashed three powerful drivers: connectivity itself; the collection and parsing of voluminous data; and a newly recognized resource bottleneck, attention. These drivers present opportunities as well as challenges, enabling the network economy to innovate and develop. Connectivity generates transparency as information transfer becomes more efficient; data enables the creation of patterns and stories about individuals; and the trading of attention (or eyeballs) as a commodity - the central feature of advertising - reveals attention as the new scarce resource. Together, these forces propel a reconfiguration - unbundling and repackaging - of markets and products. Manifestation of DCT across markets is subtle and economic growth is uneven, sticking at the challenges in some sectors, but the race to adapt is enticing and we move forward.
Keywords Connectivity • Data • Attention as a scarce resource • Social capital
© The Author(s) 2017
S. Bhatt, How Digital Communication Technology Shapes Markets, Palgrave Advances in the Economics of Innovation and Technology
Most technology revolutions have addressed fairly practical problems. The steam engine and railroad enabled transportation; electricity enabled production during non-daylight hours. What has the digital revolution done? DCT has enabled us to be virtually connected, and, by unleashing three key intertwined drivers on the economy, has transformed it into a network economy. First, we have instant, continuous, and ubiquitous connectivity, which has created a vast and complex network of connected individuals. Second, this information transfer enables the collection of massive amounts of data. Connectivity and data allow information to flow between market participants, thereby eliminating market intermediaries or traders. In this situation, buyers and sellers directly engage in transactions in a sharing economy where the surplus between the value of the product to the buyer and the cost of production to the seller is shared. Third, connectivity compels us to recognize attention as a scarce resource. The centrality of marketing, of advertisements, for the free consumption of information attests to this scarcity.
Let us consider each of these drivers in turn. What is the implication of the first - that is, of instant, continuous, and ubiquitous connectivity? Connectivity is both real and virtual. Railroads represent real connectivity while digitization generates virtual connectivity, but the underlying technology dynamic has accelerated. Consider that, in the industrial revolution, a whole century separated the 1769 invention of the steam engine by James Watt and the building of the first transcontinental railroad - the Transcontinental Union Pacific - in 1869. Less than a third of that time - only thirty years - passed between the introduction of the Apple II in 1977 and the iPhone (2007).1 In “The Dynamo and the Computer” Paul David introduces the “delay hypothesis” in a discussion of similar technological time lags: the introduction of electric machinery in the early 1920s took place some four decades after the first electric power station in 1882, two decades separate the discovery of the internal combustion engine and the development of the drive chain that transmitted power to the wheels. The idea is that it takes time for supporting adjustments to be made to the rest of the economic environment before the actual technology has a noticeable impact .
One of the most visible results of this new connectivity is “disruptive innovation,” or creative destruction, where many businesses from travel agencies and record stores to mapmaking and taxi dispatch have been disrupted. Disruption occurs when newer companies offer cheaper alternatives to products sold by established players and also when existing markets are redefined and the economic landscape reconfigured. Shane Greenstein makes the case that both structural and environmental factors played a role in this process of “innovation from the edges ... by suppliers who lacked power in the old market structure, who the central firms regarded as peripheral participants in the supply of services, and who perceived economic opportunities outside of the prevailing view” . This economic fluidity extends to new markets with products, heretofore undreamed of, that displace entire industries.
For example, sharing of private goods has been a common feature of society but “sharing” for a price is a novel development. The firm Airbnb involves sharing an underutilized personal space with another person(s) for a fee, crossing the boundary between home and hotel. The hallmark of the network economy is the matching of underutilized resources in market A (to create the supply), with an undersupplied resource in market B (to create the demand). This is often referred to as the sharing economy because the underutilized resource is frequently a privately owned good that is “shared” with others. Another description of the network economy is the “on- demand” economy, which refers to the notion that direct links between buyer and seller create a sense of immediacy in fulfillment of wants.
These direct links result from the elimination of intermediaries, creating a new way of consuming. Connectivity has reshaped the boundaries between markets and firms. Historically, intermediaries had been indispensable for trade to be consummated between individuals due to asymmetries in information, time, and geography. Now we have a TaskRabbit economy where people who want something are instantly connected with those who sell it. Technology has enabled detailed profiles, customer reviews, and rating systems about sellers on social networking sites, which create a compact of trust, reputation, responsibility, and rights (TRR&R) between buyer and seller. Firms have granular data about consumers and can differentiate products to accommodate diverse preferences.
Without intermediaries, the entire surplus, or the gap between value and cost, can be shared by sellers and buyers, with no leakage in commissions. How this surplus is allocated depends upon the bargaining process, and a reasonable outcome to this bargaining “game” depends upon the value of outside options to both parties. What is the buyer (or seller) giving up in order to enter the proposed sharing agreement? However, this is not an entirely rational calculation - emotion plays an important role. Neuroscientists have shown that the limbic system, the part of the brain that is host to attention and memory, is also host to emotion and reasoning. Hence, sellers have to activate emotion in order to capture attention and make a deal. In his acclaimed book, Antonio Damasio elaborates upon this crucial link between reason and emotion:
... work from my laboratory has shown that emotion is integral to the processes of reasoning and decision making, for worse and for better... It certainly does not seem true that reason stands to gain from operating without the leverage of emotion. On the contrary, emotion probably assists reasoning, especially when it comes to personal and social matters. [17, pp. 41-42]
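The surplus-splitting logic described above can be made concrete with the symmetric Nash bargaining solution, where each party's outside option sets its disagreement point. This is a standard textbook formalization rather than one proposed in the source, and all figures are invented for illustration:

```python
def nash_split(value, cost, outside_buyer=0.0, outside_seller=0.0):
    """Split the trade surplus (value - cost) per the symmetric Nash
    bargaining solution: each party receives its outside option plus
    half of whatever surplus remains after both options are covered."""
    surplus = value - cost
    remainder = surplus - outside_buyer - outside_seller
    if remainder < 0:
        return None  # no mutually beneficial deal exists
    buyer_gain = outside_buyer + remainder / 2
    seller_gain = outside_seller + remainder / 2
    return buyer_gain, seller_gain

# A buyer values a room-share at $120; the host's cost is $40; the host
# could otherwise earn $20. The remaining $60 of surplus splits evenly.
print(nash_split(120, 40, outside_buyer=0, outside_seller=20))  # (30.0, 50.0)
```

The calculation captures only the rational side of the bargain; as the passage notes, emotion shapes whether the deal is actually struck.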
The second key driver is vast amounts of static data, commonly called big data (BD), and new ways of acquiring information. Newly created links between individuals generate additional pathways of information gathering. Every connection and transaction, whether it is social, political, or economic, generates information. New links are context dependent - common friends, common interests, and social influence. Patterns emerge from all these nodes interacting and these patterns form the basis for new, dynamic data. These patterns may never be finished, so the network economy is an evolving, complex, and dynamic system and therefore more than simply a static knowledge economy.
BD, therefore, is real-time flow data, not just the stock of past data. It is important to distinguish between raw data and clean data. Clean data is data that has been processed, sorted, analyzed, and conceptualized. It provides information and increases transparency which reduces entry barriers. Transparency forces quick responses from firms, faster innovation, and customization simply to maintain market share, thus empowering the consumer. But it also empowers firms who can tailor their product offering to individual customers with the concomitant price increase.
However, and importantly, on the policy side, the question to be addressed is that of property rights to BD. In the absence of clearly defined property rights, individuals may violate privacy laws as articulated by the 4th Amendment, an issue discussed in detail in Chap. 7. However, there are instances where private data can be valuable public property. For example, in the event of a major health epidemic, vital information about location patterns of infected individuals is more valuable to the governing authority. This data could be accessed from the personal databank of individual smartphones. In the case of city congestion and environmentally sustainable transportation, shared information about traffic patterns and commuting schedules could allow organizations to create smart transportation infrastructure. The aggregation of private information, mostly unsolicited, from participants in the network is crowd sourcing of information, which creates BD. This aggregate body of private information cultivates a diversity of potential solutions to public problems. More generally, crowd sourcing encourages dialogue, develops public understanding of social problems, and motivates action. The example of Jun, Spain in Chap. 5 illustrates this idea.
The key to access any data and avoid privacy infringement issues is to create property rights over personal data so that individuals can voluntarily share their information at the right price. Then, like all personal property or private assets, this will give individuals control over their data. Lessig suggests just such a strategy in “protecting personal data through a property right. As with copyright, a privacy property right would create strong incentives in those who want to use that property to secure the appropriate consent.... people value privacy differently” . It is quite possible that a market for this data might lead to exorbitant prices. On the other hand, as is the case today, when data are public property and easily accessible, privacy concerns may lead people to hide data. Like a public park, individuals may not appreciate the full benefits of this shared resource and therefore may not support sharing data or the allocation of resources devoted to its collection. So more thought needs to be given to what the right balance is between making data private property versus public property.
The creation and proliferation of data presents opportunities in two key respects.
The first is recombinant innovation or combinatorial innovation, which involves combining disparate sets of information due to new links. Recombinant innovation is not invention, which is creating something new; it is not improvement, which involves a more efficient way of solving an old problem, much like tinkering along the margin. Innovation is a new way of solving an old problem. This builds organizational capital, which involves new ways of doing business: decision-making, hiring systems, incentive systems, and information flows. Companies “have dispensed with warehouses, trucks and full-time drivers and instead have become middlemen whose sole role is to connect customers with couriers” . Note that this connectivity has eliminated one layer of intermediary along the supply chain - the transportation link. So in effect, the supply chain has shrunk.
The second is the creation of social capital, which consists of shared values and mutual trust. Social capital is created when individuals have repeated trading interactions, inducing a climate of TRR&R, and then form social links in a focal closure. Once formed, social capital generates the opportunity and incentives to create yet more links for transactional purposes (either social, economic, or political). The social ties that bind create membership closure as individuals build trading relationships based on these ties. Social capital enables cooperation in the network economy where economic tensions are resolved via negotiating differences - a point that I will return to later in discussing competition versus cooperation in Chap. 8.
Social capital is bonding capital in tightly connected networks and bridging capital in networks with low embeddedness. For example, the shadow-banking network, which is the unregulated banking network, relies on bonding capital. The network has high embeddedness, with traders having multiple neighbors in common; so mutual trust is the basis of most transactions. Social capital can also exist in networks with low embeddedness, where a strong connection or link between two individuals in two distinct components can foster a bridge, creating bridging capital. For example, in the wholesale diamond industry, social capital is the bridge connecting the wholesale diamond industry in Antwerp, Belgium and Surat, India. In both centers, workers such as diamond cutters, financiers, distributors, and salespeople have long-standing social relationships cementing bonds of trust. Packages of cut diamonds are simply handed over and paid for without inspection. The reputation of each party to this transaction carries sufficient weight so that the diamonds being traded are indisputably adhering to the specifications of the contract.
The third key driver is scarcity of a resource, attention. Monetization of various publishers’ digital presence compels them to trade eyeballs on advertising exchanges as they would stocks and bonds. The marketing industry is well aware of this feature. The product underlying real-time advertising exchanges is individual attention. When Google or Facebook places ads on their site, they have sold your attention to advertisers. Search entries on Google and data from Facebook’s News Feed, for example, are translated into targeted ads on real-time bidding exchanges.
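Real-time bidding exchanges of the kind mentioned above typically resolve each impression with a sealed-bid auction; the second-price variant sketched below is a common textbook mechanism, used here purely as an illustration (the advertisers and bids are invented):

```python
def second_price_auction(bids):
    """Award an ad impression to the highest bidder at the runner-up's
    price. `bids` maps advertiser -> bid; returns (winner, price paid)."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

# One user's attention on one page load, sold in milliseconds:
bids = {"shoe_brand": 2.10, "travel_site": 1.75, "bank": 0.90}
print(second_price_auction(bids))  # ('shoe_brand', 1.75)
```

The commodity changing hands in each round is precisely the eyeball: a single slice of one person's attention.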
If attention was private property, with all the concomitant property rights, then individuals could choose to sell their time but, like privately owned land, it would not be appropriated without consent, as done currently by attention-grabbing sidebars and headers on various sites.2 Encroachment of one’s attention due to unsolicited information is equivalent to a violation of property rights. To place this idea into perspective, labor became a form of private property that could be bought and sold for hourly wages only after the Enclosure Movement in the 1500s in Tudor England. Common land was enclosed and transferred to individuals as their private property. Peasants who worked on this land were now displaced and had to trade their labor in the marketplace. Land and labor were traded in a world governed by contracts rather than common custom.3
Milgrom and Roberts “interpret ‘owning an asset’ to mean having the residual rights of control - that is the right to make any decision concerning the asset’s use that is not explicitly controlled by law or assigned to another by contract” [21, p. 291]. In this view, unsolicited information packets are encroaching upon private property when they capture attention. Private data, like attention, is subject to similar territorial disputes. Sherry Turkle makes the case that the debate should be rephrased from
the language of privacy rights to the language of control over one’s own data... The companies that collect our data would have responsibilities to protect it... [But] the person who provides the data retains control of how they are used. [22, p. 328]
Note that we have two separate notions of privacy. One is ownership rights over attention so individuals have a right to not be addressed, or be left alone. Any information requires attention to be appropriately absorbed. When random bits of information seize attention, they infringe on private property. This could be considered a violation of Fourth Amendment rights to personal property - “The right of the people to be secure in their persons... against unreasonable seizures.” The other is ownership rights over personal data, which has nothing to do with attention.
From an individual perspective the critical issue is that of autonomy. Economic agents want to have control over the decision to share attention and data. They want to decide if and how attention and data are to be used by others. The question of privacy then becomes one of allocation of control and decision-making authority. Subjecting attention to infringement by unsolicited information is equivalent to one’s private data being compromised. Both can be thought of as invasions of privacy.
- 1. Bill Gates has said, however, that, “the Altair 8800 is the first thing that deserves to be called a personal computer” . The Altair was a machine that hobbyists and hackers, and members of the Homebrew Computer Club in Menlo Park, California, received in a box containing parts that they could solder together and use.
- 2. Lawrence Lessig makes a more general case that “the protection of privacy would be stronger if people conceived of the right as a property right. People need to take ownership of this right and protect it, and propertizing is the traditional tool we use to identify and enable protection” .
- 3. In medieval times, when each village’s economy was isolated, common field agriculture was the custom and institutions were developed such that each laborer had a reasonable land allotment in the common fields. These allotments were scattered and no individual was able to experiment with new ideas or adopt any improvement without general approval but there was also no perceptible social gap between the laborer and farmer. The lord of the manor instituted the process of enclosure, primarily as a means for dispute resolution. Thus originated the institution of private property. The early acts dated to 1773 and were more local than national. However, surrounding a piece of land with hedges and ditches produced “rural depopulation and converted the villager from a peasant with medieval status to an agricultural laborer entirely dependent on a weekly wage.” The farmers who owned the enclosed private property benefited due to the increased rents .
4. However, due to the 1998 Copyright Term Extension Act passed by Congress, copyrights remain in effect until seventy years after the author’s death.
David, Paul. “The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox.” American Economic Review (Papers and Proceedings), 80, no. 2, pp. 355-361 (1990).
Greenstein, Shane. How the Internet Became Commercial: Innovation, Privatization, and the Birth of a New Network. Princeton, NJ: Princeton University Press, 2016.
Damasio, Antonio. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Mariner Books, 2000.
Lessig, Lawrence. “Code Version 2.0.” Accessed June 26, 2016 from http://codev2.cc/download+remix/Lessig-Codev2.pdf
Miller, Claire. “Delivery Start-Ups are Back Like It’s 1999”.The New York Times, August 19, 2014.
Slater, Gilbert. “The English Peasantry and the Enclosure of Common Fields.” PhD thesis, University of London. Retrieved May 26, 2016 from https://books.google.com/books?id=ACEpAAAAYAAJ&printsec=front cover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false
Milgrom, Paul, and John Roberts. Economics, Organization and Management, 291. Upper Saddle River, NJ: Prentice-Hall, 1992.
Turkle, Sherry. Reclaiming Conversation: The Power of Talk in a Digital Age, 238. New York: Penguin Press, 2015.
Isaacson, Walter. The Innovators: How a Group of Hackers, Geniuses and Geeks Created The Digital Revolution. New York: Simon and Schuster, 2014.
Mokyr, Joel. A Culture of Growth: Origins of the Modern Economy. Princeton, NJ: Princeton University Press, 2014.
Bensinger, Greg. “Amazon Hails Cab for Delivery Test.” Wall Street Journal, November 6, 2014.
Varian, Hal, Joseph Farrell and Carl Shapiro. The Economics of Information Technology. Cambridge: Cambridge University Press, 2011.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.948814332485199,
"language": "en",
"url": "https://moneysoft.com/list-of-common-measures-of-value/",
"token_count": 442,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.08544921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6be14b30-00a8-4395-af8a-14d1dd8e2c98>"
}
|
List of Common Measures of Value
Book Value is the difference between a company’s Assets and Liabilities as stated on the current Balance Sheet. Book Value is an accounting term and does not provide a meaningful measure of the business value.
Liquidation Value is the net amount that would be realized if the business terminated and the assets are sold piecemeal. Liquidation can be “forced” or “orderly.”
Collateral Value is the amount of available secured credit based on the percentage that can be advanced against the estimated, appraised value of individual assets.
Insurable Value is the value used to determine the amount of insurance coverage that should be carried to fund buy-sell agreements or for liability, property and casualty insurance purposes.
Market Value is the price a business would command in an open market when exposed for sale for a reasonable period of time.
Fair Value for financial reporting is the price that would be received to sell an asset or paid to transfer a liability in an orderly transaction between market participants at the measurement date. For state legal matters pertaining to shareholders, Fair Value is generally defined by statute.
Fair Market Value is the price, expressed in terms of cash or equivalents, at which a business would be sold between a hypothetical willing and able buyer and a hypothetical willing and able seller, acting at arms length in an open and unrestricted market, when neither is under compulsion to buy or sell and when both parties have reasonable knowledge of the relevant facts.
Impaired Goodwill Value is a reduction in the value of Goodwill arising from annual valuations necessitated by FASB No. 142.
Fundamental or Intrinsic Value is the value that an investor considers, on the basis of an evaluation of available facts, to be the “true” or “real” value that will become the market value when other investors reach the same conclusion.
Investment Value is the value to a particular investor based on individual investment requirements and expectations.
Break-Up Value is the total value of a company’s separate operations (divisions, subsidiaries or business units) if they were sold separately on the open market.
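Two of the measures above, Book Value and Liquidation Value, are purely mechanical and can be sketched directly. The balance-sheet figures and forced-sale recovery rates below are invented for illustration only:

```python
def book_value(assets, liabilities):
    """Book Value: balance-sheet assets minus liabilities."""
    return assets - liabilities

def liquidation_value(asset_values, recovery_rates, liabilities):
    """Net amount realized in a piecemeal sale: each asset class sold at
    an assumed recovery rate, minus the liabilities to be settled."""
    gross = sum(asset_values[k] * recovery_rates[k] for k in asset_values)
    return gross - liabilities

assets = {"inventory": 200_000, "equipment": 150_000, "receivables": 50_000}
forced_sale = {"inventory": 0.40, "equipment": 0.55, "receivables": 0.80}

print(book_value(400_000, 250_000))                     # 150000
print(liquidation_value(assets, forced_sale, 250_000))  # -47500.0
```

The gap between the two outputs illustrates why Book Value is an accounting figure rather than a meaningful measure of what a business would actually fetch.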
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9425708651542664,
"language": "en",
"url": "https://niallbyrneco.ie/information/limited-company/",
"token_count": 189,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.054443359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:65d5df15-b424-4781-8a09-4b0b202178ae>"
}
|
A limited company is a separate legal entity. The owners are shareholders, and its directors make decisions on behalf of the company. As a separate entity it has sole responsibility for its debts. Its liabilities are limited to the paid-up share capital, therefore the company is said to have "limited liability".
Advantages of a Limited Company
- Limited liability - in general, shareholders are only liable to lose the share capital they subscribe.
- Pension contributions can be made at the Company's expense.
- Raising finance can be less difficult.
- There can be many owners of the business.
Disadvantages of a Limited Company
- Limited liability may be neutralized in practice, as lenders may seek personal guarantees.
- Legislative requirements may be costly and time consuming.
- The need to prepare and file audited accounts with the Companies Registration Office.
- There are surcharges on undistributed investment income.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9390878677368164,
"language": "en",
"url": "https://uniquewritersbay.com/federal-reserve-system-creation-majors-roles-powers-responsibilities/",
"token_count": 500,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.25390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6e1fc6a0-6228-446d-8327-0eb1de361d16>"
}
|
Federal Reserve – Its Creation, Major Roles, Powers And Responsibilities
The Federal Reserve System stands out among other US government institutions for the unique roles and responsibilities it plays. It acts as the central bank of the United States. Since it was first created in 1913, its roles, responsibilities and goals have evolved over time into the obligations it holds today (Lawrence, 1997).
To begin with, the Federal Reserve System's main objective is to control the economy. It stands as a separate entity from the government and is therefore not subject to government regulation. It possesses the power to print currency and, consequently, to cause inflation. It can also raise or lower interest rates and generally stabilize the country's financial system (Wicker, 1966).
Apart from these powers, the Federal Reserve has a dual mandate of price stabilization and employment, each independent of the other. As such, the system conducts the nation's monetary policy by influencing credit and monetary conditions in pursuit of maximum employment, moderate long-term interest rates and stable prices. Besides these responsibilities, the Federal Reserve regulates and supervises banking institutions to ensure the safety and soundness of the nation's financial and banking system (Wicker, 1966). In this way, it also protects the credit rights of consumers in the banking system.
The major roles of the Federal Reserve are essential because the organization is the gatekeeper of the U.S. economy and, as the government's bank, is charged with regulating the nation's financial institutions. Overall, promoting sustainable growth, promoting high levels of employment, moderating long-term interest rates and ensuring price stability to preserve the dollar's purchasing power all fall under the Federal Reserve's mandate. This makes the Federal Reserve the country's money manager, the government's bank, the banker's bank, and the ultimate regulator of all financial institutions in the United States.
In summary, the Federal Reserve System maintains the stability of the nation's financial system and contains the systemic risk that can arise in financial markets. It also provides auxiliary and related services, such as financial services to the US government, depository institutions and foreign institutions, and it operates the nation's payments system.
Order Unique Answer Now
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9499089121818542,
"language": "en",
"url": "https://www.cram.com/essay/Case-Analysis-Of-Exxon-Mobil/F3CYYCVLU64E5",
"token_count": 328,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1005859375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e6851bc2-6e6f-4844-9067-ad55dbb742ed>"
}
|
Case Analysis Of Exxon Mobil
Definition (Business-Level Strategy): “Business-level strategies are actions firms take to gain competitive advantages in a single market or industry” (BLS, 102). ExxonMobil is one of the few companies that has been able to lead the oil and gas industry through cost leadership. Its large economies of scale make it the dominant firm in the market as well as the cost leader in the industry. Its powerful market position across the value chain allows the company to take advantage of new emerging growth opportunities around the world. Its overall geographic diversity enables ExxonMobil to decrease risk in a competitive landscape and maximize profitability through a less risky business portfolio. ExxonMobil chooses a cost-leadership business strategy, which focuses on gaining advantages by reducing costs below those of all its competitors.
Business Level Strategy
Sources of cost advantage:
1) Size differences and economies of scale:
ExxonMobil has expanded its horizons across the globe, establishing large, low-cost extracting and refining facilities in more than 17 countries, including Indonesia, Brazil, and markets in Europe and Asia. Due to its size and strength, the company is able to drive away smaller companies by producing more volume at a lower cost per unit. Oil production is carried out with specialized machines and highly skilled labor, which are virtually impossible for small and startup companies to mimic. In particular, the integration between Exxon's refineries and chemical manufacturing facilities is an advantage that no one can replicate. Approximately 13 of Exxon's refineries, totaling 3.4 mmbbl/d, or 63% of its capacity, are joint refining and chemical facilities.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9390148520469666,
"language": "en",
"url": "https://www.findanattorney.co.za/content_bee-compliance",
"token_count": 1110,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.48828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6e5aeff9-12fe-4811-8314-fa30d852ac84>"
}
|
Article by listed attorney: NICOLENE SCHOEMAN
(please read about the latest BBBEE developments here)
According to Woolley, and as defined by the Broad-based Black Economic Empowerment Act 53 of 2003, broad-based black economic empowerment (BEE) exists for two purposes or functions:
the moral imperative, which is to eradicate the effects of oppression and unlawful expropriation during the reign of apartheid; and
an economic imperative, which must address the results of the policies and effects of apartheid that caused a marked difference in the standards of living between the rich and poor.1
The high level of unemployment in South Africa is one of the main reasons for the imperative towards successful transformation.
“if implemented properly and viewed as an opportunity, BEE could prove to be the best weapon not only to insure [sic] continued growth for South African businesses, but also as a skills transfer tool for millions of black people who were historically excluded.”2
BEE is a wide movement that functions on ownership, employment, procurement and advancement levels to accommodate members of the private sector, or holding companies, who may not want to sell the equity of the enterprise. This will ensure the creation of economic cooperation between the public and private sector, which is the key to economic development and growth.
The Act applies to “black people” as defined and is a generic term which means Africans, Coloureds and Indians. “Broad-based black economic empowerment” means the economic empowerment of all black people including women, workers, youth, people with disabilities and people living in rural areas through diverse but integrated socio-economic strategies.
Any enterprise with annual total revenue of R5 million or less qualifies as an Exempted Micro-Enterprise (EME). EMEs are deemed to have Level Four Contributor BEE status, which facilitates 100% BEE procurement recognition. If the EME is more than 50% owned by black people, the enterprise qualifies for a promotion to a Level Three Contributor BEE status, which allocates a 110% BEE procurement recognition.
Qualifying Small Enterprises (QSEs) are determined only by turnover. If their turnover is between R5 million and R35 million then they qualify as small enterprises. A QSE may choose any four out of the seven elements on the BEE Scorecard.
Any other business must comply with all seven score card elements.
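To make the turnover-based classification above concrete, here is an illustrative Python sketch. The function name and return format are my own, and the thresholds and recognition percentages are the ones quoted in this article, which may not reflect later amendments to the codes:

```python
def bee_enterprise_category(annual_revenue_rand, black_ownership_pct=0.0):
    """Classify an enterprise under the turnover thresholds described above.

    Returns (category, deemed_level, procurement_recognition_pct) for EMEs,
    or (category, None, None) where a full scorecard assessment is required.
    """
    if annual_revenue_rand <= 5_000_000:
        if black_ownership_pct > 50:
            return ("EME", 3, 110)   # promoted to Level Three Contributor
        return ("EME", 4, 100)       # deemed Level Four Contributor
    elif annual_revenue_rand <= 35_000_000:
        return ("QSE", None, None)   # may choose any four of the seven elements
    else:
        return ("Generic", None, None)  # must comply with all seven elements

print(bee_enterprise_category(3_000_000, black_ownership_pct=60))
# ('EME', 3, 110)
```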
When the scorecard has been completed, it should be submitted to a verification agency for validation. The agency will issue a verification certificate which is valid for one year.
Various aspects must be considered to calculate the BEE scorecard according to the Codes of Good Practice:
4.1 Direct empowerment
Ownership can exist on three levels: economic interest, non-encumbrance and equity control. Economic control is not defined, but refers to the equity interest of a member, as well as the assumption of all risk for liability and profit. On the other hand, equity control refers to the ability to appoint and remove directors with majority voting rights; the ability to control or direct majority votes; and the control and management of the business. Non-encumbrance means that owners with equity control can apply and enjoy their share as they deem fit, without any restrictions.
4.1.2 Management control:
Practically speaking, management control predominates. The first layer of this control is representation of black people at executive board level. The second is representation of black owners. Third is the involvement of black people in the daily operations and strategic decision-making at the most senior management levels. The final layer is the representation of black people in overall financial and management positions.
4.2 Human resource development
4.2.1 Employment equity:
Businesses must comply with the provisions of the Employment Equity Act to achieve equitable representation in the workplace. This refers to the empowerment and representation of designated groups by designated employers.
4.2.2 Skills development:
This section focuses on the development of existing employees and on improving their skills. This will ensure the growth of the economy and the availability of trained and skilled individuals to participate in that growth.
4.3 Indirect empowerment
4.3.1 Preferential procurement:
This part of the scorecard measures the extent to which companies procure goods and services from BEE compliant companies.
4.3.2 Enterprise development:
The capacity of black suppliers who are BEE compliant must be developed. Furthermore, this statement facilitates the assistance or accelerated development, sustainability and ultimate financial and operational independence of a beneficiary.
4.4.1 Corporate social investment:
This element was added to ensure industry-specific flexibility. It aims to ensure that natural persons are able to generate income for themselves. This includes investment in rural development and infrastructural support in the same area or community, and also includes labour-intensive production. Generally, this refers to the after-tax expenditure of businesses that provide items such as housing, bursaries and transport as part of the social wages of employees.
Nicolene Schoeman, Schoeman Attorneys (Cape Town)
1 WOOLLEY R
2005. Everyone’s Guide to Black economic empowerment and how to implement it. Paarl: Zebra Press.
2 KRUGER T
2005. ABC of BEE. HR magazine. September 2005: 36 – 37.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9266939759254456,
"language": "en",
"url": "https://www.supplychainbrain.com/articles/20730-research-says-global-gdp-growth-rate-may-drop-almost-40-percent-over-next-50-years",
"token_count": 187,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.1865234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:799a5063-1c1f-4f73-89d2-1a091ca3fc4f>"
}
|
With populations aging and fertility rates dropping around the world, the growth rates of the past 50 years may prove to be the exception, not the rule. The latest research of the McKinsey Global Institute suggests that unless increases in labor productivity compensate for an aging workforce, the next 50 years will see a nearly 40 percent drop in GDP growth rates and a roughly 20 percent drop in the growth rate of per capita income around the world.
The potential for diminished growth varies considerably among countries. In the developed world, Canada and Germany are poised for the biggest drops in GDP growth rates. Saudi Arabia, Mexico, Russia and Brazil are most at risk in developing countries. Societies that fail to raise their game for the productivity needed to sustain growth will find it harder to achieve a host of desirable goals, such as reducing poverty in developing economies and meeting current social commitments in developed ones.
Timely, incisive articles delivered directly to your inbox.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9520742893218994,
"language": "en",
"url": "http://blog.esadvisors.net/2010/10/ratio-analysis-part-1.html",
"token_count": 520,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.042724609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:931354f7-fccf-427e-8bf7-e810e7879b95>"
}
|
Financial analysts often recommend ratio analysis as a way to measure the condition of a business. Many small business owners don’t know how to calculate the ratios or don’t understand what the ratios are telling them. We will discuss how to calculate important ratios and what they mean.
Financial ratios can be classified into four groups: liquidity ratios, activity ratios, leverage ratios, and profitability ratios. This week we will discuss liquidity ratios and leverage ratios.
Liquidity ratios help measure a business' ability to generate sufficient cash flow to pay it's current bills.
Liquidity is necessary for all businesses, especially during economic downturns or slow periods for a company.
Current Ratio: This ratio is subject to seasonal fluctuations and is used to measure the ability of the business to meet its current liabilities out of current assets. A high ratio is needed if the business has difficulty borrowing on short notice.
Current Ratio = Current Assets/Current Liabilities
Quick (Acid-Test) Ratio: The quick ratio, also known as the acid-test ratio is an even stricter measure of liquidity and is what saved many businesses when the economy fell apart in 2009.
Quick Ratio = (Cash + Short Term Investments + Accounts Receivable)/Current Liabilities
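To make these two liquidity formulas concrete, here is a short Python sketch; all balance-sheet figures are hypothetical:

```python
def current_ratio(current_assets, current_liabilities):
    # Current assets divided by current liabilities.
    return current_assets / current_liabilities

def quick_ratio(cash, short_term_investments, accounts_receivable, current_liabilities):
    # Excludes inventory -- the stricter "acid test" of liquidity.
    return (cash + short_term_investments + accounts_receivable) / current_liabilities

# Hypothetical balance-sheet figures:
assets = {"cash": 50_000, "short_term_investments": 20_000,
          "accounts_receivable": 80_000, "inventory": 150_000}
current_liabilities = 100_000

print(current_ratio(sum(assets.values()), current_liabilities))        # 3.0
print(quick_ratio(assets["cash"], assets["short_term_investments"],
                  assets["accounts_receivable"], current_liabilities))  # 1.5
```

Note how including inventory doubles the apparent liquidity here; the quick ratio is what reveals whether the bills could be paid without selling stock.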
Leverage (Solvency) Ratios. Solvency is the ability of the business to pay its long-term debts as they become due. An analysis of solvency looks at the long-term financial and operating structure of a business. The amount of long-term debt the business has is also considered. Solvency is affected by profitability, since in the long run no business will be able to meet its debts unless it is profitable.
Debt Ratio: The debt ratio compares total liabilities to total assets. It shows the percentage of total funds obtained from creditors. The more funding a business has from creditors, the more risk from a decrease in revenue and/or a decrease in profitability.
Debt Ratio = Total Liabilities/Total Assets
Times Interest Earned (Interest Coverage) Ratio: The times interest earned ratio reflects the number of times before-tax earnings cover interest expense. It is a safety margin indicator in the sense that it shows how much of a decline in earnings a business can safely survive.
Interest Coverage = Earnings before Interest and Taxes/Interest Expense
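The two leverage ratios can be sketched the same way; the figures are again hypothetical:

```python
def debt_ratio(total_liabilities, total_assets):
    # Percentage of total funds obtained from creditors.
    return total_liabilities / total_assets

def interest_coverage(ebit, interest_expense):
    # Number of times before-tax earnings cover interest expense.
    return ebit / interest_expense

print(debt_ratio(400_000, 1_000_000))     # 0.4 -> 40% of assets funded by creditors
print(interest_coverage(90_000, 30_000))  # 3.0 -> EBIT covers interest 3 times over
```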
The key to all ratio analysis is what you compare the ratios to. Industry standards are important as well as the business' own history.
Next week, we will discuss activity and profitability ratios. What is your favorite ratio?
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.938156247138977,
"language": "en",
"url": "http://www.rogercamrass.com/general-pages/blog/139/data-is-the-new-gold/?Auth=ae7874f379fac39b456c4977eb1d27a5",
"token_count": 1685,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.35546875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:358e63ce-5ca4-4fc3-9ff9-8ea3900b2cb3>"
}
|
As we are propelled by COVID-19 into home working, online shopping and contact restricted to Zoom and social media, we might ponder where we end up during the new decade. In this paper Roger Camrass, Director of CIONET UK and visiting professor of the University of Surrey, draws comparisons between the colonisation of the ‘New World’ five hundred years ago and the colonisation of the digital ‘New World’ today.
Colonization of the Americas, 1492
When Christopher Columbus set sail across the Atlantic in 1492, he discovered uncharted islands in the Caribbean just off the coast of the ‘New World’. At around the same time John Cabot accidentally arrived on the shores of North America. Two centuries later the British, Spanish, and Portuguese had successfully colonised the ‘New World’, from the polar north to the southern tip of South America.
These explorers brought with them domestic viruses that wiped out most of the original inhabitants of the ‘New’ continent. With little local resistance, these European nations were able to plunder and carry home many valuable resources such as gold, potatoes and tobacco.
A new wave of colonisation: 2020
Sound familiar? In just twenty years, a similar process of colonisation has taken place in which a handful of digital titans such as Facebook, Apple, Amazon, Netflix and Google (FAANG) in the West have captured and colonised the digital world with little resistance from incumbents. Instead of gold and tobacco, the valuable resource these titans seek is ‘your data’, which is rapidly becoming the currency of the digital economy.
The COVID-19 virus has compressed this colonisation from decades down to months, fuelled by consumer demand for online services and a cash mountain of cheap money that is being poured into the FAANG and associated communities. It is quite possible that within the next 5-10 years these digital leaders and related start-ups will represent most of the equity value in the Western world. Companies such as Baidu, Alibaba and Tencent (BAT) are following suit in Asia. By 2030 little will remain of the ‘analogue’ world. Colonisation will be complete, and all ‘data’ will be in the hands of a few powerful monopolies. Let’s trace this remarkable journey.
Birth of the digital world
In April 1969 Vince Cerf, Robert Kahn and others began a programme sponsored by US defence agency ARPA to design an infinitely expandable, indestructible data network that became known as the ARPANET. In 1975 Roger Camrass (an early ARPANET research fellow) joined a team at MIT to help industrialise this early prototype by developing efficient data communication protocols. By 1983 the network architecture was complete and became known as TCP/IP. This enabled ARPA to assemble the ‘network of networks’ that is the foundation of today’s Internet.
Beyond this period an important event occurred in April 1989 when a scientist working at CERN laboratories in Switzerland, Sir Tim Berners-Lee, invented an information system that allows documents to be connected to other documents by hypertext links, enabling the user to search for information by moving from one document to another. One can say that with such an invention, referred to as the World Wide Web (www) we entered the modern digital era.
Work led by Roger Camrass at SRI in the nineties under a global research programme ‘Business in the Third Millennium’ identified the transformational powers of such an information network within business and social domains. It was at this time that start-ups such as Amazon and Google emerged to commercialise the technical capabilities of the Internet and related World Wide Web.
Enter the hyper-connected world
The current ubiquity of broadband connections and mobile networks around the globe has heralded a new era of ‘hyper-connectivity’, with some two billion people now accessing mail, social media and web sites daily. By the end of 2019, some 20% of consumer transactions had migrated to online channels. Digital leaders represented over 25% of S&P market value and were continuing to grow revenues in double digits. Elsewhere, growth of incumbents barely kept up with inflation, and productivity in Western economies flatlined.
The COVID-19 pandemic has accelerated the move to digital as it sent billions of office workers to their homes and restricted families to online rather than physical shopping. Fortunately, investment in global cloud platforms such as Azure, Google Cloud and AWS enabled a relatively smooth transition. But the divergence between incumbents’ and digital leaders’ stock market values continues to widen, with the former witnessing sharp declines of 20-30% and the latter enjoying increases of similar magnitude since January 2020.
This divergence has profound implications. Investment is attracted by healthy returns. The prospect for large incumbents is ever gloomier as they become starved of investment capital and dividends begin to dry up. In contrast, the digital leaders can continue to fuel their double-digit growth through access to ‘cheap money’.
New horizons – Hyper-personalisation and smart everything
A second wave of technologies following broadband and cloud platforms will include the Internet of Things (IoT), 5G mobile, Artificial Intelligence (AI) and Machine Learning, Blockchain, Edge computing, and 3D printing. These are set to further enhance the capabilities of the digital economy and the related strength of the digital leaders.
Such technologies enable us to capture, store, analyse and retrieve data from every possible source, both human and mechanical. As we enter the new decade, we are witnessing an explosion of data on the planet. More data has been created in the past two years than in the entire previous history of humans (as of 2018). Data is growing faster than ever before; by the year 2020, about 1.7 megabytes of new information will be created every second for every human being on the planet.
The implications are potentially transformational. We are about to witness the era of ‘smart everything’, from smart cities and homes to wearables that sense and respond to our every mood and desire. So far, digital leaders have succeeded by simplifying our ability to shop and communicate such as Amazon’s ‘One click’ transactions and Facebook’s social media platform. Now we will see the emphasis changing towards anticipation of need through accurate profiling of our personal habits. Note Google’s acquisition of Deep Minds.
Soon this will extend into the workplace as Artificial Intelligence helps corporates to analyse and automate workflows, displacing millions of jobs in the UK and elsewhere. Just as in modern factories, robots will take over repetitive activities leaving humans to concentrate on value adding tasks. In the words of Jason Kingdon, Chairman and CEO of Blue Prism, “we think there’s a future where the workplace consists of one third robots, one third humans and one third core IT”
Preparing for a digital future
The prospect of a handful of colonists such as FAANG and BAT dominating the entire digital economy is not a healthy one, especially if these titans exert influence and control over every aspect of our lives through ownership of personal ‘data’. There are measures we can take as a society:
- Own and protect our personal data which is becoming the currency of the digital economy. This may take the form of digital assistants that act as intermediaries between us and the digital titans
- Encourage incumbent organisations to be more radical in their approach to digital transformation. Instead of trying to streamline the core, we believe that progress will only be made by creating new digital businesses out at the edge
- Alert governments and regulatory bodies to the danger of colonisation. Given the virtual nature of the digital economy they may struggle to win back control. Digital titans operate with relative impunity across borders. Time is running out.
For more information visit www.rogercamrass.com for blogs and publications
‘Protocol Problems associated with simple communications networks’ by Roger Camrass and Professor Robert Gallagher, MIT, published in February 1976 (visit www.rogercamrass.com for the original paper)
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9536164999008179,
"language": "en",
"url": "https://biostrategyanalytics.com/2013/08/28/5-ways-to-boost-your-companys-stock-price/",
"token_count": 1850,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.10302734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a0f94518-da73-4b79-b070-72cee2b75345>"
}
|
Stock price is the result of supply and demand forces in the capital markets. It is not necessarily linked to the financial performance of the company, especially in the biotechnology sector. In fact, a significant share of biotech companies have negative net income at the time they are acquired or raise equity through an IPO (two of the main exit strategies). The reason is the huge amount of capital required to push a product to market, as well as market access and reimbursement issues after the product has been approved. Capital markets and investors are well aware of these issues and therefore focus on companies that could deliver great returns in the medium to long term, either in the form of dividends (which implies that the company needs positive net income) or stock price appreciation.
This article focuses on how a pharmaceutical company can boost its stock price. It should be noted that the suggested methods are not definitive, and each carries certain risks and pitfalls. Hence, the disadvantages of these methods are also discussed.
Stock Repurchase (or Stock Buy-Back)
Stock repurchase has been a common method of boosting share price. The reason is that in a stock buy-back, demand for the stock increases and hence its price. It is a way to convince the markets that the stock is reliable and that the company believes its future performance will improve. A selection of major stock repurchases by large pharma and biotech companies is shown below:
Pfizer: $10 billion Stock Repurchase Program (Announced in 2013)
Johnson & Johnson: $12.9 billion Accelerated Share Repurchase (Announced in 2012)
Amgen: $10 billion Share Repurchase Program (Announced in 2012)
Biogen: $3 billion Share Repurchase Program (Announced in 2007)
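As a simplified illustration of one mechanical reason buy-backs can support the share price, the sketch below shows the effect of retiring shares on earnings per share. All figures are hypothetical, and net income is assumed to be unchanged by the buy-back:

```python
def eps_after_buyback(net_income, shares_outstanding, buyback_spend, share_price):
    # Simplified: assumes net income is unchanged and all repurchased
    # shares are retired at the quoted price.
    shares_retired = buyback_spend / share_price
    return net_income / (shares_outstanding - shares_retired)

net_income = 8_000_000_000   # hypothetical $8bn annual net income
shares = 6_000_000_000       # hypothetical 6bn shares outstanding

print(round(net_income / shares, 2))                                  # 1.33 before
print(round(eps_after_buyback(net_income, shares, 10_000_000_000, 25.0), 2))  # 1.43 after
```

With fewer shares dividing the same earnings, EPS rises, which is one reason markets often react positively even when the underlying business is unchanged.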
So how have these companies performed since their stock repurchase programs were announced? The following graphs show the stock price variation since the programs were announced.
It can be seen that in all cases there was a minor to major increase in the stock price of these companies. However, there are large discrepancies: Pfizer shows a $1 increase in stock price, while Biogen has experienced a four-fold increase. This indicates that numerous factors affect the stock price, such as M&A, regulatory and legal issues, company expectations and investor expectations. Therefore, one cannot simply draw conclusions from stock fluctuations, but they can in some cases be indicative of the impact of stock repurchase programs. A comprehensive and interesting analysis of this subject has been provided by Life Sci VC.
Financial theory holds that capital structure does not affect firm value; however, real-world capital markets are driven largely by psychology, and every move can have an impact. Raising debt can lower the overall risk of the firm, provided the firm has not yet reached the point of financial distress (i.e. it is unable to pay its short-term debts). In addition, depending on the amount of debt raised and how it is used, it may have a positive effect on the stock price. An example is Pfizer, which raised $13.5 bn. in debt (in the form of corporate bonds) in March 2009; since then its stock price has remained above its level at the debt offering announcement.
The types of debt raised may also indirectly affect the stock price of the firm, based on the debtor's timely returns and flexibility. The different types of debt are described below (as described by Bender and Ward, “Corporate Financial Strategy”, 2008):
- Secured Debt: Backed by a collateral, low interest rate and low risk (e.g. corporate bonds).
- Unsecured Debt: Partial covenants, medium interest rate and risk (e.g. debenture).
- Mezzanine Debt: Covenants may exist, high interest rate and risk, convertible to equity.
- Subordinated Debt: No collateral, very high interest rate and risk.
Selling preferred shares can also be considered a way to finance a company. Although it is an equity instrument, it features some characteristics of debt securities and is more closely tied to the financial performance of the company. Main characteristics of preferred shares (Miller, “Valuing a Preferred Stock”, 2007):
Convertible, cumulative preferred shares with fixed and adjustable dividend rates and voting rights are more likely to attract investors and increase demand for the preferred stock, which may allow the company to further improve the terms of the preferred stock, thus leading to improved enterprise value. In the long term this may prove beneficial to the common stock as well.
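As an illustrative aside (not drawn from Miller's paper), the simplest textbook case, a fixed-rate, non-convertible preferred share paying a constant dividend indefinitely, is commonly valued as a perpetuity:

```python
def perpetual_preferred_value(annual_dividend, required_return):
    # Perpetuity valuation: a fixed, non-convertible preferred share
    # paying a constant dividend forever is worth dividend / required return.
    return annual_dividend / required_return

# A share paying $5 per year, with investors demanding an 8% return:
print(round(perpetual_preferred_value(5.0, 0.08), 2))  # 62.5
```

Convertible or adjustable-rate preferred shares, as discussed above, require more involved models, since the conversion option and rate resets add value beyond the fixed dividend stream.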
Organisational restructuring requires evaluating, valuing and prioritising the main assets of the company, for example where the company has multiple business divisions and business units with “subunits”. As an example, consider a fully integrated pharmaceutical company whose operations lie in two main therapeutic areas, oncology and cardiovascular, each split into mature products and early-stage products. Valuing the projects or the business units based on financial performance (e.g. sales growth, EBIT margins) is crucial for the firm (see figure below).
If a business unit or a subunit performs well below the overall performance of the firm, then the firm may consider either raising funds for that unit to grow organically or selling that business to another firm. This shows investors a willingness to grow and improve financial performance, which could potentially (in the long term) be rewarded through higher demand for the equity.
Mergers & Acquisitions (M&A)
Consolidation is a major trend in the pharmaceutical industry due to the high M&A activity in the sector. There is an extensive literature in the field of M&A and particularly its effect on shareholder value and stock price. The table below shows a number of studies that have examined this effect:
It can be seen that the majority of these studies conclude that the effect of M&A on stock return is positive.
It should be noted, though, that because most of these studies use econometric (regression) analyses as their methodology, long time-series data are required for the effect of time-lags to be small enough for the models to show significant results. In other words, small time-lags are used, implying that these positive effects are short-term, while the long-term effects of M&A on stock price are not completely visible.
If a company is profitable, a certain percentage or absolute amount of net income is usually reinvested in the company. The rest can be distributed to shareholders as dividends, which can have a positive effect on stock price depending on the consistency and the relative amount (i.e. compared to the previous year) of dividends distributed.
An additional strategy can be to use a small percentage of net income for capital investments in other companies. The figure below shows the types of investments (public equity, public debt and private) that can be made, assessed by their level of risk and return.
A portfolio of investments can be optimised by using as a benchmark: (i) average market return, or (ii) 6 month or 1-year average stock return of your company, (iii) Weighted Average Capital Cost (WACC) of your company, or (iv) Industry-specific index average return (e.g. NASDAQ Biotechnology Index – BTK) depending on the (expected) return that a company needs. In order to do that, a historical benchmarking of each type of investment should be performed. The next step is to model different combinations of investments (portfolios) to achieve the required return. Although different combinations may lead to the same required return, adjustments should be made based on the needs and preferences of the company. A sensitivity analysis is crucial as well, as some of the modelled portfolios might be highly sensitive to very few investments which makes the perceived risk high.
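The benchmarking idea above can be sketched in a few lines of Python. The asset classes, historical returns, weights and benchmark below are purely hypothetical; a real model would also include the sensitivity analysis the text describes:

```python
def portfolio_expected_return(weights, expected_returns):
    # Weighted average of the expected returns of each investment class.
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * r for w, r in zip(weights, expected_returns))

# Hypothetical historical average returns for three investment classes:
returns = [0.06, 0.04, 0.15]   # public equity, public debt, private
weights = [0.50, 0.30, 0.20]   # one candidate portfolio

benchmark = 0.07               # e.g. the firm's WACC, or an index average
p = portfolio_expected_return(weights, returns)
print(round(p, 4), p >= benchmark)  # 0.072 True
```

Different weightings can hit the same required return; as the text notes, the choice among them should then be driven by the firm's preferences and by how sensitive each portfolio is to any single investment.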
Overall, diversifying the portfolio is a strategy that capital markets may appreciate, as the company shows its intention to diversify its risks and returns across different operations.
In this article 5 ways to boost your company’s stock price have been suggested: (i) Stock Repurchase, (ii) Raising Debt, (iii) Organisational Restructuring, (iv) Mergers and Acquisitions (M&A) and (v) Diversifying Portfolio. The pros and cons of each strategy have also been discussed. A combination of these strategies is more likely to have an impact on the stock price of your company. For example, a company can go through an organisational restructuring through which a certain amount of capital can be saved. Thereafter, the company can raise debt and use the “saved capital” and some of the debt to perform M&A, repurchase stock and diversify its portfolio, or a combination of the three.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9293027520179749,
"language": "en",
"url": "https://farmdesire.com/ostrich-cost/",
"token_count": 1474,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.03564453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6d1521c3-ffaf-4054-bf74-4cc4de4d8aed>"
}
|
Most of the people’s minds just stuck at the corn and cattle farming, whenever they hear the word farming. However, there are many non-traditional farming systems like ostrich farming that are far more profitable than cattle farms.
Ostrich farming is an unconventional business model with equally unconventional profits. It is an advanced and advantageous type of farm that can end up producing a surprising amount of money.
It is often said of ostrich farming that once you get into this business model, you will never think of profit the same way again — and you may just find your next venture.
The main obstacle to the wider expansion of ostrich farming is a lack of knowledge. People want to know the costs of raising emus and ostriches, but there is little organised information on questions such as: how much does an ostrich cost? How much does an ostrich egg cost?
This guide clearly defines the cost to raise ostriches from the very first stage of buying eggs to selling them.
Let’s get started.
How much does an ostrich cost?
An ostrich aged 30-60 days costs around $525, and the price roughly doubles once the bird is older than 90 days. Yearlings cost around $2,500 per bird.
How much does an adult ostrich cost?
An adult ostrich costs around $7,500 to $10,000 per bird. The high price of adult birds reflects the cost of raising them.
How much does an ostrich egg cost?
The cost of an ostrich egg varies with the size of the egg and the time of year you buy it. A non-fertile ostrich egg costs around $30-$50, while a fertile egg sells for about $100.
How many eggs does an ostrich lay?
An ostrich lays around 12-18 eggs under natural conditions. While under the farm conditions, a single bird of ostrich can lay around 40-60 eggs a year. In the first year, the young females might produce 10-20 eggs but in subsequent years they will lay 40-60 eggs.
How much does it cost to raise an ostrich per month?
Raising a single ostrich costs around $50 per month for chicks and $75 per month for yearlings. For adults, the cost rises to about $100 per month.
These expenses include all the costs of feeding, vet, and general expenses.
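Using those monthly figures, a rough cost-to-slaughter can be sketched. The 14-month timeline comes from the meat section later in the guide; treating the first 12 months at the chick rate and the rest at the yearling rate is a simplifying assumption:

```python
CHICK_COST = 50      # $/month, assumed for the first 12 months
YEARLING_COST = 75   # $/month thereafter

def cost_to_age(months: int) -> int:
    """Total raising cost for one bird up to a given age in months."""
    chick_months = min(months, 12)
    yearling_months = max(months - 12, 0)
    return chick_months * CHICK_COST + yearling_months * YEARLING_COST

print(cost_to_age(14))  # 12*$50 + 2*$75 = $750 to reach slaughter age
```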
The ostrich – a ratite (flightless bird) – is the world's largest living bird, weighing more than 100 kg. Ostriches use their large wings for balance while running and for communication.
Raising the ostriches for meat, eggs and other profitable products is the most advanced form of non-traditional farming.
But there are many important things to consider before entering ostrich farming. These points are discussed later in this guide.
Before this take a look at the cost of ostrich from buying eggs/birds to the annual costs.
Ostrich farming is sometimes called “future farming” because of its varied products and unconventional profits. The most common ostrich products are eggs, meat, hide, and feathers.
Ostrich egg is the most profitable product that benefits the farm owner the whole year around once the bird matures. On average, one ostrich egg has 47% protein and 2000 calories.
Feathers of the ostrich are used for cleaning the fine machinery and in the fashion industry. The best feathers are produced in the arid regions of the world.
Ostrich produces red meat that has the same taste and texture as beef. But this meat is high in protein and low in fat. A recent study shows that ostrich meat is far better than chicken and beef due to its high proteins and low fats.
Ostrich skin, known as hide, is considered among the most luxurious leathers in the world. Its popularity owes to its thickness, durability and softness. The hide is commonly used in shoes, jackets and bags.
Revenues of raising ostriches
The revenue generated by raising ostriches comes forward in the form of products we get from ostriches. The products include meat, eggs, feathers, and leathers.
All of these products add great value to the revenues generated by ostrich farming.
Ostrich eggs may seem a minor item, but they generate a lot of revenue. According to Agrinet, a single bird lays around 40-60 eggs per year. At an average of 50 eggs, with a single egg selling for $40, each ostrich can bring in $2,000 per year.
Over an ostrich's lifespan of around 30 years, that is likely to add up to some $60,000 in egg revenue.
Ostriches are commonly slaughtered for meat at around 14 months. At this age, a bird yields around 75-130 lbs of meat. Ground ostrich meat sells for $10-$15/lb, while the filets sell for $25-$50/lb.
On average, meat sales generate revenue of about $1,500 per bird, assuming an average price of $20/lb.
Feathers and leathers
Other products of the ostrich in high demand are its leather and feathers. An average bird is estimated to produce around 14 sq. ft. of leather, selling at $40 per sq. ft., and 4 lbs of feathers, selling at $40 per lb.
It collectively generates revenue of around $1470 per bird.
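Putting the per-unit averages quoted above into a quick sketch (straight multiplication of the stated figures; the rounded totals in the text may differ slightly):

```python
def egg_revenue(eggs_per_year: int = 50, price_per_egg: int = 40) -> int:
    """Annual egg revenue per bird at the quoted averages."""
    return eggs_per_year * price_per_egg

def meat_revenue(pounds: int = 75, price_per_lb: int = 20) -> int:
    """One-off meat revenue per bird at the quoted $20/lb average."""
    return pounds * price_per_lb

annual_eggs = egg_revenue()        # $2,000 per bird per year
lifetime_eggs = annual_eggs * 30   # ~$60,000 over a 30-year lifespan
slaughter_meat = meat_revenue()    # ~$1,500 per bird

print(annual_eggs, lifetime_eggs, slaughter_meat)
```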
Nutritional value of Ostrich meat
|Per 100 g raw meat||Ostrich||Beef||Chicken|
How to start an ostrich farm
Ostrich farming is among the most practical and profitable farming models, now and for the future. It generates revenue in several ways, but there are some things to consider before starting an ostrich farm.
- Decide how your ostrich farm will generate revenue: by selling eggs, meat or feathers.
- Locate a good place of around 1 to 3 acres of land with good shelter conditions to protect from harsh weather.
- Ensure the proper supply of food and water. Ostrich can drink several gallons of water daily so make sure to keep it ready.
- Choose a variety of ostrich wisely. Red and blue neck ostriches are aggressive while the African blackbirds are easier to manage.
- If you are new to farming, buy young birds. Unhatched eggs and chicks require care at the start but are inexpensive, while adults are expensive but already produce eggs.
An ostrich aged 30-60 days costs almost $525, while a yearling costs around $2,500 per bird. But the most important thing is to choose the farm site carefully.
There are many creative ways to generate revenue in ostrich farming. All you need is the wisdom to choose the right revenue-generating model.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.945999026298523,
"language": "en",
"url": "https://fundbuyerindex.com/news-and-updates/can-food-waste-reduction-improve-investor-returns",
"token_count": 950,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.076171875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:41a097fb-6d6f-41fc-9ea7-aa43c3209872>"
}
|
When it comes to analysing the investment benefits of sustainable business approaches, the success of some initiatives are easier to measure than others.
For investors in funds aligned to the United Nations’ sustainable development goals (SDGs), the benefits of goal number 12 – which targets responsible consumption and production – can sometimes be tricky to quantify. Particularly, when it comes to understanding why they should care about food waste.
However, on Monday, the European Commission underscored the benefits of doing so, claiming that food businesses which invest in reducing waste in their production processes stand to make a staggering 14:1 return on their investment over the long-term.
“The business case for food waste prevention is convincing,” said the European Commission’s Jyrki Katainen, vice president for jobs, growth, investment and competitiveness.
Katainen was addressing delegates at the EU Platform on Food Losses and Food Waste. The Commission’s vice president went on to explain the motives behind a new delegated act, which will eventually require that member states monitor the level of food waste and take steps to manage it.
“In food waste, as in life, what gets measured, gets managed,” Katainen said. “To be able to promote circularity in the food chain, we need to know where, what, how much and why we are losing food resources. We are making the decisive step to get this knowledge.”
As part of wider research efforts to understand the financial benefits of the circular economy, asset managers have been looking at why they should pressure companies to do more in this area.
In November, Rabobank released a report suggesting that European food and agriculture companies stand to save up to €10bn by embracing technology that assists in reducing waste in the food chain.
“European companies could save €5bn euros a year by introducing innovations in the field of harvesting and post-harvest storage,” explained Paul Bosch, food & agriculture supply chain analyst at Rabobank.
“They also stand to save another €2.5bn euros through food packaging innovations and €2.5bn more through monitoring the freshness of products more effectively.”
The findings by Bosch are echoed by numerous other financial analysts who have reached similar conclusions.
“We have pushed for a formal waste reduction programme to reduce food waste in supply chains and stores, while also engaging on a more sustainable approach to packaging,” said Emma Berntman of Hermes EOS.
“Food waste is an area which the large chains have understandably latched onto due to its high visibility to consumers.”
Berntman explains that emerging technologies are increasingly offering businesses the opportunities to make cost-savings. She says shipping companies, for example, are now able to reduce the amount of damaged food in transit by improved controlled environments which offer companies remote access to the conditions inside the containers.
Investors keen to harness this specific trend have myriad options. The Impax Food & Agriculture Strategy was launched in December 2012 and has so-far amassed assets of £633m (€736.8m). It targets investments in sustainable food companies and in businesses that can demonstrate innovation in resource efficiency and nutrition.
Alternatively, there is the Luxembourg-domiciled Pictet Nutrition fund, launched in 2009, which aims to invest in shares of companies in “nutrition-related sectors” with a focus on those businesses which are targeting improving the sustainability of production.
Sarasin & Partners, also offers a Food & Agriculture Opportunities fund, which is focussed on innovative trends in farming and the global food economy. Broader funds which focus specifically on agricultural practices are also available from Allianz, Amundi, BlackRock, DWS, KBI and Schroders.
Originally posted in Expert Investor
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9649008512496948,
"language": "en",
"url": "https://tablo.com/liz-wolf/simple-investment",
"token_count": 1220,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c1629918-b637-4c3e-8b1c-5953359078d8>"
}
|
Investing money on a consistent basis is the best way to ensure that you will have a healthy bank balance in the future.
Too many people let their money remain in checking and savings accounts, while they could be investing it and earning solid returns on a monthly basis.
However, it is important to be sensible while investing. Unless you are very lucky, it is unlikely that you will get rich quickly by investing money. Making the right investments is a long term process, and one that will prove fruitful when you look at your bank balance in ten or fifteen years.
Saving and Investing
When you earn money each month, it is important to set aside a healthy share — say 40 to 45 percent — for savings and investments. For example, someone earning $4,000 a month should not spend more than $2,200 or $2,300. The remaining $1,700 to $1,800 can be split between savings and investments. Save a portion so that you have cash at hand for a rainy day, then take the rest and invest it in stocks, bonds, forex or mutual funds. The options are limitless; it is simply a matter of how much risk you are willing to take and how quickly you want to see solid returns.
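A minimal sketch of that split — the share set aside is illustrative, so tune it to your own budget:

```python
def split_income(monthly_income: float, invest_share: float = 0.40):
    """Split monthly income into a spending bucket and a
    savings/investment bucket. invest_share is an assumption."""
    set_aside = monthly_income * invest_share
    return monthly_income - set_aside, set_aside

# Example close to the figures above: $4,000/month, ~44% set aside.
spend, save = split_income(4000, 0.44)
print(f"spend up to ${spend:,.0f}, set aside ${save:,.0f}")
```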
Where you invest your money depends on how many risks you are willing to take. Many people prefer to take fewer risks, and they invest in Roth IRAs, rollover IRAs, 401(k)s, and government bonds. A portfolio made up of these financial instruments will give you solid returns, but you will not see more than a 10% gain on a yearly basis, and that is if the market does well.
If you are a risk taker, then stocks and the FOREX market will be more to your liking. Stocks can be bought individually, or you can speak to an investment bank and ask them to handle your money.
Forex trading can be done online or through a broker. The foreign exchange market is relatively straightforward. Take an example where someone wants to make money on the dinar-to-US-dollar exchange rate. To start, the investor buys $100 worth of dinar at the current market rate. They then hold the dinar in their account until the dollar/dinar rate (dinar per dollar) decreases — that is, until the dinar strengthens. Once this happens, the dinar can be sold back for dollars, with a profit made on the transaction.
When dealing with the Forex market, it is important to keep up with the constant changes. In the case of the US Dollar and Dinar exchange rate, it is important to keep up with any Dinar news that may affect the exchange rate.
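A toy calculation of the dinar example above — the rates are made up, and real trading also involves spreads and fees:

```python
def fx_round_trip(usd: float, rate_buy: float, rate_sell: float) -> float:
    """Profit in USD from buying dinar at rate_buy (dinar per dollar)
    and selling once the rate has fallen to rate_sell.
    A falling dinar-per-dollar rate means the dinar has strengthened."""
    dinar = usd * rate_buy          # dollars -> dinar
    usd_back = dinar / rate_sell    # dinar -> dollars at the new rate
    return usd_back - usd

# Hypothetical: buy at 1,450 dinar/$, sell after the rate drops to 1,300.
profit = fx_round_trip(100, 1450, 1300)
print(f"profit: ${profit:.2f}")
```

If the rate rises instead of falling, the same function returns a negative number — the flip side of the risk mentioned above.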
How you invest and how much you invest will play a key role in the kinds of returns you see in 5, 10 and 15 years. It is important to remain patient with investing, and to only take calculated risks that you can afford to. Following the above steps will see you grow a healthy bank balance for the future.
A great way to grow your money is to invest it. Investing carries risks, but the rewards can be stupendous.
There are traditional ways of investing money like through bonds, savings, stocks and a business. Yet there are wacky ways to make more money out of your money.
Here are the top five wacky ways that your money can grow:
Collecting comic books
Have you ever collected comic books? Sure, when you were younger, because you loved the characters. But did you know that collecting comic books and preserving them in mint condition can earn you money? How? Simple: a 1960 Superman comic book is now valued at $280,000, and the first issue of the Superman comic can easily fetch a cool $350,000! Comic-book collectors say that comic books become more valuable when a new character is introduced.
Buy domain names
This may be a passé way of earning money but you can buy domain names and keep it until someone is willing to buy it from you. However, there is a catch as some governments have been strict on this practice of cyber squatting. There have been laws made just to curb this practice. Still there is no harm in getting some of the names that can be interesting in the future which does not go against copyright, trademark laws and also cyber squatting. While you could have made thousands, if not millions in the early 1990s, those days are long gone.
Invest in wine
Wine tastes better as it ages. This principle makes wine more expensive as it grows old. You can buy wine cheaply, let it age, and sell it later — recouping your investment plus a profit. The good thing about this venture is that good vintage wines sell like hotcakes. You just need to invest in a proper wine cellar to preserve the quality of your wines.
Collect old toys
It is similar to collecting comic books: vintage toys can be one of your cash cows. Did you know that an action figure that sold for less than $20 in 1978 can fetch around $6,000 today? And if you have a mint-condition 1969 Barbie doll lying around that cost about $3 at the time, it could fetch around $8,000 today.
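Those jumps sound dramatic, but a compound-annual-growth-rate check puts them in context. The ~40-year holding period below is an assumption:

```python
def cagr(start_price: float, end_price: float, years: float) -> float:
    """Compound annual growth rate implied by a price change."""
    return (end_price / start_price) ** (1 / years) - 1

# $20 action figure in 1978 -> $6,000 roughly 40 years later.
rate = cagr(20, 6000, 40)
print(f"~{rate:.1%} per year")  # strong, but not magical, annual growth
```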
Lend to lending clubs
Lending clubs are networks where you can grow your money by lending it to people. You can lend as little as $25 in the lending clubs that flourish all over the Internet. Don't worry — the clubs review credit scores and recommendations from friends so that only people in good standing can borrow money from you, in exchange for interest.
The bottom line is that no money-making idea is too crazy — your imagination is the only limit to getting more out of what you already own. That said, while some of these approaches can make you money, the returns will be nothing like those of bonds, stocks or even real estate.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9392945766448975,
"language": "en",
"url": "https://www.advancewithava.com/ava-aid/Porter's-Five-Forces",
"token_count": 503,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.05419921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:afcca9a6-7cb0-4bfe-9a11-e184d403cccd>"
}
|
Porter's Five Forces
Porter's Five Forces of Competitive Position Analysis were developed in 1979 by Michael E Porter of Harvard Business School as a simple framework for assessing and evaluating the competitive strength and position of a business organization.
This theory is based on the concept that there are five forces that determine the competitive intensity and attractiveness of a market. These help to identify where power lies in a business situation. It is useful to both understand the strength of an organization’s current competitive position, and the strength of a position that an organization may look to move into.
Strategic analysts often use this analysis to understand whether new products or services are potentially profitable. By understanding where power lies, the theory can also be used to identify areas of strength, to improve weaknesses and to avoid mistakes.
The five forces are:
1. Supplier power. How easy it is for suppliers to drive up prices? This is driven by the: number of suppliers of each essential input; uniqueness of their product or service; relative size and strength of the supplier; and cost of switching from one supplier to another.
2. Buyer power. How easy it is for buyers to drive prices down? Assess: number of buyers in the market; importance of each individual buyer to the organization; and cost to the buyer of switching from one supplier to another. If a business has just a few powerful buyers, they are often able to dictate terms.
3. Competitive rivalry. Understand the number and capability of competitors in the market. Many competitors, offering undifferentiated products and services, will reduce market attractiveness.
4. Threat of substitution. Where close substitute products exist in a market, it increases the likelihood of customers switching to alternatives in response to price increases. This reduces both the power of suppliers and the attractiveness of the market.
5. Threat of new entry. Profitable markets attract new entrants, which erodes profitability. Unless incumbents have strong and durable barriers to entry, for example, patents, economies of scale, capital requirements or government policies, then profitability will decline to a competitive rate.
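One informal way to apply the framework is to score each force and average the result — the scores below are placeholders, not analysis of any real market:

```python
# Score each force from 1 (weak pressure) to 5 (strong pressure).
# Higher average pressure implies a less attractive market.
forces = {
    "supplier power": 2,
    "buyer power": 4,
    "competitive rivalry": 5,
    "threat of substitution": 3,
    "threat of new entry": 2,
}

avg_pressure = sum(forces.values()) / len(forces)
attractiveness = 6 - avg_pressure  # invert onto the same 1-5 scale
print(f"average pressure {avg_pressure:.1f} -> attractiveness {attractiveness:.1f}")
```

A simple average treats all five forces as equally important; in a real analysis you would weight them to reflect the industry being assessed.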
When to use this model:
Where there are at least three competitors in the market
Desire to understand impact that government has or may have on the industry
Assessing the industry lifecycle stage – earlier stages will be more turbulent
Considering the dynamic/changing characteristics of the industry
NOTE: This model is not appropriate for an individual firm; it is designed for use on an industry basis
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9664109945297241,
"language": "en",
"url": "https://www.economist.com/finance-and-economics/2014/06/21/counting-the-cost-of-finance",
"token_count": 936,
"fin_int_score": 5,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.146484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8cd08651-9d40-4d35-adbb-68149779d259>"
}
|
EVERYBODY knows that the collapse of the financial system in 2008 was hugely costly for Western economies. But finance was taking a heavier toll on the economy even before Lehman Brothers went under.
That is the conclusion of a new paper by Guillaume Bazot of the Paris School of Economics which takes a different approach to measuring the overall cost of finance. Usually economists measure the contribution of the financial-services industry to GDP in terms of the “value added”, a measure which focuses on fees and spreads. Bankers typically make money by charging a higher rate for loans than they pay to depositors: the so-called 3-6-3 model (borrow at 3%, lend at 6% and be on the golf course by 3pm).
But modern banks also get income from securities in the form of capital gains, interest and dividends. This was increasingly true in the years before the financial crisis as banks became ever more heavily involved in underwriting, market-making and trading on their own accounts. This capital income is not included in the calculation of value-added. But Mr Bazot argues, “So long as capital income generates wages and profits to financial intermediaries, it is akin to an implicit consumption of financial services.”
When Mr Bazot adds capital income to the numbers, he finds that the financial industry’s share of GDP has been steadily increasing in recent decades (see chart). This is hardly surprising. Finance’s influence on the economy is emphasised every day on the nightly news, and the best and brightest graduates head for finance because, to quote Willie Sutton, “That’s where the money is.”
The trickier question is whether this much-expanded financial sector has become more efficient. There are some positive signs. The explicit cost of dealing in securities (in the form of commissions and bid-offer spreads) has come down. However, for the big institutions, it is much harder to judge the market impact of their dealing: prices may move sharply against them when they try to buy or sell in bulk. Investors also have access to low-cost fund management in the form of tracker and exchange-traded funds, although the past 30 years have also seen the rise of higher-charging private-equity and hedge funds.
Mr Bazot calculates a unit cost for finance by comparing the sector’s income with the stock of financial assets—“the real cost of the creation and maintenance of one euro of financial service over one year”. He finds that, outside France (where it has been stable), the unit cost has increased over the past 40 years; a 2012 paper by Thomas Philippon of New York University found a similar result for America.
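The unit-cost measure is simple to state in code — the figures below are invented purely for illustration:

```python
def unit_cost(sector_income: float, financial_assets: float) -> float:
    """Cost of creating and maintaining one unit of financial assets
    for one year: the sector's income divided by the asset stock."""
    return sector_income / financial_assets

# Hypothetical: €120bn of income on a €6,000bn asset stock.
print(f"{unit_cost(120, 6000):.3f} per euro of assets")
```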
This is slightly surprising, given the low level of interest rates. Historically, banks have performed best in periods of high interest rates, as they have been able to increase their spreads, charging borrowers a higher rate without compensating depositors (who are very slow to change accounts) to the same extent.
Mr Bazot did find a surge in unit costs in the late 1970s and early 1980s when interest rates were high. But in more recent times unit costs have not fallen as might have been expected, given the low level of interest rates since 2000. Instead the capital-market activities of the banks seem to have pushed costs higher.
The paper is a useful contribution to the debate about the role of the financial industry in the global economy. What justifies the high incomes earned by bankers and fund managers? One could argue that they have created a lower cost of capital for business in the form of low bond yields and high equity valuations. But that is a tricky case to make: low yields are more the consequence of central-bank policy and the low level of inflation.
An alternative view is that these higher incomes are what economists call rents: excess incomes earned by those with a privileged economic position. The financial industry is protected because governments and central banks will act to rescue it when it falters, in a way they would not do for chemicals, say. And the sector may also benefit from asymmetric information: some of the products it sells are highly complex and clients may not be aware of the full cost until well after a sale is made.
The central question that the finance industry needs to answer is this: why has its increased importance been associated with slower economic growth in the developed world and a greater number of asset bubbles?
This article appeared in the Finance & economics section of the print edition under the headline "Counting the cost of finance"
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9426549077033997,
"language": "en",
"url": "https://www.pcbb.com/bid/2018-02-01-Shopping-For-Fraudsters",
"token_count": 592,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.208984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:71bc5dca-bd41-4040-9923-dff8926ced31>"
}
|
At the end of last month, Amazon Go opened its doors in Seattle as the first digital grocery store without any cashiers. Your shopping bag automatically identifies and adds up everything you gather, so you can just walk in and out, with everything simply charged to your credit card. To get started, visitors simply download a special app to use the digital shopping cart and check out without waiting in line.
As this store format shows, machines can do many interesting things that make our lives easier. Machine learning is one area banks are focused on to gain a potential lift in the future.
Definitionally speaking, machine learning technology gives computers the ability to learn without explicit programming. Arthur Samuel, an IBM researcher, coined the term in 1959 while experimenting with pattern recognition and data. Forbes reports machine learning "is a current application of artificial intelligence" around the concept that machines are used to access data and "learn for themselves." This is not to be confused with artificial intelligence (AI), which is "the broader concept of machines being able to carry out tasks in a way that we would consider 'smart'."
Hang in there as we go a bit deeper to try and clear out this technical fog. Machine learning has many amazing applications and some very important ones for bankers. It shows great promise for catching money launderers and other financial fraudsters for example. That said, the technology is perhaps 5Ys from major adoption here, because worries around regulatory approval and data have kept some financial institutions on the sidelines awaiting further guidance.
Machine learning's ability to create neural networks may be spooky to some, but that very complexity and strength may also be helpful to banks. Of course, it may also be one of the greatest barriers to more widespread adoption even as a tool against money laundering.
Eager to harness the new technology while also staying on the right side of regulators, some large banks employ data analytics that use some elements of machine learning. These usually also still let people direct the software's pattern search to ensure the bank is following regulations, law enforcement advisories, investigation results, etc. Doing so allows these banks to comb through mountains of customer data seeking patterns across multiple transactions quickly, while ensuring a human is also involved to steer things along.
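To give a flavor of the pattern-search idea (and not how any bank's system actually works), a simple z-score filter can flag transactions that sit far outside a customer's normal behavior:

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=3.0):
    """Flag transaction amounts more than `threshold` standard
    deviations from the customer's mean — a crude stand-in for the
    pattern recognition real AML/antifraud systems perform."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

history = [120, 95, 110, 130, 105, 9800]  # one suspicious spike
print(flag_outliers(history, threshold=2.0))  # flags the 9800 spike
```

Real systems learn far richer patterns across accounts and time, and — as the article notes — keep a human in the loop to steer the search.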
Precision of any system of course goes back to the old saying of garbage-in-garbage-out. That means predictions depend a whole lot on having accurate input or data. Unfortunately, that isn't simple to supply for most banks.
Antifraud efforts can sometimes be enhanced with machines. For instance, PayPal says its internal algorithms have led to a 50% decline in false positive fraud alerts, big banks are actively using it and the SEC uses it to scan documents for fraud too.
Machine learning and AI continue to be promising technologies, so community banks should start at least thinking about how to use such technologies to boost opportunity and reduce costs and time over the next 5Y to 7Ys.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9797156453132629,
"language": "en",
"url": "https://www.shropshirelarder.org.uk/post/shropshire-s-credit-unions-could-help-you-save-money",
"token_count": 740,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1259765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:cb73fe84-2357-466c-90a6-7ae88847224d>"
}
|
What is a credit union?
A credit union is non-profit alternative to a commercial bank. Traditionally, they operate for a collective of people – brought together by a ‘common bond’ such as a community or employment – to create a financial institution that aims to help the community.
Everyone who has an account with them – even if they just put in £1 – is a partial owner.
And they’re very popular! The NHS has got one for its employees and in Ireland and the US they are used more than banks. There are even more ATMs in America for credit unions than banks! 84% of Just Credit Union members said they were ‘very likely’ to recommend to a friend.
Low interest loans & savings accounts
Credit unions are there for everyone, but they often lend to people traditionally unable to access loans at reasonable rates. Credit unions recognize that people may have had financial problems in the past and are more interested in ensuring a loan is affordable than in your financial history. If they find you can't afford the loan, they won't lend to you — so you don't get into further financial hardship — but they will point you to services which can help.
One reason people turn to credit unions over banks is that banks won't give small-scale loans, as they're hard to make money from. Credit unions' APRs are capped, which makes them a much safer place to borrow from than payday lenders with very high interest rates, or illegal loan sharks. FairShare offers loans from £50 to £15,000 and Just Credit Union offers loans starting at £250.
But they’re more than just a loan. You will open a savings account with them to help you save and borrow at the same time, so by the time the loan is paid off you will hopefully have money left over!
Who are they for?
Both of Shropshire’s credit unions have geographical common bonds covering Shropshire (Just Credit Union also covers Telford and Wrekin). That means anyone who lives, works or studies in Shropshire can be a member – including anyone who works for a Shropshire-based company.
Just Credit Union was originally set up by Shropshire Council to help people who were financially excluded for any reason – maybe by their credit history, divorce, redundancy, sickness, or living in a remote location. They’re now independent and can be used by anyone, but they still keep this as key to how they work. They are NOT financial or debt advisors but will refer people onto them where necessary and they also regularly share tools and information with their members to help them make the most of their money.
Credit unions vs banks
The main difference is that credit unions are NOT for profit. Whilst not charities, credit unions have similar governance – with a volunteer board of directors and trustees who oversee management. Credit unions are owned and managed for the benefit of their members, and the profits are shared with the members who save with them.
Another difference is size: credit unions are smaller and based in the community, and they like to develop a personal connection with their members. All money saved with the credit union circulates in the local community, thereby supporting the local economy.
Get in touch now!
Both FairShare and Just Credit Union have adapted their systems to be largely online for Covid. Visit their websites to get set up.
You can find out more about Shropshire’s Credit Unions by visiting their websites:
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9463180899620056,
"language": "en",
"url": "https://hahuzone.com/accounting-estates-and-trusts",
"token_count": 3579,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0908203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:cf536f90-aca1-424a-85e0-61f04c696e78>"
}
|
Accounting for Estates and Trusts
Estate accounting is concerned with accounting for the administration and distribution of the decedent’s property. In effect, this unit explores the legal and accounting aspects of estate administration and trust.
3.2 Legal Aspects of Estate Administration
According to the Oxford Current English Dictionary, an estate refers to a person’s assets and liabilities at death. The estates of deceased persons (decedents) or missing persons must be administered, distributed, and accounted for according to certain laws. The deceased person may or may not leave a will concerning the estate.
If a person died with a valid will, he/she is considered to have died testate. This person is called a testator.
The disposition of such person’s real and personal property is governed by the will of the testator.
On the other hand, a person may not leave a will, in part or wholly, about his/her real and personal property. Such a person is considered to have died intestate. In this case, the distribution of the person’s property is governed by the provisions of certain laws (the laws of intestacy).
The administration of an estate involves marshaling the estate’s assets, paying the debts of the decedent, and distributing the remainder in accordance with the testator’s wishes or the laws of intestacy. If the deceased left a will, its validity must first be established. The process by which the validity of a will is established is called probating the will. Once a will has been admitted to probate, the court will proceed to appoint a personal representative of the deceased whose function is to administer the estate. This person is called an executor (executrix). An executor is a male fiduciary named in the will by the decedent to administer the estate. An executrix is a female fiduciary named in the will of the decedent to administer the estate.
If the deceased dies intestate, the court will appoint a personal representative to administer the estate. This person is called an administrator (administratrix). An administrator is a male fiduciary appointed by a court to administer the estate of an intestate decedent. An administratrix is a female fiduciary appointed by a court to administer the estate of an intestate decedent. The appointed person is issued letters of administration as evidence of that individual’s authority to act as a fiduciary in administering the estate of the intestate decedent.
Once appointed, the personal representative will take possession and control of the decedent’s property. If the estate includes a business enterprise, the representative may continue to operate it for some time (no longer than four months in the USA). Within a specified period (three months in the USA), the personal representative must submit to the court an inventory of the property owned by the decedent on the date of death, together with a list of any liens that exist against the property. If additional assets of the decedent are discovered after the filing of the inventory with the court, supplementary inventory reports must be filed with the court.
Claims against the estate
Once appointed, the representative must give public notice in a newspaper of general circulation at certain intervals. The purpose of the notice is to request that those who have claims against the estate present them within the specified time, or be forever barred from asserting such claims.
Some allowances and exemptions precede all claims against estate. These are described as follows:
1. Homestead allowance
It refers to an allowance of a certain amount ($500 in the USA) to the surviving spouse or surviving minor children of the decedent. This allowance is in addition to any other share of the estate that passes to the spouse or children by the will.
2. Family allowance
It refers to a reasonable cash allowance (not to exceed $6,000 for the first 12 months after death in the USA) to the decedent’s surviving spouse and dependent children. Except for the homestead allowance, the family allowance has priority over all claims against the estate.
3. Exempt property
It consists of the decedent’s household furnishings, automobile, and other personal effects up to a certain value ($3,500 in the USA), which pass to the surviving spouse and children and are not available to creditors of the estate.
After the above allowances and exemptions are excluded, the representative pays the claims in the following order:
- The expenses of administering the estate
- The funeral expenses, as well as the hospital and medical expenses of the decedent’s last illness
- Debts and taxes that have preference under federal or state laws
- All remaining claims
The Settlement of an Estate
Once the claims against the estate have been established and paid, the personal representative has the duty to distribute the remaining assets of the estate to persons entitled to it.
Distribution of Intestate
When a person has died intestate, his/her estate will be distributed in accordance with the applicable law. This is generally distributed to a spouse or blood relative. Real property is distributed to heirs under the laws of the state where the property is located (in USA). Personal property is distributed to next of kin under the laws of the state in which the decedent was domiciled.
Distribution of Testate
If a person dies testate, the distribution of the decedent’s property is mostly governed by the terms of the will. In such a situation, a gift of real property is called a devise, and the recipient (beneficiary) is called a devisee. Testamentary gifts of personal property are called bequests or legacies, and the recipient (beneficiary) is called a legatee.
3.3 The Classification of Legacies
As defined above, a legacy refers to a testamentary gift of personal property. There are various types of legacies, described below:
a. A specific legacy
It is a gift of personal property specifically identified in the will such as a specific piece of Jewelry.
b. A demonstrative legacy
It is a testamentary gift payable out of a source specified in the will such as a specific amount of money to be paid out of a specific bank account or the proceeds from a specific insurance policy.
c. A general legacy
It is a gift of an indicated amount of money or quantity of something without designation as to source.
d. Residual legacy
It is a testamentary gift of property remaining in an estate after all debts have been satisfied and all other legacies have been distributed, or otherwise provided for.
3.4 Accounting Aspects of Estate Administration
The major purpose of estate accounting is to facilitate the reporting by the personal representative, called the fiduciary, to the court. The reporting involves two aspects. These are:
1. Accountability
Accountability emphasizes that the personal representative is responsible for the assets of the deceased and for their administration and disposition. In this case, estate accounting reflects the assets for which the fiduciary is charged with responsibility, and the fiduciary is credited with the distributions and payments made to creditors and beneficiaries.
2. The Distinction Between Principal and Income
The distinction between principal and income is basic to estate accounting. Principal, also called corpus, is defined as the property set aside by the owner (or the person legally entitled to do so) to be held in trust for eventual delivery to a remainderman. A remainderman is a person named to receive the principal of an estate at the conclusion of the income beneficiary’s interest. Principal consists of the net assets of the estate on the date of death.
Net Assets = Gross Assets – Liabilities
Additions to estate principal include:
- Proceeds of insurance on property forming part of the principal
- Stock dividends and liquidating corporate distributions
- Rents or other types of revenues which already accrued at the date of death of the testator
- All proceeds from the sale or redemption of bonds
- Cash dividends declared prior to a decedent’s death.
Charges against estate principal include:
- All expenses incurred in connection with the settlement of an estate. These include funeral expenses, debts, estate taxes, interest on taxes, penalties on taxes, and family allowances.
- A part of court costs, accountants’ fees, attorneys’ fees, personal representatives’ fees, and trustees’ fees; the remaining part of these costs should be charged against income.
- Costs incurred in preparing principal property for sale or rent.
- Cost of investing and reinvesting principal assets
- Major repairs to principal assets
- Income taxes on receipts or gains allocable to principal
- Rental expenses payable at the date of death of the decedent.
Income is defined as the return, in money or property, derived from the use of principal; it represents the earnings on the net assets of the estate. Income includes:
- Cash dividend
- Receipts from business and farming operations
- Any revenue earned during the administration of a decedent’s estate.
Income may be charged with the following items:
- Ordinary expenses incurred in the management and preservation of estate or trust property. This includes regularly recurring taxes assessed on the principal
- Water charges
- Insurance premiums
- Ordinary repairs
- Depletion and depreciation depending on the expressed intention of the testator with respect to the preservation of the principal of the estate
- Expenditures required to preserve the normal operating efficiency of depreciable assets.
Depending on the testator’s will, the income of the estate (or a portion of it) may accrue for the benefit of one party, called the income beneficiary, for a stipulated period of time, after which the principal is to be distributed to another party, called the remainderman. To illustrate the difference between an income beneficiary and a remainderman, assume that Ato Bulcha owned a business enterprise called Bulcha Company. W/ro Biftu is the spouse of Ato Bulcha, and he has three children. Assume further that Ato Bulcha died on May 10, 2004, at which time the net assets of his business were Br. 200,000. Before his death, he expressed that the income from his business was to be used by his spouse, and that after her death the Br. 200,000 would go to his children.
From the above description, the Br. 200,000 represents the principal. W/ro Biftu is called income beneficiary, Ato Bulcha’s children are called remaindermen.
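The split in the Bulcha example can be sketched as follows. Only the Br. 200,000 principal comes from the example above; the yearly income figures are invented for illustration.

```python
# Principal (corpus) is held for the remaindermen; income goes to the
# income beneficiary. The earnings below are assumed for illustration.
principal = 200_000                       # net assets at date of death (Br.)
annual_income = [18_000, 21_500, 19_250]  # assumed business earnings (Br.)

to_income_beneficiary = sum(annual_income)  # W/ro Biftu receives the earnings
to_remaindermen = principal                 # the children eventually receive this
```

However the earnings vary year to year, the principal itself stays intact for the remaindermen.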
3.5 Accounting and Reporting for Estates
As indicated earlier, the major focus of estate accounting is on the accountability for estate assets and for their proper administration and distribution. Therefore, regarding fiduciaries, the fundamental accounting equation is shown below:
Assets = Accountability
The accounts are primarily designed to maintain the distinction between principal (capital, or corpus), and income.
3.5.1 Accounts relating to principal
The following accounts are used in relation to principal
1. Individual asset accounts
The accounting for an estate begins when the fiduciary files an inventory of the decedent’s property with the court. At that time each asset account is debited at the asset’s fair market value. For example, if the decedent had cash of Br. 10,000 and inventory with a market value of Br. 15,000 on the date of death, the Cash account is debited for Br. 10,000 and the Inventory account is debited for Br. 15,000.
2. Estate principal account
The Estate Principal account is credited when the asset accounts are debited. In the above example, the Estate Principal account is credited for Br. 25,000 (i.e., Br. 10,000 + Br. 15,000 = Br. 25,000), assuming the decedent had no liabilities. The Estate Principal account represents the basic equity of the estate.
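The opening entries for this example can be sketched as a tiny ledger, with debits positive and credits negative so that the entries must sum to zero. This is an illustrative sketch, not a prescribed bookkeeping format.

```python
# Hypothetical opening entries: each asset is debited at fair market
# value and the Estate Principal account is credited for the total
# (assuming no liabilities, as in the example above).
fair_values = {"Cash": 10_000, "Inventory": 15_000}

ledger = dict(fair_values)                               # debits (+)
ledger["Estate Principal"] = -sum(fair_values.values())  # credit (-)

assert sum(ledger.values()) == 0  # debits equal credits
```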
3. Assets Subsequently Discovered account
The Assets Subsequently Discovered account is used to record assets that were not included in the inventory filed at the date of death. This account is credited when the market value of the discovered asset is debited to an appropriate asset account.
4. Gain (loss) on realization
This account is used to record any gain or loss upon the disposal of the deceased person’s assets. Loss on realization account is debited if loss arises on disposal of assets. On the other hand, if the disposal of assets results in gain, gain on realization account is credited.
5. Debts of Decedent Paid account
This account is used by the personal representative to indicate reduction of accountability for estate assets in the form of payment of debts and legacies. Legacy refers to a testamentary gift of personal property.
3.5.2 Accounts relating to income
The following accounts may be used in relation to income:
1. Estate Income account
Estate Income account is used to record income collections.
2. Expense accounts
They are used to record expenses allocable against the interests of income beneficiaries.
3. Distributions to Income Beneficiaries accounts
It is used to record the distribution of income to income beneficiary.
3.5.3 Reporting for estates
The personal representative is required to prepare reports for the estate and submit them to the court. He/she is required to prepare two types of reports. These are:
- Charge and discharge statement - principal
- Charge and discharge statement – income
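A minimal sketch of the principal statement, using invented amounts: the fiduciary is "charged" with assets received and "discharged" (credited) for payments and distributions, and the balance represents assets still on hand.

```python
# Charge and discharge statement – principal (all figures hypothetical)
charges = {
    "Assets per original inventory": 25_000,
    "Assets subsequently discovered": 2_000,
    "Gain on realization": 500,
}
credits = {
    "Debts of decedent paid": 4_000,
    "Funeral and administration expenses": 3_500,
    "Legacies distributed": 6_000,
}

total_charged = sum(charges.values())             # fiduciary's accountability
total_credited = sum(credits.values())            # payments and distributions
balance_on_hand = total_charged - total_credited  # assets still accountable
```

The income statement follows the same charge/discharge pattern, using income receipts and the expenses chargeable against income.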
3.6 Legal and Accounting Aspects of Trusts
Estate administration Vs Trust administration
Estate administration is generally a short-term process that aims at the expeditious distribution of estate assets. On the other hand, trust administration consists of the prudent management of funds over longer period of time.
A trust may be created by a living grantor who transfers property to a trustee for the benefit of another person (the beneficiary). The trustee is responsible for holding the assets for the beneficiary. The income from a trust is ordinarily distributed periodically to an income beneficiary, while the principal of the trust ultimately goes to a remainderman. The income beneficiary and the remainderman may be the same person.
A trust may be created during the grantor’s lifetime (an inter vivos, or living, trust) or by a will. When a trust is created by a will, it is called a testamentary trust.
The accounting procedures for a trust are very similar to those for an estate. With respect to reporting, the trustee is required to file an accounting with the court concerning the events of the previous period, specifying the accounting period and giving the names and addresses of the living beneficiaries. The trustee must give a statement of unpaid claims and the reasons for non-payment within the reporting period. In addition, at the termination of the trust the trustee must render a final accounting covering the period since the last intermediate accounting, and must prepare a plan for the distribution of the trust assets still on hand. To conclude, the function of the trustee is the administration of the trust, the preservation of the assets, the discharging of liabilities, and the equitable distribution of principal and income to those entitled to them in accordance with applicable laws and legal requirements.
The planning for and the administration of estates and trusts involves accounting skills, and knowledge of tax and other specialized areas of law. The focus of estate and trust accounting is not on compliance with generally accepted accounting principles, rather on specialized bookkeeping practices and accounting statements that aim at carrying out the intent of the law and the intent of those who leave estates or create trusts.
A decedent who dies testate leaves a will directing the distribution and administration of his/her property. Whether the decedent died testate or intestate, the administration of the estate is normally under the jurisdiction of a court handling probate matters. The court issues letters testamentary if the decedent died testate, and letters of administration if the decedent died intestate.
Claims against the estate are paid in the order of allowances and exemptions (homestead allowance, family allowance, and exempt property), followed by the claims of creditors.
If the estate is sufficient to liquidate all of the debts of the decedent with some estate property remaining, the fiduciary may proceed with the distribution of the estate’s real property and its personal property. A gift of real property is called a devise and the recipient is a devisee. A gift of personal property is called a bequest or legacy and the recipient is a legatee. A legacy may be any of the following:
- Specific legacy – legacy specifically identified
- Demonstrative legacy – a sum of money payable out of a particular bank account
- General legacy – sum of money without naming the source of the funds
- Residual legacy – balance in the estate after paying all debts and other legacies.
The fiduciary has to classify the estate assets into principal and income because the income beneficiary is different from the principal beneficiary. The accounts used in principal accounting and income accounting are different. Ultimately, two separate statements are prepared by the fiduciary – a charge and discharge statement for principal, and a charge and discharge statement for income.
Finally, the administration of trusts is similar to that of estates – the administrator of a trust is called a trustee and the recipient of a trust’s benefits is a beneficiary.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.961065948009491,
"language": "en",
"url": "https://www.access2knowledge.org/business-finance/when-referring-to-student-loans-what-is-a-grace-period/",
"token_count": 964,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0311279296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:0eea3b29-28ba-459f-8818-433800885ebd>"
}
|
Student debt continues to skyrocket in the United States with over 37 percent of adults under 30 reporting to have outstanding student debt. Student debt for Americans overall amounts to over $1.3 trillion.
This amount is almost triple the amount of student debt from a decade ago. When we look at only young adults with a bachelor’s degree or higher, this percentage increases to 53 percent. The extreme increase comes from the progressively increasing cost of higher education.
The pressure to obtain a degree continues to increase along with the higher price tag, while financial aid continues to decrease. For those with student loan debt, it is vital to understand what student loans are and how to put a plan in place to pay off the debt.
Young adults with student loan debt are more likely to be struggling financially and to hold second jobs. They also tend to value their degrees less than those without the burden of student loans.
One aspect of student loans that needs to be grasped is the student loan grace period.
Together we’ll tackle two questions.
First, what is a student loan?
Second, when referring to student loans, what is a grace period?
Student Loans Defined
Student loans are specifically designed to help students pay for their higher education. Most students don’t have the money necessary to make an investment in higher education on their own. Student loans provide a way to bridge this financial gap.
Student loans can be used to pay for tuition, books, living expenses, and other costs associated with obtaining an education. The main differences between student loans and other traditional loans are the possibility of accessing much lower interest rates and the ability to defer payments until after education has been completed.
In the United States, there are two different kinds of student loans – federal student loans which are sponsored by the federal government and private student loans. The majority of student loans used by students are federally sponsored.
There are two types of federally sponsored loans – subsidized and unsubsidized. With subsidized loans, interest is not accrued while you are in school because the government pays for the interest. With unsubsidized loans, the interest accrues while you attend school.
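The practical difference between the two types can be sketched with simple (non-compounding) interest. The $10,000 balance, 5% rate, and four-year enrolment below are illustrative assumptions, not actual federal loan terms.

```python
def in_school_interest(principal, annual_rate, years):
    """Interest accrued during enrolment, assuming simple annual accrual."""
    return round(principal * annual_rate * years, 2)

# Unsubsidized: interest accrues while the student is in school
unsubsidized_accrued = in_school_interest(10_000, 0.05, 4)
# Subsidized: the government pays the in-school interest
subsidized_accrued = 0.0
```

On these assumptions, an unsubsidized borrower leaves school owing a couple of thousand dollars more than they borrowed, while a subsidized borrower owes only the original principal.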
Though federal loans are usually less expensive than private loans, the government makes a large profit on student loans partially because these loans cannot be discharged even when a bankruptcy is filed.
The interest payments of the loans far exceed the expense the government takes on from administrative costs or losses.
Repayment of student loans typically begins 6 to 12 months after a student leaves school, whether the student finishes the program or not. Repayment also begins if the student drops below a half-time course load.
What is a Grace Period?
The grace period is the 6 to 12 months granted to a student after they leave school before repayment begins.
According to Homeroom, the official blog of the Department of Education –
“Your student loan grace period is a set amount of time after you graduate, leave school, or drop below half-time enrollment before you must begin repayment on your loan. For most student loans, the grace period is six months but in some instances, a grace period could be longer. The grace period gives you time to get financially settled and to select your repayment plan.”
During this period, it is important for students to prepare for the coming period of repayment.
The first thing to do during your grace period is to get organized. Sit down and track all of your student loans and read over the agreements for all of them.
After this contact your loan servicer. This is the company that handles your federal student loans. Discuss your repayment options with the servicer and begin to complete the tasks necessary to tackle your student loans.
Maintain contact with your servicer and communicate any circumstances that will affect your ability to repay your loans.
After careful consideration of your repayment options and making sure you have a thorough understanding of debt consolidation, decide on your repayment plan.
Though you might be given or assigned a repayment plan when you first begin, you can change your repayment plan at any time should your circumstances change.
One of the biggest benefits of federal student loans is the flexibility of the repayment options. There are even loan forgiveness programs for teachers and others who serve in public service positions.
When it comes to repayment of your student loans, know your options and take advantage of your grace period.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9543688297271729,
"language": "en",
"url": "https://www.mortgageguideuk.co.uk/the-effect-of-falling-house-prices-in-uk/",
"token_count": 1145,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0146484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e0618888-13ec-477e-985a-1bff2592ef1b>"
}
|
What happens if there is a fall in House Prices in the UK?
If house prices fall, what will happen to the economy and homeowners? Are falling house prices always a bad thing?
1. Effect on Wealth.
The first effect of falling house prices is to directly reduce the wealth of homeowners (wealth is of course different to income). Housing is by far the biggest form of wealth in the UK. Therefore, a fall in house prices will definitely reduce consumer confidence. This will lead to lower levels of spending; people will be more reluctant to undertake risky investments and borrowing.
2. Effect on Equity Withdrawal.
Rising house prices enable equity withdrawal. This means that homeowners can remortgage their house to consolidate unsecured loans and/or gain more money to spend. In recent years, this equity withdrawal has played a significant role in boosting consumer spending in the UK. Falling house prices would bring equity withdrawal to a halt; therefore, consumer spending would increase at a slower rate or even fall.
3. Housing is a significant Barometer of the state of the Economy.
More than in any other country, house prices and the housing market in the UK are seen as a barometer of the state of the economy. Any fall in house prices is likely to receive significant press coverage (even tenuous predictions of falling house prices have frequently made the front page of newspapers like the Daily Mail and Express). This magnifies the effect on consumer confidence. It is also in the nature of the media to exaggerate any fall in house prices. For example, it is easy to pick statistics which exaggerate the extent of any fall (e.g. choose a certain location, choose a certain month and multiply by 12, or confuse a fall in prices with a fall in house price inflation). However, dire predictions of falling house prices can become a self-fulfilling prophecy. If people believe house prices will “collapse”, it will deter many from buying, and therefore the fall in house prices will be greater.
4. Effect on Economic Growth.
A fall in house prices will reduce consumer spending and aggregate demand (AD) in the economy. Therefore, this could lead to lower growth. It is possible it could even contribute to a full-blown recession (negative economic growth for two consecutive quarters). For example, in 1991–92 house prices fell by 15%; this was a significant factor in causing the recession of 1991. A fall in house prices doesn’t necessarily cause a recession; there are many other factors that affect growth, such as investment and government spending, and a fall in wealth doesn’t reduce income. However, because of the importance of housing to the UK economy, it is quite possible that falling house prices could cause a negative multiplier effect and lead the economy into recession.
5. The Benefits of Falling House Prices
If house prices fall and cause the expected fall in consumer spending, this is very likely to reduce inflationary pressures in the economy. A fall in the inflation rate will enable the MPC to consider reducing interest rates. Note that the MPC doesn’t reduce interest rates to stop house prices falling – it reduces interest rates because inflation falls below its target. The fall in interest rates reduces the cost of mortgage repayments. This is good news for those with high mortgage interest repayments. It may also moderate the fall in house prices, because falling interest rates make buying a house increasingly attractive. Therefore, if you have no need to remortgage or sell your house, falling house prices can actually be beneficial.
6. First Time Buyers.
Another positive impact of falling house prices is that they make buying a house more realistic for first-time buyers. The last decade has seen house prices increase much faster than incomes, with the effect that many first-time buyers struggle to buy. This is particularly felt in areas like London and the South East. The shortage of affordable housing has caused a shortage of key public sector workers, such as teachers, nurses and policemen, which is having an adverse effect on local economies. Councils are increasingly looking to immigration to fill nursing shortages. Thus a fall in house prices, or an extended period of flat house prices, would enable the house price to earnings ratio to improve.
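The point about the price-to-earnings ratio can be illustrated with hypothetical figures: either a price fall or a period of flat prices while earnings grow brings the ratio down.

```python
def price_to_earnings(house_price, annual_earnings):
    """House price expressed as a multiple of annual earnings."""
    return round(house_price / annual_earnings, 1)

before = price_to_earnings(210_000, 30_000)             # 7.0x earnings
after_10pct_fall = price_to_earnings(189_000, 30_000)   # 6.3x
after_flat_prices = price_to_earnings(210_000, 33_000)  # earnings +10% -> 6.4x
```

Note that the ratio improves almost as much from flat prices plus wage growth as from an outright 10% price fall.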
7. Depends on the Extent.
Some people confuse a fall in house prices with a fall in the rate of growth. For example, recent headlines about the UK housing market include:
“Big drop in UK house price inflation”
At first glance it may appear that prices are falling. However, what this means is that house prices are now growing at 7% a year rather than at 9% a year. It is possible that a fall in house price inflation to 1% a year could actually have similar effects to falling house prices. Clearly, the impact of falling house prices depends on their severity. A modest fall of 1–2% is not too drastic; a fall of 10% would be very serious.
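The distinction is easy to show numerically; the growth rates come from the headline example above, while the starting price is assumed.

```python
# Slower house price inflation still means rising prices:
price = 200_000
at_9_percent = price * 1.09   # ~218,000 after one year
at_7_percent = price * 1.07   # ~214,000 – slower growth, not a fall

assert at_7_percent > price   # prices are still rising
```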
Evidence in the UK
Evidence from the UK suggests, that a sustained fall in house prices can play a crucial role in causing an economic recession (fall in Real GDP).
During the two big house price crashes of 1991-94 and 2007-08, there was also a recession. Falling house prices were not the only cause of recession, but it was a significant contribution.
Also, it is worth bearing in mind, that the negative economic growth also tends to exaggerate the fall in house prices. When unemployment is rising, demand for buying a new house tends to fall.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9383336305618286,
"language": "en",
"url": "https://www.t3.com/news/government-plans-to-boost-large-scale-green-energy-storage-and-reduce-energy-bills-in-the-process",
"token_count": 697,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.07568359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:4cdfb178-643c-41ff-bbaa-01dee865a6b5>"
}
|
The UK Government has announced that it will relax planning legislation around the construction of large-scale batteries, to make it easier for energy storage systems to be developed. It’s hoped that by loosening these restrictions, the number of batteries on the country’s electricity grid could be trebled to over 100. It's also thought that the storage cells themselves will be up to "five times bigger" than those currently in use. The Government wants these batteries to primarily be used to store renewable energy from the country’s wind and solar farms.
Wider potential benefits could involve the creation of more jobs in the green energy sector, while the increased efficiency and supply of renewables could mean a reduction in energy costs for consumers - which means lower bills. It’s also another example of the UK’s efforts to lower carbon levels and hit Net Zero by 2050.
A ‘smarter’ electricity grid
These legislative changes will mean that storage projects over 50 megawatts (MW) in England and over 350MW in Wales will be approved. More green energy resources can then be "stored and used all year round". Presently, planning permission is needed from local authorities before such projects can be rolled out.
The Government has also stated that these battery technologies will be a part of the country’s ‘smarter electricity grid’, which will support a wider integration of low-carbon power sources. It predicts that this could lead to savings of up to £40bn by the middle of the century.
Managing peaks and troughs in demand
It's hoped that large-scale batteries can be used to keep the National Grid’s green energy levels more consistent, and help manage demand more effectively.
Despite the country having the "largest installed capacity of offshore wind in the world," the speed and availability of wind aren't always constant in the UK. That means that green wind energy is sometimes created when we already have a surplus. At other times, only low levels can be produced.
Having more - and larger - batteries in place should help the Grid handle surplus energy. It should also work to keep the overall mix of renewables on the Grid at higher levels when generation is not as productive.
The UK Minister for Energy and Clean Growth, Kwasi Kwarteng, has described the Government’s move as “key to capturing the full value of renewables”. He championed the plans as a means of fostering the UK’s “smarter electricity network” and “creating more green-collar jobs”.
The Head of Markets at electricity system operator National Grid ESO, Katye O’Neill, also praised the plans, stating that the battery storage will “manage the peaks and troughs in demand”, and help make the electricity system more efficient, keeping costs down for consumers.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9401952624320984,
"language": "en",
"url": "https://finance.laws.com/loan-amortization",
"token_count": 730,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.036376953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:dd1a9725-3cf0-459e-8f3f-268d7b6ae31b>"
}
|
What is Loan Amortization?
In regards to economics, amortization refers to the distribution of a single lump-sum cash flow into many smaller installments, as determined through an amortization table or schedule.
Amortization is a loan with a unique repayment structure. Unlike other models, each repayment in an amortization consists of satisfying both the principal balance and the interest attached to the loan.
Amortization is used in loan repayments, most commonly in mortgage loans or sinking funds. The payments are divided into equal amounts for the duration of the maturity schedule. Because of this uniformity, the amortization is regarded as the simplest repayment model.
Payments are applied mostly to the interest of the loan at the beginning of the amortization schedule, while an increasing percentage of each payment goes towards satisfying the principal as the loan nears the end of its schedule.
In an accounting sense, loan amortization refers to expensing the cost of acquisition from the residual value of intangible assets such as patents, trademarks, copyrights or other forms of intellectual property.
In a more common sense, amortization refers to the tangible process of paying off a debt, such as a loan or a mortgage. The process in a loan amortization is satisfied through the delivery of regular payments made at uniform times. A portion of each payment is used to satisfy the interest while the remaining payment amount is applied towards the principal balance. The percentage that goes into satisfying both the interest and the principal balance is determined through the amortization schedule.
Loan amortization schedules are determined by the macro-economic conditions of the market (primarily the interest rates), the credit score of the borrower, and the intricacies of the specific loan.
How Do I Amortize a Loan?
A lender will amortize a loan to pay off the outstanding balance through the delivery of equal payments on a regular schedule. These payments are structured so that the borrower satisfies both the principal and the interest with each equal payment.
Payments and amortization calculators are available on a number of lending websites; these tools facilitate the construction of an amortization schedule. If the lender wishes to understand the variable and inner-workings of the amortization calculation, please observe the below figures and steps:
• P= Principal amount (the initial amount of the loan)
• I= The annual interest rate (a figure from 1 to 100 percent)
• L= The length in years of the loan, i.e. the period over which the loan is amortized
• J= The monthly interest rate, i.e. I divided by (12 x 100)
• N= The number of months over which a loan is amortized
To calculate the monthly payment, first take 1+J and raise it to the minus N power. Subtract that figure from 1. Next, take the inverse of the result and multiply it by J and then by P; in other words, M = P x J / (1 - (1 + J)^-N). This figure represents the monthly payment (M). To calculate the amortization table you will need to do the following:
Step 1: Calculate H = P x J to obtain the interest due for the current month.
Step 2: Calculate C = M - H, the monthly payment minus the monthly interest; this is the amount of principal repaid that month.
Step 3: Calculate Q = P - C to obtain the new principal balance of the loan.
Step 4: Set P equal to Q and return to Step 1, repeating until the value of Q reaches zero.
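The steps above can be sketched in a short Python function. This is a minimal illustration of the calculation as described, not production lending code, and the borrowing figures in the example are hypothetical:

```python
def amortization_schedule(P, I, L):
    """Build a loan amortization schedule.

    P: principal amount, I: annual interest rate in percent,
    L: length of the loan in years.
    Returns the monthly payment and a list of
    (month, payment, interest, principal, balance) rows.
    """
    J = I / (12 * 100)          # monthly interest rate
    N = L * 12                  # number of monthly payments
    # Monthly payment: M = P * J / (1 - (1 + J) ** -N)
    M = P * J / (1 - (1 + J) ** -N)
    schedule = []
    for month in range(1, N + 1):
        H = P * J               # Step 1: interest due this month
        C = M - H               # Step 2: principal repaid this month
        Q = P - C               # Step 3: new principal balance
        schedule.append((month, round(M, 2), round(H, 2), round(C, 2), round(Q, 2)))
        P = Q                   # Step 4: repeat until the balance reaches zero
    return M, schedule

# Hypothetical example: 100,000 borrowed at 6% annual interest over 30 years
payment, rows = amortization_schedule(100_000, 6, 30)
print(round(payment, 2))        # roughly 599.55 per month
```

Note how the interest portion (H) shrinks each month while the principal portion (C) grows, which is exactly the pattern described above.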
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9700125455856323,
"language": "en",
"url": "https://homeguides.sfgate.com/public-housing-assistance-1949.html",
"token_count": 628,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:acc74efe-5700-4b8a-87b2-a0973a14ebff>"
}
|
What is Public Housing Assistance?
Public housing assistance, or PHA, is a group of federal programs designed to aid in subsidizing rents for low-income individuals and families. They're administered by various city and state public housing authorities. The U.S. Department of Housing and Urban Development (HUD) oversees the use of federal PHA programs by those cities and states. Today's programs can trace their history back to the government's initial efforts undertaken during the Great Depression.
The federal government's tentative first efforts at housing assistance for the needy began during the Depression-era 1930s. At that time, many were without work or the means to afford housing. In the 1960s, federal initiatives led to increases in funding for public housing projects, and subsidies to help pay rent for those living outside of projects were also initiated. A special 1974 law created the Section 8 voucher program still in use today.
It was the Housing and Community Development Act of 1974 that brought Section 8 voucher programs to life. An amendment to a 1937 federal housing law, the 1974 act provided for vouchers that paid for about 70 percent of an eligible tenant's rent. Section 8 vouchers are the main type of public housing assistance given by the federal government. The Housing Choice Voucher Program--which is the official name for federal Section 8 programs--is administered by HUD.
In addition to Section 8 vouchers, the other major form of PHA consists of public housing itself. It first gained popularity in the 1960s, as vast tracts of apartment blocks were erected in many communities around the country. These apartment blocks were meant for low-income families and individuals. Most were overseen by local housing authorities, or HAs. Today, there are still such communities around the country. Their popularity, though, has decreased as housing vouchers grow more popular.
HUD relies on local HAs to determine eligibility for public housing assistance. Factors determining such eligibility include an applicant's gross annual income. In addition, HAs will look at whether the applicant is elderly, has a disability or heads a family. Lastly, HAs are charged with determining an applicant's citizenship status. Only U.S. citizens or eligible immigrants are allowed to receive PHA. HAs also conduct background checks on applicants to ensure they'll be good tenants.
When applying to a local HA or a HUD field office for PHA, keep in mind that income is closely scrutinized. HUD calls its formula for determining how much an applicant pays towards rent the Total Tenant Payment, or TTP. HAs are allowed to make certain deductions from an applicant's gross annual income, though. Minimum rent payments can be as low as $25 and as high as 30 percent of monthly adjusted income, once allowed deductions are factored in.
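Based only on the figures mentioned above, a simplified rent calculation might look like the sketch below. This is not HUD's full TTP formula (which includes additional tests, such as a percentage of gross income), and the deduction amount in the example is hypothetical:

```python
def total_tenant_payment(monthly_gross_income, deductions=0, minimum_rent=25):
    """Simplified Total Tenant Payment: the greater of the minimum rent
    and 30 percent of monthly adjusted income (gross income less
    allowed deductions), rounded to the cent."""
    monthly_adjusted_income = max(monthly_gross_income - deductions, 0)
    return round(max(minimum_rent, 0.30 * monthly_adjusted_income), 2)

print(total_tenant_payment(1000))                  # 30% of 1,000 adjusted income
print(total_tenant_payment(50))                    # 30% would be below $25, so the minimum applies
print(total_tenant_payment(1200, deductions=200))  # deductions reduce the adjusted income
```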
Tony Guerra served more than 20 years in the U.S. Navy. He also spent seven years as an airline operations manager. Guerra is a former realtor, real-estate salesperson, associate broker and real-estate education instructor. He holds a master's degree in management and a bachelor's degree in interdisciplinary studies.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.93611741065979,
"language": "en",
"url": "https://howtodiscuss.com/t/anchoring-bias/13737",
"token_count": 100,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2373046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b6f35521-60b8-4ca7-a1bc-b8894d30d3c8>"
}
|
Definition of Anchoring bias:
The act of basing a judgment on a familiar reference point that is incomplete or irrelevant to the problem being solved. An example is when a consumer judges the relative value of a product or service from a company on the basis of its cost in some previous period of time. Or, an investor may judge that a stock price is overvalued or undervalued based on that stock's previous high share price.
Meaning of Anchoring bias & Anchoring bias Definition
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9438551068305969,
"language": "en",
"url": "https://mbanotesworld.com/the-nature-of-cost-accounting/",
"token_count": 405,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1044921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1084fcc9-3fec-419e-99a2-2d8c042883c0>"
}
|
In the planning phase, cost accounting deals with the future. It helps management budget for the future by predetermining materials costs, wages and salaries, and the other costs of manufacturing and marketing products. These costs might be used to assist in setting prices and in disclosing the profit that will result, considering competition and other economic conditions. Cost information is also provided to aid management with problems such as capital expenditure decisions, expansion of facilities for increased sales or production, make-or-buy decisions, and purchase-or-lease decisions.
In the control phase, cost accounting deals with the present, comparing current results with predetermined standards and budgets. Cost control, to be effective, depends upon proper cost planning for each activity, function and condition. Via the cost accounting media, management is informed frequently of those operating functions that fail to contribute their share to the total profit, or that perform inefficiently, thereby leading to profit erosion.
Periodically, generally at the end of the fiscal period, cost accounting deals with the past for the purpose of profit determination, and thereby with the allocation of historical costs to periods of time. At this point, cost accounting procedure is particularly concerned with applying manufacturing costs to units of product, to be capitalized in the ending inventory and transferred to cost of goods sold as shipments are made.
More specifically, cost accounting is charged with the tasks of:
Establishing cost methods and procedures that permit the control and, if possible, the reduction or improvement of costs.
Aiding and participating in the creation and execution of plans and budgets.
Creating inventory values for costing and pricing as prescribed by law, and at times controlling physical quantities.
Determining company costs and profit for an annual or shorter accounting period, in total or by segment, as determined by management or required by government regulations.
Providing management with cost information in connection with problems that involve a choice from among two or more alternative courses, that is, decision making. The decision may be to enter a new market, develop the cost for a new product, discontinue a product line, buy or lease equipment, or take other action to increase profits or to solve problems.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9382532835006714,
"language": "en",
"url": "https://resources.trusaic.com/data-quality-management-hub/clean-data-essential-for-reliable-data-analysis",
"token_count": 987,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.34375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:08d52f12-c19a-4230-9812-704ccff2d59f>"
}
|
How much confidence can you place in the conclusions of your data analysis? For regulatory compliance actions, the cost of errors can involve fines, lawsuits, and damage to the brand. When being wrong involves significant costs, it is important to consider ways in which errors in the underlying data could be undermining your results.
Many data analyses are intended to predict changes in outcomes based on changes in inputs. A set of measurements on inputs are used to make predictions about one or more measured outcomes. Varying an input leads to a predicted response in the outcome. Errors in either set of measurements – inputs or outcomes – make it harder to make accurate predictions of the outcome response.
For example, in conducting a pay equity analysis, the predicted responses are the wages that employees would have been paid after removing apparent wage disparities. Here inputs include the employees’ genders as well as measures of the employees’ productivity. Outcomes are the wages received by the employees. Errors in measures of productivity or in the wages paid weaken the accuracy of predictions of the wage response.
This post focuses on some of the consequences of the simplest form of these measurement errors – random errors. Random errors can infect the measurements of outcomes, input factors, or both. Measurement errors are random if values are overstated about as often as they are understated. In the case of pay equity, random errors would imply that employee-level measures of wages, tenure, education, training, and/or hours worked, are as likely to be overstated as understated. For classification variables, such as benefits eligibility, part-time status, and the presence or absence of hazardous working conditions, random errors imply that misclassification in either direction is equally likely.
Such “noisy” data tend to obscure the relationship between the true inputs and the true outcomes. Fortunately, statistical methods regularly report the accuracy of responses as “confidence intervals” – high and low values for the response. For example, a pay equity analysis may estimate that females are paid 10% less than males. With this should come a confidence interval – for example, providing a range estimate that the pay disparity is between 8% and 12%.
Confidence intervals should always be consulted in evaluating the conclusions of data analysis. When noise infects the measurement of the outcome variable (e.g., wages), confidence intervals expand appropriately to reflect the loss of precision arising from this additional measurement noise. If confidence intervals are too large to be relied upon, it is worth assessing whether the outcomes may be affected by measurement error and if so, to evaluate the costs of cleaning those outcome measures.
However, depending on the type of measurement errors affecting your data, confidence intervals can be misleading as measures of accuracy. Specifically, when random errors affect the measurement of an input (e.g., years of education) confidence intervals also expand (i.e., are less precise), but the response of the outcome to the input is also biased downwards towards zero.
For example, when education measurements are affected by noise, the measured response of wages to education will likely be smaller than it really is. The measured effect might be that average wages rise by three percent per additional year of education, but the true effect could be five percent. The confidence interval for the measured, three percent education effect might provide a range of two percent to four percent, systematically lower than the true effect of five percent. If correctly measured, education would play a larger role in explaining apparent wage disparities. This bias is well-known, and is referred to variously as “attenuation” or “dilution” bias. Standard confidence intervals do not correct for this bias.
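Attenuation bias is easy to demonstrate with a small simulation. This is an illustrative sketch only: the true slope, noise levels and sample size are arbitrary choices, not figures from any pay equity study:

```python
import random

random.seed(0)

def ols_slope(xs, ys):
    """Ordinary least squares slope: cov(x, y) / var(x)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

n = 50_000
true_slope = 2.0
x_true = [random.gauss(0, 1) for _ in range(n)]
y = [true_slope * x + random.gauss(0, 1) for x in x_true]

# Measure x with random error of the same variance as x itself.
x_noisy = [x + random.gauss(0, 1) for x in x_true]

clean = ols_slope(x_true, y)    # close to the true slope of 2
noisy = ols_slope(x_noisy, y)   # biased towards zero, near 2 * 0.5 = 1
print(round(clean, 2), round(noisy, 2))
```

The attenuated slope converges to the true slope multiplied by the reliability ratio var(x) / (var(x) + var(error)), here 0.5, so the measured effect is roughly half the true one, and the standard confidence interval around the noisy estimate will not recover that gap.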
There is a second consequence when random measurement error affects an input. When the effects of multiple inputs on an outcome are measured, measurement error in one input will generally distort responses for all inputs, not just the one measured with error. In the pay equity example, measurement error in education can also distort measured wage responses to tenure, training, job type, and even the effect of gender itself on wages.
While the measurement error of the infected input makes its effect on the output seem smaller, in general, the direction of distortions affecting other inputs is unknown. For example, the measured average wage difference due to gender could be artificially magnified by measurement error in other inputs, such as education. As well, the measured effect of an accurately recorded factor such as job type on wages can be artificially large or small due to measurement error in another factor (e.g., education).
For these reasons, evaluating whether measurement error affects inputs in your analysis is an even higher priority than whether measurement error affects outputs. As the saying goes, “Garbage In, Garbage Out” or “GIGO”. Clean input data are foundational and essential to drawing reliable conclusions from your data analysis.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9401217103004456,
"language": "en",
"url": "https://www.aol.com/article/2014/06/12/10-000-suicides-linked-to-economic-recession/20911661/",
"token_count": 695,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1865234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ed6f5b35-ea1c-4113-87fe-4bdc712a5f77>"
}
|
It turns out the economic downturn, or the Great Recession, might have hit more than the world's pocketbooks. A new study claims the recession also caused more than 10,000 suicides in North America and Europe.
The study found that between 2007 and 2010, suicides increased by 4.5 percent in Canada, 4.8 percent in the United States and 6.5 percent in the European Union, resulting in about 10,000 more suicides than usual.
The study, performed by the University of Oxford and the London School of Hygiene and Tropical Medicine, used information from the World Health Organization. The researchers report rates were also 4 times higher among men. (Via Voice of America)
The study explains that job loss, home foreclosure and debt are leading causes of suicide during an economic downturn. (Via WSTM)
David Stuckler, the study's co-author, told Al Jazeera: "Suicides are just the tip of the iceberg. This data reveals a looming mental health crisis in Europe and North America. In these hard economic times, this research suggests it is critical to look for ways of protecting those who are likely to be hardest hit."
The study suggests nations that invest more into assistance for the unemployed reduce the risk of suicide. The authors estimate that for every $100 invested into such programs that suicide risk is lowered by .4 percent. (Via University of Oxford)
Stuckler says Sweden was one country that had strong support for the unemployed and for those who were struggling financially, and its suicide rate did not rise during the recession.
A professor of epidemiology at Columbia University not associated with the study said the findings have important implications for policy makers. "The social welfare aspects of economic downturns like this can't be ignored. When our economic belts get a little too tight, we shouldn't be cutting things that help the average Joe." (Via USA Today)
This latest study seems to reinforce the findings of another, smaller-scaled study from earlier this year.
A University of Portsmouth study links spending cuts in Greece to more than 500 male suicides between 2009 and 2010.
Aaron Reeves, a researcher on this most recent study, says the data points out that rising suicide rates might not be inevitable, given that they weren't observed everywhere.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9390140771865845,
"language": "en",
"url": "https://www.dcsl.com/technology-101-what-is-data-mining/",
"token_count": 1385,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1767578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9df58b15-2fdb-4fa9-95d5-f117f09cb616>"
}
|
Technology 101 – What is data mining?
Data mining is the process of analysing data sets with a database management system in order to discover patterns and meaning. What begins with large collections of linked or disparate information becomes valuable knowledge through the workings of highly specialised computer software. Statistics, artificial intelligence and databases come together to translate all sorts of hard facts into something understandable. From there, a business can see what trends and predictions should be informing their next move as well as identify what transactions may be fraudulent, which clients are likely to leave for a competitor and what measures to take to avoid these scenarios.
In short, a computer’s data management software turns a mountain of straw into a sheet of pure business intelligence gold. Sounds complicated, doesn’t it? As it turns out, that’s not exactly true, and retail, finance, healthcare, transportation and manufacturing organisations are all keen to make use of the vast amount of data waiting for them.
How does it work?
Through the use of database software, data mining is able to summarise, or make patterns from, a body of information. The patterns can take several shapes:
- Unusual records, or anomaly detection, where certain observation points called outliers do not match the typical pattern. These may take the form of product contaminants, newly emerged trends or fraud – basically they show where something unusual has happened that does not match the rest of the observations. They provide clear red flags once the data is analysed, a key point for problem-solving, security breaches and quality control for businesses.
- Dependencies, or association rule mining, put together often unexpected things and highlight the relationship between them. An apocryphal anecdote regarding association rule mining alleged that young men buying diapers usually also bought beer.
- Groupings of data records, or cluster analysis, to show similarities and differences between data sets with things in common. If Product A is bought by middle-aged men as well as teenage girls every March or April (a strange similarity perhaps), a business may discover the item is the perfect Mother’s Day present.
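The association rule idea can be illustrated in a few lines of code. This is a toy sketch over made-up shopping baskets; real data mining tools run algorithms such as Apriori over millions of transactions:

```python
def support_and_confidence(transactions, antecedent, consequent):
    """Support: the fraction of all transactions containing both items.
    Confidence: of the transactions containing the antecedent, the
    fraction that also contain the consequent."""
    n = len(transactions)
    has_a = [t for t in transactions if antecedent in t]
    has_both = [t for t in has_a if consequent in t]
    support = len(has_both) / n
    confidence = len(has_both) / len(has_a) if has_a else 0.0
    return support, confidence

# Made-up point-of-sale baskets
baskets = [
    {"diapers", "beer", "bread"},
    {"diapers", "beer"},
    {"diapers", "milk"},
    {"beer", "crisps"},
    {"bread", "milk"},
]

support, confidence = support_and_confidence(baskets, "diapers", "beer")
print(support, confidence)   # 0.4 support, roughly 0.67 confidence
```

A rule with high confidence and reasonable support is what surfaces the "diapers and beer" style relationships described above.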
From there further analysis may take place, such as predictive analysis or machine learning (both of which go beyond data mining itself but are the next logical step). This can help a business increase productivity through a better understanding of its raw data.
As a side note, databases are measured in gigabytes and terabytes, with 1 terabyte being equivalent to 2 million books. A large corporation may experience tens of millions of point-of-sales transactions in a single day. Think of how much customer information could be gleaned in evaluating an hour’s worth of credit card or telephone use – it’s staggering.
Why might businesses need it?
By not truly examining the data gathered in every aspect of its workings, a business loses a huge opportunity. With data mining, surveys, demographics and sales figures and so much more can flesh out a fuller picture of where a company stands, as well as help create precise risk models and spot-on marketing campaigns. Below are a few examples of what a business can do with data mining:
- Improve marketing and strengthen branding. Customer surveys and client feedback can be used by a marketing department to focus on new areas for growth and ways to fix recurrent problems. What products would sell like hotcakes and where frequent gripes accumulate are identified here.
- Increase revenue. Data mining will uncover your best sellers and sort through the statistics to paint an accurate picture of what customers actually want (versus what the Head of Sales thinks they want).
- Communicate more effectively. Here you’ll be able to see what contact strategies are proving most popular. Can you really know whether it’s printed ads, emails or social media presence that is reaching the most people without data mining? Sure, if you want to add it up by hand. Finding out how to target a ready audience effectively will mean you won’t need to waste time, money and postage reaching out in the wrong way.
- Don’t repeat past mistakes. Because data mining turns facts and figures into a complete representation of a business’s position, it can also show progress – or lack of it. Whether it’s a graph showing a sales slump or emerging trends a company has kept in line with, data mining patterns can help with predicting and preparing for the next opportunities.
- Enter new markets. Some databases offer information gathered by other companies that can be used to investigate potential customer areas, improve the sales tactics currently out there and provide better services. It’s worth noting, though, that sharing information is mostly illegal so check your sources carefully to make sure consumers’ privacy isn’t being breached. Data sharing is usually done between partner organisations, so if you have just such a valuable link don’t let this gold mine go to waste.
One interesting example of effective data mining was the launch of the United Nations Federal Credit Union (UNFCU) global credit card in 2011. Aimed at frequent overseas travellers, the Visa card needed to be marketed as effectively as possible in order to ensure maximum take-up. The cards contained the now ubiquitous embedded computer chips that made customer signatures redundant (a feature less popular in the US but very common everywhere else). Through the use of its marketing database, UNFCU advertised to 30,000 individuals in high-income households who travelled frequently. The result was an astounding 3 per cent response rate, when a large financial institution could only garner 0.5 per cent on average. After a 10-week campaign, applications for the card rose more than 100 per cent: a clear case of data mining driving success.
Data mining now versus 10 years ago
Almost 50 years ago, data was mined through ledgers, tapes and floppy disks. By the 1980s computers picked up pace and their increased storage capacity allowed for relational databases to be kept. From there we went to online analytical processing and data warehouses, and now the storage capability has further increased, with advanced computer algorithms doing the heavy lifting.
At the moment, data mining is something that most businesses are able to incorporate – and really should. It’s not just another buzzword. Getting some help from a bespoke software company in building a database and harnessing the power of data mining can be the most effective and least painful way of doing so.
What’s in store for the future?
As consumer groups multiply and diversify we will likely see smaller, niche marketing campaigns designed to catch their attention. Information will be much more widely available to everyone (especially with the advent of Big Data collecting and linking everything), and savvy companies will use it to get the edge on their particular offerings. The sooner this can be done, of course, the better!
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9523991942405701,
"language": "en",
"url": "https://www.litrg.org.uk/tax-guides/tax-basics/what-scottish-income-tax/what-devolution",
"token_count": 879,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1416015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3a734194-4d5f-430c-a4da-b0c69485c057>"
}
|
What is devolution?
On this page we briefly explain what devolution is. We also explain which tax powers are devolved in each of Scotland, Wales and Northern Ireland.
What is devolution?
Devolution provides Scotland, Wales and Northern Ireland with forms of self-government within the United Kingdom. This includes the transfer of legislative powers to the Scottish parliament, the National Assembly for Wales and the Northern Ireland Assembly and the granting of powers to the Scottish government, the Welsh government and the Northern Ireland Executive.
The UK parliament remains sovereign in law and still legislates for Scotland, Wales and Northern Ireland. By convention, it does not do so for devolved matters without the consent of the relevant parliament or assembly.
Unlike Scotland, Wales and Northern Ireland, England does not have its own government and legislature. Different powers are devolved to each of Scotland, Wales and Northern Ireland.
On this page we look at devolved taxes and future proposals.
Which taxes are devolved in Scotland?
Powers over local taxation rest with Scotland – in particular, this means that decisions about council tax and non-domestic (business) rates for Scotland are made in Scotland. These taxes are administered and collected by local councils.
Following the Calman Commission, the Scotland Act 2012 and the Scotland Act 2016, two fully devolved taxes apply in Scotland from 1 April 2015. These are:
- land and buildings transaction tax, which replaced stamp duty land tax on transactions taking place in Scotland; and
- Scottish landfill tax, which replaced landfill tax on transactions taking place in Scotland.
Revenue Scotland is responsible for the collection and administration of these two devolved taxes and has published its Charter of Standards and Values. You can find out more information about the Scottish tax authority on the Revenue Scotland website.
Air passenger duty is due to be fully devolved to Scotland and replaced by air departure tax, but this is on hold.
Aggregates levy is due to be devolved to Scotland.
The Scotland Act 2012 gave the Scottish parliament the power to introduce a Scottish rate of income tax. The Scottish rate of income tax (SRIT) took effect on 6 April 2016 and applied to Scottish taxpayers during the tax year 2016/17. HMRC were responsible for collecting and administering the SRIT. Scottish income tax replaced the SRIT with effect from 6 April 2017.
It was proposed that revenues from the first 10 percentage points of the standard rate of VAT and the first 2.5 percentage points of the reduced rate of VAT applicable to Scotland should be assigned to Scotland. This is currently on hold.
Which taxes are devolved in Wales?
Following the Silk Commission and the Wales Act 2014, some tax powers are being devolved to the Welsh Assembly. Since April 2018 there has been a fully devolved Welsh land transaction tax and a fully devolved Welsh landfill disposals tax; these replaced stamp duty land tax and landfill tax on transactions taking place in Wales. The Welsh government has set up the Welsh Revenue Authority (WRA) to administer these devolved taxes and it has published its Charter for joint values, behaviours and standards.
The Welsh Assembly can already pass laws in respect of non-domestic rates (business rates) and council tax – these are classed as local taxation, rather than devolved taxes.
The Silk Commission also recommended the introduction of Welsh rates of income tax. These apply from April 2019. The Welsh rates of income tax apply to the non-savings and non-dividend income of Welsh taxpayers.
Which taxes are devolved in Northern Ireland?
The Northern Ireland Assembly can pass laws in respect of local taxation: domestic rates (Northern Ireland’s equivalent of council tax) and non-domestic (business) rates.
A law has been passed providing for the devolution of corporation tax powers to the Northern Ireland Assembly, but this is subject to commencement regulations. Northern Ireland had intended to set its own rate of corporation tax from April 2018.
Where can I find more information?
There is more information about devolution on GOV.UK.
Canada took its currency in a new direction in more ways than one with the redesigned $10 bill. The purple polymer note is the first to feature a woman of colour, and the first vertically oriented bill issued in this country.
The Viola Desmond $10 bill, unveiled on Thursday at a Halifax library by her sister, Finance Minister Bill Morneau, and Bank of Canada Governor Stephen Poloz, honours the black Nova Scotian businesswoman for her civil rights activism. The bill also includes images of the Canadian Museum for Human Rights and an eagle feather, said to represent reconciliation with Indigenous peoples.
The Bank of Canada said the vertical orientation allowed for a more prominent image of Desmond, and distinguishes the new $10 bill from the current roster of polymer notes.
Desmond is the first black person – and the first non-royal woman – on a regularly circulating Canadian bank note. Last year, Agnes Macphail’s image was featured on a commemorative $10 bank note celebrating the 150th anniversary of Confederation.
“Our bank notes are designed not only to be a secure and durable means of payment, but also to be works of art that tell the stories of Canada. This new $10 fits that bill,” Poloz said in a statement. “I’m immensely proud of all the innovation that went into this note.”
The reoriented design drew mixed reactions online.
While Canadians are accustomed to horizontal political figures, animals, and monarchs greeting them when they reach into their wallets, Switzerland, Bermuda, Israel, Venezuela, Argentina and Cape Verde have embraced vertical design.
One design firm suggests vertical formatting is more intuitive, given the way people handle their money.
“You tend to hold a wallet or purse vertically when searching for notes. The majority of people hand over notes vertically when making purchases,” Designboom author Andy Butler wrote in 2010. “All machines accept notes vertically. Therefore a vertical note makes more sense.”
Roosevelt's Recession: A Historical and Econometric Examination of the Roots of the 1937 Recession
Searching for a Cause: An Econometric Analysis
Of the factors hypothesized to have caused the Recession, the goal of this chapter is to examine, quantitatively, the predictive power of each hypothesized variable on an indicator of overall economic wellbeing. What complicates an analysis of the 1937 Recession is the specificity of the hypothesized casual factors. For example, the wage hypothesis centers not on increased wages in general, but instead on the NLRA-induced wage increase. As another example, Friedman’s money supply argument hinges not on the decrease in money supply, but instead on the decrease in money supply as caused by the reserve requirement increases. The prolonged and unresolved debate about the Recession, coupled with the very narrow scope of each hypothesized factor, necessitates the formation of finely tailored variables. In this chapter, variable choices are explained in detail to provide the reader a clear understanding of the choices made and to allow the reader to gauge the applicability of the variables to the arguments they are intended to capture.
Before embarking any further on a quantitative analysis, it is important to draw a distinction between intent and methodological outcomes. As is usually the case with time series regression analysis of historical time periods, modeling is made more difficult by the lack of accurate data and the infrequency of historical data recording. If data availability is of concern in a regression analysis, practitioners often decide to expand the number of observations. In other words, the focal time period is expanded or examined at smaller sub-intervals. This decision serves as a preventative measure against model irrelevancy. The number of observations is of high importance in statistical testing. Generally, the larger the number of observations, the greater the likelihood that a model can accurately deduce minute relationships. Unfortunately, given the time period of interest, the issue of observation count is more pronounced.
This analysis seeks to gauge the magnitude of impact that each proposed cause of the Recession had on bringing about the downturn. However, on a relatively short ten-year timeline, the Recession is flanked on both ends by historical events that had unprecedented effects on both society and the economy. Both the Great Depression (early 1930s) and World War II (early 1940s) impacted the economy on a scale larger than that of the Recession. Given that the focus is only the 1937 Recession, this analysis must be confined to the months between January 1935 and December 1938.176 The imposed time constraint allows a model to more accurately calculate the comparatively smaller magnitude of only recession-related developments.
Although it poses drawbacks, the short time interval used provides an uncommon benefit. Because economic indicators fluctuated widely and unexpectedly during the four-year time span examined, autocorrelation poses less of an issue. Simply put, autocorrelation exists when a variable is a function of its former self. In the context of severe autocorrelation, the significance of model results must be closely scrutinized.
A comprehensive measure that captures the 1937 downturn must be used as the dependent variable across all models in order to examine the causal role played by the different hypothesized factors. Traditionally, studies of the Recession use either industrial production or gross domestic product as the regressand. Later in this chapter, both measures are gauged for their applicability and usefulness.
The industrial production variable was obtained from the Federal Reserve.177 The Federal Reserve’s monthly dataset, indexed to January 1935, measures the real output of all manufacturing, mining, and electric and gas utility facilities located in the United States. The variable is plotted in Figure 9.
The search for the alternative regressand, GDP, was more elusive. During the focal time period, GDP had not yet been adopted as a measure of aggregate output. Furthermore, historic measures of output used during the time period are incompatible for use in this study because they are provided only on an annual basis. Luckily, other papers concerned with this time period have addressed the issue of monthly data availability. This study utilizes as the alternative regressand a monthly real GDP variable obtained from the Gordon-Krenn monthly and quarterly dataset for 1913-1954.178 Figure 10 plots the real GDP variable.
Gordon and Krenn’s dataset provides reliable and important data that is unavailable from traditional sources, like the NBER. Facing the same issue of data availability, Gordon and Krenn converted annual GDP component data to quarterly and monthly intervals, using the Chow-Lin interpolation. For each annual GDP component, Gordon and Krenn used monthly NBER datasets, chosen for their high correlation with the annual GDP component of interest, to ensure an accurate conversion process. Finally, they summed the new monthly component data to provide a measure of monthly GDP.179
Although both industrial production and GDP show a clear decline during the Recession, each variable varies significantly in the percent change realized. For comparative purposes, Figure 10 combines industrial production, as shown in Figure 9, and real GDP. During the Recession, the decline in industrial production was greater than that of GDP because the Recession impacted industry most severely. Given the difference in percent change, both indicators are used in order to examine the downturn in a more nuanced manner.
Figure 10. Industrial production index and real GDP in billions, 1935-1938. Source: Refer to footnotes 177 and 178.
The fiscal policy variable used in this study, real government expenditures in 1937 dollars, was obtained from the Gordon and Krenn dataset and is plotted Figure 11. They transformed NBER series 15005, federal budget expenditures, into real terms. Subsequently, they removed the value of transfer payments, like the Soldier’s Bonus, that were included in the original NBER series. They removed transfer payments because their government-spending variable was to be used in calculating monthly GDP.180 Gordon and Krenn’s dataset is used because it requires no further adjustment. However, there may be need for a dummy variable to account for the large dip in spending that occurred during the first two months of bonus disbursement.
The wage increase caused by the NLRA was historically hypothesized to have contributed to the Recession. Although recent research discounts this view, it is nonetheless important to examine the causal impact, if any, of the NLRA. The search for a wage variable required the fulfillment of two prerequisites: first, the variable had to be specific to the manufacturing sector; second, it had to fluctuate primarily with NLRA-induced wage changes rather than with broader economic conditions.
At first glance, the first requirement may seem overly restrictive given the Recession’s national impact. However, it’s important to remember that the manufacturing industry, arguably the most important and largest industry at the time, was directly impacted by NLRA regulations. With this point in mind, the variable search was confined to the manufacturing sector in the interest of excluding wage effects from large but non-regulated industries. Fulfillment of the second requirement is crucial to this study because a wage variable is included to test only the impact of the NLRA-induced wage increases on the economy. A wage variable that fluctuates with non-NLRA factors makes the interpretation of the NLRA’s impact more difficult.
The variable search started with NBER macrohistory database series 08283, production worker wage cost per unit of output. However, this variable’s denominator, manufacturing output, declined during the Recession to a greater degree than its numerator: wages. The variable is incompatible for use because the NLRA-induced wage increase is not the only factor causing major fluctuations in value.
The variable ultimately chosen for use in modeling was wages per hour worked in manufacturing, a slight variation of the aforementioned series, wages per unit output in manufacturing. The new variable was created by dividing the index of real factory payrolls (NBER macrohistory series 08242) by the index of production worker manhours in manufacturing (NBER macrohistory series 08265).181 Wages per manhours worked, henceforth referred to as the wage variable, was chosen because the variable minimizes fluctuations caused by the Recession, while it maximizes fluctuations caused by the NLRA. The wage variable highlights the impact of the NLRA because it is less reflective of broader changes in the economy. By construction, it provides a clearer understanding of labor costs because wages are considered per unit output realized.
Figure 12 and Figure 13 visualize the desirability of the chosen wage variable. As shown in Figure 12, industrial production, real wages, and manhours worked exhibit a near 1:1 relationship. On the other hand, Figure 13, a plot of wages per manhour worked, shows more nuanced fluctuations. It’s important to note that during the NLRA-impact period, from 1936 to late-1937, the wage variable realizes a 10% increase. The size of the increase is comparable to the increase in manufacturing sector wages reported by Cole and Ohanian and plotted in the previous chapter, Figure 7. The post-1937 plot of the wage variable is also comparable to the post-1937 plateau of manufacturing sector wages in Figure 7.
Figure 12. Index of real wages, industrial production, and production worker manhours. Data adapted from: NBER macrohistory database series 08242 and 08265.
Figure 13. Wage variable used in study, index of real wages per production worker manhours.
It is necessary to examine the banking sector’s response to the increased reserve requirements prior to choosing a variable that reflects the impact of monetary policy on the money supply.182 However, the NBER macrohistory database does not provide distinct datasets for excess and required reserves. As a workaround, two NBER datasets were used to calculate distinct measures for excess and required reserves.183 To split the NBER’s total reserves dataset into its excess and required component parts, the equations below were solved for each monthly observation point:
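The equations themselves appear to have been lost in extraction. Based on the surrounding description (total reserves decomposed into required and excess holdings using the required-reserve ratio), they plausibly took a form like the following; the exact series definitions are an assumption here, not a quotation from the original:

```latex
\begin{aligned}
\text{Required reserves}_t &= \text{required reserve ratio}_t \times \text{reservable deposits}_t \\
\text{Excess reserves}_t &= \text{Total reserves}_t - \text{Required reserves}_t
\end{aligned}
```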
Total reserves, required reserves, and excess reserves are plotted in Figure 14. Vertical lines mark the Federal Reserve’s three reserve requirement increases. For each line plot, a line of best fit is superimposed for the months preceding the Federal Reserve’s first increase in August 1936. Similarly, a line of best fit is added to each plot onwards from April 1937; April is situated one month after the second reserve requirement increase and one month before the third.
Figure 14 shows that banks responded to the reserve requirement increases by further padding their reserves. After the first increase, excess reserves declined as banks shifted funds to required holdings. However, after the third increase, banks started to accumulate excess reserves. Given that total required reserves remained relatively stable after the third increase, and given that total excess reserves increased during the same time, it can be assumed that banks accumulated excess reserves by liquidating assets that would have otherwise been put to alternative use. Clearly, the risk-averse banking sector desired a cushion of excess reserves.
In his paper on the 1937 recession, Velde also concluded that excess reserve accumulation was reactionary. Instead of examining aggregated national data like that used in Figure 14, Velde studied the behavior of banks by member class. He found that central reserve city banks, like those in New York and Chicago, in 1937 were “considerably closer to their [reserve] limit than banks in reserve cities and country banks.”184 Central reserve city banks faced significantly greater excess reserve depletion than other member banks. This class of member banks was at the forefront of excess reserve accumulation.185 Therefore, the post-April 1937 positive slope of Figure 14’s excess reserves plot was most greatly influenced by central reserve city bank accumulation.
Although the Federal Reserve succeeded in making excess reserves unusable, banks responded by hoarding as excess even more assets. During the focal time period, the banking sector treated excess reserves not as a pool of money to be utilized, but as a lifeline locked away for use in times of crises. Having confirmed the banking sector’s excess reserve accumulation as reactionary accumulation, the measure of money supply to be used in this study will exclude both required and excess reserves from its sum.
It is the unfortunate reality that some contemporary studies of the Recession utilize standardized measures of the money supply, like M1 or M2, to characterize their variables without explicitly defining the component parts of the measure used. Because contemporary measures of money supply did not exist in the 1930s, it is sometimes difficult to neatly classify long-defunct institutions, like the postal savings bank, under measures crafted for a modern banking system. Adding to the confusion when examining historic studies, the standardized measures in use today were the product of a 1980s redefinition of the existing “M”s. To avoid any confusion, the money supply variable used in this study will not be classified as any current standard measure.
NBER macrohistory database series 14144a, money stock in billions of dollars, is used in the creation of the money supply variable. Series 14144a is the seasonally adjusted sum of currency held by the public, all commercial bank demand deposits, and all commercial bank time deposits. Prior to creating a money supply variable, total reserves held (series 14064) and money stock (series 14144a) were deflated using Gordon and Krenn’s GDP deflator.186
Figure 15 plots two possible renditions of money supply. The dashed line plots real money supply, the deflated series 14144a. By construction, the series includes in its sum total reserves held. The solid line plots real money supply excluding real total reserves.187 As is clear in the post-1937 plot period, the increase in total reserves, caused mainly by an accumulation of new excess reserves, had a negative impact on the already decreasing money supply. Whereas prior to 1937, both line plots in the figure have a near 1:1 relationship, in the months after 1937, money supply exclusive of total reserves decreased to a greater extent than money supply inclusive of reserves. Therefore, the variable used in subsequent modeling is money supply excluding reserves, henceforth referred to only as money supply.188
Figure 15. Real money supply including and excluding total reserves. Data adapted from: NBER macrohistory database series 14064 and 14144a.
Federal Budget Receipts
Although not of much interest, a tax variable is used in modeling. NBER series 15004, total federal budget receipts in millions, was chosen for use. The NBER’s dataset was transformed into billions of dollars and deflated using the Gordon and Krenn deflator. Because it was not seasonally adjusted by the NBER, the variable was smoothed using a moving average of the previous, present, and future month. Figure 16 plots real federal budget receipts before and after smoothing.
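The smoothing step described above can be sketched as a three-month centered moving average. This is a minimal illustration; the receipts values below are hypothetical, and leaving the endpoints unsmoothed is an assumption, since the paper does not say how the endpoints were handled:

```python
def centered_ma3(series):
    """Three-month centered moving average: mean of the previous,
    current, and next observation. Endpoints are left unsmoothed
    (one plausible convention; the paper does not specify)."""
    smoothed = list(series)
    for t in range(1, len(series) - 1):
        smoothed[t] = (series[t - 1] + series[t] + series[t + 1]) / 3
    return smoothed

# Hypothetical monthly real receipts in billions, showing the
# kind of seasonal spikes the smoothing is meant to dampen.
receipts = [0.9, 1.5, 0.8, 1.1, 1.6, 0.7]
print(centered_ma3(receipts))
```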
Examining the Variables
The variables are log-transformed in order to normalize the data. A log-log model allows for a simple interpretation of the coefficients: a 1% increase in an independent variable leads to a percent change of the regressand that is equivalent to the value of the coefficient output. To gain a better understanding of the variables under consideration, and to aid in model specification, focus was paid to identifying the time-series properties of each variable.
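The log-log coefficient interpretation can be checked on simulated data. This is illustrative only; the data-generating process, elasticity, and noise level here are hypothetical, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
true_elasticity = 0.5
x = rng.uniform(1.0, 10.0, size=5000)
# y = c * x^beta * noise  =>  log y = log c + beta * log x + eps,
# so the slope in logs is the elasticity of y with respect to x.
y = 2.0 * x ** true_elasticity * np.exp(rng.normal(0, 0.05, size=5000))

beta, intercept = np.polyfit(np.log(x), np.log(y), 1)
# beta is recovered near 0.5: a 1% increase in x is associated
# with roughly a 0.5% increase in y.
print(round(beta, 2))
```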
To test for serial correlation of the logged real GDP variable, a correlogram was performed. The correlogram provides two important measures, the autocorrelation and partial autocorrelation functions. The autocorrelation function measures the variable’s correlation with itself at each lag. The partial autocorrelation function, at each lag, is a regression of the variable and that lag, holding all others lags constant. Figure A-1 plots GDP’s autocorrelation function. The plot shows that GDP is significantly correlated with up to three previous years. The plot shows a prolonged and somewhat smooth decay of the autocorrelation function, hinting that the data may be non-stationary. Given the decay of the autocorrelation function, GDP is an autoregressive process.
The partial autocorrelation function of GDP is plotted in Figure A-2. The plot provides a more precise visualization of the autoregressive process. As shown in the figure, when the effect of the first lag is controlled for, the correlation of all other lags is, generally, insignificant. Although the function shows significant correlations at lags 1, 6, 13, 16, and 21, it should be underscored that the function cuts off and remains insignificant for five lags after the first. Given the non-zero partial autocorrelation of the first lag, coupled with the long lag delay before other significant spikes arise, it is assumed that GDP is an AR(1) process. The assumption of an AR(1) process is underscored by Figure A-3, a scatterplot of GDP and its first lag, and Figure A-4, a scatterplot of GDP and its second lag. Both plots have clear upward trends that are usually associated with autoregressive processes.
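The AR(1) signature described above, a smoothly decaying autocorrelation function, can be illustrated on a simulated AR(1) series. This is a sketch, not the paper's data; the persistence parameter is hypothetical:

```python
import numpy as np

def sample_autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(1)
phi = 0.8
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = phi * y[t - 1] + rng.normal()

# For an AR(1) process the autocorrelations decay smoothly,
# roughly as phi**lag (0.8, 0.64, 0.51, ...).
for lag in (1, 2, 3):
    print(lag, round(sample_autocorr(y, lag), 2))
```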
Given the very high correlation between the income variable and its lags, coupled with the previous tests that indicate an autoregressive process, the analysis continues by examining the type of autoregressive process seen in GDP. To test for a unit root, a Dickey-Fuller test was performed. The null hypothesis is that the variable contains a unit root. The test was performed and produced an output of -2.841, with the p-value being 0.0526. The low p-value barely misses the 95% significance level, indicating that the null hypothesis cannot be rejected. The test was also performed using an increasing number of lags (5-11). The additional tests confirmed that the null hypothesis couldn’t be rejected. Having confirmed the existence of a unit root, the variable is now said to be a random walk. Further Dickey-Fuller tests were used to specify the type of random walk.
The variable can either be a random walk with drift or without drift. A random walk without drift is a process where the current value of a variable is composed of its past values plus an error term. A random walk with drift is, essentially, a random walk with an added constant parameter. GDP was tested for drift and the test's p-value was 0.0034. The null hypothesis can be rejected, and the results held with the use of varying lag lengths. GDP is a random walk without drift. GDP, a variable confirmed to be a random walk without drift, was also tested for a deterministic time trend. The test statistic output was -1.642 with a p-value of .7754. GDP is a unit root around a deterministic time trend. A regression of GDP and time further confirmed the existence of a time trend. Because the GDP variable is non-stationary, it is used in modeling in its differenced form.
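The motivation for differencing can be illustrated on a simulated random walk without drift (a sketch, not the paper's data): the level series is highly persistent, while its first difference behaves like white noise.

```python
import numpy as np

def lag1_corr(x):
    """Lag-1 autocorrelation via the sample correlation of x_t and x_{t+1}."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

rng = np.random.default_rng(2)
walk = np.cumsum(rng.normal(size=3000))  # random walk without drift

# The level series is nearly perfectly correlated with its own lag,
# while the differenced series shows essentially no persistence.
print(round(lag1_corr(walk), 2))
print(round(lag1_corr(np.diff(walk)), 2))
```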
The tests performed above were repeated on the industrial production variable. Figure A-5, the autocorrelation function of industrial production, shows a steep but smooth decay. Figure A-6, the partial autocorrelation function, is significant at lags 1, 2, with some peaks after lag 10. These graphs suggest that industrial production is an autoregressive process. Next, a Dickey-Fuller test for unit root was performed. With and without varying lags included, the test output indicated a unit root. The test was repeated with trend added. At varying lags, the test output confirmed the existence of a unit root with a deterministic time trend. However, the test for drift produced inconclusive results. Therefore, the assumption is made that industrial production is a unit root with a deterministic time trend. Like the GDP variable, industrial production will be differenced when used as a regressand. Given that both possible dependent variables are used in differenced form, independent variables will also be differenced to make the interpretation of model results simpler.
Building a Model
To consider whether industrial production or GDP should be used as the dependent variable, each regressor, with up to 4 differenced lags, was individually regressed on the two possible regressands. After each regression, a Durbin-Watson test and a Breusch-Godfrey test were performed to look for autocorrelation between the regressor variable and the regressand under consideration. The simple regression results are reported in Table A-4 and the Breusch-Godfrey test outputs are reported in Table A-5. Overwhelmingly, when each independent variable was individually regressed on industrial production, the Durbin-Watson test and the Breusch-Godfrey test indicated autocorrelation of the error terms. Therefore, GDP was chosen as the dependent variable to be used in modeling.
The first model used included 3 differenced lags of each independent variable. The results of Model 1 are reported in Table A-1, alongside results for models soon to be discussed. Due to the use of lagged variables in Model 1, the Breusch-Godfrey (BG) test was the prime test of interest. Unlike the Durbin-Watson test, the BG test allows for lagged dependent variables and tests for higher order autoregressive processes. The test output is reported in Table A-2. Based on the reported p-values, the null hypothesis that serial correlation does not exist was not rejected. In other words, the assumption was made that serial correlation does not exist.
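The BG test's LM statistic can be sketched by hand: regress the residuals on the original regressors plus p lags of the residuals, then compute n times the R-squared of that auxiliary regression. The data below are simulated for illustration; in the paper the test would be run on Model 1's actual residuals.

```python
import numpy as np

def breusch_godfrey_lm(resid, X, p):
    """LM statistic of the Breusch-Godfrey test: n * R^2 from the
    auxiliary regression of residuals on the original regressors
    plus p lags of the residuals. Approximately chi-squared with
    p degrees of freedom under the null of no serial correlation."""
    n = len(resid)
    lagged = np.column_stack(
        [np.r_[np.zeros(k), resid[:-k]] for k in range(1, p + 1)]
    )
    Z = np.column_stack([X, lagged])
    beta, *_ = np.linalg.lstsq(Z, resid, rcond=None)
    fitted = Z @ beta
    r2 = 1 - np.sum((resid - fitted) ** 2) / np.sum((resid - resid.mean()) ** 2)
    return n * r2

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # constant + one regressor
white = rng.normal(size=n)          # serially uncorrelated "residuals"
ar = np.zeros(n)                    # AR(1) "residuals"
for t in range(1, n):
    ar[t] = 0.6 * ar[t - 1] + rng.normal()

print(breusch_godfrey_lm(white, X, p=3))  # small: fail to reject H0
print(breusch_godfrey_lm(ar, X, p=3))     # large: reject H0
```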
Before further analyzing results, Model 1 residuals were tested for problems. Figure A-7 plots residuals against fitted values. The graph shows a random scatter, a preliminary indicator that the results were favorable. Furthermore, Figure A-8 shows that the residuals are somewhat normally distributed. It should be noted that Figure A-9, a scatterplot of the residuals against time, appears to be random with no patterns. The autocorrelation function, Figure A-10, shows no lags of significance, and the partial autocorrelation function, Figure A-11, shows no lags of significance. A runs test indicated that the residuals had 24 runs. The p-value output provided was 0.72; the null hypothesis that the residuals were produced from a random process was not rejected. This test further indicated that autocorrelation was not a problem in the model. Finally, Dickey-Fuller tests without drift, with drift, and with trend were performed on the residuals. All three tests had p-value outputs of 0.00, thereby raising no concerns regarding a unit root of the residuals. Based on testing of the residuals, the standard error outputs provided in Model 1 were not biased.
To continue improving the model, the wage variable was incorporated in Model 1.1 in its unlogged form. The output of Model 1.1 is reported in Table A-1. The fit of the model improved slightly and the regression results remained comparable to Model 1. Given this slight improvement, coupled with the favorable BG test results reported in Table A-2, the wage variable was retained for use in its unlogged form.
Due to their removal of the Soldier’s Bonus impact, the Gordon and Krenn data for government spending declines sharply in June 1936 and July 1936. In Model 2, a time dummy variable for these two months was included to remove a potential source of bias. The inclusion of the dummy variable removed a slight bias from regression results; however, the dummy variable was insignificant in all models.
Model 3 builds upon Model 2 by using a different measure of the money supply variable. In line with other studies of the Recession, the money supply variable in Model 3 was changed to include total reserves. The results of this regression, with all other variables kept unchanged from Model 2, are reported in Table A-3. Although the fiscal policy coefficient does not change as a result of the variable change, the coefficient of money supply increased from .42 in Model 2 to .61 in Model 3. These results should be considered surprising because the variable, having strayed from the argument that reserve hoarding decreased money supply, now has a greater impact in the regression. The money supply coefficients in Model 3 are counterintuitive and the results suggest that other studies, utilizing measures inclusive of reserves, may have inadvertently inflated the role of money supply. Given that the government-spending coefficient remained unchanged in Model 3 and given the reasons for excluding reserves, the next model retained money supply exclusive of reserves.
The final model examined, Model 4, was a cleaned regression of only the significant variable-level lags.189 Reported in Table A-3, the fit of this regression remained high while the variable coefficients and beta coefficients continued to mirror those previously modeled. Furthermore, this model’s BG test, as reported in Table A-2, produced an output even more favorable than that of the other models. Model 4’s residual plots, Figure A-12 to Figure A-15, were all favorable.
Delta neutral refers to a portfolio, typically of options and their underlying assets, whose value is unaffected by small changes in the price of the underlying assets.
Such a portfolio is created by taking positions in options such that the positive and negative deltas of various positions will offset each other thereby creating a portfolio whose value is insensitive to changes in the prices.
It is an important concept for institutional traders who establish large positions using various option strategies such as straddles, strangles, and ratio spreads.
We will take an example of a strangle to explain how delta neutrality can be achieved.
A Strangle Example
A stock currently trades at $50. The annual volatility of the stock is estimated to be 20%. The risk-free rate is 5%.
An options trader decides to write six-month strangles using $45 puts and $55 calls.
As a quick refresher, a strangle involves taking position in both a call and put with different strike prices but with the same maturity and underlying asset. This option strategy is profitable only if there are large movements in the price of the underlying asset.
The two options will have different deltas, so the trader will not write an equal number of puts and calls.
How many puts and calls should the trader use?
The delta for the call option is 0.3344
The delta for the put option is -0.1603
The ratio of the two deltas is -0.1603/0.3344 = -0.48. This means that delta neutrality is achieved by writing 0.48 calls for each put.
One approximate delta neutral combination is to write 25 puts and 12 calls.
Delta neutrality is useful for strategies in which a trader is neutral about the future dynamics for the market. So, the trader doesn’t assume either a bullish or a bearish position.
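The deltas quoted in this example can be reproduced with the standard Black-Scholes delta formula (a sketch assuming a European option on a non-dividend-paying stock; `norm_cdf` and `bs_delta` are illustrative helper names, not from the article):

```python
from math import log, sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_delta(S, K, r, sigma, T, kind):
    # Black-Scholes delta for a European option on a non-dividend-paying stock
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm_cdf(d1) if kind == "call" else norm_cdf(d1) - 1.0

# Six-month strangle on a $50 stock: write $55 calls and $45 puts
call_delta = bs_delta(50, 55, 0.05, 0.20, 0.5, "call")  # ~0.33
put_delta = bs_delta(50, 45, 0.05, 0.20, 0.5, "put")    # ~-0.16

# Calls to write per put for a delta-neutral position
ratio = -put_delta / call_delta                          # ~0.48
```

Multiplying the ratio out, 25 puts pair with about 25 × 0.48 ≈ 12 calls, matching the approximate combination above.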
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9165940284729004,
"language": "en",
"url": "https://financetrain.com/overview-of-mergers-acquisitions/",
"token_count": 678,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.00567626953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e4ffbdb2-e5a1-4249-84af-eb6fb0f8d3e8>"
}
|
Acquisitions: When an acquiring company buys a portion of a target company.
Merger: When an acquiring company buys all of a target company; the acquirer remains and the acquired no longer exists as an independent corporate entity.
M&A transactions can be segmented by the manners in which the acquired is integrated with the acquirer.
- Subsidiary Merger: the target becomes a subsidiary of the acquiring company. The acquiring company may use this form of integration in order to retain the brand recognition of the acquired entity.
- Statutory Merger: the acquired no longer exists; it becomes part of the acquirer.
- Consolidation: neither the acquired, nor the acquirer remain, rather both combine to form a new company.
Mergers can also be described by the way the business operations of the acquirer and the target relate to one another.
- Horizontal Mergers: the combination of two companies in the same business line. For example, one beverage production company may decide to purchase another beverage production company.
- Vertical Mergers: the purchase of a target company which performs an upstream or downstream function in the acquirer’s industry value chain.
- Backward Integration: the acquirer purchases a company closer to the raw material extraction phase of the industry value chain. For example, a natural gas commercial distributer may decide to purchase a natural gas miner.
- Forward Integration: the acquirer purchases a company closer to the market delivery phase of the industry value chain. For example, a gold miner may decide to purchase a chain of retail jewelry stores.
- Conglomerate Merger: this is the case where an acquirer purchases a company in an unrelated line of business. For example, an airplane manufacturer may decide to purchase a chain of hospitals.
Reasons for M&A
Ideally, mergers are executed with the expectation that the target will increase the equity value of the acquirer. Below some common merger motivations are described.
- Cost Synergies: Mergers have the potential to lower costs for the combined companies, either through the elimination of redundant functions or by eliminating profits from “middle-man” points in the value chain.
- Revenue Synergies: Mergers may provide the combined companies an opportunity to cross sell complementary products.
- Growth: An acquisition might provide a company with more rapid growth potential than organic growth provided by reinvesting earnings.
- Pricing Power: A horizontal merger can reduce competition and allow the acquirer to raise its prices. A vertical merger can allow the acquirer to better control prices downstream or upstream in the value chain. When a merger has the potential to provide an acquirer with too much market power, government regulations may prevent the merger from taking place.
- Increased Capability: An acquiring company may pursue a target for its in-house technical expertise.
- Unlocking Value: An acquirer may view a target as underperforming financially and feel confident that it can facilitate the realization of the target’s full potential after taking control.
- Diversification: Companies themselves are investors who seek to reduce risk and increase returns through the successful deployment of capital.
- International M&A Concerns: Companies may engage in M&A beyond their domestic borders for multiple financial or strategic reasons.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9276739358901978,
"language": "en",
"url": "https://opln.org/category/resources/",
"token_count": 526,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.053955078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:71507244-c426-4afc-bf02-b38d1b9b0e0b>"
}
|
Action on plastic pollution has been slowed considerably during the COVID-19 pandemic – but there’s a new emerging angle that could help rebuild momentum for the transition to a greener and more circular society. Governments at the World Trade Organization (WTO) are also showing increased interest in tackling plastics pollution.
Paradigm Shift brings together some of the most prominent voices in the circular space on what it will take for the global community to make the transition to circularity. This publication takes a systems-level view of the challenge and focuses on solutions—upstream, downstream, and across sectors—with critical takeaways that you can use to advance your circular economy mission.
Learn how over the last 6 years, this leading national nonprofit has:
° Leveraged more than $90 million in impact.
° Reached more than 77 million households.
° Helped more than 1,500 U.S. communities overcome recycling challenges.
° Invested over $53 million in recycling infrastructure.
° Delivered new recycling carts to more than 700,000 U.S. households.
° Reduced 251,000 metric tons of carbon emissions.
° Diverted more than 230 million pounds of recyclables from landfills into the recycling stream.
° Reduced contamination by 40% and increased the value of cleaner recyclables by $20 per ton in pilot communities.
First-of-its-kind modeling analysis describes actions needed to stop plastic from entering the ocean.
Transparent 2020: Major Companies Come Together in Unprecedented Step Toward Transparency on Global Plastic Waste Crisis
“In its first year, ReSource: Plastic has begun to tap into the massive potential that companies have to become key levers that can actually help change the course of this global problem – but also their willingness and ability to act together,” said Sheila Bonini of the OPLN Advisory Board Member company, World Wildlife Fund.
OPLN Member companies Procter & Gamble and The Coca-Cola Company, together with other Principal Members of ReSource: Plastic – Keurig Dr Pepper, McDonald’s and Starbucks contributed to the report, Transparent 2020, which examines the plastic footprints of these leading global companies and provides a detailed look at the challenges and potential solutions for tackling the plastic pollution problem.
Our Principal Members have shown an impressive dedication to transparency, providing data that will ultimately drive the accountability, collaboration and ambition needed to incentivize a movement toward comprehensive reporting and progress across the private sector.
The Principal Members hope these efforts will inspire other companies to take similar action.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9727757573127747,
"language": "en",
"url": "https://oregonbusinessreport.com/2010/12/why-irelands-economy-fell/",
"token_count": 779,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.376953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:13995bc6-97b4-4803-829e-7437949ef330>"
}
|
Ireland’s Economic Crisis: A Brief Summary
By Bill Conerly,
Conerly Consulting, Businomics
Ireland’s economy was stagnant in the early years of the 20th century. Liberalization of international trade taxes and rules in the 1960s allowed the economy to begin keeping pace with the rest of Europe, but that wasn’t saying much. Government budget deficits in the 1970s and early 1980s were accompanied by European-style anemic growth. The budget deficits were resolved through spending cuts. Then other policy changes made it more difficult for the government to resume deficit finance.
In the late 1970s, tax rates were reduced. Rate reduction followed rate reduction, and the highest personal income tax rate fell from 80 percent in 1975 to 44 percent in 2001. Corporate tax rates were also reduced, from 40 percent in 1996 to 24 percent in 2000, with even lower rates for companies involved in manufacturing or internationally traded services. Tariff rates were reduced further, and the republic received an inflow of aid, which it spent on infrastructure projects.
The result was the “Celtic Tiger,” a name derived from Asia’s Four Tigers that had grown rapidly (Hong Kong, Singapore, South Korea and Taiwan). For 19 years in a row, Ireland’s growth rate exceeded that of Europe.
However, the real estate boom that swept the United States also swept the world, including Ireland. The nation had survived a housing bubble in the late 1990s, but the banking sector, emboldened by the global boom, doubled its assets in just three years, lending to Irish and non-Irish alike. When the global bubble burst, the banks were in a severe crisis. The government feared that institutional providers of funds to the banks would withdraw, leading to a collapse of the banking sector. To prevent such a run, the government guaranteed the senior debt of the banks.
Most personal financial experts advise individuals against co-signing notes for friends and family members. Someone should have told Ireland that the advice is especially valuable to people going heavily into debt themselves. As recession hit the Irish economy, government spending accelerated.
Now the Irish government has tons of its own debt, and its bank bond guarantees total 200 percent of GDP. Other European countries, as well as the International Monetary Fund, are pumping loans into Ireland to stave off a collapse.
The Irish government had resisted outside calls for the bondholders to take a haircut. (The Economist headline read, “Time to send the barber home?”) Allowing private creditors to escape damage seems silly at first, but it has more logic when the creditors had previously received an explicit government guarantee. A haircut for creditors could well be considered a default by the Irish government.
Looking around Europe, many banks in the core (France and Germany) have large holdings of bonds from the periphery (Ireland, Portugal, Spain, Italy and Greece). Some are suggesting an additional round of stress tests for major banks on the Continent. With borrowing difficult for governments, more austerity—and possibly social unrest—are certain to come. Credit will be tight, due to the banking industry’s difficult situation with sovereign debt.
Europe could possibly lapse into recession, especially if one of the weak countries cannot refinance existing notes as they come due. On the positive side, though, the global economy is making progress, which will help both European exports and attitudes. A recession is not certain by any means—I’d still put the odds no higher than 20 percent.
Unfortunately, the American economy is stuck in low gear. Losing Europe to a double dip would stress our own economy, probably to the recession point itself. This is a significant risk, and one that bears some contingency planning by banks and businesses and families in the United States.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9599646329879761,
"language": "en",
"url": "https://smallbusiness.chron.com/legal-description-fixed-expenses-36598.html",
"token_count": 464,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.07958984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:be958562-e9a1-4087-a62a-094563b30fe9>"
}
|
Legal Description of Fixed Expenses
When you analyze the cost of production for goods sold, your profits, and the pricing structure for your company's products, fixed costs impact the bottom line profits differently than variable costs. Understanding the difference between the two and the relationship of fixed expenses to your cost of doing business is important for your budgeting process.
Fixed Expense Description
Fixed expenses are the expenses that remain unaffected by changes in production volume. They are the expenses that your business must pay regardless of performance or production, such as rent or mortgage payments, insurance, license fees, and the salaries and associated costs for full-time administrative employees. These costs remain constant despite any increase or decrease in product manufacturing.
Along with fixed costs are mixed expenses. These are costs that may appear to be variable, but may reach a point where they are fixed. These costs include usage-based utilities, such as electricity. When your company is at the peak of production, your electricity consumption and corresponding invoices will be higher, making the electricity payment a variable expense that increases as your production increases. If you lower or cease production, the electric bill still accumulates as a fixed cost that remains relatively constant. This is a mixed expense because it can qualify as both fixed and variable, depending on the situation.
Variable expenses change based on your level of production. For example, employees who work on an hourly basis and work extra hours when production is at its highest represent variable cost, as do parts and manufacturing supplies that increase when you produce additional volume.
When you build a budget with fixed and variable costs, the first step for a reasonable estimate is to forecast your production rates. While fixed expenses will remain a constant, you need to know how many products you plan to manufacture, and the variable cost per product, in order to budget for the other types of costs. Multiply the cost per product by the number of products you are going to manufacture to obtain the variable cost budget. Use the same process to calculate mixed expenses. Add the fixed expenses to your budget as the final step.
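The budgeting steps above can be sketched as a simple function. The dollar figures below are illustrative assumptions, not from the article; the mixed-expense parameters model something like the electricity example, with a base charge that accrues even at zero production plus a usage-based part:

```python
def production_budget(fixed, units, variable_per_unit,
                      mixed_base=0.0, mixed_per_unit=0.0):
    """Estimate a total cost budget from fixed, variable, and mixed expenses."""
    variable = units * variable_per_unit          # scales with production
    mixed = mixed_base + units * mixed_per_unit   # fixed floor + usage part
    return fixed + variable + mixed               # add fixed costs last

# Hypothetical forecast: 500 units at $12 variable cost each,
# $10,000 of fixed expenses, electricity with an $800 base plus $0.50/unit
budget = production_budget(fixed=10_000, units=500, variable_per_unit=12.0,
                           mixed_base=800, mixed_per_unit=0.50)
```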
Tara Kimball is a former accounting professional with more than 10 years of experience in corporate finance and small business accounting. She has also worked in desktop support and network management. Her articles have appeared in various online publications.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9748905301094055,
"language": "en",
"url": "https://vivekkaul.com/2014/11/29/gdp-growth-at-5-3-a-lot-needs-to-be-done-for-the-economy-to-see-acche-din-again/",
"token_count": 1641,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.302734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e40d530d-7d1d-483d-ad46-93c867220e2e>"
}
|
India has largely been a centrally planned economy since independence. The central planning increased dramatically in the second term of the previous United Progressive Alliance (UPA) government.
This led to a situation where India’s economy grew at greater than 8% in the aftermath of the financial crisis, when economic growth was collapsing all around the world. But this extra central planning has created many problems for the Indian economy since then.
As Bill Bonner writes in Hormegeddon—How Too Much Of a Good Thing Leads to Disaster, “Central planning can do a good job of imitating real progress at least in the short run.” And that is what precisely what happened in India, in the aftermath of the financial crisis.
Government expenditure exploded. In 2007-2008, total government expenditure stood at Rs 7,12,671 crore. This doubled to Rs 14,10,372 crore by 2012-2013. This increased spending by the government ended up as income in the hands of citizens, who in turn spent the money. And this ensured that the Indian economy kept growing at a fast pace even as economic growth slowed down the world over.
A substantial amount of this increased government spending was directly distributed to citizens through schemes like Mahatma Gandhi National Rural Employment Guarantee Scheme. The minimum support price offered on rice and wheat was also increased much more than was the case in the past.
This led to rural income growing at a faster rate than it had in the past. Initially, it did not matter. But as time passed this increased income translated into high inflation, particularly high food inflation.
Further, the trouble was that the government wasn’t earning all this money that it was spending. Between 2007-2008 and 2012-2013, the total income of the government did not go up at the same pace as its expenditure (it went up by around 57%), and the government borrowed more to make up for the difference.
The fiscal deficit in 2007-2008 was Rs 1,26,912 crore. This shot up by 286% to Rs 4,90,190 crore by 2012-2013. Fiscal deficit is the difference between what a government earns and what it spends. And the government makes up for the difference through increased borrowing.
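The growth figures quoted above can be verified with quick arithmetic (amounts in Rs crore, as stated in the text):

```python
def pct_growth(old, new):
    # Percentage growth from an old value to a new value
    return (new - old) / old * 100

# Total expenditure: Rs 7,12,671 crore (2007-08) to Rs 14,10,372 crore (2012-13)
expenditure_growth = pct_growth(712671, 1410372)  # ~98%, i.e. roughly doubled

# Fiscal deficit: Rs 1,26,912 crore (2007-08) to Rs 4,90,190 crore (2012-13)
deficit_growth = pct_growth(126912, 490190)       # ~286%
```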
This increased borrowing by the government crowded out other borrowers, that is, there wasn’t enough left on the table for other borrowers to borrow. This meant banks had to offer higher rates of interest to attract deposits. This pushed up interest rates at which they loaned out money as well.
Also, to control the high inflation, the Reserve Bank had to push up the repo rate, or the rate at which it lends to banks. Further, during the good years, the corporates loaded up on debt, borrowing much more than they could ever repay. A major portion of these loans was taken by crony capitalists from public sector banks.
All these reasons led to what analysts call the “India growth story” coming to an end. High inflation forced people to cut down on spending as incomes did not keep pace with expenditure. Economic growth fell to around 5% from double digit levels and that is where it has stayed for a while now.
It was widely expected that with Narendra Modi taking over as the prime minister, the Indian economy will start seeing acche din soon. But that hasn’t happened. For the three month period July to September 2014, the economic growth, as measured by the growth in the gross domestic product (GDP), was at 5.3%. During the period April to June 2014 the economy had grown at 5.7%.
The financing, insurance, real estate and business services sector which formed a little over 22% of the GDP during the period, grew by an impressive 9.5%. But other sectors did not do so well.
Agriculture which formed around 10.8% of the total GDP during the quarter grew by 3.2%. It had grown by 5% during the same period last year. Manufacturing which formed around 14.6% of the total GDP during the quarter was more or less flat at 0.1%. In fact, the size of manufacturing sector has fallen by 1.4% in comparison to the period between April and June 2014.
What this tells us clearly is that sustainable economic growth cannot be created by the government giving away money to citizens and then hoping that they spend it and create economic growth. For sustainable economic growth to happen a country needs to produce things. As the Say’s Law states “A product is no sooner created, than it, from that instant, affords a market for other products to the full extent of its own value.” The law essentially states that the production of goods ensures that the workers and suppliers of these goods are paid enough for them to be able to buy all the other goods that are being produced. A pithier version of this law is, “Supply creates its own demand.”
In an Indian context this is even more important given that nearly 60% of the population remains directly or indirectly dependent on agriculture, even though agriculture now forms a minor part of the overall economy. What this tells us is that the sector has many more people than it should. Hence, people need to be moved from agriculture to other sectors like manufacturing. And for that to happen, jobs need to be created in these sectors.
The government recently launched the Make in India programme to create jobs in the manufacturing sector. But just launching the programme is not good enough. For companies to make products in India a lot of other things need to be provided. They need access to electricity all the time and for that to happen we need to sort out the mess our coal sector is in. The physical infrastructure of roads, railways and ports needs to improve. The ease of doing business needs to go up considerably and so on.
As Daron Acemoglu and James A. Robinson write in Why Nations Fail—The Origins of Power, Prosperity and Poverty regarding the industrial revolution that happened in Great Britain in the 19th century: “The English state aggressively…worked to promote domestic industry…by removing barriers to the expansion of industrial activity.” Similar barriers need to be removed in India as well. Also, entrepreneurs need to be confident that their contracts and property rights will be respected.
These things are easier said than done. What makes the scenario even more difficult in the Indian case is that Indian businessmen who operate in the infrastructure sector are not the most honest people going around. Raghuram Rajan, the governor of the Reserve Bank of India, more or less hinted at it in a recent speech.
As he said “The amount recovered from cases decided in 2013-14 under DRTs (debt recovery tribunals) was Rs. 30,590 crore while the outstanding value of debt sought to be recovered was a huge Rs. 2,36,600 crore. Thus recovery was only 13% of the amount at stake. Worse, even though the law indicates that cases before the DRT should be disposed off in 6 months, only about a fourth of the cases pending at the beginning of the year are disposed off during the year – suggesting a four year wait even if the tribunals focus only on old cases.”
If incumbent businessmen do not repay their loans and then banks cannot recover those loans, banks will not lend or charge a higher rate of interest when they lend. And this does not help the businessmen currently looking to expand their businesses by borrowing.
To conclude, there is a lot that the government needs to do to get economic growth up and running again. The only action one has seen from the government until now is demanding that the RBI cut the repo rate. If only creating economic growth were simply a matter of cutting interest rates.
The article appeared originally on www.FirstBiz.com on Nov 29, 2014
(Vivek Kaul is the author of the Easy Money trilogy. He tweets @kaul_vivek)
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8984332084655762,
"language": "en",
"url": "https://www.myexcelonline.com/category/formulas/other/",
"token_count": 859,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.03125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8b922163-560e-4660-a296-b1ae599cdb3b>"
}
|
All You Need to Know About
Other Formulas in Excel
The Excel Other Formulas are not very well known, but you will be surprised by how useful they are. We have in store for you financial formulas, array formulas, and a lot of hidden nuggets in the Excel formula world!
Here are the top things on what you can do with Other Formulas in Excel:
Jump To A Cell Reference Within An Excel Formula
When writing, editing or auditing Excel formulas you will come across a scenario where you want to view and access the referenced cells within a formula argument.
This is helpful if you want to check how the formula works or to make any changes to the formula.
There is a cool tip where you can jump to the referenced cell or range within the formula and make your changes.
STEP 1: Double click inside your Excel formula
STEP 2: Select the formula argument that you want to edit with your mouse
STEP 3: Press F5 which will bring up the Go To dialogue box and press OK
STEP 4: This will take you to the referenced cell/range
STEP 5: You can select the new range with your mouse and also make any changes to the formula bar
STEP 6: Press Enter and your formula is updated
Calculate your Monthly Investment with Excel’s FV Formula
What does it do?
Calculates the future value of an investment with compound interest
=FV(rate, nper, pmt, [pv])
What it means:
=FV(interest rate, number of periods, periodic payment, initial amount)
Computing the compound interest of an initial investment is easy for a fixed number of years. But let’s add an additional challenge.
What if you are also putting in monthly contributions to your investment? Now that’s a lot more challenging to compute now!
How much would be available for you at the end of your investment?
Thankfully there is an easy way to calculate this with Excel’s FV formula! FV stands for Future Value.
In our example below, we have the table of values that we need to get the compound interest or Future Value from:
There are two important concepts we need to use since we are using monthly contributions:
- Since our interest rate is the annual rate, we will have to divide it by 12 to make it monthly
- We will need to convert our number of years into number of months by multiplying it by 12
I explain how you can do this below:
STEP 1: We need to enter the FV function in a blank cell:
STEP 2: The FV arguments:
What is the rate of the interest?
Select the cell containing the interest rate and divide it by 12 to get the monthly interest rate (make sure that this is in a percentage):
How many periods?
Select the cell containing the number of years and multiply it by 12 to get the number of months:
What is the periodic payment?
Select the cell that contains your monthly contribution (this is your periodic payment):
=FV(B9/12, C9*12, D9,
What is the initial amount?
PV stands for present value, the initial amount. Multiply the entire result by -1.
=FV(B9/12, C9*12, D9, A9) * -1
Apply the same formula to the rest of the cells by dragging the lower right corner downwards.
You now have all of the compound interest results!
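Outside of Excel, the same calculation can be sketched directly from the closed-form future value formula. This is a minimal sketch assuming a nonzero rate; Excel's FV negates the result (cash you pay out is negative), hence the final `* -1` just as in the worksheet:

```python
def fv(rate, nper, pmt, pv=0.0):
    # Mirrors Excel's FV sign convention: positive pv/pmt (money you pay in)
    # produce a negative future value. rate must be nonzero.
    growth = (1 + rate) ** nper
    return -(pv * growth + pmt * (growth - 1) / rate)

# $1,000 initial amount plus $100/month at 5% annual interest for 10 years,
# using the same monthly conversions as the worksheet (rate/12, years*12)
value = fv(0.05 / 12, 10 * 12, 100, 1000) * -1
```

As a sanity check, with no periodic payments `fv(0.10, 2, 0, 100) * -1` gives 121, i.e. 100 compounded at 10% for two periods.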
How to Remove Formulas in Excel
There are times when I have an Excel worksheet full of formulas and I want to hard code the results and remove the formulas completely.
This is very easy to do in Excel!
Here is our sample worksheet which has the following formulas in Column E:
I explain how you can remove formulas in Excel below:
STEP 1: Select all the cells that have formulas:
STEP 2: Right click and select Copy:
STEP 3: Right click again and select Paste Values:
Now you will see that the values are only retained and the formulas are now gone!
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9584202766418457,
"language": "en",
"url": "https://investotrend.com/best-bonds-to-buy/",
"token_count": 1443,
"fin_int_score": 5,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.06640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9c6cafa6-e91c-403a-9193-0fbb9ef7a1fb>"
}
|
Bonds are financial instruments that people use in long-term investments to protect against economic downturns. There are corporate bonds and public bonds, so it is important to know which ones are more profitable. Bonds carry less financial risk than other assets such as stocks, which is a great advantage.
An advantage of the bonds is that people receive a fixed amount of money, which allows you to have a very good income. Before you know which are the most profitable bonds in the market, you must know their functions. Bonds are debt securities issued by governments or by private organizations seeking to raise funds to finance themselves.
Just like stocks, if you invest money in a bond, you would be financing the issuer and receiving money for it. The difference between the shares is that when you buy the investor is part of the company’s shareholders. On the other hand, when you buy bonds, you become a creditor of the entity or company lending your money at a percentage.
Thanks to bonds, the investment is recovered at the end of the term, and a percentage is earned on the debt. All bonds have a maturity date by which issuers agree to return the money. Bonds are issued as fixed-income instruments, which lets the creditor know the return in advance.
When a government issues a bond, the idea is for the investor to keep it until the end of the term. The downside of bonds is that they are not guaranteed instruments, which carries some risk. When buying bonds on the stock market, returns can be highly variable, which can also generate advantages.
How to Stipulate the Interest of a Bond?
For public bonds, the state sets the interest rate through auctions. When an issuer carries higher solvency risk, the higher interest rate allows the investor to earn more money. In the case of the secondary market or private companies, interest is estimated based on current market conditions for the debt.
In both the primary (public) and secondary (private) markets, the creditworthiness of the issuer is used to stipulate the interest on the bond. Depending on the amount invested, the rate of return gives an estimate of the future value to be received.
Types of Bonds that Exist.
Each bond type defines a payment commitment between the issuer and the investor. Most financial securities have internationally standardized names to identify them. You should know all these terms to make your investments worthwhile and achieve the best market benefits.
An important thing to understand is that the types of bonds depend on the following characteristics:
1. Bonds according to the issuer:
They are subdivided into public bonds issued by governments and private bonds issued by private organizations.
2. Bonds according to maturity terms.
In public and private markets, you can get bonds of two, three, five, and even ten years, each with its own advantages. Two-year bonds are known as letters, and those issued for ten years are known as obligations.
3. Bonds according to Credit Quality or Ranking.
These bonds depend on the ratings of the agencies, some of which are known worldwide. In this case, the agencies determine investment-grade bonds, speculation bonds, and junk bonds.
- Investment-grade bonds: Interest is lower because companies have a higher credit rating.
- Speculation or High yield grade bonds: Companies have a lower rating level giving higher interests.
- Junk Bonds: These are bonds rated too low by the agencies to qualify even as speculation grade.
4. Coupon Bonds
There are fixed coupon bonds, where you receive interest periodically on top of what you eventually receive as earnings. On the other hand, if you opt for a zero-coupon bond, you must wait until maturity to receive the investment and interest.
5. Convertible bonds.
They are bonds that, after their maturities, can be converted into shares within the issuing company.
6. Perpetual Debt Bonds.
These bonds have no maturity, and the issuer does not return the investment, but pays interest for life. This interest is higher because the bond carries a higher investment risk than other types.
7. Green Bonds.
This is the latest trend in the market. In this class, bonds finance sustainable projects, which the government funds over time. Investments, in this case, are directed to ecological projects.
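As a rough sketch of how a coupon bond's fixed payments translate into a price, each coupon and the face value can be discounted at a required yield (a zero-coupon bond keeps only the final term). `bond_price` is an illustrative helper, assuming level coupons and compounding once per period:

```python
def bond_price(face, coupon_rate, yield_rate, years, freq=1):
    # Present value of the periodic coupons plus the face value at maturity
    periods = years * freq
    coupon = face * coupon_rate / freq
    y = yield_rate / freq
    pv_coupons = sum(coupon / (1 + y) ** t for t in range(1, periods + 1))
    pv_face = face / (1 + y) ** periods
    return pv_coupons + pv_face

# A 10-year bond paying a 5% annual coupon, valued at a 5% required yield,
# trades at par: its price equals the face value
price = bond_price(1000, 0.05, 0.05, 10)
```

Setting `coupon_rate=0` prices a zero-coupon bond, which is simply the face value discounted back to today.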
What are the Best 2020 Bonds You Can Invest In?
When making your investments, it is common to need websites where you can find the best bonds to invest in. Joining a bond fund is the best thing you can do to invest your money more safely.
The present list corresponds to general searches others can do where you can invest with confidence and ease:
1. Western Asset Corporate Bond Fund (SIGAX)
This is one of the best-known funds; it seeks to place at least 80% of its assets in corporate debt. Over the past year, this fund achieved a high return of 10%, which is quite promising.
2. Fidelity Capital & Income Fund (FAGIX)
This is an excellent fund to invest in, giving an average return of over 8%. The idea is that you invest in lower-quality debt securities in exchange for higher yields.
3. SPDR Portfolio Long Term Corporate Bond ETF (SPLB)
This is an investment fund that invests in US investment-grade corporate bonds with maturities of 10 years or more. Its track record runs for a decade. The returns of this fund are up to 4%.
4. Metropolitan West High Yield Bond Fund (MWHYX)
You can use this fund to buy your bonds if you want to make a substantial investment, given its results. Profitability over the last year was 4%, which inspires confidence. High-yield bonds are this fund's strength, and its holdings span many companies.
5. Fidelity High Yield Factor ETF (FDHY)
It showed a return of 2.8% over the last year, placing it among the top options for bond investors. The fund has been operating since 2002 with a clean record to date. Its holdings include food companies as well as companies from the health sector.
6. Xtrackers Low Beta High Yield Bond ETF (HYDW)
The records of this fund show a year-end return of 5.21%. Here you will find the option of investing in long-term bonds with high returns. Its bond holdings focus on healthcare providers, infrastructure, and other sectors.
Investing in bonds can be a great option for those who want long-term returns with stipulated interest. Depending on the types of bonds, you can choose the one that suits you and look for the right investment fund. You can also explore other investments on the stock exchange or the forex market.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9387304782867432,
"language": "en",
"url": "https://marketbusinessnews.com/who-benefit-ai-development/205298/",
"token_count": 777,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0301513671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:37e83179-c952-4f3f-951d-d98e97588fbd>"
}
|
There is a whole variety of business use cases for AI. Artificial Intelligence can predict call volume in call centers to support staffing decisions, predict customer behavior, recommend products that the customers will enjoy, classify customers, forecast product demand, detect fraudulent credit card transactions, filter spam email, but also detect faulty products on a production line, provide language translation, generate captions for images, power chatbots, and even help diagnose patients.
According to McKinsey, 47% of organizations have already implemented AI in at least one function in their business processes, taking advantage of Machine Learning (ML), Deep Learning, Natural Language Processing (NLP) or Predictive Analysis. The companies that will be late with implementing AI into their service may soon fall behind their competitors.
What industries can benefit from AI development?
AI can be used across many industries. E-commerce, telecoms, healthcare, HR, agriculture, security, law, education, transportation, finance, SaaS – they can all benefit from it. What they share in common is data: AI has proved to have a huge impact on data-related tasks, including processing, analysis, pattern finding, and prediction.
Top use cases for Artificial Intelligence
Let’s see how companies among different industries can use AI!
1. Better search. Natural Language Processing (NLP) can help clients of online retailers narrow search results to the most relevant ones – including with voice search.
2. Personalized recommendations. Personalization has a huge impact on clients' purchase decisions. The ability to successfully suggest a client's next purchase (or the next content to consume) can be a game changer for retail, but also for media publishers, SaaS products, and telecoms.
3. Better customer service. Many queries can be handled by chatbots, which significantly shortens response times. For more complex issues, the bot can identify the right specialist and forward the message there. Such a solution can be used by any company that offers its services online.
4. Administrative workflow automation. Solutions such as voice-to-text transcription combined with NLP and automatic structuring of the information into a report can save staff a great deal of time. AI can take care of legal research, process CVs and shortlist the best candidates, and automate administrative tasks in medical practices, schools, and offices.
5. Detection of defects. Neural networks trained on X-ray images of different forms of cancer at different stages have already proved to diagnose cancer better than qualified doctors: they miss lesions less often and are less likely to misdiagnose. In a similar way, AI can spot machine defects in factories, cyber attacks, or plant diseases.
6. Churn predictions. By analyzing behavioral patterns, AI can successfully predict which customers are most likely to churn. With that information and some product recommendations, consultants can prevent churn by offering those clients a better deal at the right time.
7. Reducing employee attrition. Using behavioral patterns, predictive models can also predict which employees are most likely to look for a new job. Combine this with personalized recommendations – identifying the right benefits or training options to help them develop – and you may be able to prevent your employees from leaving.
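As a concrete illustration of the churn-prediction use case above, here is a minimal sketch using synthetic data and a hand-rolled logistic regression in NumPy. Everything here is an assumption for illustration – the invented features (logins, support tickets), the data-generating process, and the simple model – not any specific vendor's product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic behavioral data (an assumption for illustration):
# customers with few logins and many support tickets tend to churn.
n = 500
logins = rng.poisson(10, n).astype(float)
tickets = rng.poisson(2, n).astype(float)
logit = 1.5 - 0.4 * logins + 0.8 * tickets
churned = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Standardize features and fit logistic regression by gradient descent.
X = np.column_stack([np.ones(n), logins, tickets])
X[:, 1:] = (X[:, 1:] - X[:, 1:].mean(0)) / X[:, 1:].std(0)
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))          # predicted churn probability
    w -= 0.1 * X.T @ (p - churned) / n    # gradient step on log-loss

# Score customers: highest predicted probability = most likely to churn.
risk = 1 / (1 + np.exp(-X @ w))
top = np.argsort(risk)[::-1][:5]          # five most at-risk customers to contact
```

The learned weights pick up the expected pattern (fewer logins and more tickets both raise churn risk), and the ranked `risk` scores are exactly what a retention team would use to decide whom to call first.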
It’s all about data
As you can see, Artificial Intelligence can be used across different industries. The decision about going into AI Development in a company, however, should not be determined by a simple will of “having AI”. In the first place, Artificial Intelligence has to solve business problems and help achieve some specific goals.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.950474739074707,
"language": "en",
"url": "https://racetozero.unfccc.int/shipping-needs-5-zero-carbon-fuels-by-2030-to-meet-green-goal/",
"token_count": 1136,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.07666015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1503ceda-a89a-4c5b-b6ee-fb6ebfb39fd5>"
}
|
Shipping needs 5% zero-carbon fuels by 2030 to meet green goal
Container shipping, ammonia and liquefied petroleum gas shipping, as well as voyages on niche international routes could help the industry reach the 5% milestone in 2030, according to a new study.
Zero carbon fuels must represent 5% of international shipping’s fuel mix by 2030 for the global fleet to achieve complete decarbonisation by 2050, according to new research.
The study for the Getting to Zero Coalition found that the adoption rate is feasible with hydrogen-based fuels, especially ammonia, likely to play a big role.
Meeting this 5% target would put shipping on course to reach other important milestones that will enable full decarbonisation by 2050, according to the report conducted jointly by maritime consultancy UMAS and the COP26 Climate Champions.
A 2019 study by UMAS had already assessed that for full mid-century decarbonisation, zero-emissions fuels would have to account for 27% of shipping’s energy mix by 2036 and for 93% of it by 2046.
Hitting this new 5% goal by 2030, which translates to almost 16m tonnes of heavy fuel oil equivalent, would accelerate the adoption of zero-carbon fuels to desired levels during the following years, according to the study’s models.
This mid-century decarbonisation target is different from the IMO’s target of reducing total greenhouse gas emissions by at least 50% by 2050 compared with 2008.
UMAS had reported that shipping would get to zero emissions in 2070 if it followed the IMO target’s trajectory.
“Though the Getting to Zero Coalition has not yet aligned on a target year for full decarbonisation, it is preferable to have a 2030 target that enables decarbonisation in line with the Paris Agreement,” the study said.
UMAS has also already estimated that for full shipping decarbonisation by 2050, around $1.4trn to $1.9trn in investments will be necessary between 2030 and 2050.
The latest study argues that even though the Getting to Zero Coalition has set its 2030 commitment for zero-emissions vessels, adding a quantifiable target like the 5% zero-carbon fuel adoption, would help it attain that goal more easily.
This target could give energy companies greater confidence that there will be demand for green fuels and it could mobilise cargo owners to pay a premium for zero-emission fuels based on their freight share, the study said.
Additionally, investors could better quantify the level of investment needed across the value chain to make it happen.
“Shipowners could plan investments in new builds and retrofits, and regulators could be called on to ensure a level playing field is in place to enable the transition,” the study added.
Specific segments, such as container shipping, ammonia and liquefied petroleum gas shipping, as well as voyages on particular international routes could spearhead this 5% fuel adoption rate.
Container shipping is a suitable candidate given the small amount of ports and trades that dominate the sector, while certain non-container routes, such as Chile–US and Japan–Australia, could also contribute due to “enabling conditions for first movers of zero-emission fuels”, according to the study.
“If ammonia is selected, ammonia and LPG tankers are well suited to be first movers, as storage, systems and crew are well adapted to this fuel. This is also true for ships used to transport other hydrogen-derived fuels,” the study said.
Apart from this 5% uptake from international shipping, domestic shipping could contribute another 2% to 3% share of zero-emissions fuel in shipping’s total energy mix.
With 32 countries accounting for 50% of domestic shipping, they can hit this 2030 target by running 30% of their combined fleet on zero-emissions sources, according to the study.
Much of this energy picture will depend on the ability to supply shipping with the zero-carbon fuels, especially those derived from hydrogen.
The new report noted that the Getting to Zero Coalition’s “zero-carbon energy sources” is intended to include fuels derived from zero-carbon electricity, biomass and the use of carbon capture and sequestration.
“The definition includes green hydrogen and its derivatives, such as ammonia and methanol, blue hydrogen and its derivatives, as well as sustainable biofuels,” it said.
But it excludes energy sources derived from carbon capture and utilisation based on the combustion of fossil fuels.
“In terms of scalability, the hydrogen-derived fuels have the biggest long-term potential for rapid scaling in the following decades and should be a significant part of the 2030 fuel mix,” the study said.
UMAS has estimated that the 5% zero-carbon fuel share in 2030 equates to around 0.64 exajoules, or 15.8m tonnes of heavy fuel oil equivalent. If ammonia ends up becoming the favoured zero-carbon fuel during the coming decade, UMAS found that these 0.64 exajoules for shipping would require roughly 60 gigawatt of green hydrogen electrolyser capacity.
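The arithmetic behind that conversion can be checked directly. Assuming a lower heating value of roughly 40.5 MJ/kg for heavy fuel oil (a standard textbook figure; the study does not state which value it uses), 0.64 exajoules works out to about 15.8 million tonnes of HFO equivalent:

```python
# Check the study's conversion: 0.64 EJ ~= 15.8m tonnes of HFO equivalent.
energy_ej = 0.64                     # exajoules (figure from the study)
hfo_lhv_mj_per_kg = 40.5             # assumed lower heating value of HFO

energy_mj = energy_ej * 1e12         # 1 EJ = 1e12 MJ
tonnes = energy_mj / hfo_lhv_mj_per_kg / 1000   # kg -> tonnes
print(f"{tonnes / 1e6:.1f} million tonnes")      # -> 15.8 million tonnes
```

The close match suggests UMAS used a similar heating-value assumption; any figure in the typical 39–42 MJ/kg range for HFO gives an answer in the same ballpark.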
“60 gigawatt of green hydrogen electrolyser capacity for shipping by 2030 is achievable when considering the large-scale ambitions announced by leading economies,” the study said.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9443302750587463,
"language": "en",
"url": "https://tendomag.com/how-does-charter-expansion-affect-school-district-finances/",
"token_count": 1344,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.4609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ade6e8e6-845b-4d9f-bfaf-a543393883b7>"
}
|
In November 2016, a Massachusetts ballot question on whether or not to expand the charter school sector drew national attention. Over $33 million in campaign spending poured into the commonwealth in what became one of the most expensive ballot questions in the country. A majority voted against an increase in the cap, effectively cutting off further charter growth in many of the state's urban centers. A new study by Camille Terrier and Matthew Ridley digs into one of the main concerns voiced by critics of the proposal to lift the charter cap: how charter growth affects school district finances and their students' achievement.
Charter schools were originally conceived as a way to spur innovation in traditional public schools, the idea being that competition from the charter sector might lead districts to reallocate spending in ways that enhance student achievement. But the charter sector's rapid growth has raised concerns about the financial strain imposed on district schools. When a student switches to a charter school, public funding typically follows the student, so charter schools are often criticized for draining resources from district schools. Several recent studies have indeed found that charter expansion can have negative financial spillover effects on traditional public schools.
In an effort to avoid a large, sudden reduction in funding for district schools, several states, including Massachusetts, have adopted reimbursement schemes under which a district is reimbursed (fully, or in part, for a set number of years) for the funding that follows charter students out of the district. In such contexts, the net financial impact of charter expansion is unclear.
To quantify charter expansion's effects on school district finances and student achievement under reimbursement schemes, we studied a reform that caused a large expansion of the charter sector in Massachusetts. Our results show that higher charter attendance increased per-pupil expenditures in district schools and shifted school district expenses toward instruction and away from support services (which encompasses things like student counseling and teacher training) and school administration. We also find that the large charter expansion generated modest positive effects on achievement among students who remain in district schools.
The main challenge in analyzing charter expansion's effects on district schools is that charter schools do not decide where to locate or expand at random. If charter schools locate or expand primarily in districts that are increasingly fiscally stressed, for example, expanding districts will show worse fiscal stress—but in this case, fiscal stress is a cause, not an effect, of charter expansion. This makes it difficult to distinguish the effects of charter expansion on district schools from other factors or trends.
To address this challenge, we exploit a policy change in Massachusetts that led to a large charter sector expansion. In 2011, the state raised the limit on the funding districts could allocate to charter schools from 9 percent to 18 percent in districts where student achievement is in the bottom ten percent statewide. Over the next four years, the share of students attending a charter school jumped from 7 percent to 12 percent in districts that expanded their charter sectors. We use a data-driven method to identify "control" districts, that is, a group of districts that did not expand their charter sectors after the 2011 reform but had the same evolving charter share and fiscal patterns before the reform. These control districts are as similar as possible to the "expanding" districts in terms of characteristics and trends before the expansion; the main difference is that they did not expand after the reform. Their post-reform fiscal and academic outcomes can, therefore, be used to capture what would have happened to the expanding districts had their charter sector not expanded. This approach constitutes a methodological improvement over previous studies that test fiscal spillovers in small samples of districts, making their results potentially sensitive to districts' specificities.
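The control-district logic can be sketched numerically. Below is a toy version of a synthetic-control-style comparison — all the spending figures are invented for illustration, and the actual paper uses a more careful data-driven matching procedure — in which a counterfactual for an expanding district is built as a weighted average of nonexpanding districts that tracked it before the reform:

```python
import numpy as np

# Toy pre-reform per-pupil spending paths (invented):
# rows = years, columns = 3 nonexpanding "control" districts.
controls_pre = np.array([
    [10.0, 12.0, 9.0],
    [10.5, 12.5, 9.2],
    [11.0, 13.0, 9.1],
    [11.4, 13.6, 9.3],
])
# The expanding district's pre-reform path happens to track an
# equal mix of the first two controls.
treated_pre = 0.5 * controls_pre[:, 0] + 0.5 * controls_pre[:, 1]

# Find non-negative weights that best reproduce the treated pre-trend.
# (Least squares with clip/renormalize is a shortcut; real synthetic
# control solves a constrained optimization instead.)
w, *_ = np.linalg.lstsq(controls_pre, treated_pre, rcond=None)
w = np.clip(w, 0, None)
w /= w.sum()

# Post-reform: the weighted controls give the counterfactual path, and
# the gap between actual and counterfactual is the estimated effect.
controls_post = np.array([[11.8, 14.0, 9.4]])
treated_post = np.array([13.5])            # actual post-reform spending
counterfactual = controls_post @ w         # what "no expansion" predicts
effect = treated_post - counterfactual     # > 0: spending rose with expansion
```

In this toy example the weights recover the 50/50 mix exactly, and the positive gap plays the role of the 4.8 percent spending difference the study reports.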
The visual depiction of this exercise is quite telling (see Figure 1): when the share of students in the district attending a charter school jumps, total per-pupil spending in district schools follows suit. After the reform (denoted by the vertical line), total per-pupil expenditures increased by 4.8 percent more in expanding districts than in the synthetic control group of nonexpanding districts. This short-term effect is consistent with, and probably a consequence of, temporary reimbursement aid for expanding districts. Beyond this average effect on per-pupil spending, we show that traditional public schools (TPSs) in expanding districts also reallocate their expenditures: per-pupil spending on instruction increased by 5.2% more in expanding districts than in nonexpanding districts, while per-pupil spending on support services dropped by 4.4% more in expanding districts.
The fact that schools facing charter competition shift resources from support services to instruction indicates they perceive spending on instruction as more valued by potential students and their families than spending on support services. However, there is evidence that cutting spending on student support can hurt student attainment, raising questions about how the charter expansion affected student achievement.
We find that charter sector expansion has small positive effects on student achievement, though the effects are not consistently statistically significant. An increase of 5 percentage points (from 10 percent to 15 percent) in charter school attendance raises non-charter student test scores by 0.03 standard deviations in math and 0.02 in ELA, a modest improvement. These effects are consistent with previous studies showing that charter growth has a limited impact on student achievement in traditional public schools.
Because the Massachusetts reimbursement funding scheme is only temporary, a natural question is what happens after the end of the reimbursement period. We used charter school openings prior to 2011 to analyze charter expansion's long-term, post-reimbursement effects. In the longer run, and especially after the reimbursement period ends (i.e., when per-pupil revenue returns to its pre-expansion level), we find that charter expansion's positive effects on both expenditures and achievement tend to disappear (though without turning negative). Our results also suggest that the positive effects on achievement are largest five to six years after charters expand. These findings are consistent with research suggesting it takes several years for increased spending to affect achievement. The reimbursement scheme seems to insulate districts from the short-term fiscal shock of charter sector expansion, letting them adjust over time and avoid any negative effects.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9615380764007568,
"language": "en",
"url": "https://www.cardiosmart.org/news/2020/5/extra-tax-on-sugary-drinks-cuts-consumer-purchases",
"token_count": 654,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2451171875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ab67113f-4af1-46f3-8e4e-950a29e1993d>"
}
|
By CardioSmart News
Drinking too many sugary drinks has been linked to a number of health problems, including weight gain, type 2 diabetes, heart disease, tooth decay and more. These harmful effects have led to efforts to curb consumers’ intake of these sugar-loaded beverages, which are full of calories and have little nutritional value.
A recent study, published in the Annals of Internal Medicine, showed that increasing the price of soda and other sugar-sweetened beverages by adding a tax—in this case, 1 cent per ounce applied at the point of sale—led to substantially fewer drinks being sold in the greater Chicago area.
Researchers at the University of Illinois at Chicago School of Public Health aimed to measure the impact of a tax in effect from Aug. 6 to Nov. 25 in 2017 in Cook County, Illinois. The soda tax was intended to improve public health by reducing the purchase and, therefore, consumption of sugary drinks and raising funds for the county, according to the researchers. But the controversial tax lasted only four months and was repealed.
To study purchase patterns and behaviors, the research team used data from store scanners to track the number of beverages sold in Cook County, and within its 2-mile border area, in supermarket, grocery, convenience, and other stores before and after the tax was in place. They also compared this data to purchases made during the same time periods in St. Louis County, Mo., where there was no sweetened beverage tax.
The tax effectively reduced consumption of sugar sweetened beverages, researchers found. These drinks are known to contribute to many chronic health conditions. The data showed that, on average, sales of taxed sweetened beverages in Cook County decreased by 462,155 ounces and a net 21%. There was no significant increase in the untaxed beverage sales. Sales of soda dropped the most and those of energy drinks the least. The largest impact was seen on the sale of cases and liters of soda, which carried the greatest tax burden. Family-size soda sales fell by 34%, whereas individual-size soda dropped only 10%.
One unintended consequence of the local tax was an increase in purchases across state lines. But these purchases were limited to the taxed sweetened beverages, not for untaxed beverages, which the researchers said reinforces the idea that changes in buying patterns were to avoid the added sweetened beverage tax. Without accounting for the increase in cross-border shopping, the volume of taxed sugary drinks decreased by 27% relative to purchases in St. Louis as the comparison site while the added tax was in place.
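The comparison with St. Louis is a difference-in-differences design. The volumes below are invented purely to show the mechanics (the article reports the resulting estimates, not the raw sales figures):

```python
# Difference-in-differences mechanics with made-up volumes (millions of oz).
# Cook County (taxed) vs. St. Louis County (untaxed comparison site).
cook_before, cook_after = 100.0, 76.0    # hypothetical taxed-county sales
stl_before, stl_after = 100.0, 103.0     # hypothetical: no tax, slight growth

# Each county's own change over time:
cook_change = (cook_after - cook_before) / cook_before   # -24%
stl_change = (stl_after - stl_before) / stl_before       # +3%

# The DiD estimate attributes the *difference* in changes to the tax:
did_effect = cook_change - stl_change
print(f"Estimated tax effect: {did_effect:.0%}")          # -> -27%
```

Subtracting the comparison county's change nets out background trends (seasonality, regional demand shifts) that would otherwise be wrongly attributed to the tax.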
The study is limited by its short duration. It also included only ready-to-drink beverages and not powdered drink mixes, frozen juices, fountain drinks or energy shots.
Still, like taxes levied on cigarettes, this is one strategy being studied to try to limit the consumption of sugary drinks.
For more information, about heart-healthy eating, go to CardioSmart.org/EatBetter.
Read the original article: “The Impact of a Sweetened Beverage Tax on Beverage Volume Sold in Cook County, Illinois, and Its Border Area,” Annals of Internal Medicine, March 17, 2020.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9626951813697815,
"language": "en",
"url": "https://www.dovly.com/post/what-is-a-credit-report",
"token_count": 664,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.04296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a303b278-3474-4c08-9f83-2915bd926097>"
}
|
If you don’t know much about credit, you may be wondering, “What is a credit report?” A credit report is a record prepared by a credit bureau that details the way you’ve handled borrowed money. It provides information about you that can be used by potential future lenders to decide if they want to loan you money. It shows how much debt you have, whether you pay your bills on time and how long you’ve been managing borrowed money.
If you’ve had any major problems managing your money, this will appear on your credit report. Potential lenders can tell immediately if you’ve ever been more than 30 days late paying a bill. They can also tell whether an account has ever gone to collections or if you’ve declared bankruptcy.
Main Credit Bureaus
Most potential lenders rely on the information provided by the three main credit bureaus, also known as credit reporting agencies. The three main credit bureaus are TransUnion, Equifax, and Experian. These companies collect information about your creditworthiness. These reports are usually similar but aren’t identical. Some of your creditors may choose to report to only one of the credit bureaus rather than reporting to all three.
There are smaller credit reporting agencies, but almost all potential lenders rely on the information compiled by the three main agencies.
What’s on a Credit Report?
Your credit report includes personal identifying information such as your name, address, birth date, phone numbers, and social security number. It includes a list of your credit accounts, which can be open or closed, as well as what type of credit it is, such as a credit card, mortgage, or personal loan.
Public record items are included, such as bankruptcies, foreclosures, judgments, or liens. It also shows how many inquiries there have been from people considering extending credit to you.
All this information is compiled and used to determine your credit score. This score is used by potential creditors to decide whether they believe you’d pay back money to them if they extended credit to you.
Making Sure Your Credit Report is Accurate
The information on your credit report may be used not only by potential creditors but also by potential landlords, employers, or insurance providers. Since this information can be used in so many different ways, it’s important to make sure there isn’t any incorrect information on your credit reports. Surprisingly, as many as two out of every three people have found an error on their credit report.
Consumers are entitled to a free credit report annually. If you find an error on your credit report, it’s important that you dispute it immediately. Errors may include clerical errors by your creditors such as reporting an incorrect balance, reporting an account as open that’s paid off, or reporting the same account twice. You can dispute an error directly with the credit bureau or get the help of a credit repair agency.
Dovly’s automated credit repair engine can make the process of finding and disputing errors easy. Get in touch with us today to find out how Dovly can help you fix any inaccuracies you find on your credit report.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9487394690513611,
"language": "en",
"url": "https://www.insuranceopedia.com/definition/1551/depreciation-insurance",
"token_count": 240,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.04150390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:09927257-7036-4b0b-88f6-305ee6414bab>"
}
|
Definition - What does Depreciation Insurance mean?
Depreciation insurance, or zero depreciation coverage, is a provision in a property insurance policy that covers the actual value of the property prior to the loss of value it experiences over time. It overlooks the diminished value of the property due to depreciation of its market cost, damage, or wear and tear.
Insuranceopedia explains Depreciation Insurance
Most properties lose their value over time. A used car is not as valuable as a new car. Real estate prices might plunge, making a property less valuable.
If your policy includes depreciation coverage, there is no loss of value or deduction because of these factors. The policy will pay out an amount equal to what the insured has paid to purchase the property.
The downside to this coverage is that, because the payouts and the risk to the insurance company are higher, so are the premiums.
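A small numeric sketch makes the difference in payouts clear. Straight-line depreciation is assumed here as a common illustration; actual policies may use other depreciation schedules, and the figures are invented:

```python
def actual_cash_value(purchase_price, useful_life_years, age_years):
    """Standard (depreciated) payout: the item's value minus
    straight-line depreciation for its age, floored at zero."""
    annual_depreciation = purchase_price / useful_life_years
    return max(purchase_price - annual_depreciation * age_years, 0.0)

def zero_depreciation_payout(purchase_price):
    """Zero-depreciation coverage ignores age and pays the full value."""
    return purchase_price

# Illustrative: a $20,000 item with a 10-year useful life, 4 years old.
print(actual_cash_value(20_000, 10, 4))     # -> 12000.0 under a standard policy
print(zero_depreciation_payout(20_000))     # -> 20000 with depreciation insurance
```

The $8,000 gap in this example is exactly the extra risk the insurer takes on, which is why zero-depreciation coverage carries higher premiums.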
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9768024682998657,
"language": "en",
"url": "https://www.startup-book.com/2015/08/13/elon-musk-the-new-steve-jobs-insane-or-genius-part-3/",
"token_count": 1972,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.216796875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:62400c26-2968-404a-9dd0-93fbfe054279>"
}
|
How does innovation work?
It’s really a fascinating book, and Elon Musk is a fascinating figure too – a unique and tough character, and obviously a much-criticized and even hated one. One such harsh critique comes from the MIT Technology Review: Tech’s Enduring Great-Man Myth by Amanda Schaffer. You should read it. I just extract two sentences:
– “To put it another way, do we really think that if Jobs and Musk had never come along, there would have been no smartphone revolution, no surge of interest in electric vehicles?” Well, this is a critical question about the source of innovation. Society or individuals. The question is relevant for science too.
– “It’s precisely because we admire Musk and think his contributions are important that we need to get real about where his success actually comes from.” This is a quote from Mariana Mazzucato, whom I have often quoted here. Her book The Entrepreneurial State is a must-read. It deals with the role of government in innovation. My stronger and stronger belief over the years is that government makes things possible (science, technology and invention, innovation), but without exceptional individuals – often geniuses, sometimes close to the border of insanity – I am not sure so much happens.
Now let me quote more of Ashlee Vance, because the final chapters are as great as the first ones. These quotes show that despite the important role of government, it is not sufficient to explain how innovation works.
As Tesla turned into a star in modern American industry, its closest rivals were obliterated. Fisker Automotive filed for bankruptcy and was bought by a Chinese auto parts company in 2014. One of its main investors was Ray Lane, a venture capitalist at Kleiner Perkins Caufield & Byers. Lane had cost Kleiner Perkins a chance to invest in Tesla and then backed Fisker – a disastrous move that tarnished the firm’s brand and Lane’s reputation. Better Place was another start-up that enjoyed more hype than Fisker and Tesla put together and raised close to $1 billion to build electric cars and battery-swapping stations. The company never produced much of anything and declared bankruptcy in 2013.
The guys like Straubel who had been at Tesla since the beginning are quick to remind people that the chance to build an awesome electric car had been there all along. “It’s not really like there was a rush to this idea, and we got there first,” Straubel said. “It’s frequently forgotten in hindsight that people thought this was the shittiest business opportunity on the planet. The venture capitalists were all running for the hills.” What separated Tesla from the competition was the willingness to charge after its vision without compromise, a complete commitment to execute to Musk’s standards.
During the entire period of SolarCity’s growth, Silicon Valley had dumped huge amounts of money into green technology companies with mostly disastrous results. There was the automotive flubs like Fisker and Better Place, and Solyndra, the solar cell maker that conservatives loved to hold up as a cautionary tale of government spending and cronyism run amok. Some of the most famous venture capitalists in history, like John Doerr and Vinod Khosla, were ripped apart by the local and national press for their failed green investments. The story was almost always the same. People had thrown money at green technology because it seemed like the right thing to do, not because it made business sense. From new kinds of energy storage systems to electric cars and solar panels, the technology never quite lived up to its billing and required too much government funding and too many incentives to create a viable market. Much of this criticism was fair. It’s just that there was this Elon Musk guy hanging around who seemed to have figured something out that everyone else had missed. “We had a blanket rule against investing in clean-tech companies for about a decade,” said Peter Thiel, the PayPal cofounder and venture capitalist and Founders Fund. “On the macro level, we were right because clean tech as a sector was quite bad. But on the micro level, it looks like Elon has the two most successful clean-tech companies in the US. We would rather explain his success as being a fluke. There’s the whole Iron Man thing in which he’s presented as a cartoonish businessman – this very unusual animal at the zoo. But there is now a degree to which you have to ask whether his success is an indictment on the rest of us who have been working on much more incremental things. To the extent that the world still doubts Elon, I think it’s a reflection on the insanity of the world and not on the supposed insanity of Elon.” [Pages 320-21]
Tony Fadell about Musk
Tony Fadell, the former Apple executive credited with bringing the iPod and iPhone to market, has characterized the smartphone as representative of a type of super-cycle in which hardware and software have reached a critical point of maturity. Electronics are good and cheap, while software is more reliable and sophisticated. […] Google has its self-driving cars and has acquired dozens of robotics companies as it looks to merge code and machine. […] And a host of start-ups have begun infusing medical devices with powerful software to help people monitor and analyze their bodies and diagnose conditions. […] Zee Aero, a start-up in Mountain View, has a couple of former SpaceX staffers on hand and is working on a secretive new type of transport. A flying car at last? Perhaps. […] For Fadell, Musk’s work sits at the highest end of this trend. “Whether it’s Tesla or SpaceX, you are talking about combining the old-world science of manufacturing with low-cost, consumer-grade technology. You put these things together, and they morph into something we have never seen before. All of a sudden there is a wholesale change. It’s a step function.” [Pages 351-52] Doesn’t this remind you of Zero to One by Peter Thiel?
Larry Page about Musk
Google has invested more than just about any other technology company in Musk’s sort of moon-shot projects: self-driving cars, robots, and even a cash prize to get a machine onto the moon cheaply. The company, however, operates under a set of constraints and expectations that come with employing tens of thousands of people and being analyzed constantly by investors. It’s with this in mind that Page sometimes feels a bit envious of Musk, who has managed to make radical ideas the basis of his companies. “If you think about Silicon Valley or corporate leaders in general, they’re not usually lacking in money,” Page said. “If you have all this money, which presumably you’re going to give away and couldn’t even spend it all if you wanted to, why then are you devoting your time to a company that’s not really doing anything good? That’s why I find Elon to be an inspiring example. He said, ‘Well, what should I really do in this world? Solve cars, global warming, and make humans multiplanetary.’ I mean those are pretty compelling goals, and now he has businesses to do that.” [Page 353]
Larry Page about education
This is a very interesting piece, not linked to Musk: “I don’t think we’re doing a good job as a society deciding what things are really important to do.” Page said. “I think like we’re just not educating people in this kind of general way. You should have a pretty broad engineering and scientific background. You have some leadership training and a bit of MBA training or knowledge of how to run things, organize stuff, and raise money. I don’t think most people are doing that, and it’s a big problem. Engineers are usually trained in a very fixed area. When you’re able to think about all of these disciplines together, you kind of think differently and can dream of much crazier things and how they might work. I think that’s really an important thing for the world. That’s how we make progress.” [Pages 355-56]
Some final words about Musk
It’s funny in a way that Musk spends so much time talking about man’s survival but isn’t willing to address the consequences of what his lifestyle does to his body. “Elon came to the conclusion early in his career that life is short,” Straubel said. “If you really embrace this, it leaves you with the obvious conclusion that you should be working as hard as you can”. Suffering though has always been Musk’s thing. The kids at school tortured him. His father played brutal mind games. Musk then abused himself by working inhumane hours and forever pushing his businesses to the edge. The idea of work-life balance seems meaningless in this context. […] He feels that the suffering helped to make him who he is and gave him extra reserves of strength and will. [Page 356]
As Thiel said, Musk may well have gone so far as to give people hope and to have renewed their faith in what technology can do for mankind. [Page 356]
Page 1 – MCQs
- Question (TCO 5) When it comes to electing officials, which factor matters the most to voters in both presidential and parliamentary elections?
- Question (TCO 5) Who receives the most attention in both parliamentary and presidential systems?
- Question (TCO 5) Describe how the United States expands its cabinet.
- Question (TCO 7) Radicals use the term political economy instead of _____ to describe their critique of capitalism and the inequitable distribution of wealth among nations.
- Question (TCO 7) How do Keynesian economic policies differ from the traditional laissez-faire policies developed by Adam Smith?
- Question (TCO 7) What event is largely considered responsible for deterring Johnson’s War on Poverty?
- Question (TCO 7) Which of the following is an increasing financial concern of the Medicare program?
- Question (TCO 7) Why are many politicians wary about limiting Social Security and Medicare expenses?
- Question (TCO 7) How does the American welfare state compare to those of other industrialized nations?
- Question (TCO 7) Theoretically, what are the consequences if the government assumes the burden of bad loans?
- Question (TCO 9) What is the most common response to serious domestic unrest?
- Question (TCO 9) Riots triggered by police beating youths, protests against globalization, and labor strikes against austerity are all examples of _____.
Page 2 – Essays
- Question (TCO 2) What types of states are most likely to become authoritarian? Why? Along the same lines, what authoritarian states have been most likely to democratize? Under what circumstances does this democratization occur and why? Based on previous findings, describe one country you think is likely to democratize in the near future.
- Question (TCO 3) Compare and contrast interest groups and political parties. In your response, be sure to provide examples of their similarities and differences. In addition, please assess what advantages interest groups offer that political parties don’t, and what advantages political parties offer that interest groups don’t.
- Question (TCO 6) Since the end of WWII, international relations have been framed by the conflict between liberal governments and communist ideals. Compare and contrast the features of these systems and assess their continued impact on the global community. Please be certain to explain classical and modern liberalism, socialism, and communism within your responses and provide examples to support your points.
- Question (TCO 8) Today’s world seems to be moving beyond sovereignty and toward supranational leadership to cooperate on issues of global importance. What are some of these issues? How might they be solved through supranational cooperation? Does such cooperation impede the sovereignty of independent nations? Please be sure to include specific examples in supporting your points.
Insurance minimizes risk to property or life. Coverage depends on the policy the company offers and the option the insured selects; as coverage increases, the premium increases as well. At present, most states make car insurance compulsory. Below are the different types of insurance available for cars.
- Liability insurance: If you cause an accident and the police determine you were at fault, you are responsible for the resulting losses. Liability insurance protects you in such cases by covering the other person’s injuries and the damaged property, so you do not have to pay out of pocket.
- Collision insurance: If your car is damaged in an accident, the insurance company pays for it. If the car is a total loss, the company pays its current market value rather than the original purchase price. For example, if the car has been in use for fifteen months, the insurer calculates its present market value and pays that amount.
- Comprehensive insurance: Liability and collision insurance cover the risk only when the car is in an accident. In cases of theft, animal collisions, or other damage, liability and collision insurance do not apply; comprehensive insurance does. If your car is fitted with an anti-theft and tracking system, the premium will usually come down.
- Uninsured motorist protection: If you have an accident with an uninsured motorist, uninsured motorist protection covers your medical and vehicle expenses. If the other motorist is insured but the coverage is insufficient to meet the expenses, this policy covers the remainder. If the other motorist’s coverage fully meets the expenses, uninsured motorist protection pays nothing.
- Medical and personal injury protection: If you are injured in an accident, medical expenses must be paid. A medical and personal injury protection policy covers those expenses, for you as well as your passengers, regardless of whose fault the accident was.
- No-fault insurance: No-fault insurance covers injuries and property damage without regard to who caused the accident. It is more expensive than other insurance policies.
- Gap insurance: Gap insurance covers the gap between the car’s market value and the amount still owed to the financing company. For example, suppose a car’s market value is $15,000 but the outstanding loan is $20,000. If the owner loses the car through theft, accident, or any other cause, the insurer compensates $15,000, yet $20,000 is still owed to the financing company. The remaining $5,000 is covered under gap insurance.
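The gap-insurance arithmetic in the example above can be sketched as a short calculation (the function name is illustrative only; the figures come from the example):

```python
def gap_coverage(market_value: float, loan_balance: float) -> float:
    """Amount gap insurance pays: the shortfall between the insurer's
    payout (the car's market value) and what is still owed to the lender."""
    return max(loan_balance - market_value, 0.0)

# Figures from the example: $15,000 market value, $20,000 loan balance.
print(gap_coverage(15_000, 20_000))  # 5000.0
```

If the market value exceeds the loan balance, there is no gap, so the policy pays nothing.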
Consider all of the above policy types before taking out any car insurance policy. It will help you make a good decision.
The consumption of processed foods is increasing in developed countries, and in developing countries like Nepal too. Nepalis’ food habits are also changing over time, so mushroom farming has become one livelihood option for farmers. After investing heavily in the infrastructure of mushroom farming, farmers try to recover their return on investment (ROI) within the minimum time possible and reach the break-even point.
There are about 69,000 species of mushrooms globally, while only 2,000 species from 30 genera are edible mushrooms that are commercially cultivated (Niazi, 2015). The global mushroom market reached USD 29,427.92 million in 2013 and is expected to reach USD 50,034.12 million in 2019 (Anonymous, 2015).
The Nepalese mushroom market has very few post-harvest handling operations. Mushrooms, being a high-moisture food, are prone to different types of contamination (Ajayi et al., 2015). Generally, the moisture content of mushrooms is above 70% (Whole_food_catlog, 2015). With such a high moisture content, they take up a lot of space and are highly perishable. A simple technology of packaging mushrooms in low-density polyethylene (LDPE) has been introduced but cannot preserve the mushrooms long enough, because LDPE is a poor moisture barrier.
Drying with modified packaging can be one of the best solutions to the problem. Drying mushrooms to a low moisture content of 12 to 15% enables them to be packed in as small a space as possible: one kilogram of fresh mushrooms can be dried to 220 grams or even less. Packaging also plays a vital role in marketing the dried mushrooms, since they can be packed in multilayer packaging film with attractive printing. The packaging thus attracts consumers and earns a value-added price for the dried mushrooms with very little effort.
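The shrinkage quoted above follows from a simple dry-solids balance: drying removes only water, so the solids are conserved. A minimal sketch, assuming an initial moisture content of about 81% (an assumption consistent with the "above 70%" figure cited) and a 13.5% target:

```python
def dried_mass(fresh_mass_kg: float, initial_moisture: float,
               final_moisture: float) -> float:
    """Mass after drying, from a dry-solids balance: only water leaves."""
    dry_solids = fresh_mass_kg * (1.0 - initial_moisture)
    return dry_solids / (1.0 - final_moisture)

# 1 kg of fresh mushrooms at ~81% moisture dried to 13.5% moisture
# shrinks to roughly the 220 g quoted above.
print(round(dried_mass(1.0, 0.81, 0.135), 3))
```

The same balance also shows why a lower initial moisture (e.g. exactly 70%) would leave a heavier dried product, so the exact shrinkage depends on the harvested crop.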
The project will be located in Pokhara Sub-Metropolitan City, where we have direct contact with mushroom farmers. As the city receives the country’s highest rainfall, the climate is very humid and favourable for mushroom cultivation, and we have personal contacts with the farmers.
Since, in Nepalese department stores we can find the Japanese dehydrated mushroom, an idea came to mind: why not enable Nepalese mushroom entrepreneurs to adapt this technology and sell their dehydrated mushroom regionally, at first, nationally afterwards, and then internationally?
Adopting these technologies requires a little more capital but yields a high return on investment (ROI). The solar dryers will be constructed near the farming zone, where the harvested mushrooms are left to dry. After the mushrooms reach a critical moisture content, they will be packed in the horizontal fill-and-seal machine, giving a shelf-life of at least three to six months or more. Such a long shelf-life enables farmers to market their product over a longer period.
Having expertise in post-harvest agriculture (among the YPARD members), I thought to apply for it. The proposed budget is:
- Culture purchase USD 20 per batch (maximum of 20 batches)
- LDPE for the mushroom harvest USD 50
- Bamboo for mushroom harvest USD 100
- Multilayer packaging films USD 500
- Semi-mechanized solar dryer USD 1,000 (4 ft x 10 ft or 8 ft x 5 ft)
- Potassium metabisulphite (preservative agent) USD 200
- Nitrogen flushing packaging machine USD 500
- Land lease USD 400 per year
- Labour cost USD 500
- Marketing and promotion USD 750
The project’s success will be determined, first, by the break-even point (the point of neither profit nor loss), which depends on its fixed costs, variable costs, and sales volumes. Once the project passes break-even, it becomes truly profitable, and its success will be judged accordingly. Once the project has crossed break-even, it is self-sustaining.
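The break-even logic above is the standard fixed-cost/contribution-margin calculation. A hedged sketch with illustrative numbers (the per-pack price and variable cost are assumptions, not figures from the proposal’s budget):

```python
def break_even_units(fixed_cost: float, price_per_unit: float,
                     variable_cost_per_unit: float) -> float:
    """Units that must be sold so revenue exactly covers all costs."""
    margin = price_per_unit - variable_cost_per_unit
    if margin <= 0:
        raise ValueError("price must exceed variable cost per unit")
    return fixed_cost / margin

# Example: USD 1,900 of fixed costs, packs sold at USD 5.00 each with
# USD 3.00 of variable cost per pack -> 950 packs to break even.
print(break_even_units(1_900, 5.00, 3.00))  # 950.0
```

Beyond that volume, each additional pack sold contributes its full margin as profit, which is what makes the project self-sustaining.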
Another aspect of success of the project will be determined by the quality parameters. The proximate and some ultimate analysis of the dehydrated mushroom will be analysed in the Pokhara Bigyan Tatha Prabidhi Campus as well as some in DFTQC. The packaged mushroom follow Nepalese Food Law (1966), and shows net weight, gross weight, manufacturing weight, expiry date, proximate composition, manufacturers address, price, and batch number so a complete traceable system could be made.
I am a microbiologist as well as an agricultural engineer. I am a member of YPARD and have Advanced Training in Agricultural Engineering (ATAE) from the Indian Institute of Technology, Kharagpur, specializing in packaging materials and shelf-life elongation.
Ajayi, O., Obadina, A., Idowu, M., Adegunwa, M., Kajihausa, O., Sanni, L., Asagbra, Y., Ashiru, B. and Tomlins, K. (2015). Effect of packaging materials on the chemical composition and microbiological quality of edible mushroom (Pleurotus ostreatus) grown on cassava peels. Food Sci. Nutr. 3 (4), 284-291.
Anonymous. (2015). Mushroom Market by Type, by Application , & by Region – Global Trends & Forecast to 2019. Research and Market. Retrieved from http://www.researchandmarkets.com/reports/3070244/mushroom-market-by-type-by-application-and-by#description. [Accessed 1 March, 2015].
Niazi, A. R. (2015). World production of edible mushroom and edible mushrooms of Pakistan. Retrieved from http://www.slideshare.net/jannatiftikhar/world-production-of-edible-mushrooms-and-edible-mushrooms-of-pakistan. [Accessed 1 March, 2016].
Whole_food_catlog. (2015). Water content of mushroom. Retrieved from http://wholefoodcatalog.info/nutrient/water/mushrooms/. [Accessed 1 March, 2016].
Blogpost and picture submitted by Er. Animesh Khadka (Nepal): khadka.animesh[at]gmail.com
The content, structure and grammar are at the discretion of the author only.
This post is published as proposal #113 of “YAP” – our “Youth Agripreneur Project”.
The first selection of the winners will be based on the number of comments, likes and views each proposal gets.
As a reader, you can support this speaker’s entry:
- Leave a comment (question, suggestion,..) on this project in the comment field at the bottom of this page
- Support the post by clicking the “Like” button below (only possible for those with a WordPress.com account)
- Spread this post via your social media channels, using the hashtag: #GCARD3
Have a look at the other “YAP” proposals too!
As a donor, support young agripreneurs and sponsor this unique project.
Check out the side column for our current sponsors. “YAP” is part of the #GCARD3 process, the third Global Conference on Agricultural Research for Development.
Bitcoin, the first decentralized cryptocurrency, was created in 2009 by Satoshi Nakamoto, the pseudonym of its mysterious founder (or founders). Transactions are recorded on a blockchain, a public ledger that displays the transaction history of each unit and can be used to prove ownership.
Buying a cryptocurrency is different from buying a stock or bond, since bitcoin is not a company. As a result, there are no corporate balance sheets or Form 10-Ks to analyze. And unlike investing in traditional currencies, bitcoin is not issued by a central bank or backed by a government, so the monetary policy, inflation rates, and economic growth measurements that usually influence a currency’s value do not apply to bitcoin. Instead, bitcoin prices are influenced by the following factors:
Supply and Demand
Countries with fiat currencies can partially control how much of their currency circulates by adjusting the discount rate, changing reserve requirements, or engaging in open-market operations. With these options, a central bank can potentially influence a currency’s exchange rate. The supply of bitcoin is affected in two different ways. First, the bitcoin protocol allows new bitcoins to be created at a fixed rate: new bitcoins are introduced into the market when miners process blocks of transactions, and the rate at which new coins are introduced is designed to slow over time. Case in point: growth of the supply slowed from 6.9% (2016) to 4.4% (2017) to 4.0% (2018). Second, supply is ultimately bounded by the number of bitcoins the protocol allows to exist, which is capped at 21 million.
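The slowing issuance described above is enforced by bitcoin’s halving schedule: the per-block subsidy starts at 50 coins and is cut in half every 210,000 blocks. A minimal sketch:

```python
def block_subsidy(height: int, initial_subsidy: float = 50.0,
                  halving_interval: int = 210_000) -> float:
    """New bitcoins minted per block at a given block height."""
    return initial_subsidy / (2 ** (height // halving_interval))

# Subsidy at the start of each of the first four reward eras:
for h in (0, 210_000, 420_000, 630_000):
    print(h, block_subsidy(h))  # 50.0, 25.0, 12.5, 6.25
```

Because the subsidy halves forever, the total ever issued converges toward the 21 million cap rather than growing without bound.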
Though bitcoin may be the best-known cryptocurrency, hundreds of other tokens are competing for users’ attention. While bitcoin remains the dominant option by market capitalization, altcoins such as Ethereum (ETH), Bitcoin Cash (BCH), Litecoin (LTC), and EOS are among its closest competitors as of April 2020. Moreover, new initial coin offerings (ICOs) are constantly on the horizon, thanks to relatively few regulatory barriers to entry. The crowded field is good news for investors, because widespread competition keeps prices down. Fortunately for bitcoin, its high visibility gives it an edge over its competitors.
Costs Of Production
Though bitcoins are virtual, they are nonetheless produced goods and incur a real cost of production, with electricity consumption being by far the most important factor. Bitcoin “mining,” as it is known, relies on a difficult cryptographic math problem that miners compete to solve first; the winner is rewarded with a block of newly minted bitcoins plus any transaction fees accumulated since the last block was found. What is unique about bitcoin production is that, unlike other produced goods, the protocol allows only one block of bitcoins to be created, on average, every ten minutes.
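The mining competition described above can be illustrated with a toy proof-of-work loop. This is a deliberately simplified sketch, not bitcoin’s actual double-SHA-256 block-header scheme, but it shows why a harder target (a longer required prefix) means exponentially more work on average:

```python
import hashlib

def mine(data: bytes, difficulty_prefix: str = "00") -> tuple[int, str]:
    """Try successive nonces until sha256(data + nonce) starts with
    the required hex prefix; return the winning nonce and its digest."""
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(difficulty_prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine(b"block of transactions")
print(nonce, digest[:12])
```

Anyone can verify the winner’s work with a single hash, which is what makes the lottery cheap to check but expensive to win.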
Money Exchange Availability
Just as equity investors trade stocks over indexes such as the NYSE, Nasdaq, and FTSE, cryptocurrency investors trade coins over exchanges such as Coinbase and others. Like traditional currency exchanges, these platforms let investors trade cryptocurrency pairs such as BTC/USD (bitcoin/US dollar). The more popular an exchange becomes, the more easily it attracts additional participants, creating a network effect. And by capitalizing on its market clout, it can set rules governing the addition of other currencies. For example, the Simple Agreement for Future Tokens (SAFT) framework seeks to define how ICOs can comply with securities regulations.
Regulations and Legal Issues
The rapid rise in cryptocurrencies’ popularity has caused regulators to debate how to classify such digital assets. While the Securities and Exchange Commission (SEC) classifies cryptocurrencies as securities, the U.S. Commodity Futures Trading Commission (CFTC) considers bitcoin to be a commodity. This confusion over which regulator will set the rules for cryptocurrencies has created uncertainty, despite surging market capitalizations. Furthermore, the market has seen the rollout of many financial products that use bitcoin as an underlying asset, such as ETFs, futures, and other derivatives.
Are You Supposed To Invest In Bitcoin?
Many compare the rapid rise of cryptocurrencies to past speculative bubbles, such as the tulip mania that gripped the Netherlands in the 1600s. While regulation is generally necessary to protect investors, it will likely take time before the global influence of digital tokens is fully felt.
Below the Line Costs Law and Legal Definition
Below-the-line costs are a part of the production expenses of a picture. They account for the technical expenses and labor costs incurred on set construction, crew, camera equipment, film stock, and the developing and printing of a film. Below-the-line costs are usually fixed, as compared to above-the-line costs, which are variable. Remuneration for non-starring cast members and the technical crew is an example of below-the-line costs. This part of the budget also accounts for use of the studio and its technical equipment, travel, and location costs.
By Mario A. Rosato
Like any other technology, anaerobic digestion is good in itself but unfortunately EU policies and some local lobbies have in many cases created a fertile soil for speculation to grow about its potential. Beyond this political polemic and ideological dogma, experience has proven that only biogas plants designed and managed with rational criteria can help the environment, the community and the economy at the same time. The following decalogue (ten commandments) may be of help to anybody wishing to invest in a biogas plant and obtain a sustainable return on the investment:
- Design your biogas plant to run on waste biomass
- Energy crops are complementary feedstocks, not fuels
- Make a survey of the feedstocks available for free in your territory before designing your biogas plant
- Always check the seasonal variations of biomass availability before sizing your digester
- Do not blindly trust the tables of biogas yields published in the literature
- Thermal use of biogas should have priority over electricity generation
- Electricity generation is acceptable only if the waste heat can be employed for any useful purpose other than just heating the digester
- Design your plant for energy self-sufficiency at family or community scale
- Dimension the plant proportionally to the area of arable land and number of animals in the farm
- Avoid “branded” plants, employ local contractors and take full control of your project
The first commandment is the key to sustainability: converting the waste biomass into useful methane under controlled conditions is the best way to tackle global warming. Conversely, as far as there is already available organic waste in a given territory, it makes no sense utilising arable land for energy crops.
When bank clerks and capital investment groups evaluate a biogas plant they usually assume that any feedstock can be bought anywhere, in any desired quantity, at any time and at the same price for the next 20 years. None of these suppositions is true so many projects turn out to be complete failures, from both ecological and economic perspectives.
Another dogma that proves deleterious for the sustainability of a biogas project is the assumption that the biochemical methane potential (BMP) values published in the scientific literature for different substrates are absolute physical constants for plant design. Organic matter is heterogeneous by nature, and the net methane yield of a given biomass depends on a long list of uncontrollable factors (soil, rainfall, species or variety cultivated, sunlight during the growth of the plant, how it was harvested and ensiled …). Frequently, the use of a certain biomass is artificially pushed by some company having a particular vested interest in selling the seeds, or simply because the plant designers are familiar with that type of biomass. A case in point is provided by corn, a plant that seldom yields more than 310 Nm3 CH4/ton of volatile solids (VS), while other substrates can do better: clover, for instance, can yield 319 Nm3 CH4/ton VS and common grass silage 318 Nm3 CH4/ton VS. Nevertheless, the main biogas plant builders (mostly German) defend corn as the only substrate valid for keeping the plant running smoothly, on the basis of their own (national) vision and experience, which is not necessarily applicable to other geographical contexts.
Electricity generation from biogas is another distortion introduced by the motor manufacturer and public utilities lobbies in Central Europe. Electricity cannot be (easily) stored and so electricity generation by biogas implies, in over 80% of the cases, that 60% of the energy value of the methane is just dissipated to the atmosphere as residual heat. Purified biogas, called biomethane, can fully replace natural gas and petrol for domestic heating, cooking, sanitary water production and as a car fuel. The combustion of biomethane is cleaner than many other petroleum-derived fuels, its carbon emissions are neutral, and it can be easily and safely stored in low pressure balloons or mid-pressure steel cylinders.
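The energy penalty of electricity generation described above can be made concrete with a small energy balance. This is an illustrative sketch: the lower heating value of 9.97 kWh/Nm3 for methane is an assumed figure, and the 40% electrical efficiency simply mirrors the ~60% heat loss the text cites.

```python
LHV_CH4_KWH_PER_NM3 = 9.97  # assumed lower heating value of methane
ELECTRICAL_EFFICIENCY = 0.40  # i.e. ~60% of fuel energy lost as heat

def useful_energy_kwh(nm3_methane: float, use: str) -> float:
    """kWh actually recovered from a given volume of biomethane."""
    if use == "thermal":      # boiler/cooker: nearly all energy is usable
        return nm3_methane * LHV_CH4_KWH_PER_NM3
    if use == "electricity":  # engine-generator, waste heat discarded
        return nm3_methane * LHV_CH4_KWH_PER_NM3 * ELECTRICAL_EFFICIENCY
    raise ValueError(f"unknown use: {use}")

print(useful_energy_kwh(100, "thermal"))      # ~997 kWh
print(useful_energy_kwh(100, "electricity"))  # ~399 kWh
```

Under these assumptions, the same 100 Nm3 of biomethane delivers roughly two and a half times more useful energy when burned directly for heat than when converted to electricity with the waste heat discarded, which is the article’s core argument for prioritizing thermal use.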
Another aberration of European renewable energy directives is a lack of consideration of the local scale: for example, if a farmer has 10 hectares of land and just 20 cows, it is not logical to allow him to build a 1 MW electric power biogas plant just because “it is renewable energy”. In this regard, Germany, Spain and Italy are bad examples as there is no legal restriction on the size of the plant in relation to the size of the farm. This situation has resulted in the construction of biogas plants that consume all the crop produced in the farm, leading to the need to import fodder for the cattle, which means importing surplus nitrogen. This imported nitrogen adds to that already contained in the digestates (as anaerobic digestion does not consume nitrogen) so that some places are already showing symptoms of nitrification (underground water with increasing nitrites content and eutrophication of rivers and lakes).
Finally, there is a misconception (encouraged by banks and investment groups) that the bigger the supplier of turnkey biogas technology, the safer the investment will be. In general, experience proves quite the contrary: big specialised biogas plant manufacturers often do not have their own construction workers and employ local contractors to erect the plant, keeping most of the commercial margin while having little or no control over the construction site, materials, and quality of execution.
Anaerobic digestion, if properly planned, built and managed, is a technology that helps reduce the environmental impact of human activities, provides enhanced energy independence to local communities by partly replacing petrol, and constitutes a more natural way to manage the carbon:nitrogen balance of soil, thereby preserving its fertility. Degrading organic waste in a controlled manner helps reduce the propagation of disease and the pollution of underground and surface water bodies. The only ingredient necessary to obtain all of these advantages is a plant designed to be perfectly integrated into local realities.
About Dr. Mario A. Rosato:
Mario A. Rosato, CEO of Sustainable Technologies SL is an electric, electronic and environmental engineer, developer of advanced anaerobic digestion processes, professor of renewable energies at the University of Pordenone (Italy) and scientific journalist specialized in agricultural energy technologies.
As part of its drive to increase the number of sustainability services it provides, BioLogiQ will be starting a collaboration with Dr. Mario A. Rosato.
Why invest in an Eco-Intelligent Circular Economy?
Among the biggest global challenges we are confronted with are access to fresh water, food, energy, and other essential resources necessary to maintain human life and the modern societies we know today, while remaining within the carrying capacity of the Earth. In this blog, I will discuss the merits of investing in Eco-Intelligent Circular Economy, which offers a rare opportunity for creative, tangible solutions to many of these challenges in a way that is accessible, effective and sustainable.
How we got here
Modern society has been made possible by human creativity and the invention of technologies that made it possible to transform relatively accessible and affordable natural resources into high-density energy sources (e.g. coal, petroleum, and gas). This led to the Industrial Revolution and grew into the ability to manipulate and use materials to manufacture products and infrastructure, up to complete mega-cities. This ability has brought significant progress and prosperity to many communities around the globe.
However, this industrialization of society has also created greater inter-dependency and vulnerability to impacts of the economic system. The current system, based on a "take-make-waste" linear model, relies on an ineffective use of the resources provided by nature and leads to unintended consequences, such as waste, pollution, and contamination of the air, water, food and other basic needs of human beings and the environment. Maintaining this linear economic system requires the continued extraction of ever more natural resources, which are increasingly scarce, and continues to generate waste that negatively affects the health of humans as well as the environment.
ASDF argues that many of the major social and environmental problems we are currently facing can be derived from (1) failures in the proper selection of non-toxic primary materials and chemicals, (2) lack of rational use of natural resources, and (3) failures in the design of basic products, up to the level of the current linear economic operating system. There is a realization that the current economic model is the principal cause and at the same time the only sphere where human intervention can lead to improving or solving the global food, water, energy, and materials crises. This can be done by re-thinking and re-designing the current economic model and transforming it into an eco-intelligent Circular Economic model based on the Cradle-to-Cradle® design principles.
Moving towards a Circular Economy
The term Circular Economy is gaining global traction as a means for concerted international action to help create a new industrial model and economic system that is better aligned to the rules of nature, while allowing for human beings to continue to maintain the standards of modern society. Due to its recent emergence as a new terminology, it is still evolving without a formal global consensus on its definition yet. Since the concept incorporates and significantly relies on the design principles of Cradle-to-Cradle®, it recognizes the need to re-think the way we are making products and move towards a circular economic model.
The Ellen MacArthur Foundation describes Circular Economy as one that is “restorative by design, and which aims to keep products, components and materials at their highest utility and value, at all times”. They highlight five principles to realize a Circular Economy: “(1) Circular economy is a global economic model that decouples economic growth and development from the consumption of finite resources; (2) Distinguishes between and separates technical and biological materials, keeping them at their highest value at all times; (3) Focuses on effective design and use of materials to optimize their flow and maintain or increase technical and natural resource stocks; (4) Provides new opportunities for innovation across fields such as product design, service and business models, food, farming, biological feedstocks and products; and (5) Establishes a framework and building blocks for a resilient system able to work in the longer term” (Ellen MacArthur Foundation, 2013). William McDonough defines “Circular Economy” as “a resourceful economic system and innovation engine, providing clean materials, energy, water and human ingenuity. In essence, the Circular Economy puts the “re” back in resources” (MBDC, 2015).
ASDF acknowledges Cradle-to-Cradle® design principles as the foundation of a so-called Circular Economy and presents the need for using the specific terms "Eco-Intelligent" and "Circular" combined, since "Circular Economy" by itself does not guarantee that, once you have figured out how to close the loop of materials and resource use in the economic model by design, this is done respecting the limits, boundaries, and regenerative capacity of the Earth's ecosystem processes.
As an example of this distinction, consider the rate of water extraction and use from aquifers for the manufacturing of products that are compatible with the Circular Economy concept. This approach acknowledges that the ever-increasing pace and need for more products by a continuously growing global population may turn out to be very difficult to balance with nature's rate and capacity to regenerate the aquifer with fresh water. That capacity is highly dependent on climate conditions and other natural phenomena, and may prove significantly slower.
Cradle-to-Cradle® is a registered trademark of MBDC and is an innovation platform for designing beneficial economic, social, and environmental products, processes and systems based on in-depth scientific analysis and assessment. Cradle-to-Cradle® design is characterized by three principles derived from nature: (1) Everything is a resource for something else, (2) Use clean energy, and (3) Celebrate diversity.
Cradle-to-Cradle® thinking includes the recognition that in nature, the "waste" of one system becomes food for another. Everything can be designed to either be safely returned to the soil as "biological nutrients", or collected after use, disassembled and re-utilized as high-quality materials for new products as "technical nutrients" without contamination. Furthermore, it recognizes that living things thrive on the energy of current solar income and that human constructs can use renewable energy sources while supporting human and environmental health. And as nature celebrates diversity, designs and solutions should respond to the challenges and opportunities offered by each location in an elegant and effective manner. Rather than seeking to minimize the harm humans inflict, Cradle-to-Cradle® reframes design as an intentional positive, regenerative force. This paradigm shift reveals opportunities to improve quality, increase value, and spur innovation (MBDC, 2015).
Thus, understanding and recognizing that the Economy and Human Society will continue to operate within the Ecosystem of the Earth, it is important to understand (1) the regenerative capacity and rate of ecosystem processes, and (2) develop intelligence regarding how to achieve a proper balance between the continued rate of extraction due to population growth, the duration of the use cycles in the circular economy to satisfy the needs of the global population, and the net decrease in available natural capital on Earth. Therefore, the development of an “Eco-Intelligent Circular Economy” is vital to finding a long-term solution.
In other words, the new economic model should not only be regenerative and circular by design, but also intelligently balanced with ecosystem processes. The Circular Economy system needs to consider that, although you may have managed to mimic the regenerative capacity of nature, this still will need to be done in line or pace with the regenerative capacity of the earth’s ecosystem processes, as these continue to provide the basic life support needs, such as oxygen, water, energy, food, and other resources to allow human beings to thrive on the planet.
ASDF believes that an “Eco-Intelligent Circular Economy” is entirely compatible with the concept of Sustainable Development and considers it a suitable framework to allow for a systemic societal paradigm shift toward an Eco-Intelligent Circular Economic model that is better aligned to nature’s rules and fundamentals.
While having a long-term strategic vision based on an Eco-Intelligent Circular Economy is necessary, even more so is the need for a concrete pragmatic mechanism to realize this long-term vision. Therefore, ASDF has established a formal alliance with MBDC, the developers of the Cradle-to-Cradle® design framework, and has specialized and built up its in-house capacity to properly integrate the Cradle-to-Cradle® design principles as the basis for all its interventions and implementation of its projects and initiatives to transition to an Eco-Intelligent Circular Economy.
In conclusion, ASDF's strategic development plan for realizing sustainable development is to continue to allocate its accumulated knowledge, efforts, skills, and resources to bringing about innovative and practical solutions to concrete air, water, energy, food, and material problems and challenges, inspired by the Cradle-to-Cradle® design principles, to facilitate the transition toward Eco-Intelligent Circular Economies in the Americas. Moving forward, we must recognize that the Economy is subject to the Society, and that the Society is in turn dependent on the Ecology that provides the basic life support allowing human beings to survive and thrive on planet Earth.
For more information please visit www.sustainableamericas.com.
Kevin de Cuba is the Co-Founder and Executive Director of the Americas Sustainable Development Foundation (ASDF). Over the past 9 years he has specialized in the topics of Cradle-to-Cradle® and Circular Economy and has been a pioneer in creating awareness, building capacity and triggering action in Latin America and the Caribbean regarding these topics. Mr. de Cuba has a bachelor's degree in Environmental Technology Engineering, with a specialization in Waste Management, obtained from the Technical University of VanHall-Larenstein (VHL), and an MSc degree in Sustainable Development, with a specialization in Energy and Materials, from the Copernicus Institute at the University of Utrecht, the Netherlands.
© Copyright 2017 Americas Sustainable Development Foundation (ASDF). All Rights Reserved
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9541121125221252,
"language": "en",
"url": "https://www.genpaysdebitche.net/how-many-agi-crypto-should-i-buy/",
"token_count": 693,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.056396484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e520f798-fef2-4878-97e3-38f188ed0b36>"
}
|
How Many Agi Crypto Should I Buy – Simply put, cryptocurrency is digital money that can be used in place of traditional currency. The difference between a cryptocurrency and a conventional system is that there is no centralization or central ledger in place. In essence, cryptocurrency is an open-source protocol based on peer-to-peer transaction technologies that can be executed on a distributed computer network.
One specific way in which the Ethereum Project is attempting to solve the problem of smart contracts is through the Foundation. The Ethereum Foundation was established with the objective of developing software solutions around smart contract functionality. The Foundation has released its open source libraries under an open license.
For starters, the major difference between the Bitcoin Project and the Ethereum Project is that the former does not have a governing board and is therefore open to contributors from all walks of life. The Ethereum Project operates in a much more regulated environment.
As for the projects underlying the Ethereum Platform, both aim to provide users with a new way to participate in decentralized exchange. The major difference between the two is that the Bitcoin protocol does not use the Proof Of Consensus (POC) procedure that the Ethereum Project makes use of.
On the other hand, the Ethereum Project has taken an aggressive approach to scaling the network while also addressing scalability problems. In contrast to the Satoshi Roundtable, which focused on increasing the block size, the Ethereum Project will be able to implement improvements to the UTX protocol that increase transaction speed and decrease fees.
The significant difference between the two platforms originates from the operational model that the two teams use. The decentralized aspect of the Linux Foundation and the Bitcoin Unlimited Association represents a standard model of governance that places an emphasis on strong community participation and the promotion of consensus. By contrast, the Ethereum Foundation is dedicated to building a system that is flexible enough to accommodate changes and add new features as the needs of the users and the market change. This model of governance has been adopted by several distributed application teams as a means of managing their projects.
The significant difference between the two platforms comes from the fact that the Bitcoin community is largely self-sufficient, while the Ethereum Project anticipates the participation of miners to fund its development. By contrast, the Ethereum network is open to contributors who will contribute code to the Ethereum software stack, forming what are referred to as "code forks". This feature increases the level of involvement desired by the community. This model also differs from the Byzantine Fault model that was adopted by the Byzantine algorithm when it was used in forex trading.
As with any other open source innovation, much controversy surrounds the relationship in between the Linux Foundation and the Ethereum Project. The Facebook group is supporting the work of the Ethereum Project by providing their own structure and producing applications that incorporate with it.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9250043034553528,
"language": "en",
"url": "https://www.siteware.co/en/strategic-management/what-is-scenario-analysis-in-strategic-management/",
"token_count": 2345,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0283203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2585014a-6a00-40b7-ae88-648683aa4886>"
}
|
Scenario analysis in strategic management: what are the best tools?
The fact that we are in a world that changes rapidly and with great intensity, and that this has great influence on business and economic-financial scenarios, is no secret to anyone.
And it is increasingly necessary for companies to be attentive to these changes, be they in the behavior of consumers, the economy, the government, or even in the competition.
Organizations need to analyze their position in this economic scenario in detail and predict and prevent negative factors influencing it.
At the same time, they need to highlight their strengths and direct their strategies to succeed in this mutant environment.
And this is where Scenario Analysis in strategic management comes into play.
In this post, you’ll understand what scenario analysis is, you’ll see some internal and external scenario analysis tools, examples, and how to use them all in a company’s strategic assessment.
What is Scenario Analysis in strategic management?
Scenario Analysis is a concept disseminated by studies and consultancies that came to be widely used as a management tool, despite having its origin in military theory.
It allows strategies to be established considering a future context.
In this case, factors that can drive the business are identified, allowing the company to get ahead of the different types of scenarios in strategic planning.
Scenario Analysis is what a company’s strategies will be based on, so it is of extreme importance in the design of Strategic Management.
Its main function is to analyze the context (internal and external) in which the company is inserted.
Then, the future factors that are likely to occur are identified, allowing a clearer view of the current scenario and allowing more informed and accurate decision making.
It is important to note that the main function of scenario building in strategic management is not to try to predict the future, but to identify factors that can become real in the long run.
Tips for Effective Scenario Planning
When analyzing scenarios for a business plan, competitor analysis and strategic management play a key role.
Only then will it be possible to make an adequate projection of scenarios.
Here are some important actions:
- Think strategically and analyze the adversary and the environment in detail, discovering points that are in fact relevant for the analysis of scenarios and identification of risks, always being objective.
- Get to know your competitors well by analyzing the competitive environment: detect your strengths, identify points where you are ahead of them, and at the same time the threats, that is, the areas where they can overcome you.
- Despite all the efforts you can put into performing the analysis of the internal and external environment of your organization, it is impossible to be 100% unbiased. So, get people who know your business and are familiar with the method, to review the analysis and contribute.
- Make use of the internal and external scenario analysis tools established and used by the big companies in the market.
How to use scenario and construction analysis in strategic planning
As we have seen, Scenario Analysis is a process that can be simple, which allows companies of the most diverse branches and sizes to use it as part of their definition of strategic planning.
The construction of company strategic scenarios uses factors that are common to all of them, being necessary only that each one analyzes the internal and external environment of the organizations and the market.
Organizational Scenarios Analysis helps in the direction and accuracy of strategic planning through a broad analysis of the corporate environment.
This will result in the creation or adaptation of new strategies or action plans to minimize risk and maximize opportunities and chances of success.
Partial Scenario Analysis tools in strategic planning
As we have said, in order to study scenarios, several factors must be taken into account.
From the concept of economic scenarios, through to the analysis of the competitive environment and the use of internal and external scenario analysis tools.
Without this, there is no way to build a good example of critical business analysis.
In this context, we have selected some internal and external scenario analysis tools that will be of great help in the planning of organizational scenarios:
- Competitive environment analysis;
- PESTEL (and enlarged PESTEL);
- Analysis of the internal and external environments of organizations.
Let's learn about these strategic planning tools for projecting organizational scenarios.
But before proceeding, watch this interesting animation produced by SEBRAE, focused on scenario analysis and risk identification.
Porter: competitive analysis
Analyzing competitors during strategic planning is critical.
And this tool, devised by Professor Michael Porter of Harvard, is one of the most respected when it comes to the strategic analysis of a company.
Here’s how to use them in building a company’s scenarios:
- Rivalry between competitors: knowing the other companies that operate in your market segment is fundamental. The rivalry between competitors tends to be greater when there are more companies present in the market and there are smaller differences between what they offer. Seek out the strengths and weaknesses of each company, get to know your target audience, and figure out how to meet their needs better than your competitors do.
- Suppliers' negotiation power: the more suppliers you have, the less they can dictate prices and delivery times. Remember that they are also suppliers to your competitors, who may try to lock some of them in with exclusivity contracts.
- Threat of substitute products: are those that do not belong to the same category that you produce, but that meet the same needs for your customers. A famous example is the case of butter and margarine. Find out what are the features and benefits of your products that make them positively differentiate from substitutes.
- Threat of entry of new competitors: what are the entry barriers that can prevent the emergence of new competitors in your market? The need for high investments for installation, patents, government regulation, consolidated brands and complex technologies often inhibit the entry of new competitors.
- Client negotiation power: the one who defines the characteristics, positioning and price of your products is always, in essence, the customer. The greater the number of competitors and the similarity between products, the greater the bargaining power customers have. Differentiation is the way to try to control this scenario.
PESTEL Risk Analysis
The PESTEL analysis is used for scenario studies and is fully focused on the external environment.
The name PESTEL is derived from the initials of the different types of scenarios that strategic planning demands be analyzed.
Strategic scenarios of the PESTEL analysis:
- Political;
- Economic;
- Social;
- Technological;
- Environmental;
- Legal.
For each of these points, a scenario analysis should be done for the business plan, defining opportunities and threats (which are also used in SWOT analysis).
For example, when looking at the concept of economic scenarios, factors such as these could be listed:
- Opportunity: Lower interest rate and dollar will facilitate financing and import of production inputs.
- Threat: The increase in the rate of a certain tax will bring a significant increase in production expenses.
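As a rough sketch (the factor names and findings below are illustrative assumptions, not taken from any real plan), a PESTEL worksheet like the one above could be represented as a simple data structure:

```python
# Hypothetical sketch of a PESTEL worksheet: one opportunities/threats
# pair per factor, stored in a plain dictionary.
PESTEL_FACTORS = [
    "Political", "Economic", "Social",
    "Technological", "Environmental", "Legal",
]

def new_pestel():
    """Create an empty PESTEL worksheet."""
    return {factor: {"opportunities": [], "threats": []}
            for factor in PESTEL_FACTORS}

def add_finding(analysis, factor, kind, description):
    """Record an opportunity or threat under the given PESTEL factor."""
    if factor not in analysis:
        raise ValueError(f"Unknown PESTEL factor: {factor}")
    if kind not in ("opportunities", "threats"):
        raise ValueError(f"Unknown finding type: {kind}")
    analysis[factor][kind].append(description)

# Illustrative entries mirroring the economic examples above
analysis = new_pestel()
add_finding(analysis, "Economic", "opportunities",
            "Lower interest rate and dollar ease financing and imports")
add_finding(analysis, "Economic", "threats",
            "A tax increase significantly raises production expenses")
```

Structuring the worksheet this way keeps every factor visible even when it has no findings yet, which is a useful prompt during planning sessions.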
Although the scenarios used in PESTEL are considered very useful, some feel that the PESTEL scenario analysis of a company could be even more complete, including other factors or better detailing the six it already uses.
Topics for studying organizational scenario analysis:
- Great Upheavals: Strong changes in government, downfall of ministers, wars, reforms and new laws.
- Big Uncertainties: Inflation, deflation, increasing unemployment, increased or decreased consumption, increased or decreased interest rates, strikes, exchange rates.
- Big ambiguities: High unemployment and higher consumption due to low interest in savings or stockpiling due to the fear of inflation.
- Optimal Statistical Data: They are considered optimal due to the seriousness of the information source.
- Questionable Statistical Data: Do not use for decisions in strategic planning due to the low credibility of the source.
- Serious Elevation of Costs: Import or export taxes, scarcity due to high demand, difficult labor force.
- Severe Scarcity of Raw Material: Off-season, shortage due to ecological or production reasons, importation restricted by law.
- Strong State Interventions: New tax or tax rules, prohibitions on sale or production.
- Strong Social Interventions: Strikes, pressures from ethical, religious, union, or environmental protection groups.
- Serious Technological Deficiencies: Technology still unknown, costly, not available in your location, need to hire foreigners.
- Strong Modifications in the Level of Consumption: Due to fads consumption will fall or increase. Consumption will be indispensable or almost nonexistent.
A key point that can not be overlooked in scenario analysis in strategic management are the social and behavioral impacts that the advent of new technologies, such as the internet and cloud computing, have been causing.
The study of so-called X, Y (Millennials) and Z generations is a mandatory part of any scenario analysis in strategic management and risk identification.
SWOT: Internal and External Scenario Analysis Tool
The analysis of the internal and external environment of organizations is usually done with the help of the SWOT matrix.
In fact, this is one of the most important business strategic scenario analysis tools.
The use of SWOT in strategic management seeks to identify the strengths and weaknesses of a company (internal environment) and opportunities and threats (external environment).
Let us better understand what SWOT analysis is for defining these two environments?
Internal Environment – Strengths and Weaknesses:
Everything you can control within your company is composed of the strengths and weaknesses of your internal environment.
- Thus, for example, a company with a reputation for innovation, with state-of-the-art facilities and high employee engagement, can list these characteristics as strengths.
- On the other hand, a company that has distribution difficulties, low market share, high costs of raising financial resources and low economies of scale has these points as troubling weaknesses.
External Environment – Opportunities and Threats:
Forces of nature, economic policy, social and behavioral changes are among some of the external factors over which your company has no control.
As we showed in the previous topic, the best way to do external scenario planning is to use the PESTEL analysis, which can be extended with other factors specific to your market segment.
Crossing Strengths and Weaknesses with Opportunities and Threats:
It is here where SWOT analysis in strategic management shows you results.
With it, you must define:
- Which of your strengths can potentiate opportunities?
- Which strengths can defend you against threats?
- Which of your weaknesses can potentiate threats?
- Which of your weaknesses can hurt opportunities?
Based on these strategic scenarios, you must define action plans to strengthen your weaknesses or use your strengths to take full advantage of the opportunities and adequately defend yourself from the threats.
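As an illustrative sketch (an assumed structure, not the article's own method), the four crossing questions above can be generated mechanically by pairing each internal factor with each external factor:

```python
# Cross internal factors (strengths, weaknesses) with external ones
# (opportunities, threats) into the four classic SWOT strategy quadrants.
from itertools import product

def cross_swot(strengths, weaknesses, opportunities, threats):
    """Return every internal/external factor pairing, grouped by quadrant."""
    return {
        "SO: strengths that can potentiate opportunities": list(product(strengths, opportunities)),
        "ST: strengths that can defend against threats": list(product(strengths, threats)),
        "WO: weaknesses that can hurt opportunities": list(product(weaknesses, opportunities)),
        "WT: weaknesses that can potentiate threats": list(product(weaknesses, threats)),
    }

# Sample factors drawn loosely from the examples in this article
matrix = cross_swot(
    strengths=["reputation for innovation"],
    weaknesses=["low market share"],
    opportunities=["lower interest rates"],
    threats=["new tax rules"],
)
for quadrant, pairs in matrix.items():
    print(quadrant, "->", pairs)
```

Each pair in the resulting quadrants is a candidate for an action plan; the enumeration guarantees no combination of factors is accidentally skipped during the workshop.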
After all these explanations, strategic planning tools, and scenario-setting definitions, has this activity become clearer to you?
If you want this task to become even easier and more agile, use strategic planning software like STRATWs One to do your scenario analysis, and do it all with the help of technology, based on real data and with easy collaboration between teams.
Revolutionize the management of your company with STRATWs One
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9715651273727417,
"language": "en",
"url": "https://www.thenewecologist.com/2010/05/large-businesses-believe-in-the-environmental-move/",
"token_count": 238,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2197265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b1bd5f31-9eb1-4b50-8c1c-86c493b0ad0c>"
}
|
According to global reports, large corporations are willing to spend more on climate change and on measures that reduce global warming. The statistics clearly show that the biggest players in business are making more investments in order to fight climate change.
Increasing energy efficiency is another concern of these companies, especially those leading in industry sectors such as utilities, energy generation, communications and technology.
The effort of the business is now supported by governments and organizations all over the world.
The good news for the environment is that climate change initiatives are increasing, and so are green efforts.
Nearly 70 percent of large corporations with revenue of $1 billion are said to plan large investments in fighting climate change and global warming.
In the next two years the companies will also increase the percentage of their green initiatives from 0.5 percent to nearly 6 percent. The reports also show the exact intentions of the largest corporations.
Over 92 percent of them say that energy costs are a concern as they refresh their economic plans and invest in green energy that cuts costs and waste. Investing in energy efficiency and recycling is also part of the plan.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9353974461555481,
"language": "en",
"url": "https://oliveloaded.com/nabteb-2021-commerce-verified-answers/",
"token_count": 1417,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3a835d17-99d2-4508-ad7e-716836463194>"
}
|
(a) A statement of accounts: a document that reflects all transactions that took place between you and a particular customer over a given period of time.
(b) An invoice: a bill or tab is a commercial document issued by a seller to a buyer, relating to a sale transaction and indicating the products, quantities, and agreed prices for products or services the seller has provided to the buyer.
(c) A quotation: a document sent to a potential customer offering to sell goods or services at a certain price, under specified conditions.
(d) A receipt: a written acknowledgment that something of value has been transferred from one party to another.
(e) A cash book: a financial journal that contains all cash receipts and disbursements, including bank deposits and withdrawals.
Trade is a basic economic concept involving the buying and selling of goods and services, with compensation paid by a buyer to a seller, or the exchange of goods or services between parties.
(i) WHOLESALER :
(a) Wholesalers buy from the manufacturers and sell goods to the retailers.
(b) Wholesalers usually sell on credit to the retailers.
(c) They specialise in a particular product.
(d) They buy in bulk quantities from the manufacturers and sell in small quantities to the retailers.
(e) Wholesalers always deliver goods at the doorstep of the retailers.
(f) A wholesaler needs mainly a godown to stock the goods he handles.
(a) Retailers buy from the wholesalers and sell goods to the consumers.
(b) Retailers usually sell for cash.
(c) They deal in different kinds of goods.
(d) They buy in small quantities from the wholesalers and sell in smaller quantities to the ultimate consumers.
(e) Retailers usually sell at their shops. They provide door delivery only at the request of the consumers.
(f) A retailer needs a shop or a showroom to sell.
Commerce is the conduct of trade among economic agents. Generally, commerce refers to the exchange of goods, services, or something of value between businesses or entities.
(i) Commerce facilitates the exchange of goods and services through trading.
(ii) Commerce provides employment opportunities for a lot of people.
(iii) It increases the standard of living of the people through provision of variety of goods.
(iv) Commerce is key to trade and the money generated boosts the economy
(v)Commerce can bring in outside trade which can open the doors to the lucrative export market
(vi) Commerce creates jobs, from the suppliers to the staff working in the shops
Credit sales are payments that are not made until several days or weeks after a product has been delivered
(i) Acceptance :
Acceptance is exactly what it sounds like: the person receiving the offer agrees to the conditions of the offer. Acceptance must be voluntary. This means that a person who signs a contract when a gun is pointed directly at him is legally not able to accept the offer, because he is under duress.
(ii) The Offer:
The offer is the “why” of the contract, or what a party agrees to either do or not to do upon signing the contract. For example, in a real estate contract, the seller will offer to sell the property to the buyer for a certain price.
(iii) Consideration:
Consideration is what one party will "pay" to complete the contract. Payment is a loose term when defining consideration in a contract, because what a party gets for signing the contract isn't always money.
(iv) Capacity:
Those signing the contract and entering into the contract agreement must be competent. This means that they are of legal age to sign a contract; they have the mental capacity to understand what they are signing; and they are not impaired at the time of signing – meaning they are not under the influence of drugs or alcohol.
(v) Legal Intent:
This requirement for a contract refers to the intention of each party. Often, friends and family members will come to a loose arrangement but they never intend for it to be legally binding, that is, they do not intend that one person could sue the other if someone does not do what they said they would do. This type of agreement is not a valid contract because there is no legal intent.
[Pick any five]
(i) Order Cheque.
(ii) Crossed Cheque.
(iii) Open cheque.
(iv) Post-Dated Cheque.
(v) Stale Cheque.
(vi) Traveller’s Cheque.
(vii) Self Cheque.
(i) Checking and Operating Accounts:
The most common benefit of a commercial bank for a small business is that it provides a safe place to keep your money.
(ii) Debit and Credit Cards:
Banks offer a variety of small-business debit and credit cards. Debit cards usually come with your checking or operating account.
(iii) Lines of Credit:
If you think you might need credit but don’t want to pay interest on a large loan, you can choose to open a line of credit with a bank.
(iv) Commercial Small Business Loans:
Banks offer loans for purchasing equipment, paying bills, buying a company vehicle or buying real estate.
(v) Banks Offer Advice:
Your bank can offer you small-business advice in a number of areas, such as tax planning, retirement accounts, insurance, payroll management, creating financial documents and managing your cash flow, points out Inc. magazine.
Business environment is a marketing term that refers to the factors and forces which affect a firm’s ability to build and maintain successful customer relationships.
(i) Cultural environments: are environments shaped by human activities, such as cultural landscapes in the countryside, forests, urban areas and cities, fixed archaeological structures on land or water, and constructions and built environments from different ages, along with bridges, roads, power lines and industrial structures.
(iii) A legal environment: consists of the laws which are passed by the government to regulate business operations.
(iv) A political environment: consists of the government actions which affect the operations of a company or business.
Branding is a marketing practice in which a company creates a name, symbol or design that is easily identifiable as belonging to the company.
(i) Branding improves recognition.
(ii) Branding creates trust.
(iii) Branding supports advertising.
(iv) Branding builds financial value.
(v) Branding inspires employees.
(vi) Branding generates new customers.
(i) It has huge development costs.
(ii) It has limited quality flexibility.
(iii) Changing the perception of the brand is hard.
Solved by BREMAG TEAM
Last week I had a letter in the FT, commenting on the latest call for a new Apollo project, this time for solar energy. I explained that Apollo is a poor analogy for difficult challenges:
Going to the moon was easy by comparison
From Prof Roger Pielke, Jr.
Sir, David King and Richard Layard (“We need a new Apollo mission to harness the sun’s power,” Comment, August 2) call for new spending on solar energy technology of the magnitude that was spent on the Apollo moon missions in the 1960s and 1970s: “To match the spending on the Apollo project would require only 0.05 per cent of each year’s gross domestic product for 10 years from each G20 country.”
Over the next 10 years, assuming an aggregate 4 per cent gross domestic product growth rate across the G20, this new spending would equate to more than $430bn. However, in 2013 dollars the Apollo moon mission cost a relatively paltry $130bn. What they are really calling for is to spend more than three times the cost of the Apollo missions.
The problem with Apollo analogies is that going to the moon was easy in comparison to the challenge of doubling or tripling global energy supply, while at the same time all but eliminating carbon dioxide emissions. Sir David and Lord Layard are in the ballpark on the scale of investment that is needed. They just have their analogy wrong.
Roger Pielke, Jr, University of Colorado, Boulder, CO, US
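The letter’s arithmetic is easy to sanity-check with a quick sketch. The starting aggregate G20 GDP figure below is an assumption (chosen so that it reproduces the letter’s total); only the 4 per cent growth rate and the $130bn Apollo cost come from the letter itself:

```python
# Back-of-the-envelope check of the figures in the letter.
# Assumption (not from the letter): aggregate G20 GDP of roughly
# $72 trillion in the first year of the programme.
G20_GDP = 72e12          # assumed starting aggregate G20 GDP, USD
SHARE = 0.0005           # 0.05% of GDP per year
GROWTH = 0.04            # the letter's assumed aggregate GDP growth rate
APOLLO_COST = 130e9      # Apollo cost in 2013 dollars, per the letter

# Sum 0.05% of a GDP that grows 4% a year, over 10 years.
total = sum(G20_GDP * (1 + GROWTH) ** year * SHARE for year in range(10))
print(f"10-year spending: ${total / 1e9:.0f}bn")          # -> $432bn
print(f"Multiple of Apollo: {total / APOLLO_COST:.1f}x")  # -> 3.3x
```

Under that GDP assumption the sum comes to roughly $432bn, a bit over three times the Apollo cost, consistent with the letter’s claim.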
Guide to excise taxes: 5 things every business should know
Often to their detriment, many businesses do not focus on excise taxes—and, in fact, may be unaware of them entirely. The truth is that excise taxes may have a serious financial impact on companies, and in some cases, excise tax exposure may be material to a business. In the current uncertain economy, reducing excise tax liability and identifying excise tax credits can enhance a company’s bottom line and liquidity. Moreover, evaluating excise taxes and planning opportunities can improve a company’s EBITDA, as excise taxes are treated as an above-the-line cost of goods sold.
1. What are excise taxes?
Excise taxes are taxes imposed on commodities or activities, such as fuel, tobacco or wagering, and are typically reflected in the final price of such products and services. Excise taxes are passed on to the consumer, who is often unaware of them.1
The excise taxes companies pay are enacted by legislatures to serve as user fees with various purposes: to discourage certain behavior, to promote certain activities or simply to raise revenue. Businesses can benefit from excise taxes: For example, taxes imposed on fuels and heavy vehicles fund highway improvements; taxes imposed on air transportation fund airport and airway systems. Similarly, certain environmental excise taxes were enacted to discourage the use of CFCs and other chemicals that harm the ozone layer. Use of biodiesel or alternative fuel over petroleum-based products has been encouraged by renewable fuel credits. Taxes on alcohol and tobacco raise revenue for the Treasury General Fund.
The federal government collects over $100 billion of excise taxes per year.2 Two agencies under the U.S. Treasury Department administer all existing excise taxes: the Internal Revenue Service (IRS) and the Alcohol and Tobacco Tax and Trade Bureau (TTB). The major taxes IRS administers relate to:
- Petroleum-based fuel (including gasoline, diesel fuel, kerosene, and crude oil)
- Alternative fuel
- Heavy vehicles
- Sporting goods (sport fishing equipment, bows and arrows)
- Air transportation
- Environmental taxes
- Affordable Care Act taxes and fees
- Foreign insurance
TTB administers taxes related to:
- Alcohol (beer, wine, distilled spirits)
It is important to note that each excise tax has its own rules for imposition, rate, tax base, exemptions and credits. Compliance obligations for excise taxes can often be quite complex. In most cases, excise taxes are reported quarterly, and semi-monthly deposits are due. Additionally, some companies must file inventory reports or other reporting obligations, even if they are not liable for the tax.
2. Which industries are affected by excise taxes?
Many more than one would think. The primary sectors affected by excise taxes include energy, transportation (ground, air and water), industrial manufacturing, food and beverage, certain consumer goods and life science. Importers and exporters may also be subject to certain excise taxes. Even banks, insurance companies and credit card issuers may encounter excise taxes.
Furthermore, end users in industries such as building, construction, power and utilities, aerospace and defense, farming and logistics may claim certain refundable excise tax credits. For example, users of taxed fuel in off-highway businesses or in farming can claim fuel credits of up to $0.243 per gallon.3 Users of propane in forklifts, such as those used in manufacturing facilities or distributors, may be eligible for alternative fuel credits of up to $0.50 per gallon equivalent of propane, which equates to roughly $1,000 per year per forklift.4 Manufacturers of non-beverage products such as perfumes, food products or medicines that use taxed alcohol in production may qualify for a drawback of the tax paid, up to $13.50 per gallon. Nonprofit entities or state and local governments often qualify for exemptions from excise tax or credits on the purchase of taxed articles such as fuel, tires, and firearms.
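As a rough illustration of the propane figures above, consider the following sketch. The annual fuel-use figure is an assumption (not from the article); only the $0.50-per-gallon-equivalent rate and the roughly $1,000-per-forklift result come from the text:

```python
# Illustrative estimate of the alternative fuel credit for propane forklifts.
CREDIT_PER_GGE = 0.50        # credit per gallon equivalent of propane, USD
GALLONS_PER_FORKLIFT = 2000  # assumed annual gallon equivalents per forklift

def annual_forklift_credit(num_forklifts: int) -> float:
    """Estimated annual alternative fuel credit for a forklift fleet."""
    return num_forklifts * GALLONS_PER_FORKLIFT * CREDIT_PER_GGE

print(annual_forklift_credit(1))   # -> 1000.0, per forklift, as in the article
print(annual_forklift_credit(25))  # -> 25000.0 for a 25-forklift warehouse
```

Actual credit amounts depend on fuel consumption, eligibility and registration requirements, so figures like these are only a starting point for an assessment.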
3. When are excise taxes material to businesses?
Excise taxes may present a material issue to businesses that are liable for the tax, whether known or unknown. While some businesses have immaterial or no excise tax liabilities, some companies’ excise tax liability can soar to millions of dollars per quarter. For example, fuel marketers, distributors, and blenders may be liable for excise tax on the gallons of fuel in a terminal or blended in their tanker trucks. Air transportation providers, including charter companies and freight operators, must collect and pay significant air transportation excise taxes.5
Even small, thriving companies may face large IRS-proposed excise tax assessments. Some companies have encountered excise tax bills so significant it could bankrupt them.
Some companies are unaware that they should be reporting excise taxes. For example, importers of used highway tractors, trailers or specialized mobile job site equipment are often surprised when the IRS initiates an examination asserting a 12% excise tax is due on vehicles they used in their business in the United States. Similarly, importers of electronic articles containing circuit boards (including computers, monitors, cars, trucks, cameras, and other digital items) may face exposure for the ozone-depleting chemicals excise tax.
Additionally, excise tax-specific registrations and penalties can apply to companies unaware of excise tax responsibilities. Certain fuel owners and petroleum marketers and traders who run afoul of the excise tax registration rules may not only face unexpected tax on their fuel trades, but could also potentially be subject to a penalty of $10,000 plus $1,000 per day for failure to register where required.6 Even taxpayers you wouldn't think of being subject to excise taxes can find themselves on the receiving end of an excise tax—such as a bank that owns fuel. Companies that should collect excise taxes but fail to collect or remit them may face trust fund recovery penalties and personal liabilities for corporate officers.7 With respect to fuel tax credit claims, the government has the power to assert a 100% penalty on any claims that are excessive.8
4. How can evaluating excise taxes improve a company’s profitability and liquidity?
In these uncertain times, businesses are looking for ways to reduce risk, improve profitability and generate liquidity. Evaluating excise tax positions and credit opportunities could improve a company’s bottom line. Excise taxes are generally recorded as a cost of goods sold, so finding ways to reduce a company’s excise tax liability and identifying credit opportunities can improve a company’s operational efficiency margins. These savings can enhance a company’s earnings before interest, tax, depreciation and amortization (EBITDA). In addition, many excise tax credits are refundable and may provide cash to companies with net operating losses or in situations where income tax credits cannot be utilized. With private equity, identifying ways to reduce excise tax or increase credits for operating companies can increase margins across similarly situated companies in the fund.
5. Which next steps should my company consider?
All companies can benefit from a fresh look at excise taxes. Consider performing a rapid assessment for excise taxes. This assessment includes:
- Reviewing whether the business faces any excise tax exposure
- Evaluating opportunities for reducing existing excise tax liabilities
- Identifying credit opportunities
- Reviewing compliance operations for improving efficiency
- Identifying excise tax costs passed on by vendors and reviewing whether tax has been properly determined
Should an area of risk be identified, the company can take a deeper dive into remediating the problem. If a credit or savings opportunity is uncovered, this may ultimately improve the company’s profitability and operational efficiency.
First published in Tax Executive magazine, November/December 2020 issue.
1 For the purposes of this article (unless otherwise noted), the term ‘excise tax’ refers to federal excise taxes under Title 26 of the United States Code.
2 Excise Tax Statistics
3 IRC 6427
4 IRC 6426
5 Note that the CARES Act provides for an excise tax aviation holiday through the end of 2020.
6 IRC 6719
8 IRC 6675
Ending Extreme Poverty
ENDING EXTREME POVERTY
At SupPlant, we strive to provide all small-scale farmers the opportunity to grow by easing access to, and lowering the cost of, data and knowledge. SupPlant provides farmers with more information about the needs of their plants, enabling them to save resources such as water and power usage from pumps. Meanwhile, yields increase and the improved quality of the crops increases potential income. SupPlant lowers operating costs and increases potential income for farmers around the world. This has a positive impact on the socio-economic position of farmers, providing them with more possibilities to raise themselves out of poverty while encouraging more food production where it’s needed most.
About Ending Extreme Poverty
The share of the world’s population living in extreme poverty declined from 15.7% in 2010 to 10.0% in 2015; however, the pace of global poverty reduction has been decelerating. Nowcast estimates put the global poverty rate in 2019 at 8.2%. Even before the COVID-19 pandemic, progress towards Goal 1 had slowed, and the world was not on track to ending extreme poverty by 2030. Now, as the world anticipates the worst economic fallout since the Great Depression, tens of millions of people will be pushed back into poverty, undoing years of steady improvement.
Even before COVID-19, baseline projections suggested that 6% of the global population would still be living in extreme poverty in 2030, missing the target of ending poverty. Assuming the pandemic remains at levels currently expected and that activity recovers later this year, the poverty rate is projected to reach 8.8% in 2020. This is the first rise in global poverty since 1998, and close to the 2017 level. An estimated 71 million additional people will be living in extreme poverty due to COVID-19. Southern Asia and sub-Saharan Africa are expected to see the largest increases in extreme poverty, with an additional 32 million and 26 million people, respectively, living below the international poverty line as a result of the pandemic.
For more information on this goal, visit the Sustainable Development Goal indicators website.
What are supply chains and why do they matter to Africa?
A supply chain is a process by which products or services are sourced, produced, and distributed to the customer. Supply chains, therefore, play a critical role in the ability of African countries to trade amongst themselves and to participate in and benefit from trade with the rest of the world. For the most part, supply chains in Africa — across a range of industries and sectors including agricultural, mineral, manufacturing, and retail — are weak and poorly integrated, prone to inefficiencies in cost and timeliness, and often lack the value addition that would allow African countries to realize more of the benefits of trade and boost development. Africa, perhaps more than any other region, could benefit from urgent attention to address its supply chain challenges.
It is estimated that by 2050, Africa will account for over half of the global population growth. The populations of 26 African countries are likely to at least double in size between 2017 and 2050. Furthermore, approximately 170 million Africans will enter the labor market between 2010 and 2020. With a growing population eager for jobs and growth opportunities, African governments and Regional Economic Communities (RECs) are seeking ways to increase their share of the global economy, and to reduce their dependency on raw and unprocessed commodity exports, including by implementing measures to improve intra-African trade and trade with the rest of the world. A key part of these measures focuses on remedying the continent’s supply chain challenges and enhancing its role in global value chains.
An overview of Africa’s trade
Heavy Reliance on Commodities and Intermediate Products
According to the International Monetary Fund (IMF), over the decade ending in 2014, Africa accounted for 6 of the 10 fastest growing economies in the world. Africa continued on this trajectory and recorded USD $317 billion in exports in 2016. However, overall, the continent’s role in the global economy has been diminishing. For example, African exports contributed less than 6 percent to the global economy in 1980, but this had decreased to just 2.2 percent in 2016. This is in stark contrast with East Asia’s share, which increased from 2.25 percent in 1970 to 17.8 percent in 2010.
A key contributing factor to the low trade volumes was the region’s continued heavy reliance on primary commodities and intermediate goods (or products that require further processing for utilization in the production of finished goods) combined with increased pressure from the drops in commodity prices during this period. It was estimated that intermediate goods alone accounted for approximately 60 percent of Africa’s total imports and over 80 percent of its exports. The large share of primary commodities and intermediate goods speaks to the low level of industrialization in Africa, which has lagged behind other regions. For example, only 6 percent of Africa’s jobs are in the manufacturing sector. Furthermore, the share of African manufacturers in total merchandise exports globally was only 18.5 percent in 2013, while imports of manufacturing intermediates have grown.
Many African governments recognize the problems stemming from weak industrialization and the benefits that their countries could yield from industrialization, including much-needed large-scale job creation. However, any plans to push industrialization and address the high unemployment rates (which are as high as 60 percent in some countries) in many countries would also need to be accompanied by investments in education and training in order to address the shortage of skilled labor and high labor costs currently afflicting the continent.
The state of intra-African trade
African countries are also taking steps to increase trade amongst themselves at the bilateral, sub-regional, and continental levels. However, the continent still lags behind other world regions when it comes to intra-regional trade. In 2010, between 10 and 11 percent of Africa’s total trade was intra-regional trade. In comparison, in 2010, 17 percent of trade in developing Asian countries was intra-regional, whereas 60 percent of trade in the European Union was intra-regional.
In interrogating these figures, there is evidence of backward linkages between companies and their suppliers in the African mining industry, but regional supply chains in the textile and agricultural industries remain largely untapped, with less than 10 percent being sourced from the continent. The weak and inefficient regional supply chains in the agricultural sector are especially troubling because the sector employs approximately 65 percent of Africa’s labor force and accounts for 32 percent of its gross domestic product. However, agricultural cultivation and agribusiness remain largely unexploited across much of the continent, as exemplified by the fact that Africa accounts for 60 percent of the world’s uncultivated arable land. Moreover, it continues to rely on agricultural imports including an estimated $48.5 billion in agricultural products in 2014. Transforming the agricultural sector, including its underpinning supply chains, is especially critical for Africa because agricultural exports tied to integrated supply chains and global value chains hold great potential for the continent’s food security and economic growth, including claiming a bigger share of the global economy.
Continental free trade area
To further boost trade at the continental level, African governments signed the Tripartite Free Trade Area (TFTA) agreement in 2015 and commenced negotiations for the establishment of the Continental Free Trade Area (CFTA). The three RECs currently engaged in this effort are the Common Market for East and Southern Africa (COMESA), the East African Community (EAC), and the Southern African Development Community (SADC). The United Nations Economic Commission for Africa (UNECA) calculated that the CFTA could increase intra-African trade by as much as $35 billion per year over the next six years, and provide market access to more than a billion people. However, Africa currently has eight RECs that are officially recognized by the African Union and an additional six regional organizations with economic-related mandates. This array of organizations, some with overlapping memberships and mandates as well as different economic strategies, makes economic integration and cooperation challenging and time-consuming, as the recent tariff disputes in the EAC between Tanzania and Kenya highlights. While a big step in the right direction, the CFTA will take strong political will and commitment from African leaders.
World trade organization
The World Trade Organization’s (WTO) Trade Facilitation Agreement (TFA), ratified in 2017, aims to provide more transparent trade laws, reduce transaction costs, simplify customs procedures, and remove trade barriers. However, as of 2017, 22 African countries had still not ratified the TFA citing several concerns for non-ratification, including their view that subsidies provided to farmers in richer Northern countries essentially create a significant barrier to trade for African farmers. Other African countries have expressed concerns that the provision protecting the intellectual property rights of pharmaceutical companies could have a negative impact on their ability to provide affordable medicines for their largely poor populations.
Africa’s major international trade partners
Africa’s three largest trading partners are China, the European Union (EU), and the United States. Of these, China is Africa’s single largest trading partner, a position that it assumed in 2009. The total value of trade between China and Africa was just above USD $6 billion in 2000, but has been increasing rapidly since then. The value stood at almost USD $88 billion in 2010 and USD $106 billion in 2015. FDI has grown even faster than trade over the past decade, with a 40 percent increase per year. Annually, China has invested on average $12 billion from 2011 to 2016. In addition, China is also Africa’s most visible infrastructure investor, playing a role in 41.9 percent of all projects undertaken. It is also the largest provider of loans through institutions such as the Export-Import Bank of China. However, it is worth noting that in 2014 Chinese lending for infrastructure projects was substantially lower than in each of the previous three years.
The United States is Africa’s third largest trading partner, behind China and the European Union. The aggregate value of imports and exports between the United States and Africa stood at almost $33 billion in 2000, $97 billion in 2010, and dropped down to $43 billion in 2015. However, the United States was the top source country for FDI projects into Africa. The African Growth and Opportunity Act (AGOA)—established in 2000 and renewed in 2015 for another 10 years—forms the cornerstone of U.S. trade with Sub-Saharan Africa. AGOA provides duty-free benefits to approximately 6,500 African goods. AGOA imports to the United States increased fourfold between 2001 and 2013. U.S. imports from Sub-Saharan Africa totaled $18.9 billion in 2015, but decreased 29.6 percent ($7.9 billion) from 2014, and 63 percent from 2005. This was driven mainly by a decrease in petroleum imports due to global petroleum price slumps. It is important to note that most of this trade is dominated by energy, apparel, vehicles, and some agricultural products, as demonstrated by the fact that approximately 90 percent of U.S. imports from Africa are focused on petroleum. While efforts to encourage diversification have been made, especially with the institution of Regional Trade Hubs through USAID, it has been argued that more needs to be done under AGOA to help African countries diversify their exports.
A key concern voiced about international trade agreements relating to Africa — including AGOA — is that they tend to have relatively short duration spans. These short timeframes make it difficult for critical industries, such as those in the textiles and manufacturing sectors, to participate as they require longer planning and investment timelines if they are to succeed.
Even as Africa works to boost trade figures at both the continental and global levels, it also needs to address a number of transportation and logistics challenges confronting the continent.
Trade: The role of infrastucture and logistics
Infrastructure and logistics investment
The lack of infrastructure on the continent has been a major barrier to trade, growth, and development. African governments understand this and they have increased investment in infrastructure in recent years, including the development of transport corridors, such as the Ibadan-Lagos-Accra Corridor, and the Northern and Central Corridor between Central and East Africa. For example, of the total $66.5 billion of FDI in 2015, $2.8 billion was allocated for logistics and transportation, and overall, African countries invested $24 billion (2015) and $26.3 billion (2016) in infrastructure. However, these figures still fall far short of the $95 billion that the World Bank estimates is required annually to build the infrastructure that Africa needs for sustainable economic growth.
Even with these investments, African countries continue to score poorly on the World Bank’s Logistics Performance Index (LPI), with many countries classified as logistics-unfriendly. In Africa, third party logistics companies (3PLs) and outsourcing remain negligible, and a largely fragmented retail base adds to the continent’s transportation-related supply chain complexities. This is largely due to the mostly informal retail market (e.g., small independent retailers) which accounts for approximately 70 percent of the total retail market.
According to the World Bank’s LPI, the quality of a country’s supply chain or logistics competency system can be measured by several factors including: 1) quality of infrastructure; 2) control of corruption; 3) local supplier quality; and 4) supply chain visibility (or the ability to track and trace goods across a supply chain from source to end-user or consumer). Of the bottom 20 countries on the LPI, 12 are African, with Somalia and Mauritania accounting for two of the bottom four positions. One notable exception is South Africa, which is a logistics leader among all middle-income countries.
An additional transportation challenge stems from the fact that nearly 40 percent of African countries (15 out of 55) are landlocked. This large number of landlocked countries further adds to transportation and logistics costs in Africa. Logistics costs are high in most African countries; however, there are wide differentials between some countries. The global rankings of South Africa (20) and Ethiopia (126) in the Logistics Performance Index below indicate this disparity.
Logistics Performance Index
| Country | Africa Ranking | Global Ranking | LPI Score |
| --- | --- | --- | --- |
| South Africa | — | 20 | — |
| Ethiopia | — | 126 | — |

A selection from the World Bank Logistics Performance Index 2016.
Africa’s road network poses one of the biggest challenges to trade. While there are significant differences in road density and quality of roads from one country to another, overall, only 43 percent of Africans have access to all-season roads. For example, in Southern Africa and the Maghreb region, the road infrastructure is better developed than in Central and West Africa. About 67 percent of roads in the Maghreb region are paved, whereas less than 9 percent of roads are paved in the Central Africa region.
With over 90 percent of Africa’s trade facilitated through ports, seaports are important to African economies. However, African ports remain marginal players on the global stage, as they handle only 3 percent of global container traffic. Many African ports are in urgent need of increased investment and improved management practices. Ports often struggle with high dwell times, limited material handling equipment, and complex bureaucratic processes, including cumbersome customs and clearance procedures. Differences in dwell times across key ports in Africa, as demonstrated below, create inconsistencies for moving goods across the continent. However, the success of South Africa and Kenya in minimizing port dwell time could be replicated by other African countries with seaports.
Port Dwell Times
| Port | Dwell time |
| --- | --- |
| Nigeria port averages* | 19-25 days |
| Cotonou – Benin | 12-14 days |
| Mombasa – Kenya | 5-7 days |
| Durban – South Africa | 4 days |
Data sourced from Nigeria Shippers’ Council (NSC). Dwell time is an indicator of efficiency and is the amount of time a container waits to get picked up at a terminal after being unloaded from a vessel.
African countries are taking measures to address some of these port-related deficiencies by increasing investments in their ports. Nigeria, Kenya, and Tanzania all have large-scale projects in the pipeline. Africa’s largest planned port project in Bagamoyo, Tanzania (20 million, twenty-foot equivalent units) is set to start operations in 2020. In addition, foreign port operators in Europe and the Middle East have identified Africa as a major source of potential growth and it is likely that there will be more investments in African ports.
Africa’s railway network is in an even worse state than its roads, and many railways remain a remnant of the colonial past. Africa’s high prevalence of narrow gauge lines, often of differing sizes depending on each country, has resulted in poorly integrated railway networks with neighboring countries. However, regional integration including the construction of new standard gauge lines has commenced in West, Southern, and Eastern Africa, registering a growing number of large-scale projects on the continent. As seen in the table below once complete, these railways should help to facilitate the movement of goods and people in a more timely and efficient manner.
Planned and ongoing projects
| Country | Projects | Value US $bn |
| --- | --- | --- |
| Ethiopia | Awash-Woldia-Hara Gebeya Rail Project | 1.7 |
| Ethiopia | Mekelle-Hara Gebeya-Woldia Railway Project | 1.5 |
| Morocco | Tangier – Casablanca Rail | 4.1 |
| Nigeria | Lagos-Ibadan Rail | 1.5 |
| Tunisia | Reseau Ferroviaire Rapide Project | 2.8 |
A selection of Deloitte Construction Trends Africa 2016
Per the World Bank, in 2010 Africa accounted for less than 1 percent of the global air service market. Greater distances, low population density, and a lack of terrestrial infrastructure make aviation an essential component for growth. However, the air cargo industry in Africa suffers from infrastructure challenges and a lack of “Open Skies.” Due to Africa’s under-served market status as well as the prevalence of bilateral air agreements, it is often cheaper to fly freight from one African country via the Middle East to another African country.
African countries recognize the challenges posed by their failure to implement the Yamoussoukro Decision of 2000, which sought to reform the air transport sector in Africa and counteract protectionist policies hindering air transportation on the continent. Perhaps recognizing the urgency for action, 23 African countries launched the Single Air Transport Market (SAATM) initiative on January 28, 2018 during the 30th Summit of the African Union. If implemented, SAATM will go a long way in addressing a key barrier to trade on the continent.
The telecommunication sector in Africa has seen unprecedented growth in recent years, driven predominantly by mobile phones. The continent has benefited from new submarine and regional overland cables. Broadband speeds have been increasing and are set to increase 240 percent across the continent by 2020. As with road infrastructure, there are significant differences in regional and country development. North and Southern Africa (e.g., Egypt, Morocco, Libya, South Africa, and Seychelles) score higher when compared to other regions in the International Telecommunication Union (ITU) Information and Communication Technologies (ICT) Development Index. For example, South Africa has 142.38 mobile phone subscriptions per 100 inhabitants, while Mali has 120; Morocco has 121, and Egypt has 114.47 However, Africa as a whole has 74.60 subscriptions per 100 inhabitants. Similar disparities are evident for access to the internet.
Almost 53 percent of South African households have access to the internet, whereas the African average is about 16 percent. As a region, Africa is still trailing the rest of the world when it comes to fiber broadband networks. Even in the presence of technological infrastructure, prohibitive costs keep many people from accessing the internet. With less than 50 percent of the population having access to electricity, tapping into internet and telecommunications networks remains a challenge.
Advancing telecommunications and internet capabilities improves supply chain efficiencies continent-wide. Companies in Africa are employing mobile platforms to bridge the mismatch between supply and demand, streamline operational processes, and communicate directly with end users in the supply chain to enhance service and goods delivery. Enhancing communication throughout supply chains also builds trust between partners and bolsters supply chain visibility. The effect of increased visibility is greater awareness of the weakest links in the chain, identification of the optimal responses to strengthen these links, and data accumulation to inform future supply chain decisions.
Customs and border processes
Crossing Africa’s borders remains challenging, and complaints abound about delay-inducing processes and procedures including duplicative procedures and systemic corruption at border posts. Reducing these bureaucratic bottlenecks, minimizing opportunities for corruption, and taking other measures to maximize efficiency for cross-border trade are crucial for the creation of efficient regional trade, global trade, and value chains. The lack of technology also impacts supply chains during the customs process. For instance, customs and border clearance procedures are still paper-based in many countries, and often require numerous copies. These procedures “thicken” the border with redundant and inefficient clearance procedures, while also increasing vulnerability to corruption.
Summary of policy options
1. Improve trade facilitation processes
a. Many African countries appreciate the value of trade facilitation agreements: However, some African countries have expressed concerns about potential negative impacts of some of the requirements in the World Trade Organization’s Free Trade Agreement, such as the impact of Northern-focused farm subsidies on African farmers, and the impact of intellectual property rights on access to affordable medications in Africa. The WTO should make efforts to address African concerns.
b. International trade agreements are often characterized by short timeframes, which may prohibit certain industries from participating, as they require longer planning and investment timelines: Governments should consider lengthening the timeframe of trade agreements, which would result in more stable, lower risk, and cost-effective long-term agreements.
c. Africa’s lack of industrialization has contributed to low levels of participation in global value chains and overall global trade: RECs and African governments would do well to adopt and commit to clear industrialization strategies and policies. These strategies and policies should consider addressing market fragmentation within and across countries, as well as seek to meet economies of scale by establishing regional manufacturing clusters.
2. Prioritize Infrastructure Development
a. Africa lacks a reliable, comprehensive, integrated transportation network: African governments and regional economic communities should do more to prioritize infrastructure development and implement long-term coordinated infrastructure projects.
b. Financial institutions and bilateral partners can assist governments to structure infrastructure projects as blended finance models with Private Public Partnerships (PPPs), in which national governments assume less debt and share financing responsibility with external partners.
3. Reform Custom and Border Management
a. Governments should continue to push for and implement comprehensive customs and border reforms that aim to improve coordination amongst government agencies within RECs, as well as reduce bureaucratic procedures in order to improve the trans-border movement of goods and people: There is also an opportunity to introduce technology into the customs and border clearance processes, and to enhance law enforcement activities at borders in order to reduce delays and to minimize corruption.
4. Provide Financing and Incentives to Targeted Industries
a. African governments should consider creating an investor-friendly environment for new technology (e.g., logistics technology): Incentives could be provided to support startup entrepreneurs and encourage technology investment in rural communities.
b. African governments would do well to direct financing toward critical but currently neglected industries—such as agriculture—which could play a transformative development role, such as enhancing food security and providing employment.
5. Enhance Training and Skills Development
a. International partners and RECs should support trade agreements as well as requisite training for regional businesses, including on international accreditation processes and requirements for accessing global markets: This will enable companies to adapt to global standards and move up their respective value chains.
b. The private sector and venture capital community could play a key role in developing incubators to create startups, facilitate the scaling up of these startups, and help integrate Africa into the digital supply chain age.
The article was first published in March 2018 by the Wilson Center.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9454194903373718,
"language": "en",
"url": "https://timberry.com/7-financial-terms-every-entrepreneur-should-know/",
"token_count": 645,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.07763671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9888a1a3-a1cc-41bb-8417-d6514164d722>"
}
|
This is a rewrite of an older post, but it seems like a good one to repeat. You don’t have to be an accountant or an MBA to do a business plan, but you will be better off with a basic understanding of these seven essential financial terms. Otherwise, you’re doomed to either having somebody else develop and explain your numbers, or not having your numbers correct.
It isn’t that hard, and it’s worth knowing. If you are going to plan your business, you will want to plan your numbers. So there are these seven terms to learn. I’m not going to get into formal business or legal definitions, and I will use examples:
- Assets: cash, accounts receivable, inventory, land, buildings, vehicles, furniture, and other things the company owns are assets. Assets can usually be sold to somebody else. One definition is anything with monetary value that a business owns.
- Liabilities: debts, notes payable, accounts payable, amounts of money owed to be paid back.
- Capital (also called equity): ownership, stock, investment, retained earnings. Actually there’s an iron-clad and never-broken rule of accounting: Assets = Liabilities + Capital. That means you can subtract liabilities from assets to calculate capital.
- Sales: exchanging goods or services for money. Most people understand sales already, but the timing of sales is important. Technically, the sale happens when the goods or services are delivered, whether or not there is immediate payment, and regardless of how long ago you paid for what you’re selling.
- Cost of Sales (also called Cost of Goods Sold (COGS), Direct Costs, and Unit Costs): the raw materials and assembly costs, the cost of finished goods that are then resold, the direct cost of delivering the service. This is what the bookstore paid for the book you buy, it’s the gasoline and maintenance costs of a taxi ride, it’s the cost of printing and binding and royalties when a publisher sells a book to a store for resale. And timing is important for this one too: it gets into the books at the same time that the sale is made, regardless of when you bought it or paid for it.
- Expenses (usually called operating expenses): office rent, administrative and marketing and development payroll, telephone bills, Internet access, all those things a business pays for but doesn’t resell. Tax and interest are also expenses. And the timing is supposed to be when you are committed to the expense, regardless of when you pay for it.
- Profits (also called Income): Sales less cost of sales less expenses. Expenses in this case include depreciation, amortization, interest, and taxes. And if you don’t know what depreciation or amortization are, don’t sweat it; neither one of them belongs in my list of seven essential terms.
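The two formulas above, the accounting equation and the profit calculation, can be sketched in a few lines. All of the figures below are hypothetical, purely for illustration:

```python
# Illustrative check of the accounting equation and the profit formula.
# All figures are made up for the example.

# Assets = Liabilities + Capital, so Capital = Assets - Liabilities
assets = 250_000        # cash, receivables, inventory, equipment, etc.
liabilities = 150_000   # notes payable, accounts payable, other debts
capital = assets - liabilities
print(capital)          # 100000

# Profits = Sales - Cost of Sales - Expenses
sales = 500_000
cost_of_sales = 300_000        # direct costs of the goods/services sold
operating_expenses = 120_000   # rent, payroll, phone, internet, etc.
profit = sales - cost_of_sales - operating_expenses
print(profit)           # 80000
```

The point of the sketch is simply that both numbers fall out of subtraction once the underlying items are classified correctly.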
Sure, you can spend a lifetime analyzing and getting to know the ins and outs of it, but these are basics every business owner and entrepreneur should know. In my opinion.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8184041976928711,
"language": "en",
"url": "https://www.daytodaygk.com/banking-quiz-127/",
"token_count": 641,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.173828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:7cb9c7dc-4c48-47c4-a3fd-851f4fd2955c>"
}
|
Banking Quiz – 127
1. CER was established to prevent fraud in loan cases involving multiple lending from different banks on the same immovable property. CER means ?
a) Central Electronic Recognition
b) Central Ethics Registry
c) Central Electronic Registry
d) Central Election Registry
e) Central Entry Registry
2. The characteristic feature of a soft loan is ?
a) Short period and more rate of interest
b) Long period and more rate of interest
c) Long period, More rate of interest with grace period
d) Long period, Less rate of interest with grace periods
e) Long period, more rate of interest with no grace periods
3. “Loan Servicing” means ?
a) Lending the money
b) A mortgage bank or subservicing firm collects the timely payments of interest and principal from borrowers
c) Helping the customer to get loan in other banks by providing the details of the running account
d) Giving a loan if the customer has any deposit
e) Giving second loan after payment of first loan regularly
4. When any asset ceases to generate income for the bank, It is called ?
a) Official Asset
b) Nongood Asset
c) NonPerforming Asset
d) NonCommitment Asset
e) None of these
5. In a newspaper it is read that “Higher provisioning erodes public sector banks’ profit”. Here “provisioning” relates to ?
a) Daily Expenses
b) Cost to erect ATMs
c) Conducting exams to recruit new personnel
d) Bad Loans
e) Establish new branches
6. Who introduced the concept of Microfinance in Bangladesh in the form of the “Grameen Bank”. He is the Nobel laureate known by many as the “Father of Microfinance Systems” ?
a) C. D. Deshmukh
b) Amartya Sen
c) Muhammad Yunus
d) Sheik Haseena
e) Muzibaer Rahman
7. Loans to poor people by banks have many limitations, including lack of security and high operating costs. So, to help them, which type of finance system was developed ?
a) Ponzi Schemes
b) Micro Finance System
c) Money Laundering Schemes
d) Money Tampering Finance
e) Supervision Finance
8. In both the cases of RTGS and NEFT, the charges are decided by ?
a) Collecting banker
d) Paying Banker
e) None of these
9. Usually No Frill Accounts are ?
a) Savings Account
b) Corporate Account
c) Fixed Deposit
d) Kiddy Account
e) Salary Account
10. The Basic Savings Bank Deposit Account (BSBDA) scheme of RBI can be opened in which of the following banks ?
a) Public Sector Banks
b) Private Sector Banks
c) Foreign Banks Operating in India
d) Foreign Banks Operating in Foreign Countries
e) All of above
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9811370968818665,
"language": "en",
"url": "https://www.imconet.com/traditional-starts-involving-foreign-currency-and-even-often-the-current-business-banking-method/",
"token_count": 1065,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.008056640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b766d9f1-d168-4140-b022-4f1efa484fd6>"
}
|
What is Currency?
• It is a unit of monetary exchange that can be used in trade for goods and services. It is made up of the following major key elements.
• It acts as a circulating medium of exchange – an intermediary used in trade that avoids the inconvenience of a pure barter system.
• It is a unit of account – a standard monetary unit for measuring the value and cost of goods, services and assets.
• It is durable – which means it has a long useful life.
• It is divisible – which means it can be divided into smaller amounts.
• It is portable – which means it is easy to carry.
• It is fungible – meaning each unit is capable of mutual substitution, in that each unit is of equal value.
What is Money?
Money has all of the components above, but it also includes one critical additional element: it is a store of value. This means it is capable of being saved and then withdrawn when needed at a later date, and it is predictably useful once withdrawn.
Where did money originate from?
It all started with goldsmiths centuries ago. These were shopkeepers who melted gold and made gold coins. One challenge the goldsmiths of that period had to overcome was the safety of their gold stocks and coins. This later led to fortified rooms where gold stocks could safely be kept, and in time these rooms became known as vaults.
The goldsmith soon found he had a significant amount of extra space in his vault. He then started renting out space in the vault to others who wanted to keep their personal possessions safe. Soon there were many people lining up outside his shop to rent space in the vault to protect their valuables. Customers then began buying gold coins from the goldsmith, and he stored those in his vault. He would then issue the customer an IOU, or claim check, for the coins, which could be redeemed at any later date.
Soon these gold IOUs became accepted forms of payment for goods and services, as merchants knew they too could return these claim checks to the goldsmith for the corresponding amounts of gold held in his vault. As time passed, more customers rented space, yielding additional profits.
Where did money get its start?
This goldsmith was now equipped to offer out money against the gold held inside his vault. He / she would certainly next create a great IOU in return for a promise to pay fixed to get by the borrower. This goldsmith now merchant banker started noticing that many on the gold held inside the burial container was certainly not really withdrawn at any one time by the buyers. In fact he / she now realized it would likely be possible to mortgage out more IOU’s against the gold in the burial container.
Everything needed to get done was to compute what percentage would get necessary to have available for withdrawal in any offered time. Any excess can then be loaned out there. Now Tony Banks Dundee flipped merchant company was capable of making much larger gains by his once basic goldsmith and vault rentals company. Now turned in to a new lender loaning, burial container rental business. This was how our modern-day savings system was born. The ultra-modern banking system, from which will this is depicted, will be known as typically the fragmentary; sectional banking system.
This product may work fine, as very long as this vault is capable of storing yellow metal. Then the bank would be authorized to continuously produce loans against a practical bank’s holdings. The downside to that process however, can be if the customers request to take all associated with their loge from typically the lender, all at the same time. This is usually referred to as a new “run on the bank” or perhaps a new bank run. Will need to this happen, this bank will be out there of company. This is well known as the bankers worst headache.
A good bank loan requires the loan amount to be similar to the level of typically the deposit. Nevertheless throughout fractional banking as well as fragmentary; sectional source banking it’s an totally different banking practice. Along with fractional reserve lending often the bank only need preserve a small portion of deposits in preserve, in-case of withdrawal requests. The remaining build up can then be designed into checkbook money when simultaneously maintaining the responsibilities to redeem all deposits upon demand. You will have got ten IOUs credited out for each 1 precious metal coin, held in reserves.
Fractional reserve savings became legitimate in 19th century The uk. It has been legitimate and in common practice throughout the United States with regard to decades. The proportion of required bank stores to turn out to be withheld used to be 10 %. However today, recommended hold amounts will usually run at zero.
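The “ten IOUs for each gold coin” idea corresponds to the textbook money-multiplier calculation. The sketch below is a simplified illustration with made-up numbers, not a model of any real bank:

```python
# Simplified fractional-reserve sketch: how a deposit can support a much
# larger amount of checkbook money when only a fraction must be held in
# reserve. Figures are illustrative.

def total_money_supported(initial_deposit, reserve_ratio):
    """Textbook money multiplier: deposit / reserve ratio."""
    if reserve_ratio <= 0:
        # At a 0% requirement the simple formula is unbounded.
        raise ValueError("multiplier is unbounded at a zero reserve ratio")
    return initial_deposit / reserve_ratio

deposit = 1_000   # one customer's gold deposit
ratio = 0.10      # the historical 10 percent reserve requirement
print(total_money_supported(deposit, ratio))  # 10000.0 -> ten IOUs per coin
```

Note that at the zero reserve requirements mentioned above, this simple formula breaks down entirely, which is why the sketch raises an error rather than returning a number.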
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9658293724060059,
"language": "en",
"url": "https://www.tenants-rights.org/about/principles-of-affordability/",
"token_count": 616,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0069580078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3df18d10-96a9-4fe9-b3d3-72606d06cc21>"
}
|
What is affordable housing? Perhaps it is having access to many options regarding location, size, nearness to schools and other amenities, cost, etc. When we look at affordability, we first need to ask the question: affordable to whom?
In Mayor Daley’s set-aside proposal, housing is considered affordable if a person making 100% of the area median income can pay the rent or mortgage. This means affordable to a family of 4 making $77,000/year. This translates to $1900/month in housing payments being the proposed definition for “affordable” in the city of Chicago.
The Federal definition of affordability is for a household to pay no more than 30% of its annual income on housing. In MTO’s experience, this standard remains too high for many families living in poverty.
The Chicago Community Congress of Tenants has developed the following principles to create a realistic and sustainable definition of affordable housing.
- We believe that affordable housing programs need to be targeted or focused on families whose primary wage earner makes minimum wage or to household whose primary source of income is Social Security or other fixed incomes or to individuals who have no steady job or those that become ill or lose their job.
- We believe that affordability needs to be based on a sliding scale. Households making only $500/month should not pay more than 15% of their income on housing.
- Housing costs should be calculated on take home pay and any medical expenses should be deducted from that.
- Housing size needs to be based on family need. It is unacceptable for a family of four to live in a one bedroom apartment simply because that is all they can afford.
- Tenants should have a choice about location and not limited to certain areas of the City or County. Every community, neighborhood, and suburb needs to have affordable housing. At a minimum, communities should have to define affordable housing as the lower of the area median or city median.
- Affordable housing needs to be stable. Tenants should have access to long-term leases that include a ceiling on rent increases and not be faced with eviction because their housing is being converted to condos or ever-increasing rents.
- Affordable housing must meet stringent housing quality standards. Housing must be more than just a shelter from the rain. Affordable housing should be a place families can gladly call home.
- It must be something that people living on a fixed income can pay. We believe that affordable needs to be based on the income of the individual and area mean income of families.
- Affordable housing residents should have the same rights, responsibilities, and respect as other housing residents. No one should have to jump through any additional hoops just to live in housing that is affordable.
- Additional fees need to be reasonable and there should be no additional fees just for the privilege of living in affordable housing.
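The sliding scale in principle 2 can be sketched as a simple calculation. Only the 15% figure (for a $500/month household) and the 30% federal standard come from the text; the intermediate bracket below is a hypothetical placeholder:

```python
# Hedged sketch of a sliding-scale affordability cap. The 15% and 30%
# shares come from the principles above; the $2,000 intermediate bracket
# is invented purely for illustration.

def max_affordable_rent(monthly_income):
    if monthly_income <= 500:
        share = 0.15   # Congress of Tenants principle for very low incomes
    elif monthly_income <= 2000:
        share = 0.20   # hypothetical intermediate bracket
    else:
        share = 0.30   # federal affordability standard
    return monthly_income * share

print(max_affordable_rent(500))   # 75.0
print(max_affordable_rent(3000))  # 900.0
```

The design point is simply that the share of income paid rises with income, instead of applying one flat 30% ceiling to every household.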
We believe that housing is a human right. It should be the priority of all governmental agencies to ensure that everyone has a decent, safe, accessible place to live.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9651739597320557,
"language": "en",
"url": "https://coorsleadership.com/slow-much-needed-change-coming-to-the-healthcare-system/",
"token_count": 466,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.076171875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:82f005dc-a783-4202-9904-935cbcf58a91>"
}
|
In the next ten years, the United States healthcare system is going to need to adapt, learning how to provide care to more people for less money. In its current state the healthcare industry is a mess, thanks to being overly regulated, highly divided, and excessively complex. While there have been attempts to increase coordination between healthcare providers and improve integration of care, nothing has worked thus far. Some feel we have reached a tipping point, however: a sink-or-swim moment for the industry, thanks to the continued increase in costs with no real increase in the quality of services provided.
Healthcare providers must become more efficient in order to stay ahead in the upcoming years. One fix that will help cut down on costs is to limit unnecessary medical tests. In a study conducted last year, the Mount Sinai Medical Center and the Weill Cornell Medical College in New York found that about $6.8 billion is spent yearly on 12 unnecessarily overused tests and treatments. Another area that needs massive improvement is the coordination of treatments with other providers, and offering cost-effective care in areas that need it. The push to make all medical files electronic and easier to access between medical facilities will help to both speed up and improve the care that doctors, surgeons and nurses are able to provide their patients.
Providers must move away from operating in a closed-door manner and start to form collaborative healthcare networks. This should not only help cut down on costs, but also improve turnaround times and patients’ overall health. Another option that could help fix the overprescribing of tests and treatments would be bundling services together. Cutting down on hospital readmissions and reducing reimbursement rates also need to be implemented in order to obtain a more efficient system of healthcare provision.
The needed overhaul of our current system can no longer be put off or avoided. With an aging population, fragmented care and a lethargic economy all weighing down hard on our current healthcare system’s back, the time for change is now. Until the recent economic collapse, providers had been operating at optimal levels individually, but that will no longer work. The increase in patients needing care and the reduction in funds to which hospitals have access mean that hospital systems must come together. With all the healthcare changes that could be made, the real winners at the end of the day will be the patients.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9383337497711182,
"language": "en",
"url": "https://logisticsviewpoints.com/2018/10/29/digital-twins-support-supply-chain-optimization/",
"token_count": 823,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0240478515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:4754187a-3bc3-4dbb-8306-fba14f949de3>"
}
|
Digital twin is the phrase used to describe a computerized (or digital) version of a physical asset or process. The digital twin contains a sensor or sensors that collects data to feed the asset or process model. In short, the digital twin concept combines the ideas of modeling and the Internet of Things (IoT).
The digital twin concept has most often been applied to assets. A piece of machinery generates data on vibrations, heat, pressure, and other things as well. That data is used to predict a machine’s failure and to apply preventative maintenance to make sure that an unplanned failure does not occur.
In terms of supply chain planning, factory machinery is the key area where an asset’s failure leads to increased manufacturing costs and service failures for customers. The supply chain model does not look to predict asset failures, but it does seek to use the digital twin maintenance model’s inputs to improve factory scheduling. The asset model relies on machine learning to improve the forecasting of machine downtime over time.
AspenTech is an example of one supplier of supply chain planning solutions that seeks to use these inputs. AspenTech also sells asset maintenance solutions. Aspen Mtell is a low-touch machine learning solution they say can accurately forecast a hyper compressor failure in a low-density polyethylene (LDPE) process. In many industries, scheduling around machinery maintenance would not be all that difficult. You have the maintenance crew fix the machine at night, and the schedule proceeds as planned.
The chemicals industry is different. It is hard. A small number of raw materials can be transformed into hundreds of thousands of final products. The manufacturer does not just produce products, but also coproducts and byproducts. These byproducts can be sold to other companies or used internally in the production of other final products. Optimal production proceeds by monitoring not just the physical process, but the chemical properties of the materials being produced. There are production wheels that contain rules about the sequences in which chemical grades can be produced and the constraints that must be respected. There are expensive, heavy and complex manufacturing assets that can cover the full spectrum of production operations: continuous, semi-continuous or batch. Shutting down and then restarting the process is expensive, time consuming (think days, not hours), and has environmental, health, and safety implications.
A hyper compressor’s job is to build up pressure that is needed in the conversion process. Compressors may be called upon to apply up to 50,000 pounds of pressure per square inch to the process. That puts a lot of strain on the machinery. These compressors typically go down many times a year. The ability to mitigate this problem is worth millions of dollars to chemical companies.
As one example, Aspen Mtell provides more than 25 days of advance warning of a central valve failure. For example, on January 3rd, Mtell might tell a planner that an asset failure is likely on or shortly before January 19th. This can allow for scheduling less expensive maintenance downtime rather than reacting to unplanned downtime. Aspen Plant Scheduler can then be used to schedule the planned downtime options. The schedule optimizer can trade off customer commitments, inventory holding costs and manufacturing costs. A chemicals manufacturer will have more options if they have more than one reactor that can provide the desired chemical grades. Nevertheless, there will be different costs associated with the production options. In short, detailed scheduling is a complex optimization problem involving demand priorities, manufacturing economics, and production sequencing constraints.
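The handoff between a failure prediction and the scheduler can be sketched in a few lines. This is not AspenTech's algorithm; it is only a toy illustration of the underlying idea, choosing the cheapest maintenance window that finishes before the predicted failure date (all dates and costs are hypothetical):

```python
# Toy illustration (not Aspen Plant Scheduler): pick the cheapest feasible
# maintenance window given a digital-twin failure prediction.
from datetime import date

predicted_failure = date(2024, 1, 19)   # Mtell-style advance warning

# Candidate windows: (start date, cost of lost production + maintenance)
windows = [
    (date(2024, 1, 8), 120_000),   # early, but expensive in lost output
    (date(2024, 1, 15), 80_000),   # cheapest feasible option
    (date(2024, 1, 22), 50_000),   # cheapest overall, but after the failure
]

# Keep only windows that occur before the predicted failure, then take
# the lowest-cost one.
feasible = [(d, c) for d, c in windows if d < predicted_failure]
best_date, best_cost = min(feasible, key=lambda w: w[1])
print(best_date, best_cost)  # 2024-01-15 80000
```

A real scheduler optimizes over far more (customer commitments, inventory holding costs, grade sequencing constraints), but the shape of the decision, filter by the prediction and then minimize cost, is the same.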
The AspenTech offering is the first example of a solution that includes optimized production scheduling based on an integrated digital twin maintenance model that I have seen. In many industries, this solution would be overkill. Not here. Other asset-intensive industries, like power and metals & mining could similarly obtain significant value from optimizing maintenance across the supply chain. Critical, costly equipment is key to environmental, health and safety in these industries. And in heavy process industries with complex turnarounds, these solutions can save companies millions of dollars a year.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9638625979423523,
"language": "en",
"url": "https://mhpsalud.org/chws-prescription-drug-costs/",
"token_count": 249,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:70343dda-1f54-4ebc-9276-281e813d8f32>"
}
|
Maria* won’t get paid for another week and money is tight. She needs her insulin to treat her diabetes, but the electricity bill is due and the family needs groceries. She decides to go without her insulin until she has enough money. Weeks pass and Maria ends up in the hospital with complications from her diabetes. Her doctor is concerned with managing her diabetes, but Maria can only focus on one thing: How will she pay for the hospital bill?
Financial insecurity affects millions of Americans. Aside from stress and anxiety, there can be harmful physical health outcomes as well.
For instance, it’s estimated that 1 out of 3 Americans have delayed filling a prescription because of cost (33.5%).1 Those affected by chronic conditions, such as diabetes, may be disproportionately impacted by costs. One reason for this is that the cost of insulin has skyrocketed, with prices rising over 69% in the past five years.2 This is cause for concern because all type 1 diabetics and some type 2 diabetics rely on insulin to control their blood sugar and prevent life-threatening complications.3
When patients have difficulty accessing their medicine, it can turn into a vicious cycle that creates mounting bills and complex medical problems.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9344136118888855,
"language": "en",
"url": "https://singaporeaccounting.com/financial-statement-analysis/",
"token_count": 373,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0272216796875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:99b8dde8-41fb-4c31-8af2-adbf48b94341>"
}
|
Comparative financial statement analysis involves examination of financial statements for a single company for 2 or more accounting periods (years, quarters, or months) and noting the change in both amount and percentage between periods. The financial statement analysis method is used by investors, creditors and management to evaluate the past, current and projected conditions and performance of the firm. This form of analysis provides evidence of significant changes in individual accounts and can give a user valuable insight into items that should be further investigated. We help to provide consulting and accounting services to SMEs, startups and MNCs.
Ratio analysis is the most common form of financial analysis. It provides relative measures of the firm’s conditions and performance. Horizontal analysis and vertical analysis are also popular forms. When comparing the financial ratios, a financial analyst makes two types of comparisons: Trend analysis and Industry comparison.
Trend analysis is a form of comparative analysis, but instead of examining the entire balance sheet and income statement for 2 years, this form of analysis involves examination of selected financial statement information over longer periods of time (usually at least 5 years and as much as 10 to 20 years). Trend analysis is performed by selecting a base year and assigning a value of 100% to the amount of the selected financial statement item or items. Each successive year is then compared to the base year on a percentage basis. Analysis of the actual sales, gross profit, and income data indicates continuing growth in sales and income. Trend analysis, however, shows a different picture. Income is continuing to increase, but at a slower percentage rate than sales. This, combined with the indication that gross profit is increasing faster than sales, may indicate management is doing a good job of continuing to reduce material and direct labour costs, but is not controlling administrative or overhead expenses. This indicates a need for further investigation as to the cause of this trend.
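A base-year trend index of the kind described can be computed in a few lines. The sales and income figures below are hypothetical, chosen only to illustrate income growing more slowly than sales:

```python
def trend_index(values, base_index=0):
    """Express each period's figure as a percentage of the base year (base = 100%)."""
    base = values[base_index]
    return [round(v / base * 100, 1) for v in values]

# Hypothetical five-year sales and net income figures (in $000s)
sales = [500, 550, 620, 700, 800]
income = [50, 54, 60, 66, 72]

print(trend_index(sales))   # sales end at 160% of the base year
print(trend_index(income))  # income ends at only 144% of the base year
```

Comparing the two index series side by side is what reveals the divergence the article describes: both grow, but income lags sales on a percentage basis.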
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9434617757797241,
"language": "en",
"url": "https://www.civilsdaily.com/planning-commission/",
"token_count": 745,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0673828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e93d0c66-10b5-476a-a950-6547d6efcb02>"
}
|
The Planning Commission
The Planning Commission was set up on the 15th of March 1950, through a cabinet resolution.
The Planning Commission evolved over time from a highly centralised planning system towards indicative planning, under which it concerned itself with building a long-term strategic vision of the future and deciding on priorities for the nation.
The commission works out sectoral targets and provides promotional stimulus to the economy (through its “plan funds allocation”) to grow in the desired direction.
Planning Commission attempted to play a system change role and provided consultancy within the Government for developing better systems. In order to spread the gains of experience more widely, Planning Commission also played an information dissemination role.
Thus, historically, Planning Commission’s work was three dimensional.
(a) design policy direction and suggest required schemes/ programmes;
(b) influence the resource allocation from budget; and
(c) oversee the performance and record the same on a standard framework for comparative assessment of all the states from time to time.
In short, Planning Commission was doing the job both that of a think tank and the function of allocation of plan resources among the Central Ministries and States in as judicious a manner as possible, given the limitations of resources.
The announcement on the setting up of the Planning Commission and its expected role in economic management was first made in Parliament by the President, and the details were disclosed by the Finance Minister (Shri John Mathai) through his budget speech in the first year of the Republic (1950-51).
Rightly, Planning Commission was anchored to India’s political history of immediate past and the Directive Principles of State Policy as enunciated in the Constitution of India.
Functions of Planning Commission
The 1950 resolution setting up the Planning Commission outlined its functions as to:
- Make an assessment of the material, capital and human resources of the country, including technical personnel, and investigate the possibilities of augmenting such of these resources as are found to be deficient in relation to the nation’s requirement;
- Formulate a Plan for the most effective and balanced utilisation of country’s resources;
- On a determination of priorities, define the stages in which the Plan should be carried out and propose the allocation of resources for the due completion of each stage;
- Indicate the factors which are tending to retard economic development, and determine the conditions which, in view of the current social and political situation, should be established for the successful execution of the Plan;
- Determine the nature of the machinery which will be necessary for securing the successful implementation of each stage of the Plan in all its aspects;
- Appraise from time to time the progress achieved in the execution of each stage of the Plan and recommend the adjustments of policy and measures that such appraisal may show to be necessary; and
- Make such interim or ancillary recommendations as appear to it to be appropriate either for facilitating the discharge of the duties assigned to it, or on a consideration of prevailing economic conditions, current policies, measures and development programmes or on an examination of such specific problems as may be referred to it for advice by Central or State Governments.
The Planning Commission was replaced with NITI Aayog on 1 January 2015. However, financial powers such as setting sectoral priorities, designing schemes and programmes, estimating the entitlements to State development programmes (other than devolution), and influencing the annual allocations as per the priorities now come under the direct influence of the Ministry of Finance's Budget Division.
Doctoral Scholar in Economics & Senior Research Fellow, CDS, Jawaharlal Nehru University
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9398897886276245,
"language": "en",
"url": "https://www.degreequery.com/what-kind-of-job-can-you-get-with-a-degree-in-math/",
"token_count": 1013,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0159912109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:aa6937a1-481a-47af-ba44-2e7c1fd17a46>"
}
|
Having a love for math might not be reason enough to major in the subject, but that combined with the excellent job prospects for math majors is. Earning your college degree in mathematics can prepare you for many different, and often high-paying, careers. Whether you want to work in business and finance, federal government roles, academia or manufacturing, there are many public and private sector options to consider with a background in math. Some of the job titles you might consider with a math degree include mathematician, actuary, operations research analyst and math teacher.
A mathematician uses mathematical concepts and techniques to analyze numerical information and develop solutions to real-life problems, the United States Bureau of Labor Statistics (BLS) reported. Together with the related occupation of statistician, this career path can include everything from working to discover new mathematical rules to collecting and interpreting quantitative data. Mathematicians work in a number of different industries. More than one-third of mathematicians work for the federal government, compiling and analyzing data related to unemployment, environmental issues, public health matters and other serious problems. About 17 percent of mathematicians work in research and development, designing experiments and interpreting consumer data to aid in developing, testing and marketing new consumer products. Another 16 percent of mathematicians work in academia, researching theoretical math in college and university settings. Industries like finance and insurance, business consulting, healthcare and engineering, too, hire mathematicians.
In general, mathematician is a profitable and rapidly growing career. The median wage for mathematicians is $103,010 annually, the BLS reported. The salary range in this occupation is large. Median wages are near or in the $120,000 range for the top-paying industries, like management consulting and research and development, while the median salary is as low as $56,320 for mathematicians working in academia. Over a decade, the BLS expects job opportunities for mathematicians to increase at a faster-than-average rate of 30 percent.
Mathematician might be the most straightforward career path for math majors, but it is also the smallest math occupation. Just 3,100 mathematicians are working across the United States, the BLS reported.
An actuary uses mathematical and analytical approaches to analyze data, as well. However, actuaries are primarily concerned with calculating risk and, more specifically, the financial costs of risk. Around 70 percent of actuaries work in the field of finance and insurance, using computer software and their math and business knowledge to determine what insurance premiums should be or how investments should be made, according to the BLS.
The median wage for actuaries is $101,560 per year. While actuaries don’t need a master’s degree, as many mathematicians do, they must spend years attaining professional certification. Actuary is another math occupation that is seeing rapid rates of growth. The BLS expects jobs for actuaries to increase by 22 percent over a decade.
A math degree is only one possible educational path for aspiring actuaries. Majoring in statistics or a specialized program in actuarial science can also prepare students for this career.
IMAGE SOURCE: Pixabay, public domain
Operations Research Analyst
If you have an interest in the business applications of mathematics but you want to be involved in more facets of the organization’s operations besides analyzing risks, you might consider a career as an operations research analyst. These math professionals analyze data, but they take into account the many different aspects of the business, from the allocation of resources to shipping practices, the BLS reported.
The overall median wage for operations research analysts is $81,390, but among the five percent working for the federal government, the median salary is $111,570. The BLS predicts a 27 percent rate of job growth in this occupation, which means that jobs for operations research analysts are increasing at a much faster than average rate.
As the largest of the math occupations, operations research analyst already accounts for 114,000 jobs in America, and the BLS expects another 31,300 jobs to be added over a decade.
If math is your favorite subject but you don’t see yourself in a research or analytical career, you might want to work in education. Math teachers serve a crucial and rewarding purpose, educating the next generation in a subject that develops their analytical, quantitative reasoning, critical-thinking and problem-solving skills.
In addition to studying math at the college level, you will need some formal education in teaching to become a licensed math teacher. Some students meet this requirement by earning a bachelor’s degree in math education, rather than general mathematics. Other math teachers start out with a math degree but complete graduate coursework to earn alternate route certification in teaching, the BLS reported.
Salaries for educators vary depending on the grade level they teach. The median wages for educators are $59,170 for high school teachers, $57,720 for middle school teachers and $57,160 for elementary school teachers.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9711440205574036,
"language": "en",
"url": "https://www.rahamuseo.fi/en/exhibitions/previous-exhibitions/Gold-the-basis-of-a-monetary-system/",
"token_count": 144,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0859375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d849cad1-6ea0-47bd-83ac-5a34f313933a>"
}
|
Gold – the basis of a monetary system
The Museum's sixth seasonal exhibition was about gold as the basis of a monetary system. The exhibition illustrated how gold has been used as a means of exchange and highlighted monetary systems based on gold both in Finland and internationally – without forgetting Lapp gold or the wartime gold collections. Visitors to this exhibition could view Aleksi, the largest gold nugget ever panned in Finland. It was discovered in Laanila, Inari, in Lapland in 1910.
The exhibition ran from 8 April to 2 November 2008.
Bank of Finland's gold reserves. Each bar weighs approximately 12 kg. Photo: Jaakko Koskentola
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9128504991531372,
"language": "en",
"url": "https://www.uni-heidelberg.de/en/newsroom/ambitious-climate-action-pays-off",
"token_count": 882,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.150390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8cb6c30a-cd4b-4556-8e4b-1c2824d1ad6f>"
}
|
ResearchAmbitious climate action pays off
14 July 2020
New study confirms that UN climate targets make economic sense
Action to mitigate climate change costs money – but damage caused by climate change also entails financial burdens, particularly for future generations. So how much climate mitigation makes economic sense? To find out, an international team of researchers led by the Potsdam Institute for Climate Impact Research has fed an earlier computer simulation with current data and findings from climate science and economics. The research, in which environmental economists from Heidelberg University participated, shows that limiting global warming to under two degrees – as agreed at the 2015 UN climate conference in Paris – produces an economically optimal balance between future damage caused by climate change and present mitigation costs. That would require a CO2 price of over USD 100 per ton.
In 2018 Prof. Dr William Nordhaus from Yale University (USA) won the Nobel Prize in Economics for integrating climate change into long-run macroeconomic analysis. Specifically, the US scientist earned it with the aid of a computer simulation, the Dynamic Integrated Climate-Economy (DICE) model. The DICE model has now been updated based on recent research results from climate science and economic analysis. The updates include, among other things, an improved carbon cycle model, a recalibrated temperature model and an adjustment of the damage function, which judges how strongly future climate change will impact the global economy. Normative assumptions of the recalibrated DICE model specifically concern the question of how to fairly distribute wealth between present and future generations, taking account of climate change. The updating of what is termed the social discount rate derives from a broad range of expert recommendations on intergenerational justice. Further additions relate, for example, to technologies for negative emissions, with which CO2 can be removed from the atmosphere, or the feasible speed of the transition from a carbon-based economy.
The social discount rate plays a key role in the updated DICE model. This economic concept indicates how we rate the well-being of our children and grandchildren, as compared to our own well-being. The climate-related impacts of current emissions will have long-term effects. In order to be able to assess them appropriately, different views on how to strike a balance between the interests of current and future generations have to be taken into account. For the first time, the study contains a representative selection of recommendations from over 170 experts. The target of remaining under the two degrees limit is the economic optimum according to the social discount rates proposed by the majority of experts. In this context, Dr Frikk Nesje from the Alfred Weber Institute for Economics of Heidelberg University explored how to best mediate among different expert opinions on the social discount rate for policy purposes.
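The leverage of the social discount rate can be illustrated with a simple present-value calculation. This is not the DICE model itself, and the damage figure and rates below are hypothetical; the point is only how strongly the chosen rate changes today's valuation of far-future damage:

```python
def present_value(future_damage, annual_rate, years):
    """Discount a damage occurring `years` from now back to today's dollars."""
    return future_damage / (1 + annual_rate) ** years

damage = 1_000_000_000  # hypothetical $1bn of climate damage 100 years from now
for rate in (0.01, 0.03, 0.05):
    pv = present_value(damage, rate, 100)
    print(f"discount rate {rate:.0%}: present value ≈ ${pv:,.0f}")
```

Even this toy calculation shows why the parameter is contested: moving the rate from 1% to 5% shrinks the present value of the same future damage by roughly a factor of fifty, which directly changes how much mitigation spending appears "worth it" today.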
The changes in the model, particularly the reassessment of the social discount rate in favour of the well-being of future generations, have also affected carbon pricing. While William Nordhaus' standard DICE model results in USD 40 per ton of CO2 in 2020, the fully recalibrated model calculates a CO2 price of over USD 100. This would, with few exceptions, be higher than the carbon prices most economic sectors actually face, even in the most ambitious regions in the world, economist Dr Nesje emphasises. For example, a CO2 price of over USD 100 is about three times higher than the European Union emissions price. The study thus calls for more stringent climate policies to avoid leaving an unjustifiably high burden of climate impacts to coming generations.
Participating in the study “Climate Economics Support for the UN Climate Targets” were, besides researchers from the Potsdam Institute for Climate Impact Research and Heidelberg University, researchers from the University of Hamburg, the University of Gothenburg and Chalmers University of Technology in Gothenburg (Sweden) along with London School of Economics and Political Science and the University of York (UK). The findings were published in “Nature Climate Change”.
M.C. Hänsel, M.A. Drupp, D.J.A. Johansson, F. Nesje, C. Azar, M.C. Freeman, B. Groom, T. Sterner: Climate Economics Support for the UN Climate Targets. Nature Climate Change (published online 13 July 2020).
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9467528462409973,
"language": "en",
"url": "https://kalkinemedia.com/au/video/cryptocurrency-for-beginners",
"token_count": 341,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1201171875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5d78ea9b-54fa-4f47-a4af-ad73f0371bbb>"
}
|
It is very important to understand which market or which instrument you are investing your money in before making that final investment decision.
All the buzz around the bitcoin, luring tweets from the biggies, and the rapid rise in bitcoin prices must have caught your attention.
But, if you are new to the cryptocurrency market, it is vital to understand the basics of it and how it functions before you make that first trade.
Deriving the meaning from the name itself cryptocurrency is a combination of cryptography and currency.
Cryptography is simply the process of converting plain text to text which cannot be easily understood or vice-versa. By cryptography, data can be stored or transmitted as an encrypted message in which letters are replaced with characters.
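The letter-replacement idea can be illustrated with a toy Caesar cipher. Note this is purely for intuition: real cryptocurrencies rely on far stronger tools such as cryptographic hash functions and digital signatures, not simple substitution:

```python
def caesar(text, shift):
    """Toy substitution cipher: shift each letter by a fixed number of places."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation unchanged
    return ''.join(out)

secret = caesar("send one coin", 3)
print(secret)               # "vhqg rqh frlq" — unreadable without the shift
print(caesar(secret, -3))   # shifting back recovers "send one coin"
```

The same round-trip idea — scramble with a secret, unscramble with the matching secret — is the core of the "encrypted message" description above, just implemented with vastly stronger mathematics in practice.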
Similarly, cryptocurrency is a digital form of cash. It is an internet-based currency that uses decentralised technology to make secure payments. It is a form of payment that can be exchanged online to purchase goods and services.
So, if you want to send money to someone who is in a different geography, you can easily do that without any intermediary or a financial institution through crypto.
Let us go back to the history of cryptocurrency and what led to the evolution of it?
The first cryptocurrency, Bitcoin, was launched in 2009 by the pseudonymous Satoshi Nakamoto. Bitcoin was built on a revolutionary technology called blockchain, whose foundations were laid by cryptographers W. Scott Stornetta and Stuart Haber.
Moving on from the history to the kinds of cryptocurrency, we have 4 most common cryptocurrencies in the market namely Bitcoin, Ethereum, Ripple and Litecoin.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9364591836929321,
"language": "en",
"url": "https://www.eightcap.com/th/education/fundamentals/what-is-forex/",
"token_count": 2160,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.25390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:cd455b27-af11-475e-a1bd-d86138aaad00>"
}
|
What is Forex?
Forex, or FX, is short for foreign exchange – the currency belonging to a particular country, and the market in which currencies are traded. In trading, forex relates to the buying and selling of these currencies in order to make a financial profit when their values change. The forex market is the world's most liquid market, with daily trading volumes exceeding five trillion dollars, which is why it's so attractive to investors. It is the world's most liquid market largely because of the constant need to exchange currencies to buy goods and services overseas.
A simple example about Forex
A simple example here would be the exchange of currencies for leisurely holidays. A person from the United States travelling to Italy cannot pay for hotels or food in USD. They must exchange their money for Euros because that is the local currency accepted in Italy. On a broader scale, we can think of a country’s imports and exports to other nations and the need to exchange currencies to a local unit.
Forex, an over-the-counter (OTC) decentralised market
Forex is an over-the-counter (OTC) decentralised market, which means it doesn’t run from a central exchange. Instead, foreign exchange rates and prices are set by supply and demand within the market itself. The main participants in the Forex market include international banks, corporations, governments and central banks, institutional investors and retail (individual) investors.
Forex trading vs Stock Trading
Unlike stock trading, OTC markets do not run from a central exchange. Forex rates are set by traders engaging in the market, determining the price of a currency through buying and selling (supply and demand). When trading forex, you are trading price movements of the underlying asset whereas stock trading involves buying shares in a company and therefore a share in ownership.
Forex holds more liquidity than stock trading
As mentioned, the forex market holds more liquidity than stock trading, exceeding global equities 25 times over. Liquidity is important because it generally equates to tighter spreads, lower transactions costs and overall easier trading. Typically, stock trading is suited to long-term investors looking to hold a stock and earn dividends, while short-term investors including day-traders and scalpers are more suited to the Forex market.
History of Forex
The modern forex market took shape after World War II, when the Bretton Woods agreement sought to stabilise global economies by pegging currencies to the US dollar, which was in turn convertible to gold. By 1971, forex had evolved into a free-floating market where exchange rates were determined by supply and demand. At the time, the forex market was mostly traded by banks and hedge funds. As technology advanced, forex trading moved online, becoming easily accessible to brokers and retail traders via the internet.
Benefits of trading Forex
Longer trading hours
One benefit of trading forex is longer trading hours compared to individual stock exchanges. The Forex market is open 24 hours a day, 5 days a week, moving through four main inter-bank sessions; Sydney, Tokyo, London and New York. The London session is the largest inter-bank session. Longer trading hours makes forex more accessible and attractive to traders across different time zones.
Use of margin and leverage
Another benefit of trading forex is the use of margin and leverage; a concept that allows investors to trade on small deposits with exposure to larger amounts of money. For example, a trader might only have $1,000 in their trading account but could have access to buy/sell up to $50,000 on a 50:1 leverage.
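The 50:1 example above can be expressed as a short calculation (illustrative only — actual leverage ratios and margin requirements vary by broker and instrument):

```python
def required_margin(position_size, leverage):
    """Deposit needed to control a position at a given leverage ratio."""
    return position_size / leverage

def position_limit(deposit, leverage):
    """Largest position a deposit can control at a given leverage ratio."""
    return deposit * leverage

print(position_limit(1_000, 50))    # $1,000 at 50:1 controls up to $50,000
print(required_margin(50_000, 50))  # ...and a $50,000 position needs $1,000 of margin
```

The symmetry is the point: leverage multiplies buying power, but it equally multiplies the size of any loss relative to the deposit, which is why the risk section below matters.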
Allows traders to speculate on price movements
Forex also allows traders to speculate on price movements, whether the currency pair moves higher or lower. If the currency pair moves in your favour, you will make a profit. Unlike the stock market, making a profit in forex doesn’t necessarily mean the price has to increase in value.
Risks of trading Forex
No financial investment is without its risks, and forex is no stranger to losses. A 24-hour market means prices are always moving, and a flash crash can easily leave traders with a debt larger than their initial deposit. The use of margin and leverage can also be risky when trading large amounts of money on a small margin. A lack of risk management or a move in the wrong direction can result in margin calls, where traders must deposit additional funds or have their positions closed at a loss. Trading forex has been described by some as gambling due to buying and selling on speculation.
The difference between Spot Forex and Forward Forex
Forex spot price
There are two types of forex contracts: a spot price contract and a forward price contract. A spot price is an immediate or current rate available to a buyer. If a transaction has been made and funds are required immediately, the buyer has no choice but to pay the spot price. This could apply to property purchases or deposits required on purchases overseas. The standard delivery date for a spot transaction is two business days. The spot rate is generally quoted in the retail market and used by travellers wishing to exchange currencies at their bank or a foreign exchange company, as seen at airports.
Forex forward price / rate
A forward rate is a contract to buy or sell foreign currency on a specified future date, at a future price. This contract is binding between the two parties involved, regardless of the spot rate. It can also be used as a hedging technique alongside risk management, if you believe the rate will improve or decline by the forward date. Essentially a forward rate is used to quote a financial transaction that takes place in the future.
Both spot price contracts and forward price contracts can be executed through international banking facilities.
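The article does not spell out how a forward price is set, but a standard pricing convention in the interbank market is covered interest rate parity: the forward rate adjusts the spot rate for the interest-rate differential between the two currencies. The rates below are hypothetical, used only to sketch the mechanics:

```python
def forward_rate(spot, quote_ccy_rate, base_ccy_rate, years=1.0):
    """Forward FX rate under covered interest rate parity (simple interest).

    `quote_ccy_rate` is the interest rate of the quote currency (e.g. USD in
    AUD/USD); `base_ccy_rate` is the base currency's rate (e.g. AUD).
    """
    return spot * (1 + quote_ccy_rate * years) / (1 + base_ccy_rate * years)

# Hypothetical: AUD/USD spot 0.7260, USD rate 2%, AUD rate 1%, one year forward
print(round(forward_rate(0.7260, 0.02, 0.01), 4))
```

Intuitively, the currency with the lower interest rate trades at a forward premium: holding it costs you forgone interest, and the forward price compensates for that.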
Currency pairs (major/minor/exotics)
The foreign exchange market allows traders and investors to buy and sell currencies and those currencies are quoted as pairs.
Take the AUD/USD pair for example. The value of the Australian dollar is being quoted against the United States dollar. If the AUD/USD rate is 0.726, then the Australian dollar is worth 72.6 U.S. cents. If you open a ‘buy position’ on the pair, then you are buying the Australian dollar while selling the United States dollar. This position would be opened on the view the Australian dollar would increase in value.
Major currency pairs
There are more than 40 currency pairs that can be traded, with six ‘major’ pairs. Forex pairs can be traded on time frames ranging from just seconds to months. The six major pairs are:
- EUR/USD (Euro vs US Dollar)
- USD/JPY (US Dollar vs Japanese Yen)
- GBP/USD (British Pound vs US Dollar)
- USD/CHF (US Dollar vs Swiss Franc)
- AUD/USD (Australian Dollar vs US Dollar)
- USD/CAD (US Dollar vs Canadian Dollar)
Minor currency pairs
All six major pairs can be traded with Eightcap through our trading software MetaTrader 4 (MT4) and MetaTrader 5 (MT5). Outside of the major pairs, there are also minor pairs and exotics that can be traded. Minor pairs are made up of different combinations of the major currencies, such as EUR/GBP, EUR/JPY and GBP/JPY.
Exotic currency pairs
Investors also have the option of trading exotic pairs. Exotic currencies refer to non-major currencies that are generally illiquid and trade at low volume. Exotics belong to developing or emerging markets and economies such as the Turkish Lira, South African Rand and Mexican Peso. Because of the nature of these economies, including political tension and instability, exotic currencies have higher volatility. Exotic currencies are traded against the majors with bigger spreads and higher margins.
Some exotic pairs offered by Eightcap include:
- EUR/TRY (Euro vs Turkish Lira)
- USD/PLN (US Dollar vs Polish Zloty)
- USD/ZAR (US Dollar vs South African Rand)
What moves the Forex market?
Financial markets across the globe, including forex and stock markets can fluctuate or be influenced by several factors.
The release of economic reports such as GDP, inflation, manufacturing and jobs data, retail sales and business confidence can all influence forex markets. The strength of an economy determines the value of its currency. Generally speaking, positive or strong economic reports can boost the currency’s value against its pair.
Central bank movements and decisions also weigh on global markets. Sometimes central banks use these decisions to manipulate the currency value to stimulate the economy. For example, in Australia, lower interest rates equate to a lower AUD. A lower AUD is good for trade and may help increase inflation.
World leaders and elections can also influence the supply and demand of a currency. Sometimes nation leaders make comments on trade or commerce that could be beneficial or harmful to the economy. Brexit is an example of a political decision that has caused uncertainty among investors and markets, both forex and stocks. Towards the end of 2018, hostile trade negotiations between the United States and China also influenced markets.
Technical analysis is a trading method carried out by technical traders or chartists. Economic data, interest rates and political forces are considered ‘fundamental’ influences, studied by fundamental traders. Technical traders, however, use charts to identify short-term and long-term trends in the market. By identifying trends, technical traders then buy or sell the financial instrument. Because many traders act on the same chart signals, technical analysis itself has the power to strengthen or weaken a currency.
What is spread in Forex trading?
Spread is a term that is used a lot in forex trading and can determine which broker you use. The spread is the difference between the bid (buy) and the ask (sell) price of any given currency pair. A spread is measured in pips or points and is essentially a brokerage cost that replaces transaction fees. The bid is the highest price buyers are currently willing to pay for a currency pair, while the ask is the lowest price at which sellers will offer it. The smaller the spread, the more traders save on brokerage costs.
What is a pip in Forex trading?
A pip, short for ‘point in percentage’, is another term regularly used in forex trading. It is the unit of measure for currency pairs and the smallest standardised amount by which a quote can change: for most pairs, a movement of 0.0001 in the fourth decimal place, or one-hundredth of one percent. For example, if the AUD/USD pair was quoted at 0.7239, it means for every Australian dollar, you will get 72.39 US cents. If the pair increased by one pip, the value would be 0.7240, or 72.40 US cents for every Australian dollar.
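Putting the spread and pip definitions together, here is a minimal sketch using an AUD/USD-style quote and a hypothetical 100,000-unit standard lot (pip sizes differ for some pairs, such as JPY quotes, which use 0.01):

```python
PIP = 0.0001  # one pip for most four-decimal pairs such as AUD/USD

def pips_between(price_a, price_b, pip=PIP):
    """Distance between two quotes, expressed in pips."""
    return round(abs(price_a - price_b) / pip, 1)

def pip_value(lot_size, pip=PIP):
    """Quote-currency value of a one-pip move on a position of `lot_size` units."""
    return round(lot_size * pip, 2)

bid, ask = 0.7239, 0.7241
print(pips_between(bid, ask))   # a 2-pip spread
print(pip_value(100_000))       # one pip on a 100,000-unit lot is worth 10 units of the quote currency
```

Multiplying the spread in pips by the pip value gives the round-trip cost of opening and immediately closing a position, which is why tighter spreads matter so much to short-term traders.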
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9580576419830322,
"language": "en",
"url": "https://www.eiltscpa.com/single-post/2013-1-23-understanding-the-dreaded-alternative-minimum-tax",
"token_count": 1161,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.068359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e04b0d75-0f49-43a1-bde9-2cbdb3047eaa>"
}
|
Understanding the Dreaded Alternative Minimum Tax
As if figuring out how much you owe in taxes weren’t complicated enough, the government actually makes you figure it out two separate ways. Welcome to the head-scratching world of the Alternative Minimum Tax (AMT).
Although it was originally created to target only the ultra-wealthy who benefited from unusual tax benefits, the AMT has become decidedly more egalitarian in recent years and now affects many middle-class taxpayers. The thought of being hit by the AMT might send chills up your spine, but the more you know about it, the less scary it seems.
Where did the AMT come from?
Congress created the AMT in 1969 as a way to ensure that high-income taxpayers were not able to completely avoid paying taxes thanks to extensive use of deductions, credits, and loopholes in the tax code. But the AMT system, which affected only a sliver of U.S. taxpayers in its first year, continually expands its reach because the amounts used to calculate the AMT are not automatically indexed to inflation. In 2011 an estimated 4.3 million Americans had to pay the AMT.
How does it work?
The AMT is a parallel tax system to the regular income tax. You essentially have to calculate your tax bill under both systems and then pay whichever one is higher. The primary difference between the two systems is that the AMT does not allow many of the common deductions or income exceptions found in the regular system.
Here is a high-level look at how the AMT is calculated:
Start with your Adjusted Gross Income as determined under the regular tax system
Add back the standard deduction (if you claimed it) and the personal exemptions for yourself and your dependents
Add back the itemized deductions that are either eliminated or reduced under the AMT, such as deductions for state and local income and property taxes, some medical expenses, interest on a home-equity loan (in some instances), and employee business expenses
Add income that was not counted as taxable income under the standard system, including interest from private-activity bonds and unrealized gains from incentive stock options granted by your employer
Add or subtract any remaining AMT preference items
Subtract the AMT exemption amount to determine the amount of income you have that is subject to the AMT; for 2012, the AMT exemption is $78,750 for joint filers and $50,600 for single filers, but these exemption amounts are reduced by 25 cents for each dollar of AMT income above $150,000 for couples and $112,500 for singles
Calculate your AMT tax at 26% of the first $175,000 of AMT taxable income and 28% on the remainder of AMT taxable income
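The exemption phase-out and the two-tier rate schedule above can be sketched as follows (a simplified illustration using only the 2012 figures quoted in this article; a real return involves many more adjustments than this):

```python
# Simplified sketch of the 2012 AMT mechanics listed above: the
# exemption phase-out and the 26%/28% rate schedule only.

def amt_exemption(amt_income: float, joint: bool) -> float:
    # 2012 exemption, reduced by 25 cents per dollar above the threshold
    base = 78_750 if joint else 50_600
    threshold = 150_000 if joint else 112_500
    phase_out = max(0.0, amt_income - threshold) * 0.25
    return max(0.0, base - phase_out)

def tentative_amt(amt_income: float, joint: bool) -> float:
    # 26% on the first $175,000 of AMT taxable income, 28% above that
    taxable = max(0.0, amt_income - amt_exemption(amt_income, joint))
    if taxable <= 175_000:
        return taxable * 0.26
    return 175_000 * 0.26 + (taxable - 175_000) * 0.28

# A joint filer with $200,000 of AMT income:
# exemption = 78,750 - 0.25 * 50,000 = 66,250; taxable = 133,750
print(round(tentative_amt(200_000, joint=True), 2))  # 34775.0
```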
What are some common AMT triggers?
Claiming the following deductions, credits, or types of income for your regular income tax can increase the likelihood that you will be subject to the AMT:
Personal deductions for multiple dependents
Itemized deductions for state and local taxes, medical expenses, unreimbursed employee expenses, and other miscellaneous expenses
Mortgage interest on home equity debt
Exercising (but not selling) incentive stock options
Tax-exempt interest from private activity bonds
Passive income or losses
Net operating loss deduction
Foreign tax credits
What can I do to lessen the impact of the AMT?
By now you may be asking if there is any way to get around the AMT altogether. Because it adjusts for various deductions and credits, there is not a whole lot you can do to dodge the AMT. But planning ahead can help keep your AMT adjustments low.
Seek reimbursements from your employer for business expenses incurred. Unreimbursed expenses incurred by employees are one of the itemized deductions not allowed under the AMT.
Review your state tax withholding and make sure to pay in enough so you don’t owe, but not so much that you overpay. This will keep your state tax deduction as low as possible.
Pay your property taxes close to the due date instead of prepaying; this will keep your deduction for state and local taxes as low as possible.
Sell incentive stock options in the same year you exercise them. By exercising and selling options in the same year, you’ll be subject to the regular tax on the income but not the AMT.
How did the fiscal cliff deal affect the AMT?
The last-minute deal struck by Congress to avoid the fiscal cliff on January 1, the American Taxpayer Relief Act, permanently extends the AMT “patch” and makes it retroactive to January 1, 2012. It used to be that Congress would have to scramble each December to renew the AMT “patch” to prevent the AMT from affecting millions of additional middle-class taxpayers. But now that the “patch” has been made permanent and indexed for inflation, we don’t have to worry about this annual drama.
How can I tell if I will be subject to the AMT?
For clients of Eilts & Associates, we calculate both your regular tax and your AMT, so we will let you know what you owe under both systems. There are also several online tools that can help you determine if you are subject to the AMT. The Internal Revenue Service has an online calculator called the AMT Assistant for Individuals.
I hope this article helped answer some of your questions about the AMT and, in process, made it a little less scary. If you have any questions, please contact Bart Eilts at 773.525.6171 or [email protected].
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9600712060928345,
"language": "en",
"url": "https://www.moneytalksnews.com/fcc-proposes-subsidized-broadband-for-the-poor/",
"token_count": 555,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.439453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:afca7c8e-978e-46b0-a9be-274337dd1fb2>"
}
|
Although it seems like everyone has easy access to the Internet these days, that’s not the case for many low-income Americans. A proposal from the Federal Communications Commission could help bridge the digital divide between the rich and poor in the United States.
FCC chairman Tom Wheeler has proposed expanding Lifeline, a program that helps poor Americans with their phone bills, to also provide subsidized broadband Internet access.
While 95 percent of households with annual incomes of more than $150,000 have broadband at home, less than 48 percent of households earning less than $25,000 yearly have home Internet, the FCC said.
“Over a span of three decades, [Lifeline] has helped tens of millions of Americans afford basic phone service. But as communications technologies and markets evolve, the Lifeline program also has to evolve to remain relevant,” Wheeler said in a blog post.
According to the FCC, more than half of poor Americans have been forced to cancel or suspend their smartphone service because of money issues, which further limits broadband access.
“Because low-income consumers disproportionately use smartphones for Internet access, this puts them at a disadvantage at a time when broadband access is essential for access to education and information, for managing and receiving health care, for daily tasks like accessing government services, checking bank balances, finding bargains on goods and services, and more,” the FCC said.
The commission is expected to vote on the issue June 18.
Bloomberg said the proposal by Wheeler, a Democrat, to expand the Lifeline program to include broadband access has already drawn harsh criticism from Republicans like Sen. David Vitter of Louisiana. In an emailed statement, Vitter said:
Why the FCC wants to expand this program before addressing the regular reports of ongoing fraud is beyond me. I cannot support any expansion of a program that has so few safeguards in place.
Vitter is the chairman of the Committee on Small Business and Entrepreneurship, which has been investigating the effectiveness and efficiency of the Lifeline program.
Lifeline, which is supported by fees tacked on to telephone subscribers’ bills, offers a monthly $9.25 subsidy. The Lifeline program cost $1.6 billion in 2014.
“The FCC didn’t say what effect the proposed change may have on those fees — currently set at 17.4 percent of a portion of monthly bills — or on the number of program participants,” Bloomberg said.
What do you think of the FCC’s proposal to provide subsidized Internet to low-income users? Share your comments below.
Disclosure: The information you read here is always objective. However, we sometimes receive compensation when you click links within our stories.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.975306510925293,
"language": "en",
"url": "http://acquisitionadvisors.com/articles-for-sellers/2009/05/q-a-financial-statement-quality/",
"token_count": 308,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.1337890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e3afbc0f-47b7-4644-a370-d3e983452766>"
}
|
12 May Q & A: Financial Statement Quality
Question: What is the difference between audited and reviewed statements?
Answer: Although these terms may be thrown around loosely in some circles, each has an important meaning and should be clearly understood.
Company Prepared statements are financials (income statement, balance sheet and statement of cash flows) that have not been compiled, reviewed or audited as described below. They are simply issued by the company itself with no third-party assurance of accuracy or completeness.
Compiled Statements have been organized in a manner that conforms to how statements are supposed to “look” but the accountant has not tested or reviewed the data and does not render any opinion as to accuracy, conformity or completeness.
Reviewed Statements have received a limited “review” by an independent auditor or certified public accountant who offers limited assurance as to accuracy and conformity with GAAP. However, it is understood that if an audit were performed, material errors could be found.
Audited Statements have been checked by an independent auditor or certified public accountant for accuracy and conformity with GAAP principles and standards. The audit is much more extensive than a review, and each audit will come with an auditor’s opinion letter. In the letter, the auditor will identify herself and her firm and will summarize what she did and whether she is willing to attest to the financial statements without reservation (“unqualified opinion”) or with reservation (i.e., “qualified opinion”).
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9506871700286865,
"language": "en",
"url": "http://foundationforuda.in/uda-digest/Articles/16Mar21/blue_economy.html",
"token_count": 1646,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.41796875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:baf7e53d-222e-460d-a50a-0d5f554635c9>"
}
|
Blue Economy – Marine Environmental Protection
By Capt. Vikram Nagarkar
The Blue Economy, through sustainable use of oceans, has great potential for boosting economic growth by providing opportunities for generating income and jobs through new resources for energy, new drugs, valuable chemicals, protein food, deep sea minerals, etc. In short, it is the next sunrise sector.
The present government’s initiative towards a Blue Economy was highlighted in the 2nd edition of the Maritime India Virtual Summit 2021, held on 2nd March 2021, which received an enormous response from all stakeholders of the industry and foreign dignitaries. One of the three components of the newly launched project called SAGARMANTHAN - MMDAC (Mercantile Marine Domain Awareness Centre) is Marine Environment Protection. The marine environment can be viewed as a pathway towards sustainable development that will support the pillars of the economy -- coastal tourism and fisheries.
The first major tanker accident at sea took place in 1967, when the ‘Torrey Canyon’ ran aground off the coast of Cornwall in England. This incident drew worldwide attention to the risks of oil transportation. In 1973, the IMO (International Maritime Organisation), under its International Convention for the Prevention of Pollution from Ships (MARPOL), adopted stringent regulations. These described the procedures to monitor and control marine pollution from oil, air emissions, sewage, garbage, noxious liquid cargoes, etc.
In March 1989, the Exxon Valdez Oil spill released millions of gallons of crude oil into Prince William Sound, Alaska. It was the worst environmental disaster in the history of Alaska and occurred in a very sensitive coastal ecosystem, thus magnifying the damage. The spill immediately resulted in the death of the wildlife causing significant reductions in tourism, recreational fishing and commercial fishing. Long-term direct effects of the spill included lingering oil with associated negative impacts on the ecosystem. Some marine animal populations have still not recovered to pre-spill levels.
The entire maritime industry was shaken and thereafter the MARPOL conventions were implemented with further stringent regulations laid out for merchant vessels and/or owners all over the world. The main reasons for oil spills from merchant ships around the world are grounding of vessels, collisions, continued use of old ships that break apart at sea, and fire.
After the Exxon Valdez incident, various annexes were introduced; today there are six annexes encompassing the pollution aspect in the Maritime Industry. LNG, Hydrogen, Solar Energy and Wind Turbines are prospective sources of energy to run ships in future in a bid to minimise pollution and help control climate change.
Many coastal countries export crude oil to other countries, and the traffic is borne by the Indian Ocean. Due to the heavy transportation in this area, oil spill accidents are regular; statistics reveal that approximately 40% of the world’s oil spills take place in the Indian Ocean. Oil pollution is thus a chronic problem in the Indian marine sector.
India is playing a leading role in monitoring marine pollution in the Indian Ocean through its Department of Ocean Development and the National Institute of Oceanography (NIO), headquartered at Dona Paula, Goa. India started monitoring marine pollution in the 1970s through the NIO, working under the Council of Scientific and Industrial Research (CSIR). The INS Darshak was used to investigate historically significant shipwrecks in the Arabian Sea and the Bay of Bengal. The open ocean research was further boosted following the commissioning of CSIR-NIO’s first research vessel, Gaveshani, which was acquired in 1976.
More than 500 million tonnes of oil transits through the Indian coastline annually and more than 200 million tonnes of oil are imported by India. The threat of oil spill pollution in the Indian Ocean is continuously rising as indicated by the incidents in the South of Sri Lanka in September 2020, and at Ennore, Chennai on 28th January 2017. These incidents are clear warnings to make our system more effective so that oil pollution in the Indian Ocean can be minimised.
The Indian Coast Guard plays a vital role in this pollution response plan, maintaining stockpiles of equipment at its pollution response centres at Mumbai, Chennai, Port Blair and Vadinar. It has also assigned two vessels to handle oil spill emergencies. Each Coast station is additionally equipped with stocks of oil dispersant.
While all oil pollution regulations are strictly followed at sea, the coastal areas are always neglected because of poor awareness among fishermen. The coastal waters, which include rivers, waterways, harbour and beach areas, pose a danger to the environment. This is largely due to garbage dumped by locals and factories spewing chemicals into rivers. Excess nutrients from untreated sewage, agricultural runoff and marine debris such as plastics also constitute marine pollution. Coastal traffic, including, coastal vessels, fishing vessels/trawlers, barges and tug boats, contributes to pollution in the form of oil, untreated water, sewage and fishing nets. The chemical waste in urban areas is harmful to important marine species and needs to be controlled. Plastic, which is extremely difficult to dispose off, is one of the most crucial factors affecting the environment negatively. Plastic also affects sea creatures and on a larger scale damages the ecosystem. The following images showcase how sea animals get affected by fishing nets and plastic.
The above pictures are of beaches along the Indian coast, one near urban areas and the other located at a remote place and controlled by private entities.
Another aspect which enhances the pollution is the ship breaking business. In addition to taking a huge toll on the health of workers, ship breaking is a highly polluting industry. Large amounts of carcinogens and toxic substances (PCB, PVC, PAH, TBT, mercury, lead, isocyanates, sulphuric acid) not only intoxicate the workers but are also dumped into the soil and coastal waters.
The fleet of ships around the world includes about 90,000 vessels and the average life of a ship is 20-25 years. The average number of large ships being scrapped each year is about 500-700 but taking into account vessels of all sizes this number may be as high as 3,000. Ninety percent of ship-breaking in the world is carried out in Bangladesh, China, India, Pakistan and Turkey. An average size ship contains up to 7 tonnes of asbestos, which is often sold in the local communities after scrapping. As the majority of yards have no waste management systems or facilities to prevent pollution, shipbreaking takes an enormous toll on the surrounding environment, the local communities, fisheries, agriculture, flora and fauna. This naturally causes serious environmental hazards with long-term effects for occupational, public and environmental health. Although the ship breaking industry fetches huge amounts of revenue, the pollution aspect and the health & hygiene of workers cannot be overlooked. Hence, proper storage tanks for chemicals remaining on ships and storage facilities for leftover fuel will ensure significant control on oil spills and pollution.
In order to deal with any pollution disaster, training and awareness programs in schools and colleges, and in the research and development sector, need to be initiated on a large scale by the pollution control board of each state or by private-sector entities. Stringent rules need to be implemented and monitored. The introduction of new technology in the marine sector will also help in preventing collisions of ships. Training facilities for ship breaking operators must be developed, maintained in optimum condition and properly used. Marine protection is one of the most important factors of the Blue Economy, and if we cannot protect the environment, we cannot achieve a prosperous Blue Economy.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9487187266349792,
"language": "en",
"url": "http://www.spacecoastmoneytalk.com/beachside-resident-3/",
"token_count": 385,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0478515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b921c68f-646c-40cc-998f-a10df709fbf9>"
}
|
Early on, we covered the need for insurance and emergency funds before investing. A brief history of the stock market and the role your broker plays when you buy or sell set the stage for our various investment choices. Nearly everything roots back to stocks so it’s here we begin.
Stocks by themselves are far too risky. It’s so easy to diversify your portfolio with mutual funds and ETFs that we should only buy individual stock under special circumstances and in limited quantities.
Buying stock means you’re buying a piece of a company. The stock price is determined by supply and demand. If there is demand for shares, the price will go up. If sellers start dumping shares and there aren’t enough buyers, the stock price will fall.
The number of shares issued during the IPO is basically what trades every day. If you take that number and multiply it by the price per share, you get that company’s total market capitalization. In the future, we’ll talk about large cap, medium and small cap stocks. It’s simply a fancy way to describe big, medium or small companies.
The price of a stock is based on future earnings expectations. If a company is coming out with new products that are big sellers, their stock price should be on the rise. If a company makes replacement parts for 8 track players well…you know.
Comparing a company’s stock price to its earnings gives us a P/E ratio. Don’t be overly concerned with the math, but the lower the P/E ratio, the better. It’s a good way to compare companies in similar industries. It wouldn’t make sense to compare a computer maker to a restaurant chain. The P/E ratio is only for companies with earnings. If a company loses money every year, it won’t have one.
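The two calculations above can be illustrated with a toy example (all companies and figures are hypothetical):

```python
# Toy illustration of market capitalization and the P/E ratio
# (hypothetical numbers only).

def market_cap(shares_outstanding: int, price: float) -> float:
    # total market capitalization = shares outstanding * price per share
    return shares_outstanding * price

def pe_ratio(price: float, earnings_per_share: float):
    # No meaningful P/E for a company without positive earnings
    if earnings_per_share <= 0:
        return None
    return price / earnings_per_share

print(market_cap(1_000_000, 50.0))  # 50000000.0 -- a small company
print(pe_ratio(50.0, 2.5))          # 20.0
print(pe_ratio(50.0, -1.0))         # None -- the company loses money
```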
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9228869676589966,
"language": "en",
"url": "http://www.webmasterhelp.co.uk/machine-learning-in-retail-telecom-sector/",
"token_count": 731,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.022216796875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:46044126-48aa-426a-b1b9-b8d5d39a2642>"
}
|
Machine learning is a subfield of artificial intelligence and the real-world application of AI through the analysis of large data inputs and prediction. Data is the most powerful weapon of choice for businesses that want to grow and improve rapidly. A thorough analysis of vast data allows manipulating and influencing customers’ decisions. This data is gathered from various channels of communication and numerous flows of information with which the customer is concerned.
Retailers have lots of sources from which to gather this data, and thus machine learning can have a great impact on the retail industry. Retailers can analyze and manage data and develop a detailed psychological profile of customers to learn their expectations and sore points.
Look at some popular use cases of Machine Learning in Retail Sector given below:
- Recommendation Engines: Recommendation engines analyze data based on customer’s past behavior or product characteristics and various types of data such as usefulness, preferences, needs, demographic data, previous shopping experience, etc. It uses either collaborative or content-based filtering.
- Market Basket Analysis: Future choices and decisions may be predicted on a large scale by this tool. Knowledge of the items currently in a basket, along with all previews, likes and dislikes, is beneficial to the retailer for pricing, content placement and layout organization.
- Warranty Analytics: This acts as a tool of detection of fraudulent activity, warranty claims to monitor, increasing quality and reducing costs. This process includes data and text mining for further identification.
- Price Optimization: Price optimization tools combine a customer-specific approach with numerous online techniques. The data is gained from multichannel sources such as the customer’s buying attitude, seasonality, competitors’ pricing, location, etc.
- Inventory Management: The retailer’s main concern is to provide the right product at the right time and in the proper place, by analyzing stock and supply chains. Powerful ML algorithms are used to find correlations among supply-chain elements.
- Customer Sentiment Analysis: The algorithm can perform brand-customer sentiment analysis by data received from social networks and online services feedbacks. It uses language processing.
- Merchandising: The implementation of merchandising tricks can be done via visual channels helps to influence the customer decision-making process. Branding retains customer’s attention and attractive packaging enhance visual appeal.
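As a rough sketch of the collaborative-filtering approach mentioned under recommendation engines above (all customers, products and ratings are invented for illustration):

```python
# Minimal sketch of user-based collaborative filtering, one of the two
# approaches mentioned above. All data here is hypothetical.
from math import sqrt

ratings = {  # customer -> {product: rating}
    "ann": {"shoes": 5, "hat": 3, "bag": 4},
    "bob": {"shoes": 4, "hat": 2, "bag": 5},
    "cara": {"hat": 5, "scarf": 4},
}

def cosine(u, v):
    # Cosine similarity over the products both customers have rated
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    return dot / (sqrt(sum(x * x for x in u.values())) *
                  sqrt(sum(x * x for x in v.values())))

def recommend(user):
    """Score items the user has not rated, weighted by neighbour similarity."""
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for item, r in their.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ann"))  # ['scarf'] -- the only item ann hasn't rated
```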
Telecom industry is also riding on the waves of digital transformation and tech revolution. But they’re facing challenges related to growth and expansion to new business areas. Telecom needs machine learning to be able to analyze and process data in many areas: network automation, customer experience, new digital services, business process automation, and infrastructure maintenance.
Here are some examples of how machine learning is creating new opportunities in the telecom industry:
- Customer Service Chatbots: Chatbots are an application of machine learning that offers a precise solution to the limitations of human consultants, who cannot process all the data. Telecoms need chatbots to make service faster and more scalable and to improve client satisfaction.
- Voice services and churn rate reduction: Machine learning is also used for churn rate reduction; annual churn can average from 10 to 67%. Telecoms can train algorithms to predict when a client is likely to switch to another company.
- Predictive Maintenance: ML can be used for the maintenance of mobile towers: video and image analysis and ML-empowered surveillance can help to detect anomalies.
Hence, ML and AI are making great improvements over the retails and telecom Industry and helping them to build more revenues and stronger customer relationship. AI and ML became the buzzwords today and are present in every industry for increasing growth graphs.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9177544116973877,
"language": "en",
"url": "https://embarkingonvoyage.com/2021/03/adoption-of-etl-in-data-warehouse/",
"token_count": 1354,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.047607421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:543b65f1-130c-4650-a3c0-660597b1fb33>"
}
|
15 Mar Adoption of ETL in Data Warehouse
ETL stands for Extract, Transform and Load. It is a process that ‘extracts’ data from numerous sources, ‘transforms’ it by applying calculations, concatenations, etc., and finally ‘loads’ it into the Data Warehouse system.
It may seem to you like a simple process wherein creating a data warehouse is just all about extracting, transforming and loading. However, the process, in reality, is quite complex. It involves constant monitoring and inputs from experts like developers, testers, analysts and top executives. Also, this process is a recurring activity on a daily, weekly or monthly basis and needs to be very well documented, automated and agile.
So, what are the many benefits of adopting ETL?
Why do you need ETL?
We can give you more than one reason as to why your organisation needs ETL.
- With ETL, companies can better analyse their business data and take more informed business decisions.
- ETL makes data migration into a data warehouse possible and easier. You can convert data into different formats and types to maintain uniformity and consistency.
- Your transactional databases will not be able to give you all answers regarding complex business needs that ETL can easily do.
- With ETL, you can compare sample data between the target and the source system.
- It also helps to enhance productivity since it can code and reuse data without the need of any specific technical skills.
- ETL also facilitates rules regarding data transformation, aggregation and calculation.
- The Data Warehouse automatically updates when the data source changes.
The ETL Process: Various Steps
We will now look at the various steps in the ETL Process in brief.
Step 1 – Extraction
This is the first step of the ETL architecture. It mainly involves the extraction of data from the source to the staging area. All necessary transformations are also carried out in the staging area so that the source system doesn’t get disturbed. The main sources may include legacy applications, customised applications, ERP systems, text files, mainframes, etc. Therefore, the Data Warehouse should be able to integrate systems with varying DBMSs, operating systems and communication protocols.
So, before proceeding with data extraction, you must have a logical data map that will clearly define the relationship between the target and the source data.
Basically, there are three Data Extraction methods:
- Full Extraction
- Partial Extraction without update notification.
- Partial Extraction with update notification
Step 2 – Transformation
The data that we have extracted in the first step is usually in its raw form and cannot be used. Therefore, it has to undergo cleaning, mapping and proper transformation. This is the main step where ETL actually adds value to the extracted data to generate insightful business reports for you. There can be some direct move or pass through data – the data that doesn’t need any kind of processing and transformation.
One important highlight of this step is that one can carry out customised data operations. Say, for example, if the first name and the last name in a table is placed in two different columns, with the help of ETL, you can concatenate them before proceeding to loading.
Some of the data integrity problems include use of different names like Cleaveland and Cleveland, multiple denotation of company names, different spellings of the name of the same person and blank fields in some files.
Some of the validations to be done at this stage include character set conversion and encoding handling, using lookups to merge data, conversion of units of measurements for uniformity, transposing rows and columns and so on.
Step 3 – Loading
This is the last step in the ETL Process. Considering a typical data warehouse, usually there are large volumes of data that need to be loaded in short periods of time. This calls for an optimisation in the performance of the loading process.
We also have to have a backup plan in mind in case of load failure. There should be good recovery mechanisms that will restart the process from the point where it failed and ensure no loss of data and integrity. The admins have to monitor, resume and cancel loads according to the performance of the server at that point in time. There are three types of loading:
- Initial Load — where you populate all the tables in the data warehouse.
- Incremental Load — where you can apply ongoing changes on a need basis.
- Full Refresh —where you can erase the contents of one or more tables and reload completely new data.
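A toy end-to-end run of the three ETL steps might look like the following (a sketch only, assuming a small in-memory source and an SQLite “warehouse”; all table and column names are invented):

```python
# A toy end-to-end ETL run: extract from a pretend source, transform
# (e.g. concatenate first and last name, as discussed earlier), and
# load with a full refresh. All data and names are invented.
import sqlite3

raw_rows = [  # extract: pretend this came from a source system
    {"first": "Ada", "last": "Lovelace", "sales": "120.5"},
    {"first": "Alan", "last": "Turing", "sales": "95.0"},
]

def transform(row):
    # concatenate the name columns and cast sales to a number
    return (f"{row['first']} {row['last']}", float(row["sales"]))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_sales (customer TEXT, sales REAL)")

# Full Refresh: erase the table contents and reload completely
conn.execute("DELETE FROM fact_sales")
conn.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                 (transform(r) for r in raw_rows))
conn.commit()

print(conn.execute(
    "SELECT customer, sales FROM fact_sales ORDER BY customer").fetchall())
# [('Ada Lovelace', 120.5), ('Alan Turing', 95.0)]
```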
Some of the popular ETL Tools
While there are many available Data Warehousing tools, let us look at some of the most popular and widely used ones.
Oracle has been the industry-leading database for quite some time now. With its wide range of data warehouse solutions, it helps to optimise user experiences and enhance operational efficiency.
Check it out here: https://www.oracle.com/index.html
MarkLogic makes data integration very easy and fast, thanks to its range of enterprise features. It is capable of querying multitudes of data like metadata, relationships, documents etc.
Check it out here: https://www.marklogic.com/product/getting-started/
Pentaho is a business intelligence software. It provides the following services: data integration, data mining and extract, transform and load capabilities, OLAP services, information dashboards, reporting etc.
Check it out here: https://www.hitachivantara.com/en-us/products/data-management-analytics.html
So, we now have a basic idea of the ETL process and how it is carried out. There are a few things that you need to keep in mind to ensure that your process is as smooth as possible. You must never try to clean all the data, as it would take a lot of time and effort and may cost you a fortune! To speed up query processing, you should have auxiliary views and indexes. Similarly, there are some other aspects of the ETL process that you must be aware of.
But, if you are not and you don’t want to worry about all these technicalities, you can simply reach out to us for all your data warehouse needs! Experts at EOV have years of experience in the best practices in ETL and will surely help you carry out the process of extraction, transformation and loading in the most hassle-free manner! Get in touch with us today!
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9396958947181702,
"language": "en",
"url": "https://followbusinessalbania.com/albanias-gdp-growth-if-electric-power-production-remained-constant/",
"token_count": 510,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.04248046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:bcefa28d-5861-4567-8728-9d3f04ea5714>"
}
|
Eduard Zaloshnja, PhD
Research Scientist at Pacific Institute for Research and Evaluation, Washington DC
In the last four years, no new major electric power generating facility has been added to the electric grid of Albania. Meanwhile, the whole domestic production of electric energy is based on hydropower, which is highly dependent on rainfall. In the first quarter of 2018, for example, domestic production of electric energy reached a record high of 3.2 GWh, whereas in the third quarter of 2017 it reached a record low of 0.4 GWh.
On average, the value of electric power production comprises 20% of Albania’s GDP. As such, fluctuations in production – from quarter to quarter and from year to year – have a significant impact on GDP growth. And this impact has nothing to do with the economic or political conditions of the country – at least in the short term, it depends only on the whims of Mother Nature…
Given this dependence of GDP growth on highly uncontrollable electric power production, it is necessary to analyze growth excluding the latter's impact (i.e., assuming production remains constant). Such an analysis can isolate the GDP growth related only to controllable factors, such as government policies, macro stability, and investor and consumer optimism.
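The adjustment itself is simple arithmetic: replace the current quarter's electric-power value added with the prior year's level, then recompute the year-on-year growth rate. A minimal sketch with made-up figures (the values below are illustrative only, not INSTAT data):

```python
# Illustrative sketch: year-on-year real GDP growth with electric-power
# output held constant at its prior-year level. All numbers are invented.

def growth_excluding_power(gdp_prev, gdp_curr, power_prev, power_curr):
    """Year-on-year growth rate (%) after replacing the current period's
    electric-power value added with the prior year's value."""
    adjusted_curr = gdp_curr - power_curr + power_prev
    return (adjusted_curr / gdp_prev - 1) * 100

# Hypothetical quarterly figures (same quarter, one year apart):
gdp_prev, gdp_curr = 350.0, 365.6      # real GDP
power_prev, power_curr = 17.5, 25.0    # electric-power value added

print(round(growth_excluding_power(gdp_prev, gdp_curr,
                                   power_prev, power_curr), 2))  # 2.31
```

With these invented numbers, a headline growth rate of about 4.5% shrinks to roughly 2.3% once the rainfall-driven jump in power output is held constant, which mirrors the pattern the article describes for Q1 2018.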
Figure 1 presents year-on-year real GDP growth rates for the last 13 quarters for which INSTAT has published data. Alongside those rates are the year-on-year growth rates after excluding the impact of electric energy production on GDP (assuming it remained constant).
As can be seen in Figure 1, in some quarters electric energy production has dragged down GDP growth (when rainfall was low) and in others it has boosted it (when rainfall was high). The most notable case was the first quarter of 2018, when GDP's year-on-year real growth was almost 4.5%. Excluding the impact of electric energy production, growth would have been only 2.4%.
Since electric energy production starts falling rapidly after March, the slow GDP growth of only 2.4% in the first quarter may spell trouble for the subsequent quarters. If the rest of the Albanian economy does not pick up steam by the end of the year, the Government's objective of 4% GDP growth for 2018 could be in jeopardy.
Figure 1. Year-on-year Real GDP growth rate in Albania, if electric energy production remained constant
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9598620533943176,
"language": "en",
"url": "https://socialcanada.org/2020/07/27/on-regulated-occupations-and-inequality/",
"token_count": 2163,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:15d79efd-fb2b-4c1f-b086-a2beb4decc7e>"
}
|
To regulate or not to regulate has always been a sticky policy issue because, depending on the profession or occupation or industry, the net social impact can vary. But we are just learning that it can also lead to increased inequality in society.
A recent study published by the CD Howe Institute provides an interesting perspective into the practice of provincial governments of regulating certain occupations, especially professions. While the practice is ostensibly intended to protect the health or safety of consumers, the authors point out that regulatory capture, a phenomenon whereby the regulated person or business gains economic benefit from the regulations at the expense of their consumers, is common, especially when the regulations grant self-governing authority to a profession.
They observe that the trend has been to expand the regulation of occupations, such that they estimate about 20% of workers in the growing service sector in Canada are now licensed in some way, and as much as 33% in the USA.
The principal concern regarding occupational regulation is the possibility that such regulation serves to further the interests of the members of the occupation at a cost that is greater than the benefits accruing to the public. In other words, occupational regulation may not be efficient if there are little or no tangible benefits to the public and such regulation adds costs to the consumers of the regulated services. In a 2007 study, the Competition Bureau singled out the professions as being one of the economy’s least productive sectors.
….Although many professions offer valuable skills and services to consumers there are also risks that exclusive licensing limits entry, reduces supply and generally creates conditions for certain professionals to increase profits through higher prices for consumers.
https://www.cdhowe.org/sites/default/files/attachments/research_papers/mixed/Commentary_%20575.pdf. Mysicka, Robert, Lucas Cutler, and Tingting Zhang. 2020. Licence to Capture: The Cost Consequences to Consumers of Occupational Regulation in Canada. Commentary 575. Toronto: C.D. Howe Institute.
While we may be familiar with professional licensing bodies, the trend has been for many less specialized occupations to create licensing bodies. These occupations have not, for the most part, been given authority to restrict entry so the impact on the consumer is less.
So, out of concern that consumers be able to buy necessary services such as dentistry or veterinary care at the best price, CD Howe recommends that federal competition legislation be given greater capacity to override provincial licensing authority where needed.
This all makes sense from a consumer perspective. It shows how provincial legislation, combined with the way it is interpreted and enforced by our legal tradition of common law, provides a protective wall – to the provinces’ authority to regulate even if it unduly restricts competition; and to the regulated professions – which can exceed the intent of the licensing.
However, there are other angles from which to consider this phenomenon of licensing occupations, and here we will add, also regulating industries. The federal government, for example, regulates the financial services industry, part of agriculture, interprovincial and international transport, fishing, uranium mining, communications and a few other odds and ends.
The regulated industry or the profession often gets benefits more valuable to them than the protection of consumers is to the rest of us. Doctors, dentists, accountants, lawyers, pharmacists, veterinarians, among others, do not have to worry about the global economy or international price competition. Those who are granted self-governing power can restrict entry into the profession. (When was the last time we had too many doctors and they were finding it hard to get a job? :)) Short supply brings more money. Professions can also restrict internal competition by setting out recommended fee schedules. They might also restrict advertising. They can require periods of apprenticeship to restrict immigrants.
And regulation of an industry like banking or communications, also restricts entry and competition. This raises the pay and profit levels of those involved, especially when the domestic market is expanding steadily with no effort by them, such as the three decade flood of baby boomer contributions to savings instruments like RRSP’s.
Not much wonder that the top one percent of the income scale is dominated by people in protected occupations or industries.
Now you might think that people in these professions and industries are smarter than the rest of us and work harder to get their money. But emerging research in the US is showing that this is not the case. Indeed the distribution of education and skills, (and of course, hours of work) are far more equally distributed through the population than is income. And in turn, wealth is even more skewed than income.
John Abowd and co-authors have estimated how far individual skills influence earnings in particular industries. They find that people working in the securities industry (which includes investment banks and hedge funds) earn 26 percent more, regardless of skill. Those working in legal services get a 23 percent pay raise. These are among the two industries with the highest levels of “gratuitous pay”—pay in excess of skill (or “rents” in the economics literature). At the other end of the spectrum, people working in eating and drinking establishments earn 40 percent below their skill level.
Make elites compete: Why the 1% earn so much and what to do about it Jonathan Rothwell, Brookings Institution, https://www.brookings.edu/research/make-elites-compete-why-the-1-earn-so-much-and-what-to-do-about-it/
Entry restrictions to an industry or type of business make it harder for lower-income people to get into it, also increasing inequality:
Combining entry regulations data from the World Bank Doing Business Index with various measures of income inequality, including Gini coefficients and income shares, we examine a pooled cross-section of 175 countries and find that countries with more stringent entry regulations tend to experience higher levels of income inequality. An increase by one standard deviation in the number of procedures required to start a new business is associated with a 1.5 percent increase in the Gini coefficient and a 5.6 percent increase in the share of income going to the top 10 percent of earners.
Patrick A. McLaughlin and Laura Stanley. “Regulation and Income Inequality: The Regressive Effects of Entry Regulations.” Mercatus Working Paper, Mercatus Center at George Mason University, Arlington, VA, January 2016.
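For readers unfamiliar with the study's headline metric, the Gini coefficient summarizes an income distribution in a single number between 0 (perfect equality) and 1 (one person earns everything). A minimal sketch, using the standard mean-absolute-difference form and invented incomes:

```python
# Minimal sketch of the Gini coefficient (the inequality measure the quoted
# study uses), computed from a list of incomes. Incomes are made up.

def gini(incomes):
    """Gini coefficient in [0, 1]: 0 = perfect equality, 1 = maximal inequality."""
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of absolute income differences over all ordered pairs.
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_diff / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))              # 0.0  — everyone earns the same
print(round(gini([0, 0, 0, 100]), 3))  # 0.75 — one person earns everything
```

The study's finding is that a one-standard-deviation increase in required entry procedures is associated with a 1.5 percent increase in this coefficient.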
So my overall understanding is that regulating professions and industries, regardless of the purpose, has the effect of protecting incumbents from competition, both domestic and international. There is little reason to believe, and the research supports this, that those industries and professions are more essential to citizens than, for example, food services or long term care of the elderly.
The workers in unregulated occupations not only face more wage competition, but are expected to pay, from their lower incomes, an unsupported premium consumer cost to buy the services of the protected -health care, legal services, prescriptions, etc. Think of the exorbitant fees charged by banks to administer mutual funds, or the high costs of access to television, cable or internet as compared to other countries.
The protected workers on the other hand, get a bonus of lower prices for consumer goods and services because they are produced by unprotected workers. The bargaining power of those workers has been systematically undermined over the past forty years by government policies and business practices which made unionizing difficult, by reduced coverage of employment insurance, by low minimum wages and reduced social assistance benefits.
So public policy has systematically supported some workers while undermining others. It does seem contrary to the provisions of the Charter of Rights and Freedoms.
Some writers suggest correcting the imbalance by slowing or reversing the process of industry and occupational regulation. It is difficult to imagine this being done to the traditional professional elites who benefit from it, since they tend to be well represented in government. And stopping it from spreading would tend only to embed the present elites rather than expand that population. So are there other options?
Could every industry have the same blanket?
What if we were to go the route of regulating all work, all industry? Would the current beneficiaries cry “Socialism!” ? Certainly it would raise the price of every service we buy, and therefore in turn, every product we buy. But it would push more money to lower income workers. If everyone were paid according to their qualifications and effort, it would significantly reduce inequality.
I’m not very confident of that happening.
What if every worker were unionized?
Another approach might be to legislate that every worker should be a member of a union. Unions have helped to maintain employment standards for their members. But outside of the public sector where they still have a strong foothold, they are a bit of a dying breed. A globally fragmented workforce is hard to organize. It would take a pretty courageous government to implement that kind of policy, but who knows? Miracles can happen. Remember when Jack Layton took Quebec?
A developmental approach?
I devised what I hope might be seen as a gentle, middle of the road approach in a report that I wrote for The Pearson Centre. (called Future of Work Policies for the 2020’s. See thepearsoncentre.ca, or SocialCanada.org to find it.)
My proposal is to form sector work councils for every economic sector at provincial and national levels. Some sector councils exist now, but with more limited mandates. The ones I propose would be comprised of business leaders and worker representatives, as well as expert advisors. Their mandates would include reporting regularly on the health of the sector, including working standards and conditions, and inclusiveness.
The councils would be asked to propose reference wage scales for the sector. They would also oversee and advise training programs as well as the identification of work skill sets and competencies, and ensure appropriate and portable occupational credits.
Every worker in the country would be provided a free membership to the council most relevant to their work, and an opportunity to vote for worker representatives. This would make a start at giving workers a stronger voice.
As a separate measure, the federal government should adjust the Canada Worker Benefit to ensure that every worker has an income equal to at least 3/4 of the median full time wage in the economic region.
It would take some courage. And it would be nice if the medical associations, dental associations, pharmacy associations, law societies, accountant associations, engineering societies, would support this kind of policy or suggest their own solutions to the current unfair situation.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9627376198768616,
"language": "en",
"url": "https://www.nitrocollege.com/blog/are-millennials-the-most-generous-generation",
"token_count": 677,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.052001953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:19f26a44-ed98-4ef2-b651-3a206923d55d>"
}
|
A recent report shows that even though many millennials are battling some serious debt, they're more likely than other generations to lend a charitable hand.
Millennials' patterns and types of giving also reveal some other interesting contrasts with previous generations. Let's take a closer look.
What millennials owe
According to the Federal Reserve, the average person in their 20s in the U.S. owes about $22,135 in student loan debt. The average student loan payment is $351 per month.
And for most millennials, that payment is hardly a comfortable one. Approximately 51% report being underemployed—a 10% leap in the past three years—and millennial unemployment rates are about three times higher than the national average.
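To put the balance and the payment side by side, the standard amortization formula converts a loan balance into a monthly payment. A hedged sketch: the 5% rate and 10-year term below are assumptions for illustration, not figures from the Federal Reserve report.

```python
# Hedged sketch: standard amortization formula, showing roughly what monthly
# payment a balance like the $22,135 average implies. The 5% annual rate and
# 10-year term are assumptions, not figures from the article.

def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12          # monthly interest rate (must be > 0)
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

print(round(monthly_payment(22_135, 0.05, 10), 2))  # roughly $235/month
```

At those assumed terms, the $22,135 average balance implies roughly $235 a month; the reported $351 average payment is consistent with higher rates, shorter terms, or larger balances on many borrowers' loans.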
What millennials give
According to a recent Blackbaud report, millennials give an average of $481 per year—which is less than Baby Boomers ($1,212) and members of Generation X ($732) if you're strictly looking at dollars.
However, 84% of millennials give to charity—which is more than Baby Boomers and Generation X. So while millennials may have less to give, more of them do it anyway.
They’re also changing how charitable donations are made. In 2016, overall charitable giving increased only 1%, but online giving went up 7.9%—and much of that is driven by millennials.
Millennials are also more careful with the dollars they donate. According to Give.org, approximately 60% of relief dollars raised after hurricanes Harvey, Maria, and Irma came from millennials. And millennials were also more likely than any other generation to research hurricane charities before giving.
More informed donations
Millennials are also more likely than any other generation to demand transparency. Approximately 57% reported wanting to know the impact of their individual donation, and about 28% follow their beneficiaries on social media. They're also more likely than other generations to expect regular updates from charities on how their dollars are spent and whom they helped.
Millennials want to know how their donation made a difference—on a personal level. They’re much more likely to be moved by individual appeals than by a large, well-known charitable organization.
This way of giving fits perfectly with the new crowdfunding trend—which is now a multi-billion dollar industry. Through platforms like Indiegogo and GoFundMe, donors can hear individual stories, get frequent updates, and see how their donation helped a specific person or cause.
Crowdfunding has been a huge disruptor of charitable giving in recent years, and Millennials have been driving that trend. They make up approximately 33% of donations to crowdfunding platforms, according to a recent study by Massolution.
Millennials also put pressure on corporate America. A recent Nielsen report states that 73% of Millennials worldwide will pay more for sustainable products—compared with 66% of global consumers overall.
The takeaway? Millennials may have fewer dollars to give—but they’re having a huge impact on charity as a whole by turning to crowdsourcing, demanding more accountability and transparency, and expecting more from companies.
That’s something this generation can be proud of.
Got student loans? Want to reduce your monthly payments? Check out the Student Loan Refinance Calculator to see how much you could save.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9385696053504944,
"language": "en",
"url": "https://www.wallstreetmojo.com/posting-in-accounting/",
"token_count": 1210,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.083984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:42cfeed1-030a-4929-a15c-73124b12d6a2>"
}
|
What is Posting in Accounting?
Posting in accounting refers to the transfer of balances from individual ledgers to the general ledger, making the accounts easier to understand. Posting is done at regular intervals (monthly, quarterly, half-yearly, or yearly) depending on the size of the entity and its volume of transactions.
It refers to the transfer of closing balances from various accounts to the general ledger. The frequency of posting varies with the size of the organization and the volume of transactions. Some large organizations record the monthly closing balance; small organizations often transfer balances directly to the general ledger because of their low volume of accounting transactions.
Posting is also used where an organization has several branches with separately maintained accounts, or where a parent company maintains the accounts of a subsidiary or associate company. It is largely a manual process and requires manpower. Posting may also require consolidation of accounts. With the advancement of technology, posting has become a traditional process that is rapidly being eliminated by automated software.
Steps in Posting in Accounting
Steps in posting involve the following:
Step #1 – Create the Sub-Ledgers and General Ledgers with Various Transactions
Various accounts, along with the transactions, are to be recorded in their respective ledgers.
Step #2 – Create the General Ledger
The general ledger is the ledger in which balances of all sub-ledgers and general journals are to be transferred.
Step #3 – Enter the Name and Account in General Ledger with Details
Transfer in general ledger takes place with the name of the account and amount carried forward in subledger or general journal along with entry details.
Step #4 – Enter the Debit and Credit Balances in the Ledger
Debit and credit balances are to be entered in the general ledger as per the balance in the account. The debit balance increases the asset, whereas the credit balance increases the liability in the accounts.
Step #5 – Maintain the Account for each Period Separately
The general ledger for each period is to be maintained separately, so as to avoid double counting or confusion in the accounts.
Step #6 – Correct any Errors
The final step is to cross-verify the balances and recheck for any mathematical errors; if any errors are found, rectify them so as to maintain proper records.
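The steps above can be sketched as a toy program. This is an illustration of the mechanics only (the account names and amounts are invented): each sub-ledger's entries are netted and carried into the general ledger, and a final cross-check confirms that total debits equal total credits.

```python
# Toy sketch of posting: net each sub-ledger's (debit, credit) entries and
# carry the balance into a general ledger. Accounts and amounts are invented.

def post_to_general_ledger(sub_ledgers):
    """sub_ledgers: {account_name: [(debit, credit), ...]}.
    Returns {account_name: net balance} (debit positive, credit negative)."""
    general_ledger = {}
    for account, entries in sub_ledgers.items():
        general_ledger[account] = sum(d - c for d, c in entries)
    return general_ledger

sub_ledgers = {
    "Cash":      [(5000, 0), (0, 1200)],   # received from sales, paid to supplier
    "Sales":     [(0, 5000)],
    "Purchases": [(1200, 0)],
}
gl = post_to_general_ledger(sub_ledgers)
print(gl)                     # {'Cash': 3800, 'Sales': -5000, 'Purchases': 1200}
assert sum(gl.values()) == 0  # step 6 cross-check: debits equal credits
```

The closing `assert` is the arithmetic cross-verification of step 6: if the posted balances do not sum to zero, an entry was posted to only one side.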
Posting in Accounting Examples
The details of XYZ internationals are as under:
XYZ International issues 20 invoices to its customers and records each transaction in the sales account and the respective debtor's account. The company also purchased from 10 suppliers, recorded in the purchases account and the respective creditors' accounts. Some of the payable liability is recorded in the general journal account. The details are as under:
Prepare General Ledger.
- Posting in a ledger is to be made in chronological order, i.e., date-wise.
- While posting in the ledger, entries are to be made into both accounts, i.e., double entries are to be made. For example, in the case of a purchase on credit, the entry is made in the purchase account as well as the creditor's account.
- The amount is to be shown in the amount column, with debit balances entered on the debit side and credit balances on the credit side.
- The balance in the nominal accounts is to be transferred directly to the profit and loss account.
- Assets are to be debited, and liabilities are to be credited.
- Balance can be Easily Verified – With posting in the accounts, the balance of each account can be easily known as of any date. It creates a clear understanding of account balances and reduces the effort of searching through each ledger account.
- Ensures Smooth Running of Business – Posting of balances ensures the smooth running of the business, as balances can be easily tracked and called for, and cross-verification and arithmetical accuracy can be rechecked.
- Helps to keep Updated Records – It helps to keep an updated record of all ledger balances and to track how those balances have changed over time.
- Can be Easily Analyzed – As the balances of ledger accounts change with the recording of transactions, if a balance remains unchanged for a continuous period of time, one can analyze the account and request clearance of the balance or record it as a bad debt.
Posting in the ledger is the accounting process through which the balances of the general journal and various sub-ledgers are transferred at intervals ranging from daily to yearly. It is very helpful in large organizations, as it makes keeping track of balances easy. With posting to a ledger, the arithmetic accuracy of the accounts can be verified and the balances analyzed thoroughly, so as to maintain proper and accurate records.
Posting in the ledger is a manual process, hence manpower is needed. It ensures that all assets and liabilities are recorded properly. The balances of nominal accounts are transferred directly to the profit and loss account, and the balances related to balance sheet items are transferred to the general ledger account. Posting helps to keep records updated, but with the advancement of technology and the availability of various software, it is becoming a traditional concept.
This has been a guide to What is Posting in Accounting & its Definition. Here we discuss the step to calculate the posting in accounting and examples along with rules and importance. You can learn more about from the following articles –
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9154137969017029,
"language": "en",
"url": "http://mediabrand.live/create-a-crypto-coin/",
"token_count": 1356,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1376953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a84650bf-615d-4072-8f64-f6221d86646d>"
}
|
If you are not an expert coder but have become a keen armchair observer of Bitcoin, Dogecoin, and every other increasingly niche cryptocurrency, you may be wondering if it's feasible to make your own.
There are quite a few different options to consider, and caveats to keep in mind, before you dive in.
Difference Between a Coin and a Token
First, it is important to understand the difference between coins and tokens. Both are cryptocurrencies, but while a coin (Bitcoin, Litecoin, Dogecoin) operates on its own blockchain, a token lives on top of an existing blockchain infrastructure such as Ethereum. A blockchain is, at its simplest, a list of transactions made on and secured by a network. So while coins have their own independent transaction ledgers, tokens rely on the underlying network's technology to verify and secure transactions and ownership. In general, coins are used to transfer wealth, while tokens can represent a "contract" for almost anything, from physical objects to event tickets to loyalty points.
Tokens are usually released through a crowdsale called an initial coin offering (ICO) in exchange for existing coins, which in turn fund projects such as gaming platforms or digital wallets. You can still get publicly available tokens after an ICO has ended, similar to buying coins, using the underlying currency to make the purchase.
Anyone can create a token and run a crowdsale, but ICOs have become increasingly murky as founders take investors' money and run. The Securities and Exchange Commission is cracking down on ICOs and moving to treat tokens as securities that, like stocks, must be regulated. The SEC warns investors to do their own research before buying tokens launched in an ICO. Not all tokens made it to exchanges, however: Etherscan, which provides Ethereum analytics, has more than 71,000 token contracts in its archive.
The very concept behind cryptocurrency is that the underlying code is accessible to everyone, but that doesn't mean it's easy to understand.
Build Your Own Blockchain, or Fork an Existing One
Both of these methods require quite a bit of technical knowledge, or the help of a savvy developer. Because coins exist on their own blockchains, you'll need to either build a blockchain or take an existing one and modify it for your new coin. The former requires serious coding skills, and though tutorials exist to walk you through the process, they assume a certain knowledge level, and you don't end up with a fully functional coin.
Alternatively, you can fork an existing blockchain by taking the open-source code found on GitHub (Litecoin, for example), making a few changes, and launching a new blockchain with a new name (such as Garlicoin). Again, this requires you to understand the code so you know what to alter and why.
Establish a Coin or Token Using a Cryptocurrency Creation Platform
This option is the most feasible for the average person: a creation service will do the technical work and deliver your finished coin or token back to you. For example, an experienced team of crypto developers will actually build a custom coin, and all you have to do is enter the parameters, from the logo to the number of coins awarded for signing a block. (That is, when they're open for business; as of press time, orders are closed.) They have pre-built templates that only require you to provide a name and a logo.
You can also create a token, which is basically a smart contract, with or without a public ICO. Because tokens can represent any asset, from a concert ticket or voting right to funding via a crowdsale or even a physical currency, you can create a token with no real value or serious purpose other than to swap among friends. This is faster, easier, and cheaper than creating a coin because it doesn't require the time and effort to build and maintain a new or forked blockchain, and instead relies on the technology already in use for Bitcoin or Ethereum.
A common product is an ERC-20 token, the standard for tokens built on the Ethereum blockchain. The code for these token contracts and crowdsales is also available for the very ambitious, but there are user-friendly platforms that will walk you through the process.
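As a conceptual aid only, the ledger logic an ERC-20-style contract tracks can be modeled in a few lines. A real token is Solidity code deployed to Ethereum; this toy in-memory version (all names invented) just shows the state, balances and transfers, that the standard describes:

```python
# Conceptual toy only: an in-memory model of the state an ERC-20-style
# token contract tracks. Not Solidity, not deployable; names are invented.

class ToyToken:
    def __init__(self, name, symbol, total_supply, creator):
        self.name, self.symbol = name, symbol
        self.total_supply = total_supply
        self.balances = {creator: total_supply}  # creator starts with all tokens

    def balance_of(self, account):
        return self.balances.get(account, 0)

    def transfer(self, sender, recipient, amount):
        if self.balance_of(sender) < amount:
            return False                         # transfers fail on insufficient funds
        self.balances[sender] -= amount
        self.balances[recipient] = self.balance_of(recipient) + amount
        return True

token = ToyToken("PubCoin", "PUB", 1_000, "alice")
token.transfer("alice", "bob", 250)
print(token.balance_of("alice"), token.balance_of("bob"))  # 750 250
```

The real standard adds approvals, allowances, and events on top of this, but the core is exactly this kind of balance bookkeeping, which is why tokens are so much cheaper to launch than coins with their own blockchains.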
For example, you will have to add the browser extension that links you to the Ethereum network to your browser, then follow a walk-through video to build your token and launch your own ICO. The platform gives you the option to create bonuses and vesting schedules for investors, or even to establish a token contract with no crowdsale. The token contract process is free, but CoinLaunch takes a commission from every ICO (4-10% depending on how much money is raised).
If you're crypto-curious, there's no penalty for experimenting with token contracts. Start with an ERC-20 token that you can distribute to your friends, who can then cash them in for drinks at the pub. There's no financial value or commitment attached, but it will help you understand the technical side as well as how tokens work. An ICO probably won't be appropriate for the casual observer, though, because of increasing regulation and penalties for misrepresentation.
If you want to go a step further and create a coin with real value for a wider audience to mine, buy, and sell, and you don't have programming experience, you'll probably need the help of one or more programmers. Even if you use a service to build your currency, you'll need help maintaining it, and know that this won't be cheap or risk-free.
The technical development of a cryptocurrency isn't actually the hardest part of launching a successful crypto project. The real work is in giving your coin or token value, building the infrastructure, maintaining it, and convincing others to buy in. Even memecoins such as Garlicoin, Dogecoin, and PepeCoin have developers and user-facing teams to keep the technology secure and the community engaged. Plenty of cryptocurrencies are unsuccessful, or even questionable from a legal standpoint, because the ICO was not launched in good faith or the coin failed to generate lasting interest. The expression "shitcoin" exists for a reason.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9231728315353394,
"language": "en",
"url": "http://mediabrand.live/crypto-coin-change/",
"token_count": 1376,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:25e5f865-e5ec-4277-a494-fafbe36105e0>"
}
|
If you are not a professional coder but have been a keen armchair observer of Bitcoin, Dogecoin, and every other increasingly niche cryptocurrency, you may be asking yourself if it is possible to create your own.
There are quite a few different options to consider, and caveats to keep in mind, before you dive in.
Difference Between a Coin and a Token
First, it is important to understand the difference between coins and tokens. A blockchain is, at its simplest, a list of transactions made on and secured by a network. So while coins have their own independent transaction ledgers, tokens rely on the underlying network's technology to verify and secure transactions and ownership. Generally, coins are used to transfer wealth, while tokens can represent a "contract" for virtually anything, from physical objects to event tickets to loyalty points.
Tokens are often released through a crowdsale known as an initial coin offering (ICO) in exchange for existing coins, which in turn fund projects such as gaming platforms or digital wallets. You can still get publicly available tokens after an ICO has ended, like buying coins, using the underlying currency to make the purchase.
Anyone can create a token and run a crowdsale, but ICOs have become increasingly murky as founders take investors' money and run. The Securities and Exchange Commission is cracking down on ICOs and moving to treat tokens as securities that, like stocks, must be regulated. The SEC warns investors to do their research before buying tokens launched in an ICO. Not all tokens made it onto exchanges, however: Etherscan, which provides Ethereum analytics, has more than 71,000 token contracts in its archive. While the crypto market is volatile, experts believe it will continue to mature as more people adopt the idea.
The very concept behind cryptocurrency is that the underlying code is available to everyone, but that doesn't mean it's easy to understand.
Build Your Own Blockchain, or Fork an Existing One
Both of these methods require quite a bit of technical knowledge, or the help of a savvy developer. Because coins are on their own blockchains, you will need to build a blockchain, or take an existing one and modify it, for your new coin. The former requires serious coding skills, and though tutorials exist to walk you through the process, they assume a certain knowledge level, and you don't finish with a fully functional coin.
Alternatively, you can fork an existing blockchain by taking the open-source code found on GitHub (Litecoin, for example), making a few changes, and launching a new blockchain with a brand new name (like Garlicoin). Again, this requires you to understand the code so that you know what to modify and why.
This option is the most feasible for the average person: a creation service will do the technical work and deliver your finished coin or token back to you. For example, an experienced group of crypto developers will actually build a custom coin, and all you have to do is enter the parameters, from the logo to the number of coins awarded for signing a block. (That is, when they're open for business; as of press time, orders are closed.) They even have pre-built templates that only require you to provide a name and a symbol. The base price for this service is 0.25 BTC ($2,002.00 as of this writing), and you'll get your coin's source code in a few days.
You can also create a token–what’s
Basically a smart contractwith or without a people ICO. Because
tokens can represent any advantage, from a concert ticket or voting right
to financing by means of a crowdsale or a physical currency, you may also create a token without a real worth or serious goal other than to
exchange among friends. This is quicker, easier, and cheaper than
creating a coin because it doesn’t require time and effort to
build and maintain a fresh or forked blockchain and rather relies on
the technology already in use for Bitcoin or Ethereum.
A frequent product is the ERC-20 token, the standard for tokens built on the Ethereum blockchain. The code for these token contracts and crowdsales is readily available for the very ambitious, but there are also user-friendly platforms that will walk you through the process.
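At its core, an ERC-20 contract is little more than a ledger that maps addresses to balances, plus a transfer rule. Real token contracts are written in Solidity and run on Ethereum; the sketch below uses Python purely to illustrate the bookkeeping, and every name in it is made up for the example.

```python
# Toy sketch of the ledger logic inside an ERC-20-style token contract.
# This is NOT a real contract; it only illustrates the bookkeeping.

class ToyToken:
    def __init__(self, name, symbol, total_supply, creator):
        self.name = name
        self.symbol = symbol
        self.balances = {creator: total_supply}  # address -> balance

    def balance_of(self, address):
        return self.balances.get(address, 0)

    def transfer(self, sender, recipient, amount):
        # A real contract rejects transfers the sender cannot cover.
        if self.balance_of(sender) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balance_of(recipient) + amount


token = ToyToken("Garlicoin", "GRLC", 1_000_000, "alice")
token.transfer("alice", "bob", 250)
print(token.balance_of("bob"))    # 250
print(token.balance_of("alice"))  # 999750
```

A real ERC-20 contract also standardizes functions such as `approve` and `transferFrom`, which is what lets exchanges and wallets interoperate with any compliant token.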
On CoinLaunch, for example, you will have to add a browser extension that connects you to the Ethereum network, then follow their walk-through video to build your token and launch your ICO. The platform gives you the option to create bonuses and vesting schedules for investors, or even to launch a token contract with no crowdsale at all. The token contract process is free, but CoinLaunch takes a commission from each ICO (4-10%, depending on how much money is raised).
If you're crypto-curious, there's no penalty for experimenting with token contracts. There's no financial value or commitment attached, but it can help you understand the technical side as well as how tokens work. An ICO likely won't be appropriate for the casual observer, though, because of increasing regulation and penalties for misrepresentation.
If you would like to go a step further and create a coin with real value for a wider audience to mine, buy, and sell, and you don't have programming experience, you're likely going to need the help of one or more developers. Even if you use a service to build your currency, you will have to maintain it; know that this won't be cheap or risk-free.
The technical development of a cryptocurrency is not really the hardest part of launching a successful crypto project. The real work is in giving your coin or token value, building the infrastructure, maintaining it, and convincing others to buy in. Even memecoins such as Garlicoin, Dogecoin, and PepeCoin have developers and user-facing teams to keep the technology stable and the community engaged. Plenty of cryptocurrencies are unsuccessful, even suspect from a legal standpoint, because the ICO wasn't launched in good faith or the coin failed to generate lasting interest. The term "shitcoin" exists for a reason.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9511083364486694,
"language": "en",
"url": "http://sweethouse.ga/2017/03",
"token_count": 606,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.005157470703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b5e363d2-f21c-47a4-ae6a-501fd9b838a6>"
}
|
What does the term personal finance mean? The way we apply the principles of finance to the monetary decisions of an individual or family determines how competently we handle our personal finances. It means maintaining a budget, saving and spending with an eye on the risk of a financial crunch and on future events. In the broader perspective, it includes checking and savings accounts, credit cards, consumer loans, stock investments, retirement plans, insurance policies and income tax management.

As one may gather, this is not an easy task; it involves dynamic planning with regular monitoring and evaluation. Setting a goal is anybody's game, but executing it needs special skill, and perseverance and discipline are mandatory for accomplishing any goal. For this you need the proficiency of a personal finance manager that is well versed in the nuances of fiscal matters.

How about streamlining your personal finances through a personal finance manager? It is far more secure than struggling with money matters amid a hectic schedule. If you manage everything on your own, you will likely end up confused and stressed; managing personal finances alone becomes a daunting, tedious experience, whereas it is a cakewalk with a trustworthy personal finance manager application that helps you out of your money blues.

The biggest challenge in dealing with money matters is that you may be blemished by bad credit and mismanagement of funds, which puts you in the soup once again. "Once a defaulter, always a defaulter," goes the adage, but you will be redeemed if you choose the right personal finance manager: it gives you a fair chance to recoup what has been lost. Very often it is not the lack of funds but the mismanagement that creates paucity.
Use of educational software is a choice every school makes to ease its processes and manage the faculty. Schools provide different facilities, and proper maintenance is required to manage them.

Different accounts need to be created, and all expenses should be handled by a proper system. To make expense management easier, the educational software is built with facilities for managing everything associated with the expenses.

With expense management software in place, it is easy to manage the expenses of a school's different activities from one single platform. These expenses include transportation charges, electricity bills, water supply bills, cleaning bills, and so on, as well as the costs of the school's academic activities.

With software that has a dedicated module for managing expenses, there is far less to handle manually, since the system keeps track of expenditure across the different areas.

The software offers the option to modify the expense chart at any time. It also makes it easy to update each month's records, and these updates are easy to share with the school staff authorized to see the expense information.
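The expense module described above boils down to recording amounts by month and category and summarizing them on demand. Here is a minimal sketch in Python; the class and field names are hypothetical and not taken from any particular product.

```python
from collections import defaultdict


class ExpenseTracker:
    """Toy model of a school expense module: per-month, per-category totals."""

    def __init__(self):
        # month -> category -> running total
        self.records = defaultdict(lambda: defaultdict(float))

    def add_expense(self, month, category, amount):
        self.records[month][category] += amount

    def monthly_summary(self, month):
        # Plain dict copy, e.g. for sharing with authorized staff.
        return dict(self.records[month])


tracker = ExpenseTracker()
tracker.add_expense("2021-03", "transportation", 1200.0)
tracker.add_expense("2021-03", "electricity", 800.0)
tracker.add_expense("2021-03", "transportation", 300.0)
print(tracker.monthly_summary("2021-03"))
# {'transportation': 1500.0, 'electricity': 800.0}
```

A real product would add persistence, access control for the authorized staff, and the ability to edit past entries, but the core data structure is no more complicated than this.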
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9396490454673767,
"language": "en",
"url": "http://www.fusionconcepts.com.hk/ale6ea/45b663-why-is-learning-about-personal-financial-planning-important",
"token_count": 2132,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0302734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:55788ce4-4fa5-4fe9-b157-12352586b967>"
}
|
The Thrive Global Community welcomes voices from many spheres on our open platform. The statistics below show that plenty of adults are feeling the pressure of financial issues:

● 33% of American adults have $0 saved for retirement.

Basically, anything that concerns finances associated with your lifestyle expenses, savings and investments is part of your personal finance. Financial literacy can be regarded as the knowledge that enables one to make responsible decisions involving money. Dameion Lovett, Campus Director of Financial Aid at USF Tampa, says: "Financial literacy is important because it's one of the things that will encompass just about every aspect of a person's life." Did you know that most Americans spend more time on social media than on their finances? In this light, personal finance is important to students simply so that they may realize just how vital financial management is when living on their own.

Personal finance is a necessary life skill that must be taught in schools. A recent Credit Karma/Qualtrics survey found 63% of respondents think personal finance education should be taught in schools, and although nearly two-thirds of Americans agree on the importance of finance in our schools, respondents were divided over when this should happen: 30% believe it should start in elementary school (author included!). While I'm a financial planner now, I didn't learn most financial basics until my junior year of college.

Finances are understandably one of the major causes of stress for adults, and financial planning is important for each and every one of us; we should take it seriously to better shape and safeguard our futures. Personal finance includes all the actions taken by an individual or family to manage money in the present while making financial plans for the future. Here are a few key reasons why you need personal financial planning for a better tomorrow:

1. Protection for you and your family. With financial planning you gain comfort about your retirement and can plan your finances for emergency situations. Hence, you don't have to overpay for any unnecessary insurance, and you don't end up with lower cover than you need.

2. Budgeting teaches awareness and responsibility. Just think of budgeting like a roadmap for your money: a simple step-by-step guide can help you easily create a spending plan, and learning to manage money early can eliminate financial mistakes and bring huge financial benefits later.

3. Defining financial goals. Learn more about financial planners and use NerdWallet's tool to prioritize your goals. The most important rule in saving for retirement is to start early.

There are many benefits of financial planning in business as well: sound financial planning is essential for the success of any business enterprise, and when people plan before launching a business, they avoid many pitfalls that others do not. Be prepared to shift some of your tax and financial planning strategies if there is a change in administration with the coming election. Whether you are a newly minted college graduate sorting out your loan options, a newcomer to the workforce planning for a home and a family, or a new retiree seeking a lifetime income in the absence of a paycheck, professional financial advice can be a big help. Having basic personal financial skills is one of the most important things you can do to live a healthy, happy and secure life. Start planning today for a better tomorrow.

I am the author of "Beyond Piggy Banks and Lemonade Stands: How to Teach Young Kids About Finance". Outside of work, I can be found playing dinosaurs with my toddler son, going on "movie dates" with my six-year-old daughter, conducting science experiments with my ten-year-old stepson, or sneaking away for a much-needed date night with my husband and friends.
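The "start early" rule mentioned above is driven by compounding, which a quick calculation makes concrete. The 7% annual return and the dollar amounts below are assumptions chosen only for the arithmetic, not a forecast.

```python
def future_value(monthly_saving, annual_rate, years):
    """Future value of a fixed monthly saving, compounded monthly."""
    r = annual_rate / 12
    n = years * 12
    return monthly_saving * ((1 + r) ** n - 1) / r


# $200/month at an assumed 7% annual return:
early = future_value(200, 0.07, 40)  # saving from age 25 to 65
late = future_value(200, 0.07, 20)   # saving from age 45 to 65
print(round(early))  # roughly 525,000
print(round(late))   # roughly 104,000
```

Starting 20 years earlier costs twice as many contributions but ends with roughly five times the balance, which is why early saving matters more than the exact return assumed.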
It's always advisable to consider early investing for achieving your life's goals. Personal finance is such an important part of life that I can't believe we don't teach students more about money in school, and financial planning isn't a difficult task. It builds financial literacy: it gives you a clear sense of your expenditures and spending patterns, helps you increase your savings and reduce your expenses, and lets you determine your short-term as well as long-term financial goals, creating a roadmap to achieve them with a choice of assets that suits your risk appetite and return preferences. Sound planning includes elements of protection and wealth creation, and helps you build a corpus for your retirement, when the expenses continue but the income stops. If you carry debt, make eliminating it a priority so you don't stay up late tossing and turning, wondering how you will pay for it. Some aspects of financial planning are best handled by experts, and employers benefit too, since financial wellness supports an employee's happiness and boosts productivity. We offer 5 free, online budgeting workshops to help you track and budget your money.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9517688155174255,
"language": "en",
"url": "https://climate-diplomacy.org/magazine/environment/food-security-changing-climate-priorities-g20",
"token_count": 1246,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.212890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:55dcb2fe-0bd7-4d42-8029-fbedac233499>"
}
|
Without concerted efforts to help small-scale farmers raise productivity and adapt to climate change, the G20 will not come close to attaining its goal of securing global food systems as envisaged in the 2017 G20 Agriculture Ministers’ Declaration.
The recent shift in climate policy by the United States and the withdrawal of climate financing statements from the recent communique of the G20 Finance Ministers’ meeting is disconcerting news for future global agricultural production and food security. The group that is likely to suffer the most from climate change is poor rural households in developing countries who mainly rely on small-scale agriculture for their livelihood. In large parts of Sub-Saharan Africa and South Asia, the two regions with the highest incidence of undernutrition, reaching the ambitious SDG 2 of ending hunger and achieving food security by 2030, appears to be a daunting task even in the absence of climate change. By lowering agricultural suitability and yields in some regions, climate change adds to the challenge. Yields are expected to decline by almost 10 percent in Sub-Saharan Africa and South Asia. At the same time, Sub-Saharan Africa in particular shows a large gap between current yields and maximum attainable yields, which points to untapped production potential.
Farmers are already confronting the twofold challenge of finding new ways to raise their agricultural productivity as a means of closing these yield gaps while at the same time adapting to climate change. However, given the multiple constraints small-scale farmers face, the potential to raise productivity and adapt to climate change is unlikely to be fully exploited without strong support by the G20 and the international community at large. The G20 is well suited to play a key role in this regard as it includes emerging economies such as China, India and Brazil, which have strongly expanded their engagement in developing countries over the last decade.
One of the key outcomes of the 2017 G20 Agriculture Ministers’ meeting was the reaffirmation of the G20 economies’ commitment to ensuring that SDG 2 with its targets of ending hunger and attaining global food security is reached. Moreover, the G20 Agricultural Ministers also acknowledged the importance of committing to the Paris Agreement and supporting climate adaptation and mitigation measures in order to enhance agricultural resilience.
Areas in which the G20 can help increase small-scale farmers' productivity and their resilience to climate change include, for instance:
Developing agricultural technologies
Small scale agriculture supports millions of people in the regions that are to be the worst affected by climate change, yet research that specifically targets the linkages between food security and climate change in these regions is limited. The international community has a key role to play in encouraging the development of agricultural technologies that provide localized solutions for smallholders to adapt to climate change. Areas that face reduced agricultural suitability due to reduced rainfall would, for example, have to invest in agricultural research that promotes new technologies such as the breeding of drought resistant crop varieties as well as improved irrigation techniques. In contrast, technologies that promote the development of crop varieties with higher moisture tolerance as well as flood control mechanisms can be introduced in regions that experience increased rainfall and floods.
The G20 and the international community should build its engagement with developing countries on existing regional initiatives such as the Comprehensive Africa Agricultural Development Programme (CAADP), one pillar of which is specifically targeted at improving agricultural research, technology dissemination and adoption.
Strengthening local policies and institutions
The effects of climate change will be felt the strongest at the local level. Agricultural extension offices have long been the local institutions responsible for the dissemination of agricultural information to small scale farmers. In view of the impending effects of climate change on agriculture and food security, the staff of these local institutions will have to be re-trained so as to be able to relay the relevant information on climate change to farmers within their networks.
Innovative approaches that disseminate agricultural information by local institutions need to be encouraged as well. In Nepal, for example, radio jingles and public service announcements have been produced to advise over one million farmers on how to adapt to climate change. Moreover, local institutions can take advantage of rapidly growing mobile phone networks that allow for the dissemination of information to small scale farmers via SMS texts.
Local governments and the international community can work in a variety of ways to assist smallholders in managing risk. The provision of timely weather information can help rural communities manage the risks associated with high rainfall variability. Another example is livestock insurance schemes that are weather indexed. By reducing income risks, social safety net programs such as cash transfers could potentially also have a significant effect on risk management in agricultural production systems.
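Weather-indexed insurance of the kind mentioned above pays out based on a measured index, such as seasonal rainfall at a local station, rather than on individually assessed losses, which keeps claims cheap to verify. A toy payout rule in Python; the trigger, exit, and payout figures are illustrative, not taken from any actual scheme.

```python
def index_payout(rainfall_mm, trigger_mm=300, exit_mm=100, max_payout=500.0):
    """Payout rises linearly as seasonal rainfall falls below the trigger,
    reaching the maximum at or below the exit level (all figures illustrative)."""
    if rainfall_mm >= trigger_mm:
        return 0.0
    if rainfall_mm <= exit_mm:
        return max_payout
    return max_payout * (trigger_mm - rainfall_mm) / (trigger_mm - exit_mm)


print(index_payout(350))  # 0.0   -> normal season, no payout
print(index_payout(200))  # 250.0 -> halfway between trigger and exit
print(index_payout(80))   # 500.0 -> severe drought, full payout
```

Because the payout depends only on the published index, no loss adjusters need to visit each farm, though farmers bear "basis risk" when their individual losses diverge from the index.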
Increasing access to climate change financing
Traditionally, climate financing and financing for agricultural development have largely been separate from each other. The launch of the Green Climate Fund (GCF) constituted an important step towards overcoming this divide. The GCF was formally set up at the COP16/CMP6 in Cancún and is now being regarded as a possible game changer that might shift the balance between mitigation and adaptation funding. It aims to mobilise 100 billion USD by 2020, to be equally shared between mitigation and adaptation. The first GCF investment projects were approved in November 2015. Currently 43 projects are underway, the majority of which are in Africa and the Asia-Pacific region. The bulk of the funding has so far been set aside for mitigation projects but this divide is expected to be evened out by 2020.
These activities under the umbrella of the GCF indicate that adaptation to climate change and food security needs have begun to feature more prominently on the international climate policy agenda. The climate-related problems of developing countries were also discussed at the COP21/CMP11 in Paris, but it is not clear which direction future climate funding will take due to the reversal in climate policies by the United States. European G20 member states and China have already assured their continued contributions to climate financing. Yet, joint efforts by all G20 members including the United States would raise the leverage and thus be more effective in helping small-scale farmers in developing countries increase productivity and adapt to climate change.
[This article originally appeared on the T20 blog]
|