Source: http://sobeks.com/credit-transactions/
For some reason, people often think that credit transactions create money. They do not. When you buy ice cream, money isn't created; you exchange money for the ice cream. The same goes for credit transactions. The lender exchanges his money for a promise, or claim, from the borrower that he will give the money back at a future date. The borrower is able to spend more in the present than he otherwise would be, it is true, but at the same time the lender can spend that much less. When the borrower returns the money, he must restrict his spending and the lender can spend more.*

When a company wants to borrow money, it sells a bond: a legal claim entitling the bondholder to a stream of cash payments from the bond issuer (i.e., the company). A bond is simply a standardized contract in which a company borrows money from someone else in the community. The bond price is the amount of money the company is borrowing.

When an individual wants to borrow money, he can make arrangements with various people; in many cases, however, borrowers use the services of a credit intermediary, such as a bank. The bank is an intermediary between the ultimate lenders and borrowers. First, the bank acts as a borrower when depositors lend their funds to it (and earn a certain interest rate on their deposits). Second, the bank uses these funds to act as a lender to people in the market who wish to borrow from it (and pay a certain interest rate on their loans). A successful bank earns enough on the spread (the difference between the interest rate it charges borrowers and the interest rate it pays depositors) to provide for itself.

Consider a young couple wanting a mortgage to buy a new house for $200,000. They have to borrow money from multiple savers.
If they went knocking door to door, trying to find 200 people who would each put up $1,000 in exchange for the couple's signatures on a loan contract, they probably wouldn't find many takers, and even if they did, the interest rate would be quite high. With a bank it's different, because the bank is less likely to lose the lenders' savings than any individual borrower is. Thus the lenders are willing to lend at a lower contractual interest rate. The bank can also afford to lend to the couple, because it has experts whose job is to evaluate the likelihood that the couple will make their mortgage payments on time. By making hundreds or thousands of loans like these, the bank reduces the damage of any particular loan default (when a borrower stops making repayments). So long as the bank has properly estimated the credit risks of its borrowers, it will absorb the expected number of delinquencies and defaults as part of the cost of doing business. Thanks to banks, no single lender loses his life savings; instead the loss is spread among all the lenders, who simply earn a lower interest rate on their bank deposits than the borrowers are paying on their mortgages.

Credit cards are a popular form of credit transaction. When a customer buys something, the credit card issuer pays money to the store and then records the loan on the customer's account. You can see once again that no new money is created, only exchanged. It is the same as if the credit card issuer walked into the store, gave the customer the money in exchange for a signature promising to pay it back with interest, and the customer then handed the newly borrowed money to the store clerk. The plastic card just makes this process easier.

There are companies who sell lenders "scores" on each applicant to make it easier for the lender to determine whether the borrower is likely to repay on time.
A high credit score, or good credit, means the applicant is responsible; poor credit, or a low credit score, means the opposite.

Secured and unsecured loans

The difference between a secured and an unsecured loan is that a secured loan has collateral backing it up, usually the object being purchased with the loan. A typical example is a mortgage, in which the house (and the land on which it sits) serves as collateral. If someone borrowed $10,000 to take a cruise, there would be nothing except memories to show for it down the road, whereas someone borrowing $10,000 to buy a new car could sell the car and pay off most of the remaining debt if his circumstances changed.

Productive debt occurs, for example, when an entrepreneur borrows money in order to expand his or her business operations. In the ideal scenario, the company takes out a loan, expands its business, earns higher revenues, and pays off the debt. Taking out a loan to go through college or medical school can also be productive debt. The essential feature of productive debt is that the borrowed money is invested in order to increase the borrower's future income, so that paying back the loan will not be a burden.

*Yes, some banks really do create new money when they advance a loan, but we are not going to discuss that today.
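The bank-spread economics described above can be sketched in a few lines of Python. This is a hypothetical illustration: the deposit rate, loan rate, and default assumption are invented for the example, not taken from the article.

```python
# Illustrative sketch (assumed rates): a bank funds loans out of deposits
# and lives on the spread, after pricing in an expected default rate.

def bank_spread_income(deposits, deposit_rate, loan_rate, default_rate):
    """Annual net interest income after expected loan losses."""
    interest_paid = deposits * deposit_rate      # owed to depositors
    interest_earned = deposits * loan_rate       # charged to borrowers
    expected_losses = deposits * default_rate    # delinquencies priced in
    return interest_earned - interest_paid - expected_losses

# 200 savers each deposit $1,000; the bank funds one $200,000 mortgage.
income = bank_spread_income(200_000, deposit_rate=0.02,
                            loan_rate=0.05, default_rate=0.01)
print(income)  # 4000.0
```

The point of the sketch is that the lenders' money is merely passed through: the bank's earnings come from the rate difference, not from money creation.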
Source: https://best-excel-tutorial.com/59-tips-and-tricks/248-company-budget
Lots of small businesses use Excel spreadsheets to keep themselves updated on important data for their business. With Excel, businesses have learned to be efficient in data management. Accounting is a major aspect of any business organization's operational process, and an Excel spreadsheet is very helpful in maintaining proper accounting records. This guide is intended to provide helpful information on how Excel can be useful in financial accounting. It will show you how to effectively create a company budget using an Excel spreadsheet.

How To Create A Company Budget In Excel

1. Click the Excel icon on your computer and start a new blank spreadsheet.

2. Click on the first row and type the name of the spreadsheet. Click the second cell on the third row and type the first expenditure for the business. Move to the next cell and type another expense. Continue in that manner until you have typed in all the expenses for the business, such as Travel, Utilities, Office Rent, Insurance, etc.

3. Go to the first column and put the cursor on the fourth cell (this would be A4). Add the name of the department for which the budget is prepared, or the names of employees who have spending/purchasing power within the organization. Press the ENTER key to move to the next cell. Enter the next name and press ENTER. Continue this process until all sections involved in the company's budget are entered.

4. Click and highlight all entries in the first column. Right-click and then click the "Format Cells" option to color-fill each of the cells with different colors to make them stand out from each other. Go to the font tab and change the font style to bold. After this is done, click the OK button for Excel to apply the changes.

5. Go over to the expense section in row three and carry out the same formatting.
Click and highlight the worksheet's title and carry out the same formatting, but this time make the words larger than the headings so that the title stands out slightly. Use the AUTOSUM feature of the spreadsheet to make your calculations simpler: highlight the cells in one column (such as the cells under Office Rent) and click the AUTOSUM button. The cells will fill with the formula =SUM(). This tells you that the cells will add up automatically as values are entered into the spreadsheet, providing a monthly sum for the column.

6. Next, highlight the row of new AUTOSUM cells plus one blank cell, then use the AUTOSUM feature again to add up the monthly budget. This total will show up in the blank cell.

7. After all entries have been made and the AUTOSUM feature is running, open the File tab, select "Save As", type the name of the spreadsheet, and save it to the computer. Your accounting spreadsheet is up and running.

Once the Excel spreadsheet is up and running, any monthly addition to the budget will be updated automatically because of the AUTOSUM feature. All you need to do is type the expense amount under the proper name in the expense column. Excel will automatically sum the figure and update the data from the previous month's expenses. The worksheet you create by following the above steps will remain the template each time you want to update the budget. In the end, your Excel spreadsheet should look like the example screenshot (not reproduced here). You can leave some spaces where you can enter more expenses as they are incurred. Once you enter the name and amount of an expense, hit the ENTER key and the total amount will be updated automatically.
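The totals that AUTOSUM produces in the steps above can be sketched in plain Python. The departments and amounts below are invented for illustration; the layout mirrors the tutorial's grid of expense columns and department rows.

```python
# Illustrative sketch: the same column totals and grand total that the
# spreadsheet's =SUM() formulas produce. Names and figures are made up.
budget = {
    "Travel":      {"Sales": 1200, "Engineering": 300},
    "Utilities":   {"Sales": 150,  "Engineering": 450},
    "Office Rent": {"Sales": 2000, "Engineering": 2000},
}

# Per-expense totals (one =SUM() per column in the spreadsheet).
column_totals = {expense: sum(rows.values())
                 for expense, rows in budget.items()}

# Grand total (the AUTOSUM over the row of column totals, step 6).
grand_total = sum(column_totals.values())

print(column_totals)  # {'Travel': 1500, 'Utilities': 600, 'Office Rent': 4000}
print(grand_total)    # 6100
```

Adding a new expense amount to any inner dict and recomputing reproduces the "updates automatically" behavior the tutorial describes.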
Source: https://great-writings.com/essays/Health/health-care-finacial-management.html
Financial management forms the basis of every business. Management has an obligation to pay attention to the accounting standards and ethics that have been set. This applies to all institutions, whether operating as profit-making or nonprofit organizations, and it aims to ensure proper financial management, which contributes to the continued solvency of the organization. The health care sector is no exception when it comes to observing and adhering to financial management practices. Today the health sector faces many challenges as well as rising costs of care. Health care facilities are experiencing high cash flow, and this calls for good financial management. At the same time, the cost of medical equipment has gone up, so health care facilities need proper budgeting. Ethics and good financial management are key in the health sector since they touch the lives of every person. Personnel in the health sector need to focus on maintaining a high level of integrity. This paper focuses on the different elements of financial management that are vital to health care organizations. These elements foster the continued operations of health care organizations (Jody, 2013).

Elements of financial management are also referred to as principles of financial management. They include financial planning, controlling, directing, and financial decision making. Financial managers must be knowledgeable about the short-term and long-term financial goals and objectives of the organization, and it is their duty to ensure that those objectives are achieved. The managers must identify the various activities and steps that need to be carried out so as to achieve the objectives.
Planning is largely a top-management job, but the management cannot execute all the plans alone. Management has the responsibility of identifying personnel who have a duty toward the achievement of a set objective, and of making sure those individuals work toward it. Management also has to ensure that funds are available at the right time, so that programs run smoothly without delay (Riley, 2012). This calls for management to identify the various sources of funding that are appropriate to finance both short- and long-term goals.

Financial control is another significant element of financial management. Financial control helps the business achieve its objectives. It involves setting up procedures that ensure the assets of a business are safeguarded and used efficiently. Management has the responsibility of establishing a sound internal control system to ensure that company assets are secure. This eliminates chances of embezzlement of funds and ensures that everything runs smoothly. Financial control also ensures that management acts in the best interest of the stakeholders. In a health care system, management needs to make sure that equipment and drugs are safeguarded to avoid loss (Dumlao, 2010).

Financial decision making is another key element of financial management. Financial managers have the responsibility of making decisions relating to financing and investment. The quality of decisions made by management is reflected in the future success of the company (Dumlao, 2010). Poor decision making will have negative effects on the financial position of the company. Managers have to adopt sound decision-making skills so that they are able to make the right decisions (Dumlao, 2010). Managers need to be well informed about the various options available so as to choose the best one.
In the case of a health care organization, for example, management needs to decide which medical equipment to buy.

Directing is another important function of financial management. Directing is the process of setting everything in motion in the organization. The elements of directing include supervision, motivation, and leadership. Directing ensures that individuals handle their duties and responsibilities. Management should also motivate employees to ensure that they perform their roles efficiently (Dumlao, 2010).

Generally accepted accounting principles (GAAP) refer to the general rules and guidelines followed by accountants in the USA. These guidelines ensure that accountants act in an ethical manner in their practice (Riley, 2012). GAAP sets clear guidelines for the preparation of financial statements and establishes principles that accountants in health care and other sectors can adopt. Accountants should be sincere in their dealings and as accurate as possible. They should update records regularly, be consistent, and make full disclosure of material items.

Jody Hatcher lays a lot of emphasis on accounting ethics in the health sector and illustrates examples of the code of ethics that accounting personnel in a health system should observe (Riley, 2012). Some of the ethical standards include independence, integrity, objectivity, and competency. Accounting personnel in the health system need to be independent, and they should avoid conflicts of interest in their dealings with vendors and customers. They should be committed, remain focused, and demonstrate a high level of professionalism (Dumlao, 2010). Accounting personnel need to maintain a high level of competency and should keep up with current changes in the medical field concerning bookkeeping. Accounting officers should be responsible persons.
They should act in the interest of the employer and the customers, and they should be ready to uphold their professional ethics in all their dealings. The effects of unethical practices in accounting can be illustrated by Enron, a company brought down by accounting scandals (Jody, 2013). This example provides practical lessons that companies can learn from, so that they refrain from unethical practices.

In conclusion, ethics in financial management needs to be upheld in all sectors. This will foster confidence as well as quality service delivery in the health sector.
Source: https://regalfin.com/blog/weighing-the-choice-between-taxable-and-tax-free-bonds
If you're considering the purchase of an individual bond or even a bond mutual fund, one of your first concerns will be its yield. However, when comparing various yields, you need to make sure you're not comparing apples to oranges. The yield on a tax-free bond may be lower than that paid by a taxable bond, but you'll need to look at its tax-equivalent yield to compare them accurately.

What's taxable? What's not?

The interest on corporate bonds is taxable by local, state, and federal governments. However, interest on bonds issued by state and local governments (generically called municipal bonds, or munis) generally is exempt from federal income tax. If you live in the state in which a specific muni is issued, it may be tax free at the state or local level as well. Unlike munis, the income from Treasury securities, which are issued by the U.S. government, is exempt from state and local taxes but not from federal taxes. The general principle is that federal and state/local governments can impose taxes at their own level but not at the other: states can tax securities of other states but not those of the federal government, and vice versa.

The impact of freedom from taxes

In order to attract investors, taxable bonds typically pay a higher interest rate than tax-exempt bonds. Why? Because of governmental bodies' taxing authority, investors often consider munis safer than corporate bonds and are more likely to accept a lower yield. Even more important is the associated tax exemption, which can account for a difference of several percentage points between a corporate bond's coupon rate (the annual percentage rate it pays bondholders) and that of a muni with an identical maturity period. Still, depending on your tax bracket, a tax-free bond could actually provide a better net after-tax return. Generally, the higher your tax bracket, the higher the tax-equivalent yield of a muni bond will be.
It's not what you get, it's what you keep

To accurately evaluate how a tax-free bond compares to a taxable bond, you'll need to look at its tax-equivalent yield. To do that, you apply a simple formula that involves your federal marginal tax rate: the income tax rate you pay on the last dollar of your yearly income. The formula depends on whether you want to know the taxable equivalent of a tax-free bond or the tax-free equivalent of a taxable bond. The table below shows the tax-free equivalents of various taxable yields; the figures are determined by subtracting your marginal tax rate from 1, then multiplying the taxable bond's yield by the result. To calculate the taxable equivalent of a tax-free yield, subtract your marginal tax rate from 1, then divide the tax-free yield by the result. If a taxable bond is also subject to state and local taxes and the tax-exempt one isn't, the tax-exempt bond's coupon rate could be even lower and still provide a higher tax-equivalent yield.

Munis are tax free, except when they're not

As is true of almost anything related to taxes, munis can get complicated. A bond's tax-exempt status applies only to the interest paid on the bond; any increase in the bond's value is taxable if and when the bond is sold. You also may owe taxes when you sell shares of a muni bond mutual fund. Also, specific munis may be subject to federal income tax, depending on how the issuer will use the proceeds. If a bond finances a project that offers a substantial benefit to private interests, it is taxable at the federal level unless specifically exempted. For example, a new football stadium may serve a public purpose locally but provide little benefit to federal taxpayers. As a result, a muni bond that finances it is considered a so-called private-purpose bond. Other public projects whose bonds may be federally taxable include housing, student loans, industrial development, and airports.
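The two conversions described above can be written out directly. The yields and the 24% marginal rate in the example are illustrative, not figures from the article.

```python
def taxable_equivalent(tax_free_yield, marginal_rate):
    """Taxable yield needed to match a tax-free yield:
    divide the tax-free yield by (1 - marginal rate)."""
    return tax_free_yield / (1 - marginal_rate)

def tax_free_equivalent(taxable_yield, marginal_rate):
    """Tax-free yield that matches a taxable yield after tax:
    multiply the taxable yield by (1 - marginal rate)."""
    return taxable_yield * (1 - marginal_rate)

# Assumed example: a 3% muni for an investor in a 24% bracket
# matches roughly a 3.95% taxable bond.
print(round(taxable_equivalent(0.03, 0.24), 4))   # 0.0395
print(round(tax_free_equivalent(0.04, 0.24), 4))  # 0.0304
```

Note how the two functions are inverses of each other, which is why the article stresses comparing bonds on the same (after-tax) basis.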
Even though such bonds are subject to federal tax, they can still have some advantages. For example, they may be exempt from state or local taxes. And you may find that yields on such taxable municipal bonds are closer to those of corporate bonds than to those of tax-free bonds.

Agencies and GSEs (government-sponsored enterprises) vary in their tax status. Interest paid by Ginnie Mae, Fannie Mae, and Freddie Mac is taxable at the federal, state, and local levels. The bonds of other GSEs, such as the Federal Farm Credit Banks, the Federal Home Loan Banks, and the Resolution Funding Corp. (REFCORP), are subject to federal tax but exempt from state and local taxes. Before buying an agency bond, verify the issuer's tax status.

Don't forget the AMT

To complicate matters even further, the interest from private-purpose bonds may be specifically exempted from regular federal income tax but still be considered when calculating whether the alternative minimum tax (AMT) applies to you. A tax professional can determine the likelihood that a bond will affect your AMT liability.

[Table: taxable yields (%) and their equivalent tax-free yields (%)]

The equivalent tax-free yield can be even lower if you are subject to the additional 3.8% Medicare contribution tax that applies to net investment income for individuals with an adjusted gross income of more than $200,000 ($250,000 for married couples filing jointly).

Pay attention to muni bond funds

Just because you've invested in a municipal bond fund doesn't mean the income you receive is automatically tax free. Some muni funds invest in both public-purpose and private-purpose munis. Those that do must disclose on their yearly 1099 forms how much of the tax-free interest they pay is subject to AMT. Before investing in a mutual fund, carefully consider its investment objectives, risks, fees, and expenses, which are described in the prospectus available from the fund; read it carefully before investing.
Use your tax advantage where it counts

Be careful not to make a mistake that is common among people who invest through a tax-deferred account, such as an IRA. Because those accounts automatically provide a tax advantage, you receive no additional benefit from holding tax-free bonds within them; by doing so, you may be needlessly forgoing a higher yield from a taxable bond. Tax-free munis are best held in taxable accounts.

Securities offered through Regulus Advisors, LLC. Member FINRA/SIPC. Investment advisory services offered through Regal Investment Advisors, LLC, an SEC Registered Investment Advisor. Regulus Advisors, Regal Investment Advisors, and Regal Financial Group are affiliated entities. This content is developed from sources believed to be providing accurate information. The information in this material is not intended as investment, tax, or legal advice. It may not be used for the purpose of avoiding any federal tax penalties. Please consult legal or tax professionals for specific information regarding your individual situation. Prepared by Broadridge Investor Communication Solutions, Inc. Copyright 2019.
Source: https://strategiccfo.com/blog/2/
The balance sheet is a financial statement that shows a company's financial position at a point in time. The balance sheet format comes in three sections: assets, liabilities, and owners' equity. The assets represent what the company owns, the liabilities represent what the company owes, and the owners' equity represents shareholder interests in the company. The value of the company's assets must equal the value of the company's liabilities plus the value of the owners' equity:

Assets = Liabilities + Owners' Equity

There are four basic financial statements: the balance sheet, the income statement, the statement of cash flows, and the statement of owners' equity. Of the four, the balance sheet, also called the statement of financial position, is the only one that applies to a specific point in time; the others cover financial activity occurring over a period of time. That's why the balance sheet is considered a "snapshot" of a company's financial condition. Typically, you prepare the balance sheet monthly or quarterly. The three sections of the balance sheet consist of line items that state the value of each account within that section. There is no universal format for the balance sheet, so each company's balance sheet will look somewhat different, which makes comparing balance sheets across companies more difficult. However, the basic equation shown above must always apply.

Balance Sheet Example

Jake owns an equipment rental company called Equipco. Jake's company has been steadily growing, so Jake is interested in receiving a bank loan to finance some additional equipment purchases. He needs to know his total dollar amounts of assets and liabilities so that he can meet the requirements and preferences of his banker. To do this, Jake asks his bookkeeper for the most recent copy of his balance sheet. Jake is excited to learn that he can qualify for his bank loan. To begin, his total asset value is at an acceptable level.
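The accounting identity behind a balance-sheet review like Jake's can be checked in a few lines. The figures below are invented for illustration; they are not Equipco's numbers.

```python
# Minimal sketch of the accounting identity Assets = Liabilities + Equity.
# All figures are hypothetical.
assets = {"Equipment": 500_000, "Cash": 120_000, "Receivables": 80_000}
liabilities = {"Bank loan": 300_000, "Accounts payable": 50_000}

total_assets = sum(assets.values())
total_liabilities = sum(liabilities.values())
owners_equity = total_assets - total_liabilities  # equity balances the sheet

assert total_assets == total_liabilities + owners_equity
print(owners_equity)  # 350000
```

Because equity is defined as the residual, the equation balances by construction; what a banker actually inspects is the size of each section and the ratios between them.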
Jake also has enough owners' equity to satisfy his bank at the corporate level. Surprisingly, Jake finds that he does not have too many liabilities to qualify; this had been, he believed, his major obstacle to earning the loan. According to Jake's banker, his balance sheet ratios have everything in order to receive his loan. All from one statement!

Porter's Five Forces concept of buyer bargaining power refers to the pressure consumers can exert on businesses to get them to provide higher-quality products, better customer service, and lower prices. When analyzing the bargaining power of buyers, conduct the industry analysis from the perspective of the seller. According to Porter's five forces industry analysis framework, buyer power is one of the forces that shape the competitive structure of an industry. The idea is that the bargaining power of buyers in an industry affects the competitive environment for the seller and influences the seller's ability to achieve profitability. Strong buyers can pressure sellers to lower prices, improve product quality, and offer more and better services. All of these things represent costs to the seller. A strong buyer can make an industry more competitive and decrease profit potential for the seller. On the other hand, a weak buyer, one who is at the mercy of the seller in terms of quality and price, makes an industry less competitive and increases profit potential for the seller. Porter's concept of buyer power has had a lasting effect on market theory.

Buyer Power: Determining Factors

Several factors determine buyer bargaining power in Porter's framework. If buyers are more concentrated than sellers (there are few buyers and many sellers), then buyer power is high.
If switching costs (the cost of switching from one seller's product to another seller's product) are low, the bargaining power of buyers is high. If buyers can easily backward integrate, that is, begin to produce the seller's product themselves, buyer bargaining power is high. If the consumer is price sensitive and well educated about the product, buyer power is high. If the customer purchases large volumes of standardized products from the seller, buyer bargaining power is high. If substitute products are available on the market, buyer power is high. And if the opposite is true for any of these factors, buyer bargaining power is low: low buyer concentration, high switching costs, no threat of backward integration, less price sensitivity, uneducated consumers, consumers who purchase specialized products, the absence of substitute products, and buyer purchases that comprise a small portion of seller sales all indicate that buyer power is low.

Buyer Power: Analysis

When analyzing a given industry, not all of the aforementioned factors regarding buyer power may apply, but some, if not many, certainly will. And of the factors that do apply, some may indicate high buyer bargaining power and some may indicate low buyer bargaining power. The results will not always be straightforward. Therefore, it is necessary to consider the nuances of the analysis and the particular circumstances of the given firm and industry when using these data to evaluate the competitive structure and profit potential of a market.

Buyer Bargaining Power: Interpretation

When conducting buyer power industry analysis, low buyer bargaining power makes an industry more attractive and increases profit potential for the seller, while high buyer bargaining power makes an industry less attractive and decreases profit potential for the seller.
Buyer power is one of the factors to consider when analyzing the structural environment of an industry using Porter's widely respected five forces framework.

We are now experiencing the worst global pandemic in 100 years. COVID-19 hit the U.S.A. in Q1 2020, and businesses were forced to either slow down, shut down, or change how they do business. Now is a critical time to understand the business restructuring process. Most businesses, large and small, have been affected in some way, mostly negatively. On top of that, the price of oil came crashing down once again, bringing a parallel downturn in the oil and gas industry and causing companies to consider a business restructuring process.

Since 2015, business restructurings had been at an all-time low. Just a few months ago businesses were booming: companies were having record years in 2018 and 2019, interest rates were low, capital was accessible, GDP was strong, unemployment was low, decent companies had good margins, and cash was flowing. If you had asked me in December 2019 what the chances were of the world economy coming very close to a complete shutdown within a few weeks, I would have told you close to zero. The business restructuring process was far less common.

When margins are high, clients are knocking on the door, and cash is flowing, it is easy to forget about margins, working capital, and cash flow forecasting. There was a false sense of security in 2018 and 2019. We spoke to several owners of large and small successful businesses, and they did not have time to talk to us about managing cash flow, KPIs, and margins. If their books closed three weeks after the end of the month and on a cash basis, they were fine with that. In good times it is easy to forget about the basics and about having a backup plan for that "rainy day". Guess what: now we are all living that "rainy day", and it is not just one day. It is likely to be a downturn for the entire year of 2020.
In addition, very few companies had a backup plan for completely shutting down operations for two weeks, two months, or more. This was never supposed to happen.

Businesses Push for Survival

We have seen some businesses push to partially open and survive. We have seen a few businesses take the punch well because they had positive net working capital and cash in the bank. But we have seen many businesses struggle to make payroll and meet their debt obligations.

NET WORKING CAPITAL = current assets minus current liabilities, a measure of the company's ability to meet its short-term obligations. More than ever, managing net working capital has become very important.

Business Restructuring Process

Throughout my career, I have helped companies successfully restructure their businesses when they were impacted by an event that caused financial distress. In a recent example, there were layoffs and a division of a company was shut down; the remainder of the company was profitable, smaller, had a future, and was able to survive.

How to Successfully Restructure Your Business During a Global Pandemic

This blog is intended for all business owners out there, so we all have a chance of survival. Over my 30 years as a professional, I have witnessed countless financially distressed businesses go from struggling to surviving through a successful restructuring process. The term restructuring can have several different meanings and can be used in different ways.
Restructuring can mean:
- changing the management team
- entering a different industry
- shuffling people around within an organization
- reorganizing your debt and operations out of court
- filing for bankruptcy and reorganizing with court protection

Restructuring Your Capital Structure and Debt

Consider a company impacted by an unforeseen event such as a global pandemic (COVID-19): the business was a healthy business in a healthy market, and through no fault of its own it now faces financial distress because of lost revenue and the debt on its balance sheet. If it were not for COVID-19, restructuring would not have been needed. Restructuring can happen out of court, meaning without filing for bankruptcy protection, or through a court process in which the company files for bankruptcy protection.

Business Restructuring Process | Out-of-Court Restructuring (Without Filing for Bankruptcy)

An out-of-court restructuring is one in which a company attempts to reorganize its debt with creditors without filing for bankruptcy. For the business restructuring process to be successful, a financial advisor is hired to assist the business owners in restructuring their debt with secured and unsecured creditors. This could also include raising capital to recapitalize the business. An out-of-court restructuring succeeds only when everyone wants to play ball: the parties involved must be willing and able to enter into discussions to restructure the debtor's liabilities and to support the company's future business plan. It takes only one major third party objecting to kill the opportunity for an out-of-court restructuring. Also keep in mind that in an out-of-court restructuring the business has no statutory protection and can be hurt by aggressive creditors.
Business Restructuring Process | In-Court Restructuring (Filing for Bankruptcy)

The in-court process, filing for bankruptcy, is a formal process in which the law provides debtors with statutory protections. Assuming your business is a viable candidate for a Chapter 11 bankruptcy, it will have the time and opportunity to negotiate and reorganize its debt and capital structure. The company will present a plan of reorganization, a business plan that shows the court and creditors how the business will survive as a viable business after the bankruptcy.

A Second Chance at Survival for Businesses

The bankruptcy process is long, expensive, and takes a lot out of an organization. But done correctly, it gives the business a second chance to survive, probably with less debt. In bankruptcy, you will be dealing with things you have never dealt with before, such as:
- a court and judge
- possibly a creditors committee
- strict reporting requirements and deadlines for reporting
- possibly a trustee

The Beauty of the Bankruptcy Process

The beauty of the bankruptcy process, specifically Chapter 11, is that if the process and filing are well planned out, there is a very good chance of success and of emerging on the other side as a strong company producing cash flow.

Kicking the Can Down the Road | Hardworking, Proud, and Out of Control?

Common attributes of CEOs, business owners, and entrepreneurs: they are hardworking, they are proud, and they have always been in control. For many of them, this is the first time not being in control and the first time feeling financial distress. So many of them "kick the can down the road" and avoid what their balance sheet and P&L are telling them.

The Debt Is Not Going Away

Yes, it is true that many banks are being "kind" during the COVID-19 process, perhaps providing waivers for strict financial covenants related to the debt.
But the reality is the debt is not going away, and there is still a lot of uncertainty around what will be the "new norm" for business.

Reclaim Control | Business Restructuring Process

Now more than ever it is critical that your financial statements are on an accrual basis. A cash-basis balance sheet will NOT tell you what your real net working capital is, and you will only be lying to yourself.

Take Corrective Action

Talk to a financial professional to determine whether your company might need to be restructured. Your financial professional is NOT the CPA who prepares your tax returns.

A Trusted Advisor

We can provide an analysis and recommendation and walk you through the restructuring process, out of court or in court through a bankruptcy process. Give us a call and find out how we can become your trusted financial advisor through this difficult time.

It's hard for companies to realize how much they actually spend when hiring a new employee. Once they decide it's time to pursue a new worker, a lot of resources go into finding the perfect candidate for the job. Finding the perfect candidate among a vast number of people can be very difficult, costing the company a significant amount of time and money. There are steps a company can take to minimize these costs. In this blog, we will walk through what the current hiring process costs.

What the Current Hiring Process Costs

Hiring and recruiting a new employee costs the company a lot more than just a salary. Recruitment costs are very often overlooked. Recruiters spend countless hours trying to find the perfect candidate for their needs, doing extensive research on countless candidates. This research, however, is not free. Finding the perfect candidate comes with a price.
Let's look at what the current hiring process costs. Suppose you pay your recruiter $75 an hour:

- He looks through 100 resumes at 20 seconds each: 100 × 20s ≈ 0.6 hours.
- 10% of those applicants get a first interview lasting an average of 1.5 hours: 10 × 1.5 = 15 hours.
- 10% of those interviews make it to a second round: 1 × 1.5 = 1.5 hours.

That's a total of about 17 hours, and at $75 an hour you ended up paying your recruiter roughly 17 × $75 = $1,275! That does not even include what you spent on advertising the job posting, drug tests, background pre-screenings, or assessment tests.

As you can see, hiring a worker is clearly not free; it comes with various unexpected costs that can go unnoticed. Companies should take proper measures to minimize these costs because, as shown above, each recruiting round can cost a hefty amount. Even after spending enormous amounts of time and money finding the perfect candidate, companies still run the risk of a bad hire. Maybe they needed to fill the job quickly, maybe they didn't have enough talent intelligence, or maybe it was just an honest mistake. But hiring the wrong person can significantly hurt the company's performance. Hiring a person who does not provide value to the company can be a critical blow to its development. Not only does it waste the company's money, but it can also have a negative influence on company culture. Be cautious when hiring a new employee and take proper measures to decide on the best candidate.

Tips to Improve Your Recruiting Process

Once you realize it is time for your company to hire someone, it is a chore to find the right person for the job. From the marketing to the interviews, how you go about this process matters, or you risk missing out on great potential candidates.
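A back-of-the-envelope sketch of the recruiter-cost arithmetic above. The rates, counts, and times are the same illustrative assumptions the article uses, not real recruiting data:

```python
# Rough cost model for the screening funnel described above.
# All default inputs are the article's illustrative assumptions.

def hiring_cost(hourly_rate=75.0, resumes=100, seconds_per_resume=20,
                first_interviews=10, second_interviews=1,
                interview_hours=1.5):
    """Return (total_hours, total_cost) of recruiter time for one hire."""
    screening_hours = resumes * seconds_per_resume / 3600.0      # ~0.56 h
    interviewing_hours = (first_interviews + second_interviews) * interview_hours
    total_hours = screening_hours + interviewing_hours
    return total_hours, total_hours * hourly_rate

hours, cost = hiring_cost()
print(f"{hours:.1f} recruiter hours -> ${cost:,.0f}")  # ≈ 17.1 hours -> $1,279
```

The article rounds the total down to 17 hours, hence its $1,275 figure; the unrounded total comes out a few dollars higher.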
Here are some tips to improve your recruiting process:

Have an Accurate Job Description

Thoroughly define the duties and responsibilities you are looking for and add them to the job description. Make sure they are as clear and accurate as possible. Try to write a job posting that attracts qualified candidates and discourages others. This will save you a considerable amount of time in the screening process.

Advertise Your Job

Do some research on which job posting resources will work best for your company, whether that's posting online, in a school placement office, or through an employment agency. Where you find your candidates can make a remarkable difference in the quality of your applicants.

Think of what your ideal candidate will look like. Then have a strict screening process that weeds out applicants who would not be suitable for your company. After this, rank your remaining candidates from most to least suitable. You can also choose to use an assessment test that measures their abilities in an actual job-like situation.

Show Them Why They Should Work for You

Once you have chosen your ideal candidate, it's your turn to sell him on the job. Remember that the strongest candidates will always have more opportunities. Hiring is a two-way street, so make sure you convince your candidate by communicating a strong vision and mission for your business with enthusiasm and sincerity.

Bypass the Current Hiring Process

It's 2020. The Strategic CFO created Short|LYST in response to the current environment and demand. Unemployment is at an all-time high due to the global pandemic, and many have lost their jobs. Candidates are faced with the traditional outlets of posting resumes on countless online sites and never getting a response; in most cases it is a black hole. There had to be some revolution in the hiring process, but the only changes in the past couple of decades have been search firms, headhunters, and recruiters.
That's why we created Short|LYST. It lets employers bypass the current hiring process and cut the time it takes at least in half. Instead of screening hundreds of candidates, interviewing dozens more, and risking not finding the right candidate at all, Short|LYST does all of that for you. Our team of experienced HR and financial executives takes the financial and time burden off the employer. All the employer has to do is pick which recommended candidate to take forward. Learn more about Short|LYST here.

But often people either do not communicate these procedures or simply don't follow them consistently. Even when everyone is aware of and follows the established protocol, your system may be flawed. Before we show an example, you need to know how to manage cash flow.

Know How to Manage Cash Flow

We all know that cash is king: liquidity is essential for survival. Many entrepreneurs only know how much is in the bank, but they don't understand how much cash they actually have. So, how does one manage cash flow? First, you need tools that help you see your cash position. Then you need to manage and work your operating cycle. Your operating cycle is "how many days it takes to turn purchases of inventory into cash receipts from its eventual sale". It indicates true liquidity: how quickly you can turn your assets into cash. Calculate how long your operating cycle is using the following formula: Operating Cycle = Days Inventory Outstanding (DIO) + Days Sales Outstanding (DSO).

Watch your expenses carefully. If you do not keep an eye on SG&A and have procedures on what can be purchased, you risk racking up unnecessary overhead. Think of too much inventory, unnecessary equipment replacements, extreme marketing budgets, etc.

Another way to manage (and improve) cash flow is to collect quicker. This is a great method if you are in a cash crunch and can only make small improvements. For example, consider a $10 million company that collected its accounts receivable every 365 days.
They had a lot of cash tied up. If they improved their DSO by 5 days, that would be an extra $137,000 of free cash flow ($10,000,000 ÷ 365 ≈ $27,400 of sales per day, times 5 days).

While we never aim to scare our clients and readers, we have a plethora of war stories about what happens when companies don't have internal controls. From my 18+ years of experience, I've compiled the craziest stories for you today.

What Happens When Companies Don't Have Internal Controls

So, what happens when companies don't have internal controls? They open themselves up to theft, embezzlement, and liability. If there are no controls over what's going on inside, then there is no control over cash flow, profitability, etc. It also "gives permission" to your team to do as they please, whenever it pleases them. They may or may not be making decisions in the best interest of the company, but without internal controls they are likely less careful with the decisions they make. Have you ever noticed how easy it is for a child to spend their parents' money, while they are far less likely to spend their own money frivolously?

War Stories | What Happens When There Are NO Internal Controls

In my experience, I have gathered many war stories about what happens when there are no internal controls. Read about some of my most unforgettable below.

My Most Trusted Accountant and Advisor

Many years ago, while I was part of the audit team, I had a client who had employed the same accountant for 20+ years; we'll call her Sheila. She had been with the company since it opened its doors and was the owner's most trusted confidant and advisor. Sheila was in complete control of the receivables and payables. There was no oversight over Sheila's position.
When I started to look at their accounting records, there were several red flags. Sheila was very defensive and abrasive when I came into the office and during the review phase of the engagement. She mentioned several times that it was okay for me to work remotely. She wanted me to sit outside her office, even though her office was large and had a meeting table and several chairs. Intuitively, I knew something was off with her.

I also noticed that the company cut thousands of checks every month to different companies. Sheila cut them and signed them herself. The business owner trusted Sheila and gave her access to manage the bank account and accounting records. The biggest red flag was discovered during the audit of the transactions. There were several inconsistencies between who the checks were written to and how they were recorded in the accounting system. It appeared Sheila would make checks payable to herself and immediately go back into the system and change the name to a made-up vendor. After months of due diligence and investigation, it was discovered that she had stolen at least a quarter of a million dollars in just the last 10 years of her employment. While this hurt the owner, he learned to give less trust at face value and implemented internal controls so that trust in his accountants could be verified rather than assumed.

Creating Checks and Balances with Internal Controls

In another instance, the Chief Operating Officer of a company approved several supplier invoices. The accounts payable department processed the invoices and paid the supplier without further questioning. It took at least a year before the company learned that the COO had created this false company, approved the invoices, and received the payments for personal gain.
Therefore, it is critical to have internal controls at all levels of the company, with different teams in place to create the checks and balances it needs. Under proper internal controls, the purchasing group would validate the supplier and approve the purchase order before submitting the order to accounts payable. Operations would generate a receiving document once goods or services have been provided, signed by the person receiving them. Accounts payable would receive the final invoice and match it against the approved purchase order and the receiving document.

There are several things I learned about internal controls when I was in audit, and now even as a CFO.

Trust Your Gut

If your gut is telling you something is wrong or off, it is worth investigating. When I have followed my gut, I have either found something wrong or found comfort that everything was okay. But those few times I did not trust my intuition, I missed steps that could have prevented fraud.

Never Do Anything Without Oversight

As a CFO, business owner, entrepreneur, and accountant, I have learned that no one is senior enough to go without oversight. If I cut all the checks and sign them myself, that leaves everything up to me. Thankfully, I know myself, and I would never do anything criminal! However, not all people are like me. There are, unfortunately, individuals who are motivated by rationalization, pressure, and/or opportunity. Oversight helps protect all parties, even yourself.

So that is what happens when companies don't have internal controls: lack of control.
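The purchase-order / receiving-document / invoice check described above is commonly called a three-way match. Here is a minimal sketch; the field names and tolerance are invented for illustration, not any particular ERP's schema:

```python
# Illustrative three-way match: an invoice is paid only if it agrees with
# an approved purchase order and a signed receiving document.

def three_way_match(purchase_order, receipt, invoice, tolerance=0.01):
    """Return True only if all three documents agree."""
    checks = [
        purchase_order["approved"],
        receipt["signed_by"] is not None,
        purchase_order["vendor"] == receipt["vendor"] == invoice["vendor"],
        purchase_order["qty"] == receipt["qty"] == invoice["qty"],
        abs(purchase_order["unit_price"] - invoice["unit_price"]) <= tolerance,
    ]
    return all(checks)

po  = {"approved": True, "vendor": "Acme Supply", "qty": 10, "unit_price": 49.50}
rcv = {"signed_by": "warehouse clerk", "vendor": "Acme Supply", "qty": 10}
inv = {"vendor": "Acme Supply", "qty": 10, "unit_price": 49.50}
print(three_way_match(po, rcv, inv))        # True: safe to pay

inv_fake = {"vendor": "Made-Up Vendor LLC", "qty": 10, "unit_price": 49.50}
print(three_way_match(po, rcv, inv_fake))   # False: investigate
```

The point of the control is that no single person can both create a vendor and approve its payment: the match fails unless independent teams have produced agreeing documents.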
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9685845375061035, "language": "en", "url": "https://will-law.org/2016/11/02/will-school-choice-wisconsin-respond-christian-science-monitor-article-wisconsin-rural-school-finance/", "token_count": 732, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.3515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:be133820-d6ff-4870-83a3-90fd3a184a65>" }
The recent Christian Science Monitor story describing the financial difficulties faced by schools in rural Wisconsin tells a one-sided story that ignores some important facts. First, the article presents a narrow view of what has happened to education spending in the Badger State. Wisconsin, like many other states, faced a budget crisis in the wake of the Great Recession. Policymakers were saddled with a $3.6 billion budget deficit when federal stimulus funds expired, requiring smart budgeting and spending cuts. In other words, a one-off infusion of federal money had maintained school spending and either taxes would have to be raised or services cut. Gov. Scott Walker found a third way. Act 10, his budget repair bill that curtailed collective bargaining, provided schools with the tools to absorb these cuts without layoffs or tax hikes. It allowed superintendents the flexibility to restructure benefits so that employees might contribute their fair share, just as they would in the private sector. It also limited raises for district employees to the level of inflation. The MacIver Institute has estimated that these savings have added up to more than $5 billion since the law’s passage. But that is only part of the story. As a study by the Wisconsin Institute for Law & Liberty demonstrated, school districts have been able to maintain their student teacher ratios and have not seen a material decline in the experience level of teachers due to the cost control measures of Act 10. While it is true that spending is down statewide approximately $100 per student since before the end of federal stimulus, we would argue that this small difference is more than made up for by the cost cutting tools of Act 10. And, as the economy has improved, school funding has increased. Per student spending in Wisconsin has actually increased every year since 2012, up $400 per student, according to data from the state’s Department of Public Instruction. 
In fact, it has increased in the very district featured in the Monitor’s story. Per pupil spending in Shiocton has increased by more than $600 per student since 2012, from $6,610 to $7,238. So how could the Monitor have said that there has been a “13 percent” decline in state aid to Shiocton when per pupil aid has actually gone up? It appears to be largely the result of declining enrollment. Since 2009, enrollment in Shiocton is down 7.2%. And just as a business will generally enjoy less revenue if it has fewer customers, a school district will receive less state aid – and must cut costs – when it has fewer students. But an even more egregious claim in this story is the Monitor’s attempt to blame the district’s budget problems on the statewide school choice program. In Shiocton, the loss of revenue from the choice program was $15,720, or .00033% of the total revenue. To blame the district’s budget problems on the loss of two students to the Choice program is absurd. In addition, school districts are allowed to increase property taxes to make up for any loss in revenue resulting from school choice. According to a memo from the non-partisan Legislative Fiscal Bureau, the vast majority of school districts in Wisconsin saw no fiscal impact or a positive fiscal impact as a result of increased property taxes to recoup the lost revenue from choice students. School districts in rural Wisconsin do face real challenges, including poverty, teacher retention, and declining population. But this article is a distraction from these challenges, presenting instead a one-sided, biased account that is lacking important context.
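The enrollment mechanism the article describes can be shown with a toy calculation. The per-pupil figures come from the article; the enrollment counts are hypothetical, chosen only to illustrate how total aid can fall even while per-pupil aid rises:

```python
# Total aid scales with enrollment: per-pupil aid can rise while total
# aid falls, if enrollment falls fast enough. Enrollment numbers below
# are hypothetical; per-pupil amounts are the article's figures.

def total_aid(enrollment, per_pupil):
    return enrollment * per_pupil

before = total_aid(1000, 6610)   # hypothetical: 1,000 students at $6,610
after  = total_aid(860,  7238)   # hypothetical: fewer students at $7,238

print(f"before: ${before:,}  after: ${after:,}")
print(f"change: {100 * (after - before) / before:+.1f}%")  # total falls
```

With a milder enrollment decline, such as the 7.2% the article reports, the per-pupil increase could outweigh it; the direction of total aid depends on which effect is larger.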
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9708778858184814, "language": "en", "url": "https://www.brookings.edu/opinions/shameful-bipartisan-acceptance-of-inflation-on-the-poor/", "token_count": 749, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.48046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:eda0fb5a-4604-4884-8cc1-22167ff48397>" }
Here are two principles on which most members of both political parties agree. 1) Inflation should not increase people’s tax burdens. 2) Inflation should not erode Social Security benefits. There is, to be sure, some disagreement on exactly how to measure inflation. But few disagree with the general principles. This stance is admirable. At the same time, however, members of both parties seem content to allow inflation to erode access to cash benefits for the poor. And the passive acceptance of that result is, I believe, shameful. More than four decades ago, Congress enacted automatic adjustments to prevent inflation from eroding the value of Social Security benefits. In truth, that change did little to improve the status of the aged and people with disabilities, as Congress had been raising benefits anyway by enough to roughly offset inflation. But it did a fair bit for honest government. No longer would elected officials take credit, usually in election years, for simply preventing inflation from cutting benefits. And supporters of the program in both parties understood that if Congress ever wanted actually to cut benefits, they would have to stand up and vote to do so. Chalk one up for honesty in government. Thirty years ago, Congress introduced ‘indexation’ into the personal income tax code. Inflation had been raising taxes by stealth. It did so in many ways-for example, by eroding the value of the standard deduction and personal exemptions and by pushing more income into higher brackets. So, Congress enacted rules that would automatically increase the standard deduction and personal exemptions and raise and widen tax brackets enough to offset inflation. Those changes put an end to ‘bracket creep,’ the name tax analysts had given to the steady increase in tax burdens resulting from inflation. Henceforth, Congress decided, if members wanted to raise taxes, they would have to ‘man up’ and actually vote to raise them. A second gold star for honest government. 
But protecting the poor from inflation was another matter. Supplemental Security Income (SSI) pays monthly benefits of $733 to single persons and $1,100 to couples who have little or no income and few assets. The benefit amounts, like those for current Social Security beneficiaries, have been adjusted for inflation. But other provisions, which serve to limit access to the program, have been adjusted little or not at all. As a result, inflation has denied access to SSI benefits to people who would have been poor enough to qualify for them in the past. Back in 1974, when Congress enacted SSI, it wanted to confine benefits to those who were demonstrably poor. It also wanted to spare administrators the cumbersome and costly job of keeping track of small earnings and scraps of other income. So, it stipulated that benefits should be reduced by one dollar for every two dollars of earnings over $65 a month and dollar-for-dollar if other income exceeded $20 per month. These amounts have never been changed. Meanwhile, the price level has risen nearly four-fold. Had these exclusions been adjusted for inflation, they would now be $242 a month in earnings and $74 a month of income from other sources. Restoring access to SSI benefits to conditions that the United States was able to afford in 1974 would mean that more people would qualify for benefits. That would raise public spending. But protecting Social Security beneficiaries from inflation also costs money. So, I cannot help wondering: if it is a good thing (and I agree that it is) to protect from inflation people who pay taxes and people who qualify for Social Security, why is it not an equally good thing to protect the poor from the losses inflation imposes on them?
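The adjustment of the 1974 exclusions can be reproduced with a simple calculation. The ~3.7× price ratio below is an approximation of the article's "nearly four-fold" figure, not an official CPI series:

```python
# Inflation-adjusting the 1974 SSI income exclusions.
# PRICE_RATIO approximates the "nearly four-fold" rise in the price
# level cited in the text; it is not an official CPI value.

PRICE_RATIO = 3.72  # approximate 1974 -> article-date price level

def inflation_adjust(amount_1974, ratio=PRICE_RATIO):
    return round(amount_1974 * ratio)

print(inflation_adjust(65))   # earnings exclusion -> 242
print(inflation_adjust(20))   # other-income exclusion -> 74
```

With that ratio, the $65 and $20 exclusions come out to the $242 and $74 figures the article reports.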
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9153514504432678, "language": "en", "url": "http://coegss.eu/hpc-gss/", "token_count": 3559, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0283203125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f1dadf77-5de4-4192-87ce-10380f1e3a25>" }
Green growth refers to increased well-being in the economic, ecologic and social dimensions. In the coming ten years, billions of people will spend an additional $30 trillion on goods and services of all kinds; even larger amounts will be spent in the following decades. The fate of the planet depends on how this money will be spent. Worldwide, shifting preferences and policy incentives will foster initiatives for green growth – renewable energies, energy efficient buildings, healthy diets, green IT and more. The goal of this pilot is to develop a synthetic information system for identifying green growth opportunities and investigating how these can be seized, to provide analysis and advice on green growth for companies, public authorities, international organisations and the public at large. As a first case study, CoeGSS will simulate the global fleet of cars. There are presently more than one billion cars on the planet, and there can be little doubt that this number will keep increasing. The question is at what speed, where and for what types of cars. The answer in turn depends on the strategies and decisions of businesses and policy-makers, and on technological and cultural developments. With the capacity of supercomputers, CoeGSS will construct synthetic populations representing those cars, their owners and other relevant agents. This method makes it possible to model and understand the global diffusion of innovations such as different kinds of electric cars. It can later be adapted for studying the global dynamics of renewable energies, energy efficient buildings and the whole range of green growth opportunities.

Optimizing city development choices is an essential challenge for the future, driven by a growing world population that increasingly lives in cities, and by the opportunities of more holistic approaches assisted by ever more pervasive and intelligent technologies.
Cities are complex systems defined by the interaction of processes as different as real estate, transportation, economics, society and politics. The goal of this pilot is to model cities and the linked population dynamics across very different features – from opinion propagation to housing preferences or transportation behaviours – using a global systems science approach that couples sub-models from the different fields describing these processes. Such models may correspond to very different time and space scales, from hourly traffic congestion to long-term city development. Modelling city and population dynamics rests on individuals' manifold characteristics, particularly geographic location (e.g., concerning housing or transportation). The possibility of creating synthetic populations with realistic statistical distributions of characteristics will therefore prove valuable for any simulation. It can furthermore help clarify the influences between different elements of the city and point to possible or more efficient levers for improving everyday city life. For instance, it will allow exploring the impact of development choices, or the precise two-way relation between price mechanisms and infrastructure decisions. It might also open the way to studying the effect of overall opinion and behaviour dynamics and to assessing the benefit of information campaigns or other public incentives.

Multi-Objective Decision Making Tools through Citizen Engagement

Every real-world planning problem, especially in governance and policymaking, has several objectives that are typically in conflict, with underlying trade-offs to be discovered. Policy makers need proper tools that support an overall analytical process: modelling the real-world planning problem, automatically obtaining the best attainable trade-offs, and facilitating efficient exploration in order to reach a final decision on which policy to implement.
Consensus will strive to model existing real-world use cases within the relevant policymaking context, and consequently employ measurable quantifiers in order to investigate how and whether preferable trade-offs can be identified. Those quantifiers will be sought in multiple realms – analytical models, numerical simulations, statistical tools and even public opinion evaluators – in order to link the domain data to the set of objectives, and thereby to reflect the expected success rate of policies and their implementation. Furthermore, Consensus intends to investigate how the balance between objectives shifts in scenarios where certain resources are deployed to primarily address one of them through EU- or international-level policies. This investigation is meant to cover two important real-world use cases: one dealing with biofuels and climate change (the EU Renewable Energy Directive), and the other dealing with transportation networks (the trans-European transport network guidelines). Consensus will also seek the citizens' involvement in policy making under this scheme, since their input can become highly valuable at various stages: from gathering the necessary data, through formulating public opinion as one of the objectives in the model, to eventually exploring the attained trade-offs and contributing to their weighing.

(no known active website)

Contact: Theodora Varvarigou – [email protected]

Complexity Research Initiative for Systemic Instabilities
The project was set up in the wake of the global financial crisis that showed that existing models that had been adequate in times of economic prosperity were utterly inadequate for predicting major crises. Its aim is to build a new model of the economy and financial system that is based on how people and institutions actually behave. CRISIS aims to develop tools to deepen policymakers’ understanding of the economic and financial system and give them realistic options for modelling the economy and designing policies and regulations. CRISIS intends to deliver three products: 1 A model of the EU financial system and macroeconomy, with a userfriendly graphical interface and a webbased gaming mode. 2 A granular database of households, firms, and financial institutions. 3 Analyses of critical EU financial and economic issues based on the model. Contact: Domenico Delli Gatti – [email protected] Evolutive Usercentric Networks for Intraurban Accessibility Urban transport is essential for citizens to perform their daily activities, but it also constitutes a major source of pollution. The goal of EUNOIA is to take advantage of smart city technologies and complex systems science to develop new models and tools empowering city governments and their citizens to design sustainable mobility policies. EUNOIA pursues advances in three complementary directions: 1. Use of data. The massive penetration of ICT is modifying social relationships and travel behaviour, and at the same time is providing us with a huge amount of heterogeneous data: intelligent transport systems, Internet social networks, mobile phone call logs, etransactions. EUNOIA investigates how to different European cities. 2. Urban transportation models. EUNOIA is analysing the interactions between social networks and travel behaviour, e.g. the influence of social networks on the planning of joint trips. 
This will allow a more comprehensive assessment of mobility policies, particularly of new services emerging around the idea of shared access to resources, such as car pooling. The new travel behaviour models are being integrated into state-of-the-art agent-based simulation tools. 3. Link between modellers, decision makers, and societal actors. The potential of urban simulation models is still little exploited in policy decision contexts. EUNOIA is developing tools, e.g. 3D visual analytics, allowing stakeholders' interaction with the simulation results, as well as a methodology for collaborative, multi-stakeholder policy assessment. In order to ensure maximum credibility and usability of the project results, the models and methodologies developed by EUNOIA are tested and refined through several case studies conducted in close cooperation with policy makers and mobility stakeholders from the three cities participating in the project: Barcelona, London, and Zurich. Contact: Maxi San Miguel – [email protected] EU Community goes beyond the current generation of policy modelling and argumentation tools. It provides decision makers with better policy options by combining social media interactions, qualified contributors, document curation, visual analysis plus online and offline trust-building tools. The results will be open source platforms, and the data itself will be open to re-use by other app developers. Over 36 months, a consortium of leading research centres, ICT SMEs and a large media network will go from existing tools to a further advanced prototype, pilot-testing and roll-out. They are supported by a number of high-calibre experts and a foundation serving as community guarantor. The results will be tested and deployed over an EU policy media network with a track record of sustainability and multilingualism.
Three pilots suiting the EU political mandates 2014-2019 have been selected (FUTURE OF EU, ENERGY UNION and INNOVATION STRATEGY) and will be undertaken by a network of European stakeholders (policy-makers, journalists, experts, NGOs and informed citizens) in several EU countries, supported by localised policy media. Contact: David Mekkaoui – [email protected] Innovative Policy Modelling and Governance Tools for Sustainable Post-Crisis Urban Development Cities embody the twofold challenge currently facing the European Union: how to improve competitiveness while achieving social cohesion and environmental sustainability. They are fertile ground for science and technology, innovation and cultural activity, but also places where problems such as environmental pollution, unemployment, segregation and poverty are concentrated. INSIGHT aims to investigate how ICT, with particular focus on data science and complexity theory, can help European cities formulate and evaluate policies to stimulate a balanced economic recovery and a sustainable urban development.
The objectives of the project are the following:
– to investigate how data from multiple distributed sources available in the context of the open data, big data and smart city movements can be managed, analysed and visualised to understand urban development patterns;
– to apply these data mining functionalities to characterise the drivers of the spatial distribution of activities in European cities, focusing on the retail, housing, and public services sectors, and paying special attention to the impact of the current economic crisis;
– to develop enhanced spatial interaction and location models for retail, housing, and public services;
– to integrate the new theoretical models into state-of-the-art simulation tools, in order to develop enhanced decision support systems able to provide scientific evidence in support of policy options for post-crisis urban development;
– to develop innovative visualisation tools to enable stakeholder interaction with the new urban simulation and decision support tools and facilitate the analysis and interpretation of the simulation outcomes;
– to develop methodological procedures for the use of the tools in policy design processes, and evaluate and demonstrate the capabilities of the tools through four case studies carried out in cooperation with the cities of Barcelona, Madrid, London, and Rotterdam.
Contact: Galloso Iris – [email protected] Global Dynamics of Extortion Racket Systems The GLODERS research project is directed towards development of an ICT model for understanding a specific aspect of the dynamics of the global financial system: Extortion Racket Systems (ERSs). ERSs, of which the Mafia is but one example, are spreading globally from a small number of seed locations, causing massive disruption to economies. Yet there is no good understanding of their dynamics and thus how they may be countered.
ERSs are not only powerful criminal organizations, operating at several hierarchical levels, but also prosperous economic enterprises and highly dynamic systems, likely to reinvest in new markets. If stakeholders – legislators and law enforcers – are to be successful in attacking ERSs, they need a much better understanding of the evolution of ERSs, which computational models and ICT tools can give them. GLODERS will provide a theory-driven set of computational tools, developed through a process of participatory modelling with stakeholders, to study, monitor, and possibly predict the dynamics of ERSs as they spread from local through regional into global influence. The research will draw on expertise already developed in the small but highly experienced multidisciplinary consortium to use: computer-assisted qualitative text mining of documentary evidence; guided semi-automatic semantic analysis of stakeholder narratives and other textual data; and multi-level, stakeholder-centred agent-based modelling of the distributed negotiations between normative agents. These methods will advance the state of the art for using data to inform policy decisions. Throughout, the project will interact with a large, international group of stakeholder representatives from EU Ministries of Justice and police forces. The output will provide a set of ICT tools to facilitate strategic policies that could prevent the further penetration and extension of the global menace posed by ERSs. Contact: Nigel Gilbert – [email protected] Forecasting Financial Crisis The goal of FOC is to better understand systemic risk and global financial instabilities by means of a novel, integrated and network-oriented approach. FOC has delivered several models of financial networks and indicators of systemic importance, such as DebtRank. Some of these models and algorithms are being developed in collaboration with central banks of various EU countries.
We have also investigated if and how Information Technologies could be used to anticipate trends and instabilities in the markets. Contact: Guido Caldarelli – [email protected] Global systems Rapid Assessment tools through Constraint FUnctional Languages The making of policies coping with Global Systems is a process that necessarily involves stakeholders from diverse disciplines, each with their own interests, constraints and objectives. People play a central role in such collective decision making, and generally the quest for solutions to a problem intertwines with its very specification. What-if style simulators can assist in this process provided they employ adequate high-level and qualitative modelling to separate the political question from the underlying scientific details. Domain-specific Languages embedded in Functional Programming languages offer a promising way to implement scalable and verifiable simulators. But the use of simulators is essentially a trial-and-error process, too tedious for execution in a group session. A paradigm shift is needed towards active problem solving, where stakeholders' objectives can be taken along from the very beginning. Constraint Programming has been demonstrated to enable such a shift for managed physical systems such as water and power networks. Our research pursues laying a base for domain-specific languages aimed at building scalable "rapid assessment tools" for collective policy making in global systems. It involves several different disciplines. At the top policy-modelling level, we adopt and adapt the social discipline of Group Model Building, well known from business dynamics. This process is backed by visual forms of Constraint Programming and flavoured with gamification aspects. At the host-language level, we work on combining CP and FP. In this context, specific work is being done on domain-specific constraints, constraint composition, and composable solvers and heuristics.
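As a toy illustration of the constraint-programming style this project builds on (not its actual tools or languages, which are domain-specific — the constraints and funding units below are invented for the example), even a brute-force search can enumerate the policy options that satisfy all stakeholder constraints at once:

```python
# Toy constraint-satisfaction sketch: choose funding levels for two measures
# so that budget, emissions and service-level constraints all hold.
# Illustrative only -- real CP systems use dedicated solvers, not enumeration.
from itertools import product

solutions = []
for transit, greenery in product(range(0, 11), repeat=2):  # units of funding
    if transit + greenery > 10:          # budget constraint
        continue
    if 3 * transit + 2 * greenery < 18:  # minimum emissions-reduction target
        continue
    if transit < 2:                      # minimum service level
        continue
    solutions.append((transit, greenery))

print(len(solutions), "feasible options, e.g.", solutions[0])
# -> 29 feasible options, e.g. (2, 6)
```

The point of the CP paradigm described above is that stakeholders state objectives and constraints declaratively, and the solver — rather than repeated what-if simulation runs — finds the feasible region.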
Results are applied and validated for a problem case of Climate-Resilient Urban Design in The Netherlands, but our ambition is a general framework applicable to several other Global Systems. Contact: Tom Creemers – [email protected] Financial Systems Simulation and Policy Modelling In SIMPOL, network science, big data and ICTs meet economics and financial regulation. Our vision is that only a truly interdisciplinary approach will lead to the fundamental advances in modelling financial and climate policies that our society needs today. On the one hand, we will develop new methods to assess the systemic importance of market players in complex (climate) financial networks and investigate which regulations could help to ignite a transition towards a greener economy and a more sustainable financial system. On the other hand, we will leverage open data initiatives and the semantic web to empower citizens with a more active role in relation to EU policies. In particular, we will crowd-source the task of mapping the networks of influence involved in the policy-making process. Contact: Prof. Stefano Battiston – [email protected] The top level of the European HPC ecosystem The mission of PRACE (Partnership for Advanced Computing in Europe) is to enable high-impact scientific discovery and engineering research and development across all disciplines to enhance European competitiveness for the benefit of society. PRACE seeks to realize this mission by offering world-class computing and data management resources and services through a peer review process. PRACE also seeks to strengthen the European users of HPC in industry through various initiatives. PRACE has a strong interest in improving the energy efficiency of computing systems and reducing their environmental impact. The European Extreme Data & Computing Initiative EXDCI's objective is to coordinate the development and implementation of a common strategy for the European HPC Ecosystem.
The two most significant HPC bodies in Europe, PRACE and ETP4HPC, join their expertise in this 30-month project with a budget of € 2.5 million, starting from September 2015. EXDCI aims to support the road-mapping, strategy-making and performance-monitoring activities of the ecosystem, i.e.: – Producing and aligning roadmaps for HPC Technology and HPC Applications – Measuring the implementation of the European HPC strategy – Building and maintaining relations with other international HPC activities and regions – Supporting the generation of young talent as a crucial element of the development of European HPC.
What Is Ethereum (ETH)? Ethereum is a decentralized open-source blockchain system that features its own cryptocurrency, Ether. ETH works as a platform for many other cryptocurrencies, as well as for the execution of decentralized smart contracts. Ethereum was first described in a 2013 whitepaper by Vitalik Buterin. Buterin, together with other co-founders, secured funding for the project in an online public crowd sale in the summer of 2014 and officially launched the blockchain on July 30, 2015. Ethereum's stated objective is to become a global platform for decentralized applications, enabling users from all over the world to write and run software that is resistant to censorship, downtime and fraud. Who Are the Founders of Ethereum? Ethereum has a total of eight co-founders – an unusually large number for a crypto project. They first met on June 7, 2014, in Zug, Switzerland. Russian-Canadian Vitalik Buterin is perhaps the best known of the lot. He authored the original white paper that first described Ethereum in 2013 and still works on improving the platform to this day. Prior to ETH, Buterin co-founded and wrote for the Bitcoin Magazine news site. British developer Gavin Wood is perhaps the second most important co-founder of ETH, as he coded the first technical implementation of Ethereum in the C++ programming language, proposed Ethereum's native programming language Solidity and was the first chief technology officer of the Ethereum Foundation. Before Ethereum, Wood was a research scientist at Microsoft. Afterwards, he moved on to establish the Web3 Foundation. Among the other co-founders of Ethereum are: – Anthony Di Iorio, who underwrote the project during its early stage of development. – Charles Hoskinson, who played the principal role in establishing the Swiss-based Ethereum Foundation and its legal framework. – Mihai Alisie, who provided assistance in establishing the Ethereum Foundation.
– Joseph Lubin, a Canadian entrepreneur, who, like Di Iorio, helped fund Ethereum during its early days, and later founded an incubator for startups based on ETH called ConsenSys. – Amir Chetrit, who helped co-found Ethereum but stepped away from it early into the development. What Makes Ethereum Special? Ethereum pioneered the concept of a blockchain smart contract platform. Smart contracts are computer programs that automatically execute the actions necessary to fulfill an agreement between several parties on the internet. They were designed to reduce the need for trusted intermediaries between contracting parties, thus reducing transaction costs while also increasing transaction reliability. Ethereum's principal innovation was designing a platform that allowed it to execute smart contracts using the blockchain, which further reinforces the already existing benefits of smart contract technology. Ethereum's blockchain was designed, according to co-founder Gavin Wood, as a sort of "one computer for the entire world," theoretically able to make any program more robust, censorship-resistant and less prone to fraud by running it on a globally distributed network of public nodes. In addition to smart contracts, Ethereum's blockchain is able to host other cryptocurrencies, called "tokens," through the use of its ERC-20 compatibility standard. In fact, this has been the most common use for the ETH platform so far: to date, more than 280,000 ERC-20-compliant tokens have been launched. Over 40 of these make the top 100 cryptocurrencies by market capitalization, for example, USDT, LINK and BNB. How Is the Ethereum Network Secured?
As of August 2020, Ethereum is secured via the Ethash proof-of-work algorithm, belonging to the Keccak family of hash functions. There are plans, however, to transition the network to a proof-of-stake algorithm tied to the major Ethereum 2.0 upgrade, which launched in late 2020. After the Ethereum 2.0 Beacon Chain (Phase 0) went live at the beginning of December 2020, it became possible to begin staking on the Ethereum 2.0 network. An Ethereum stake is when you deposit ETH (acting as a validator) on Ethereum 2.0 by sending it to a deposit contract, essentially acting as a miner and thus securing the network. At the time of writing in mid-December 2020, the Ethereum staking rate, or the amount of money earned daily by Ethereum validators, is about 0.00403 ETH a day, or $2.36. This number will change as the network develops and the number of stakers (validators) increases. Ethereum staking rewards are determined by a distribution curve (the participation and average percentage of stakers): some ETH 2.0 staking rewards are at 20% for early stakers, but will decrease to end up between 7% and 4.5% annually. The minimum requirement for an Ethereum stake is 32 ETH. If you decide to stake in Ethereum 2.0, your Ethereum stake will be locked up on the network for months, if not years, until the Ethereum 2.0 upgrade is completed.
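As a rough check on the staking figures quoted above (0.00403 ETH per day on a 32 ETH stake — a December 2020 snapshot from the text, not current rates), a short calculation annualizes the daily reward:

```python
# Annualize the quoted Ethereum 2.0 staking reward. The figures come from
# the text (a December 2020 snapshot); actual rates vary with participation.
DAILY_REWARD_ETH = 0.00403   # ETH earned per day per validator, as quoted
STAKE_ETH = 32.0             # minimum validator stake

yearly_reward = DAILY_REWARD_ETH * 365
apr = yearly_reward / STAKE_ETH * 100

print(f"Yearly reward: {yearly_reward:.3f} ETH")   # 1.471 ETH
print(f"Approximate APR: {apr:.1f}%")              # 4.6%
```

The result, roughly 4.6% a year, sits at the bottom of the 4.5%–7% long-run range quoted above, which is consistent with a snapshot taken after the early-staker rates had already declined.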
Bitcoin was the first time that blockchain technology was used. It is also the first cryptocurrency created and is the most widely known. The blockchain was actually created to support the Bitcoin currency. The creator of Bitcoin is Satoshi Nakamoto, who has never been identified. Nobody has been confirmed as the actual builder of the blockchain technology that supports the Bitcoin cryptocurrency. Satoshi Nakamoto wrote a whitepaper in 2008 that lays out the original plan for the blockchain and the protocols that were set for Bitcoin. As complex as the technology was, he wrote the whole whitepaper in only 8 pages. As the technology is very complex, we will do our best to explain it in a simple manner. For a good glossary of some of the terms used please click here. As we have described the blockchain technology and cryptocurrencies in the pages linked above, we will focus here on some of the main points and features of Bitcoin itself. Bitcoin was released as open-source software in 2009 by a programmer or group of programmers known only as Satoshi Nakamoto. Open-source software (OSS) means that anyone may see, study, change and distribute the software for any purpose. Transactions over the blockchain with bitcoin are peer-to-peer, which means that transactions are made directly from one user to another without an intermediary. It was the first decentralised digital currency. The blockchain is designed as a public ledger that records all of the Bitcoin transactions. The way that this works is that the system is self-regulated, with maintenance performed by a network of communicating nodes running bitcoin software. A node is a computer that is connected to the Bitcoin network. Bitcoins can be acquired through two main channels: mining, and exchange for traditional currencies through various exchanges. Digital wallets are set up to hold your coins.
Remember that the coins are not physical in nature but rather a string of numbers that represents the coin. One of the most fascinating facts about the coin is that the blockchain ledger records the full transaction history of every coin. Bitcoin Mining and Supply: We will not get into mining too much here as it is very technical and can be complicated and confusing. Simply put, miners have to solve a complicated equation to release a block on the blockchain. The number of bitcoins released per block decreases over time. More mining competition causes the opening of the blocks to take more energy to solve the equation. When Bitcoin was created on the blockchain, a limit was set on the number of coins that will be released over its lifetime. There will be a total of 21 million bitcoins released by 2140. As mentioned above, as time passes, fewer coins are released per block. One of the unique aspects of Bitcoin versus traditional currencies is that one coin can be split into units as small as 0.00000001 of a coin. This amount of the coin is called a satoshi, named after its creator. The benefits and unknowns related to the coin are similar to the ones we have listed for all cryptocurrencies here. Urban Crypto has created a variety of products for the Bitcoin enthusiast. You can look through the complete collection by clicking here. Below are just a few designs.
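The 21 million cap described in the mining section above follows directly from Bitcoin's issuance schedule: the block reward started at 50 BTC and halves every 210,000 blocks (these are public protocol constants, not figures from this text). A short sketch sums the series:

```python
# Sum the Bitcoin issuance schedule: 50 BTC per block, halving every
# 210,000 blocks, until the reward rounds down to zero satoshis.
SATOSHIS_PER_BTC = 100_000_000      # 1 BTC = 10^8 satoshis, as noted above
BLOCKS_PER_HALVING = 210_000
reward_sat = 50 * SATOSHIS_PER_BTC  # initial block reward, in satoshis

total_sat = 0
while reward_sat > 0:
    total_sat += BLOCKS_PER_HALVING * reward_sat
    reward_sat //= 2                # integer halving, as the protocol does

print(total_sat / SATOSHIS_PER_BTC)  # 20999999.9769 -- just under 21 million
```

Working in whole satoshis (with integer division, as the protocol itself does) shows why the final total lands slightly under 21 million rather than exactly on it.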
The changes have come from an evolution in how energy is being used, and those who successfully manage these demand patterns, particularly if combined with Demand Side Response (DSR), could see significant cost savings. Analysis from EIC has shown that maximum summer demand (seen between May and August) has fallen 17% in the last decade. From a peak of 44GW in 2012, maximum consumption for the current summer has fallen to just 35GW. This near 10GW loss in demand is similar to the reduction seen during the winter. Furthermore, it’s not only peak consumption that’s been reduced but baseload generation. Minimum summer demand has fallen by 19% since 2009. How much of this is down to efficiency improvements or consumption moving behind the meter is unclear. However, the change does mean National Grid has nearly 10GW less electricity demand to manage on its transmission network. The trend can be seen more clearly when broken down by month. Average peak demand during May 2012 was over 39GW. This year that figure was just 31.5GW, a reduction of over 7GW in only six years. Improving energy efficiency The cost of LED lighting halved between 2011 and 2013. During this time, consumers switching towards the more efficient bulbs helped facilitate a strong drop in demand. This could be helped further with news that the EU will ban the use of halogen lightbulbs from 1 September 2018. Another major explanation for the demand drop, aside from efficiency improvements in appliances and lighting, is the significant growth in small-scale on-site solar capacity over the same period. Small-scale distribution connected solar has a capacity of under 4KW but the number of installations has grown from under 30,000 in 2010 to nearly 900,000 in 2018. An increase of almost 2,900%. The total capacity of the small-scale solar now available is over 2.5GW, which is not far off the total capacity for the new Hinkley Point C nuclear power station. 
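The growth figure for small-scale solar quoted above can be verified with a one-line percentage-change calculation (the installation counts are the approximate figures from the text):

```python
# Percentage growth in small-scale solar installations (figures from the text;
# the text says "under 30,000" and "nearly 900,000", so these are approximate).
installs_2010 = 30_000
installs_2018 = 900_000

growth_pct = (installs_2018 - installs_2010) / installs_2010 * 100
print(f"Growth: {growth_pct:.0f}%")  # 2900% -- "almost 2,900%" as stated
```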
As the use of small-scale solar (the type typically installed on housing or commercial property) has grown demand has fallen. More and more of within-day demand is being met by onsite generation. Consumers can take advantage of the bright and warm summer weather conditions to generate their own solar power, thus reducing the call for demand from the transmission network. The solar impact The introduction of high volumes of solar generation to the grid – total capacity across all PV sites is over 13GW – has also significantly altered the shape of demand. Consumption across a 24 hour period has flattened in recent years. The traditional three demand peaks (morning, early afternoon, and evening) have shifted closer to the two peak morning and early evening winter pattern. The ability to generate high levels of embedded – behind the meter – generation during the day in the summer has flattened and at times inverted the typical middle peak. This has left the load shape peaking in early morning (as people wake up) and later in the evening, as people return home from work. The absolute peak of the day has also shifted in time, moving from early afternoon to the typical early evening peak of 5-5:30pm, again similar to the winter season. The below graph shows the change over time of the July load shape, which highlights both the reduction in demand and the change in shape, with consumption flattening during daylight hours as a result of behind the meter solar generation dampening network demand. With electricity costs – both wholesale and system – reflecting supply and demand, if consumption is being changed, then it also has an impact on these costs. Stay informed with EIC Our in-house analysis highlights the impact of onsite generation on load patterns and the extent to which demand can be changed by taking action, and subsequently how behaviours can alter a business’ energy costs. 
If you can shift demand away from historical high consumption periods, you can cut your energy costs and make significant savings. One such way to do this is by using smart building controls, such as our IoT-enabled Building Energy Management solution. To find out more download our brochure, call +44 1527 511 757, or email us.
Although some people use the term blockchain interchangeably with distributed ledgers, these two buzzwords are not quite the same. The popularity (or notoriety) of cryptocurrencies brought blockchain into the limelight and made it synonymous with tokenization and smart contracts. However, blockchains are just one implementation of distributed ledger technology (DLT). For over a decade, blockchain was the only known (and first fully functional) form of DLT; however, the rapid advancement of the crypto industry has precipitated the creation of other DLT systems such as RaiBlocks (now NANO), Hashgraph, peaq and IOTA, and the Tangle Network. These other forms of DLT are rapidly gaining popularity, thus reducing the industry's reliance on traditional blockchain systems. With more projects leveraging these forms of DLT, it is essential for enterprises to understand the differences between blockchains and distributed ledgers. A distributed ledger is a database that exists among several participants or across several locations, while DLT describes the technologies used to publicly or privately distribute information and records to the entities who use them. Distributed ledgers are spread across several computing devices or nodes, where each device/node replicates and saves an identical copy of the record to the ledger. One of the most sought-after features of distributed ledgers is decentralization. Individuals and organizations typically store their data on centralized databases that live at fixed locations, necessitating the use of third parties. Distributed ledgers are not maintained by a central authority; they are decentralized, shifting the responsibility of managing data from intermediaries or a central authority to participant nodes. Enterprises can use DLT to validate, process and authenticate transactions as well as other forms of data exchanges. Each update on a distributed ledger is independently constructed and recorded by individual nodes.
Before an entry is uploaded to the ledger, it must be validated (through voting) to ensure the addition of a single true copy. The voting is automatically carried out by a consensus algorithm. Once consensus is reached, the ledger updates itself and each node saves the agreed-upon copy of the ledger. The structure and architecture of distributed ledgers help to cut down the cost of trust, thus reducing dependence on regulatory compliance officers, notaries, governments, lawyers and banks. Distributed ledgers offer individuals, enterprises and governments a new paradigm for collecting and communicating information and are set to revolutionize the way these entities interact with each other. Blockchain is a type of distributed ledger where data is organized into blocks that are logically linked together to provide a valid and secure distributed consensus. Like all distributed ledgers, blockchains do not depend on a centralized authority or servers; they are managed by and distributed across peer-to-peer networks. Data quality is maintained by computational trust and database replication. However, the structure of a blockchain is unique and distinct from other forms of distributed ledgers. The data on a blockchain is grouped and organized into blocks. Each block is closed by a cryptographic signature known as a "hash," and each new block carries the hash of the block before it, thus creating an unbroken chain of continuous data. The hash ensures that the information within each block cannot be manipulated without detection.
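The hash-linking just described can be sketched in a few lines: each block stores the hash of its predecessor, so altering any earlier entry breaks every later link. This is a simplified model for illustration, not a production blockchain (no proof-of-work, networking or consensus):

```python
# Minimal hash-chain sketch: each block commits to the previous block's hash,
# so tampering with any block invalidates the chain from that point on.
import hashlib
import json

def make_block(data, prev_hash):
    body = {"data": data, "prev_hash": prev_hash}
    block = dict(body)
    block["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return block

def chain_is_valid(chain):
    for prev, block in zip(chain, chain[1:]):
        body = {"data": block["data"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        # A block must both point at its predecessor and match its own hash.
        if block["prev_hash"] != prev["hash"] or block["hash"] != expected:
            return False
    return True

chain = [make_block("genesis", prev_hash="0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))

print(chain_is_valid(chain))             # True
chain[1]["data"] = "Alice pays Bob 500"  # tamper with history
print(chain_is_valid(chain))             # False -- the edit is detected
```

Note that hashing does not hide the data (it is not encryption); it only makes undetected modification infeasible, which is exactly the append-only property the next paragraph describes.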
The two technologies share a conceptual origin; they are a digitized and decentralized log of records that require consensus among participating nodes to ensure the authenticity of data entries However, they update their databases differently. Blockchain organizes entries into blocks of data and uses an append-only structure to update its records. Once entries are made, they can't be deleted or modified in any way. Under DLT, database owners have greater control over implementation. In principle, they can dictate the purpose, structure and functioning of the distributed ledger network. However, the network retains its decentralized nature since ledgers are stored across multiple servers that communicate to ensure the maintenance of an accurate and up-to-date record of transactions. DLT provides an auditable and verifiable history of information stored on a particular data set. Researchers are beginning to find new and more interesting uses for blockchain and other digital ledger technologies. These technologies are set to disrupt traditional processes and procedures in virtually all industries and have already had a significant impact on the financial sector (especially in the area of regulatory compliance and compliance policies). Studies by Accenture show that by 2025, investment banks that leverage distributed ledger technologies may be able to reduce compliance cost by 30 to 50 percent.
Once you get above a certain age, you start hearing about a magical thing called a credit score. You hear that everybody has one and that you need to do everything you can to bolster it. You hear that there are things you can do to hurt your credit score and that once you've messed up your credit score it will take a long time for it to recover. Unfortunately, amidst all of these vague references and warnings, there are very few real explanations of what a credit score actually is and what it has to do with your life. Let's take a look at the basics so that you'll have a better understanding going forward. 1. Your Credit Report and Your Credit Score are Similar, but Not the Same Your credit score is a single number that is a reflection of all of the factors contained in your credit report. Think of it like when you were in school: your final grade in a class may have been a B+, but your teacher had a file that reflected attendance, your scores on quizzes, tests and book reports, your class participation and any extra credit you may have done. Your credit score is like your grade, and your credit report is like that list of inputs that the teacher recorded throughout the semester. While your credit report details all of the credit cards you hold, how quickly or slowly you pay those and your other debt accounts, how many applications for loans or credit cards you completed, as well as any bankruptcies, judgements or liens against you, your credit score is a calculation whose inputs weigh all of those factors. All of the recording and calculations are done by three credit bureaus, and each positive or negative input (such as paying your bills on time or, conversely, paying them late) will make your score go up or down. 2. The Five Core Factors That Determine Your Credit Score If you want to have a solid credit score, it helps to know the five financial actions that impact it. They are: - How quickly or slowly you pay your bills.
This is referred to as your payment history, and it represents 35% of your credit score. Pay your bills off each month and your credit score will be higher, but pay late each month and it will work against you. - You know how your monthly credit card bills display the amount that you owe as well as the amount of credit that you have remaining? This is a reflection of your credit utilization for that individual card. Thirty percent of your credit score is determined by how much of your available credit you are actually using, with 30% or less considered optimal. Keep in mind that this doesn’t mean that each card has to be below 30% utilization for you to have a good credit score. The credit utilization portion of your credit score reflects your total usage of your total available credit. - Have you had the same credit card or cards for as long as you can remember, or are you constantly trading in old cards for new ones? Keep in mind that 15% of your credit score reflects the average age of your credit accounts, and the older they are, the better your score will be. The credit bureaus view long-term history with an individual credit institution as a reflection of financial responsibility. If you want to leverage this particular factor but are also attracted by the benefits or points offered by a new credit card, do so but don’t close your old account. Just hold onto it. You don’t have to use the card to get this particular benefit from it. - You may try to keep your debts to a minimum, but the credit bureaus actually like to see that you have a variety of types of accounts and loans. They view it as a signal that you can manage your debt. This element contributes 10% to your credit score. 
- Every time you apply for a car loan, a mortgage or even just a retail store’s credit card, the credit bureaus see what is referred to as a hard pull – this is viewed by the credit bureau as someone trying to determine whether you are worthy of their trust before extending credit to you. When the credit bureaus see multiple inquiries, they view it as a negative, because applying for a bunch of lines of credit within a short period of time can be seen as an indication that you are unable to pay your bills, or that you are vulnerable to getting into too much debt for you to handle. Though this only has a 10% impact on your score, it is something for you to keep in mind when considering taking out new lines of credit. 3. Accessing Your Credit Report and Scores Doesn’t Cost a Penny There was once a time when it was difficult to get a copy of your credit score or credit report – in fact, people used to believe that if you made that inquiry, it would work against your credit score. Today it is much easier: each of the three major credit bureaus – Equifax, Experian and TransUnion – must provide you with one free copy of your credit report every 12 months, so if you rotate between the three of them, you can access a free report once every four months and cover a full year. The advantage of doing this is that if you regularly check the information in your credit report, you will have time to spot any mistakes or inaccuracies and get them corrected before they have an adverse impact on your credit score. In addition to requesting your credit report, there are many credit cards that offer their clients free access to their FICO scores as a customer benefit. The website Credit.com also provides this information at no charge. 4. Inquiring About Your Credit Score Won’t Hurt It As referenced above, when a potential lender does a hard pull on your credit history, it can reduce your score, but that is not true of your own inquiries.
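The five weighting factors above (35% payment history, 30% utilization, 15% account age, 10% credit mix, 10% new inquiries) can be combined into a single illustrative number. The sketch below is a simplification with made-up sub-scores on a 0–100 scale — real scoring models are proprietary — but it shows how the weighted split works:

```python
# Illustrative only: real credit-scoring models are proprietary.
# Each factor is rated 0-100 here, then weighted per the percentages above.

WEIGHTS = {
    "payment_history": 0.35,
    "credit_utilization": 0.30,
    "account_age": 0.15,
    "credit_mix": 0.10,
    "new_inquiries": 0.10,
}

def composite_score(ratings):
    """Weighted average of factor ratings (each on a 0-100 scale)."""
    return sum(WEIGHTS[factor] * rating for factor, rating in ratings.items())

# Hypothetical borrower: pays on time, but high utilization and recent inquiries.
ratings = {
    "payment_history": 95,
    "credit_utilization": 60,
    "account_age": 80,
    "credit_mix": 70,
    "new_inquiries": 50,
}
print(round(composite_score(ratings), 2))  # 0.35*95 + 0.30*60 + ... = 75.25
```

Note how payment history dominates: a drop of 10 points there moves the composite by 3.5 points, while the same drop in credit mix moves it by only 1.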
If you want to check your credit score you can do so without it showing up as an inquiry on your credit report or affecting your score. It’s also useful to remember that even if you have had a potential creditor do a hard pull on your credit, the negative effect is temporary. 5. Understand the Different Scores and Ranges That May Be Applied When you get your credit score, it is important to understand what it means, and that starts with knowing that the different credit bureaus don’t have the exact same scoring numbers or ranges. The number you see for Experian will mean something different from the score for Equifax, and there is also a newer score called the VantageScore which varies even more. Here are the ranges that are available from each credit bureau: - VantageScore: 501-990 - Equifax: 280-850 - Experian: 360-840 - TransUnion: 300-850 6. Why It’s Important to Review Your Credit Score and Report Regularly You may think that your credit score and credit report aren’t meaningful to you unless you’re looking to make a big purchase and take out a loan, but the truth is that if you regularly check both your credit score and your credit report, you’ll quickly recognize shifts in your numbers that could be an indication that somebody has been using your identity information and credit for their own benefit. This activity is known as identity theft, and it is essential that you spot and report it as quickly as possible so that you can put a stop to it. You cannot rely on your credit card company or the credit bureaus to make you aware of these activities – it is your responsibility to pay attention. 7. Understanding the Importance of Your Credit Score to Your Finances Your credit score may not feel particularly relevant to you, but the truth is that the higher your credit score is, the more money you are likely to save on big expenses.
There are a few reasons why this is true, but the most important is that when a lender sees that you have a solid credit score, they are more likely to offer you a better interest rate. Over the life of a loan, a slight difference in the interest you’re being charged can make a difference of thousands of dollars, and when you extend this to every loan you take out, you can quickly see how it is to your advantage to do everything you can to boost your credit score and keep it elevated. 8. Your Spouse’s Credit Score Won’t Change Yours There are many people who believe that when they get married, they will get the benefit of their spouse’s credit score – or by contrast, that their credit score will be dragged down by that of their spouse, who is less than stellar about paying bills on time. The truth is that getting married does not erase or impact your individual credit score, and that’s even true if you decide to open joint accounts or take out a loan in both of your names. That being said, you do need to be aware that if you’ve always had a solid credit score and then you make your slow-paying spouse responsible for paying a joint bill – and they don’t – that will have an impact on you. 9. Negative Impact Credit Report Elements Will Eventually Fade One of the most important things that you need to know about your credit report – and the impact of its elements on your credit score – is that nothing is forever. Even a bankruptcy will eventually get wiped away if you have established credit since then and have been able to pay your bills on time. 10. Lenders Look at More than Just Credit Scores Finally, if you are applying for a loan and are concerned that your credit score is going to affect your ability to get approved, keep in mind that it is just one of many factors that your potential creditor will take into consideration before making their decision. 
If you are turned down for a loan by a big corporate lender like a major credit card issuer, you may do better by approaching a smaller, alternative lender where you can sit down face-to-face with the person responsible for making the lending decision and explain why you are deserving of the loan.
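The claim in section 7 — that a slightly better interest rate saves thousands over the life of a loan — is easy to verify with the standard fixed-payment amortization formula. The figures below (a $200,000 30-year loan at 4% versus 4.5%) are hypothetical, chosen only for illustration:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortization formula for a fixed-rate loan."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

loan, years = 200_000, 30
low  = monthly_payment(loan, 0.040, years)   # roughly $955/month
high = monthly_payment(loan, 0.045, years)   # roughly $1,013/month

# Extra cost of the higher rate over the full term
extra = (high - low) * years * 12
print(f"Monthly: {low:.2f} vs {high:.2f}; lifetime difference: {extra:.2f}")
```

Half a percentage point here costs over $20,000 across the life of the loan, which is the sense in which a good credit score pays for itself.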
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9542706608772278, "language": "en", "url": "https://www.journalofaccountancy.com/news/2014/feb/20149622.html", "token_count": 816, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.150390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2412aeb1-00e2-494e-bb49-92d3b516a7aa>" }
More than 1,000 questions inspired by content in accounting textbooks are featured in a new online game created for high school students. The AICPA helped develop the game, called “Bank On It,” which is available at startheregoplaces.com. The game is intended to be a fun, engaging way for educators to reinforce the accounting principles being taught in class while giving their students a taste of real working-world scenarios in the accounting profession. The concept for the game was designed by a team of high school students who won the AICPA’s Project Innovation Competition. The game is won by reaching a winning bank balance set prior to starting. Players earn money by answering questions correctly and landing on other strategic spaces as they move around the board. Players can play the game at the “Staff Accountant” or “CEO” level, focusing on business and industry, public accounting, or not-for-profit accounting. Sample questions below are pulled from “Staff Accountant” and “CEO” levels for business and industry. Can you get all the answers correct? 1. This term refers to the left side of a T account. 2. In the chart of accounts, liability accounts are usually assigned which number group based on a standard numbering system? 3. True or False? A corporation must file an income tax return even if it doesn’t have any income for the year. 4. FICA taxes include Social Security and __________. 5. On Dec. 1, 2012, Ralph’s Repair Shop hired Steve to start work on Jan. 2, 2013, making a monthly salary of $2,700. Ralph’s Repair Shop’s balance sheet as of Dec. 31, 2012, will show a liability for which amount? - No liability 1. If Lamar Brown, the manager of Pace Athletic Wear, agreed to pay Athletic Equipment Inc. the principal plus interest 90 days from Sept. 14, what would be the maturity date for his promissory note? 2. This term refers to the system of recordkeeping in which each business transaction affects at least two accounts. 3. True or False?
A preferred stockholder receives dividends before a common stockholder. 4. The Flying Pig paper corporation brought in net income of $235,999 for the year. If it issued 3,400 shares of $5 par common stock and the board of directors declared a cash dividend of $5 per share, how much of the net income was retained by the corporation? 5. Ernie White is the sole proprietor of his business. He took two $700 laptops he personally owned and transferred them to his business, which increased the asset account Office Equipment by $1,400. How did it affect the account Ernie White, Capital, and by how much? - Increased by $700 - Increased by $1,400 - Decreased by $700 - Decreased by $1,400 Staff Accountant answers: 1. Debit 2. The 200s 3. True 4. Medicare taxes 5. No liability, because Steve has not performed any work as of the balance sheet date CEO answers: 1. Dec. 13 2. Double-entry accounting 3. True 4. $218,999 ($235,999 less the $17,000 dividend) 5. Increased by $1,400 Note: An earlier version of the quiz contained a question that was incomplete because it did not specify whether interest would be compounded. That question asked: You want to earn $750 in interest so you’ll have enough to buy a used car. So you decide to put $3,000 into an account that earns 2.5% interest. How long will you need to leave your money in the account to earn $750 in interest? - 1 year - 5 years - 10 years The original answer of 10 years failed to take into account compounding interest. With compounded interest, it would take approximately nine years to earn $750. Ken Tysiac is a JofA senior editor.
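The corrected answer in the editor’s note — roughly nine years with annual compounding — can be checked numerically. Solving 3,000 × 1.025ⁿ = 3,750 gives just over nine years, so the balance first clears the target partway through the tenth year:

```python
from math import log

principal, rate, target = 3000, 0.025, 750

# Closed form: solve principal * (1 + rate)**n = principal + target for n
years_exact = log((principal + target) / principal) / log(1 + rate)

# Brute-force check, compounding year by year
balance, years = principal, 0
while balance - principal < target:
    balance *= 1 + rate
    years += 1

print(f"{years_exact:.2f} years exactly; first whole year past the target: {years}")
```

The exact answer is about 9.04 years, which is why the JofA correction says "approximately nine years"; after nine complete years the interest earned is only about $747.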
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9823291897773743, "language": "en", "url": "https://www.lbma.org.uk/alchemist/issue-24/a-history-course", "token_count": 1870, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.212890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:edbf187b-63d3-4d02-9498-ff955b7d673a>" }
A History Course The following article is based on a speech given by Dr Heraeus at the Heraeus lunch during Platinum Week in London on 16 May. This year marks the 150th anniversary of the company's founding. Heraeus was founded by my great-grandfather, Wilhelm Carl, who succeeded in melting platinum in his pharmacy, a real breakthrough at that time. Up until then, platinum had had to be forged, a process that was only carried out in London and Paris. Wilhelm Carl came from an old family of pharmacists in which the trade had been passed on from father to son for generations. The family tree can be traced back to the 16th century. The Huguenots, who founded the city of Hanau in Germany during the 17th century, introduced the art of goldsmithing and thus platinum to Hanau. In 1888 domestic platinum sales amounted to 400 kg, while platinum exports were 600 kg. The metal was exclusively extracted from Russian minerals. The price then was 800 marks per kg, which nowadays would be $11 per ounce. Unfortunately, there is no stock worth mentioning left from that time, and the price of the metal in our books is now somewhat higher. However, the problems in the platinum business have not changed. Old Heinrich Heraeus, my grandfather, remembers a dispute regarding the platinum content of sponge that had been sent in by customers. His father (that is, my great-grandfather) had entered its content in the books as 100 per cent platinum, but the analysis showed a content of only 97 per cent. This topic of discussion between father and son 130 years ago is still the daily bread of a platinum refiner doing business today. Platinum was used in jewellery production as early as the late 18th century, and its popularity increased during the 19th century. And due to its chemical and physical characteristics, new industrial applications were constantly being found.
With the construction of apparatuses for the concentration of sulphuric acid and an increase in the production of the electric light bulb, the demand for platinum rose enormously. Russian material had always been available in sufficient amounts, but in 1885, the Russians suddenly withheld their material and spread the rumour that platinum was in short supply. The Heraeus company chronicle states that all companies on the world market needing platinum doubled and tripled their orders, seeking to increase stocks at any price. The price tripled within a few weeks but fell back to its old level of $11 per ounce a few months later. As usual, everyone wanted a scapegoat. It was said that Johnson Matthey spread the wrong information and that Heraeus, as the biggest consumer, was responsible for causing a platinum boom and simultaneously making high profits. I assure you that I am telling this story for the first time today and that I only recently discovered it in our old chronicles. You may, therefore, assume that Heraeus has not been responsible for the rise in platinum group metal prices over the last two years. You may further assume that we did not give the Russians the idea of withholding platinum in order to provoke a shortage in supply. Wilhelm Carl Heraeus sent his brother-in-law, Charles Engelhard, to the US in order to deal with the American market on behalf of Heraeus. As everybody knows, he was very successful. At that time, he was the only representative for a European company who was actually in the US, and therefore Heraeus was able to win market shares easily. The chronicle states that: "Soon the competitors lost most of their sales". As always in these cases, a mutual undercutting in prices ensued with no real profit for any company. The question was raised whether it would not be advisable to come to a common understanding that would result in a sales agreement for the American market.
Therefore, the European competitors met in France in 1894 in order to draw up a contract which excluded mutual undercutting and which included a fair split of the profits. Again, I am quoting history and am not making suggestions! Those gentlemen did not have any anti-trust problems with their agreement. The agreement went down in the annals of history as the "platinum convention", and it is nice to read that these negotiations always took place in Paris, as Messrs. Desmoudes would not be persuaded to take the dangerous voyage across the Channel. In return, my grandfather writes, the French always paid generously for breakfast expenses. Charles Engelhard was 24 years old when he emigrated to the United States. He began his training by trading diamonds and pearls, which was his father's business. My grandfather gave Charles Engelhard 2,000 marks, the funds needed for his trip and his first weeks in the US. It then remained to be seen if, in addition to his personal activities in diamond trading, he could also prove useful to Heraeus. As already mentioned, platinum sales rose very quickly, and customers who had been buying in London or Paris now bought from Heraeus. Firstly, Mr Engelhard acquired the Gross & Meier Company, which was not very important in the platinum industry, but the company's founder and owner came from Hanau and had been living for decades in the US. This company was merged with the Baker Company, whose owners, the Baker brothers, kept some shares in the merged company, which was then called American Platinum Works. Baker held 3/7 of the capital, and Mr Engelhard and the three European companies of the convention each held 1/7. Mr Engelhard was named general manager of the company with a salary of $1,200 a year and a commission of 5 per cent for the first 1,000 kg of platinum, 10 per cent for the next 1,000 kg and 15 per cent for the following 500 kg sold by the three companies to the US. In this way, Mr Engelhard quickly became wealthy.
In 1903, the Baker brothers sold their remaining shares to Mr Engelhard and the European consortium. It is interesting to read that they agreed on a detailed due diligence, as they were really surprised at the amount of profit announced. After the First World War, Heraeus lost the shares, and Mr Engelhard later took over all the shares in the Engelhard company. During the bad economic conditions that followed the war, Charles Engelhard supported the Heraeus company with a loan, which was later changed into a 15 per cent share of the company. The Engelhard family held these shares until Charlie Engelhard died in the early 1970s. After his death, I was able to buy them back from Jane Engelhard, his wife. Looking back, the price of 20 million marks for 15 per cent of the Heraeus shares seems to have been extremely good. Also at the turn of the century, the Siebert company was founded in Hanau, as well as Rossler, a mint in Frankfurt. Both companies later joined to form the Degussa company, which traded under the name of the former Rossler company for a long time. Today they are winning new markets with a new ownership under the modern name of dmc2. I do not want to go into too many details regarding the development of Heraeus. During the Second World War the company was completely destroyed by two big attacks in December 1944 and March 1945. In 1945 we had a staff of 125 employees. By 1951, the year of our 100th anniversary, our staff had increased to 1,100. Not a single employee was working abroad at that time. In 1972 fewer than 200 employees worked abroad. Today more than 5,000 employees are working abroad, with more or less the same number working in Germany. Today the precious metal business belongs to W. C. Heraeus and Heraeus Metallhandelsgesellschaft. It is by far the business with the biggest sales and has been our most important profit centre in recent years. We chose "Innovation - a Precious Tradition" as the motto for our anniversary year.
We believe in the innovation and the success of the platinum group metals business for the next 50 years, and we intend to be one of the big players in this market. We have also held a leading global position in our other business areas, especially quartz glass but also the dental and sensor business. The company's capital is still owned by the family and a few foundations close to the family. At the end of last year, we did not have any bank liabilities, so we are well prepared for the years ahead. The community of platinum metal producers, users and - in the end - consumers is small and tightly knit. London Platinum Week, rich in tradition, is a testament to this. Those who handle platinum - who buy, sell and use it - must be respectable and reliable. There must be confidence between all members of the community - customers, suppliers and those doing the analyses. The customer who has his material recycled must feel confident that it will be returned after an appropriate time. I would like to thank all those present, with whom we have many different kinds of business connections, for their trust in Heraeus over the past decades. I hope these excellent relationships will last for many decades to come.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9632142782211304, "language": "en", "url": "https://www.thenews.coop/110969/topic/politics/obama-co-operatives-white-house/", "token_count": 746, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.328125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:234d099d-0792-4937-a2a6-d304757f7b3a>" }
Throughout his two terms as President of the United States, Barack Obama carried out a number of policies with a specific focus on co-operatives. His ideas for healthcare reform were implemented through the passing of the Affordable Care Act (Obamacare), which provided the legal framework and federal loan money for establishing Consumer Oriented and Operated Plans (CO-OPs). The ACA provided a total of $2.4bn in federal loan money to help start-up organisations seeking to establish CO-OPs – although these are consumer-owned, not all of them work as co-operatives. In total there were 23 CO-OPs set up under President Obama’s ACA. However, only six of these remain active, having faced a challenging market and changes to governance rules. Another change was the Electrify Africa Act, which states that the United States will partner and consult with governments of Sub-Saharan countries as well as international financial institutions, the private sector and co-operatives, to promote first-time access to power and power services for 50 million people in Sub-Saharan Africa by 2020. In August, President Obama also signed into law the Global Food Security Act, which aims to help end global hunger, poverty and malnutrition. Co-operatives are mentioned in the act as key stakeholders engaged in efforts to advance global food security programs and objectives. The act encourages leveraging resources and expertise through “partnerships with the private sector, farm organisations, co-operatives, civil society, faith-based organisations, and agricultural research and academic institutions”. The law was signed during the White House Summit on Global Development, where co-ops were represented by Amy Coughenour Betancourt, chief operating officer of the National Co-operative Business Association (NCBA CLUSA). Shortly after the Act was signed, she said: “The whole of government approach speaks to the priority this act has for tackling nutrition and food security issues.
From our flagship Feed the Future Yaajeende project in Senegal to integrating nutrition-led agriculture throughout our programs, NCBA CLUSA is dedicated to alleviating hunger and supporting the agricultural sector as a key to sustainable development.” Barack Obama was also the first USA President to hold a national briefing on co-operatives at the White House. One hundred and fifty co-operative leaders from the US and the International Co-operative Alliance attended the special session with the Obama administration to discuss the future of co-ops back in 2012. Related: What happened when co-operatives went to the White House? Among them was Paul Hazen, executive director of the US Overseas Cooperative Development Council (OCDC), which brings together nine organisations aiming to champion effective international co-operative development. Speaking of the legacy of the Obama Administration, Mr Hazen said: “The Obama Administration has been very supportive of co-operatives. Domestically, they have put a focus on co-operatives in rural America. They have also supported the creation of worker co-operatives. “It appears that the Small Business Administration will soon change a 50-year regulation and will now provide loans for food co-operatives. President Obama’s focus on food security domestically and internationally has put forward many initiatives for agricultural co-operatives.”
{ "dump": "CC-MAIN-2021-17", "language_score": 0.8832335472106934, "language": "en", "url": "https://www.toolshero.com/decision-making/multiple-criteria-decision-analysis-mcda/", "token_count": 1830, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.060302734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4e2b54fd-a1fd-42d4-b87b-fd605778f9b6>" }
Multiple Criteria Decision Analysis (MCDA) This article provides a practical explanation of the concept of Multiple Criteria Decision Analysis (MCDA). After reading it, you will understand the necessity and the benefits of this decision-making tool. What is a Multiple Criteria Decision Analysis (MCDA)? A Multiple Criteria Decision Analysis (MCDA), or Multi-Criteria Analysis (MCA), is a decision-making analysis that evaluates multiple (conflicting) criteria as part of the decision-making process. This tool is used by practically everyone in their daily lives. Humans make thousands of decisions per day, but this same process also occurs in the corporate world, government agencies, and medical centres. A Multiple Criteria Decision Analysis (MCDA) resembles a cost-benefit analysis, but with the notable advantage of not being solely limited to monetary units for its comparisons. When making comprehensive or important decisions, multiple criteria and levels of scale need to be accounted for. Comparing conflicting sets of criteria, such as quality and costs, can sometimes lead to confusion and lack of clarity. Taking decisions based on multiple different criteria with help from the Multiple Criteria Decision Analysis (MCDA) tool can then bring clarity. By structuring complex problems and analysing multiple sets of criteria, informed, more justifiable decisions can be made. In 1979, Stanley Zionts published an article titled ‘MCDM – if not a Roman numeral, then what?’, which he used to promote and popularise the concept among his business audience. The following years saw sustained popularity of the concept, and multiple Multiple Criteria Decision Analysis (MCDA) related organisations were founded, such as the ‘International Society on Multiple Criteria Decision Making’. A comprehensive Multiple Criteria Decision Analysis (MCDA) draws knowledge from several different fields, including mathematics, economics, information technology, software engineering, and other information systems.
Steps in a Multiple Criteria Decision Analysis (MCDA) 1. Define the context Before you can get started on a Multi-criteria analysis, you need to clearly define the context of your analysis. The context accounts for the present situation, key players, and stakeholders in the decision-making process. Advantages of a clearly defined context are: - Optimal allocation of resources towards accomplishing the objectives - Improved communication between the different parties involved - Facilitating multiple additional options - Mapping out strengths and weaknesses, as well as threats and opportunities. The SWOT Analysis can be a helpful tool in this regard - Recognition and possible filtering out of uncertainties in the environment that the analysis is being conducted in. A PEST analysis can help with that. 2. Identify the options available A Multiple Criteria Decision Analysis (MCDA) compares multiple different options to one another. Whether pre-established or yet to be developed, all options are subject to change and influence. This is why all the options need to remain adjustable even after the analysis has started. Options are often formulated on a go/no-go basis. The consequences tied to each option determine whether they lead to a go or no-go decision. 3. Decide the objectives and select the right criteria that represent the value Consequences play an integral role in the Multiple Criteria Decision Analysis (MCDA). Due to the varying consequences tied to each option, for example, a higher Return on Investment (ROI) after an investment or a degradation of product quality after production line alterations, multiple different criteria need to be established. Criteria represent clearly defined standards by which the different options can be measured and compared, as well as expressing the different levels of value each option creates.
When buying a new car, the future owner wants to minimise potential costs and maximise the number of advantages. Costs are easy enough to compare, but advantages can be subject to varying interpretations, which is why these two goals conflict with one another and can’t be compared directly. In such cases the advantages, where possible, need to be sub-divided into quantifiable criteria such as safety (crash test result), comfort, luxury, reliability, and performance. As such, the making of decisions in a Multiple Criteria Decision Analysis (MCDA) frequently comes down to matters of judgement. Objective assessments aren’t always possible. 4. Measure out each of the criteria in order to discern their relative importance Just choosing the right criteria won’t suffice to combine and analyse the different scales of choice. One preference unit isn’t necessarily the same as another. This is similar to comparing temperature scales such as Celsius and Fahrenheit. Both scales may concern temperature, but a difference of 1 degree Celsius is greater than 1 degree Fahrenheit. The car buyer notices this effect – the relative importance of a criterion – when he has to make a choice between cars. The buyer can partially base his decision on the car’s costs. But when he has made a shortlist of, say, five cars he’d like to have, each differing 150 euros in price, that criterion suddenly loses its importance. Whereas a difference of 3,000 euros per car could have made this a weightier criterion for the buyer. The weighting of different criteria therefore not only shows the difference between options but also how relevant this difference is. For example, safety might weigh less heavily on the buyer’s mind than maintenance costs, because he considers it less important. 5. Calculate the different values by averaging out weighting and scores The penultimate step is where the relative priority scores are calculated.
The general preference score is the weighted average of all criteria. First of all, the scores for each criterion are multiplied by their weighting, expressed in decimals (e.g. a weight of 20% becomes 0.2). The weighted scores are then added together; the total sum comprises the preference score. Have a look at the example below. After calculating the totals, the outcomes can be ordered to see which option is most suitable based on the different preference scores they’ve been given. In this example, car 4 comes out on top. It’s important to note that a high score for price doesn’t mean that the car is expensive; on the contrary, it represents how well the car fits the buyer’s budget. A very expensive car will have a low score for the criterion of price, driving down its overall score as a result. Multiple Criteria Decision Analysis (MCDA) advantages The use of a Multi-criteria analysis comes with various advantages when compared to a decision-making tool not based on specific criteria: - It’s open and explicit - The chosen criteria can be adjusted - Many different actors can be compared with one another - A Multiple Criteria Decision Analysis (MCDA) grants insight into different judgements of value - Performance measurements can be left to experts - Scores and weights can be used as reference - It’s an important means of communication between the different parties involved in the decision-making process It’s Your Turn What do you think? Do you recognise this explanation of Multiple Criteria Decision Analysis? What do you believe are contributing factors to the effectiveness of this powerful, formal, and analytical decision-making tool? Share your experience and knowledge in the comments box below. - Zionts, S. (1979). MCDM – if not a Roman numeral, then what? Interfaces, 9(4), 94-101. - Zeleny, M., & Cochrane, J. L. (1973). Multiple criteria decision making. University of South Carolina Press. - Masud, A. S., & Ravindran, A. R. (2008).
Multiple criteria decision making. CRC Press, An imprint of the Taylor and Francis Group. - Habenicht, W., Scheubrein, B., & Scheubrein, R. (2002). Multiple criteria decision making. Theme, 6(5). How to cite this article: Janse, B. (2018). Multiple Criteria Decision Analysis (MCDA). Retrieved [insert date] from toolshero: https://www.toolshero.com/decision-making/multiple-criteria-decision-analysis-mcda/ Add a link to this page on your website: <a href=” https://www.toolshero.com/decision-making/multiple-criteria-decision-analysis-mcda/”>toolshero: Multiple Criteria Decision Analysis (MCDA)</a> We are sorry that this post was not useful for you! Let us improve this post! Tell us how we can improve this post?
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9224406480789185, "language": "en", "url": "http://sjie.journals.sharif.edu/article_21333.html", "token_count": 358, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06884765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:cc71d75a-cd4c-4dff-94e1-7790194a6dd4>" }
Article Title [English]

Forecasting methods are among the most efficient approaches available for making managerial decisions in various fields of science. Forecasting is a powerful tool in planning, policy choice and economic performance. The accuracy of a forecast is an important factor affecting the quality of the resulting decisions, and generally has a direct relationship with it. This is the most important reason why efforts to improve forecasting accuracy have never stopped in the literature. Electricity demand forecasting is one of the most challenging areas of forecasting and one of the most important factors in the management of energy systems and economic performance. Determining the level of electricity demand is essential for careful planning and implementation of the necessary policies; for this reason, electricity demand forecasting is important for the financial and operational managers of electricity distributors. The unique feature of electricity, which makes forecasting it more difficult than for other commodities, is the impossibility of storing it for future use. In other words, the production and consumption of electricity must take place simultaneously. This creates a high level of complexity and ambiguity in electricity market data. Computational intelligence and soft computing approaches are among the most precise and useful methods for modeling such complexity and uncertainty. In this paper, a soft intelligent method combining these approaches is proposed for electricity demand forecasting. The main idea of the proposed model is to simultaneously exploit the advantages of its component models in modeling complex and ambiguous systems.
Empirical results indicate that the proposed model achieves more accurate results than its components (seasonal auto-regressive integrated moving average models and artificial neural networks) and also other current single forecasting methods such as classic regression, seasonal ARIMA-fuzzy models and support vector machines.
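The abstract does not give the model's details, but the general SARIMA-plus-neural-network hybrid idea can be sketched: fit a seasonal time-series model, then train a small network on its residuals, and add the two predictions. In the NumPy sketch below, a linear seasonal autoregression stands in for a full SARIMA fit, the "ANN" is a one-hidden-layer perceptron trained by plain gradient descent, and the data are synthetic; all of these choices are illustrative assumptions rather than the paper's actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demand series: linear trend + daily seasonality + noise.
n, s = 500, 24
t = np.arange(n)
y = 100 + 0.05 * t + 10 * np.sin(2 * np.pi * t / s) + rng.normal(0, 1.0, n)

split = n - 5 * s  # hold out the last five "days"

def design(series, idx):
    # Regressors: intercept, lag-1 and seasonal lag-s values
    # (a linear seasonal model standing in for SARIMA).
    return np.array([[1.0, series[i - 1], series[i - s]] for i in idx])

train_idx = np.arange(s, split)
test_idx = np.arange(split, n)

X_tr = design(y, train_idx)
z_tr = y[train_idx]
beta, *_ = np.linalg.lstsq(X_tr, z_tr, rcond=None)
resid_tr = z_tr - X_tr @ beta

# Stage 2: a tiny one-hidden-layer network maps the last p residuals
# to the next residual (the "ANN" part of the hybrid).
p, hidden, lr = 3, 8, 1e-3
Xr = np.array([resid_tr[i - p:i] for i in range(p, len(resid_tr))])
zr = resid_tr[p:]
W1 = rng.normal(0, 0.1, (p, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, hidden); b2 = 0.0
for _ in range(2000):  # batch gradient descent on squared error
    H = np.tanh(Xr @ W1 + b1)
    err = H @ W2 + b2 - zr
    gW2 = H.T @ err / len(zr)
    gb2 = err.mean()
    gH = np.outer(err, W2) * (1.0 - H ** 2)
    gW1 = Xr.T @ gH / len(zr)
    gb1 = gH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# One-step-ahead forecasts on the held-out tail: linear part plus
# an ANN correction predicted from the preceding residuals.
X_te = design(y, test_idx)
linear_fc = X_te @ beta
resid_all = np.concatenate([resid_tr, y[test_idx] - linear_fc])
m = len(resid_tr)
corr = np.array([
    np.tanh(resid_all[m + k - p: m + k] @ W1 + b1) @ W2 + b2
    for k in range(len(test_idx))
])
hybrid_fc = linear_fc + corr

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("linear RMSE:", round(rmse(y[test_idx], linear_fc), 3))
print("hybrid RMSE:", round(rmse(y[test_idx], hybrid_fc), 3))
```

The point of the two-stage design is that the linear stage captures trend and seasonality while the network is free to pick up any nonlinear structure left in the residuals.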
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9507436752319336, "language": "en", "url": "http://www.rsgold-rsgold.com/foreign-currency-exchange-market-or-forex-growing-in-popularity-worldwide.html", "token_count": 738, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.232421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c8357d46-9e49-4e19-9dc2-98d44a630c45>" }
Forex, or foreign exchange, is buying, selling or exchanging one currency for another. In terms of volume, the foreign exchange market is the world's largest market by far. It is dominated by international banks. The world's financial centers facilitate trading in currencies 24 hours a day from Monday to Friday. Forex trading takes place between many different types of buyers and sellers. Currencies are traded in pairs. The market doesn't set the absolute value of a currency. Instead, it determines one currency's value relative to another. For example, one U.S. dollar is valued at X number of euros. The forex market helps with international investment and trade by facilitating currency conversion. Typical foreign exchange transactions involve one party purchasing a set quantity of a currency and paying for it with a specific amount of another currency. The modern forex market began in the 1970s, following 30 years of governmental laws restricting foreign exchange transactions. The Bretton Woods money management system had set the rules controlling financial and commercial relations among the major international countries after World War II. Gradually, countries adopted floating exchange rates. The foreign exchange market has unique characteristics: a huge trading volume, high liquidity, geographical dispersion, continuous operation and relatively low profit margins. This has led some to call it the ideal market. People have been exchanging currencies since ancient times. In 1973, modern free-market foreign exchange began in developed nations. China, South Korea, the United States and the United Kingdom were some of the first nations to participate in forex trading. By 2010, $3.98 trillion changed hands each day through the forex market.

While commercial banks, securities dealers, commercial companies, central banks and investment management firms dominate the forex market, a growing number of individuals called retail foreign exchange traders are becoming involved through retail FX brokers. This gives private citizens the opportunity to become involved in speculative currency trading. There are two types of retail FX brokers: brokers and dealers, also known as market makers. Brokers act on behalf of investors, and market makers set the transaction prices. The world's primary centers for forex trading are located in London, New York City, Hong Kong, Tokyo, and Singapore. Currency trading takes place all day long. When trading ends in the Asian markets, it begins in the European markets. The trading activity then moves to the North American markets, and forex trading then picks back up in the Asian markets. Changes in exchange rates tend to reflect changes in gross domestic product, inflation, budget and trade surpluses and deficits, interest rates and macroeconomic conditions in the currency's country of origin. Economic factors, political conditions, market psychology, financial instruments and many other factors can also impact forex exchange rates. For people interested in getting involved with forex trading, it is important to understand the terms and jargon. Currency pairs, percentage in points (pips), spread, bid, ask, the opening and closing of a position, and stop loss are some of the most important ones. There is also some key information forex investors must share with their brokers when they are interested in making a trade: what it is they are interested in buying or selling, how much they would like to buy or sell, when to take the profit if all goes well with the trade, and when to activate the stop-loss if the trade goes bad. Forex trading is exciting and it can also be very lucrative.
The key is doing adequate, in-depth research before you make your trades.
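The jargon above can be made concrete with a small, purely illustrative calculation. The EUR/USD prices, lot size and pip convention below are hypothetical textbook numbers; real quotes and broker conventions vary:

```python
# Illustrative EUR/USD quote. The bid is the price at which a trader can
# sell the base currency; the ask is the price to buy it. For most pairs,
# one pip is 0.0001 of the quote currency.
PIP = 0.0001
bid, ask = 1.1050, 1.1052

spread_pips = (ask - bid) / PIP  # roughly 2 pips: the market maker's margin

# Buy 10,000 EUR (a "mini lot") at the ask, later close the position
# by selling at a higher bid.
units = 10_000
entry_price = ask
exit_price = 1.1080  # the bid at the time the position is closed

profit_usd = (exit_price - entry_price) * units
print(f"spread: {spread_pips:.1f} pips, profit: ${profit_usd:.2f}")
```

The same arithmetic, run in reverse, shows why a stop-loss matters: each pip the pair moves against the position costs the trader `units * PIP` in the quote currency.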
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9578306078910828, "language": "en", "url": "https://agilipersonalcfo.com/saving-for-college-2/", "token_count": 1057, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.04052734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9572c996-b548-4040-9d54-9f10fe0af0ee>" }
When the topic of college savings comes up, most parents have the same questions. Should I start saving? When should I start? What type of account should I open? What if my child decides not to go to college? If you are in an income bracket in which you know you will have to pay at least part of your child's college expenses out of pocket, not taking into account any possible scholarships, it is never too early to start saving. An account can be opened for your child as soon as the child has a Social Security number. You can invest as little as $25 per month in some accounts. The sooner you start saving, the less money you will need to contribute, either periodically or in a single payment, in order to have enough money accumulated to cover your child's college expenses. The most common savings tool for college is the 529 Plan.

What is a 529 Plan? A 529 Plan is an education savings plan operated by a state or financial institution. It is intended to help parents set aside money for future college costs. The plan is named after Section 529 of the Internal Revenue Code, which created these types of accounts in 1996. 529 Plan contributions are not deductible on federal income tax returns; state income tax deduction eligibility depends on the state. There are two types of 529 plans: a Savings Plan and a Prepaid Plan. The Independent 529 Plan (a Prepaid Plan) is the only institution-sponsored plan thus far. The 529 Savings Plan works much like a retirement plan. You have a list of options that may include age-based or investment-strategy-based portfolios managed by the state, or individual mutual funds. These investments are market sensitive and will go up and down based on the performance of your particular investment choice. The investments grow tax-free, and any distributions taken for qualified higher education expenses are tax-free.

Savings Plans may be used at any institution that is eligible for federal student financial aid, so you can pick any state's plan that does not have residency requirements, even if it is not your or the beneficiary's resident state. The 529 Prepaid Plan allows you to prepay all or part of future college tuition, fees and qualified college expenses. What the prepaid plan actually pays depends on the state's plan. Most of these plans have residency requirements, and not all states offer a prepaid plan. The prepaid plan allows you to "lock in" at the current tuition plus an additional premium to help keep the plan fiscally sound. All the state-provided plans cover only in-state public college tuition and fees. The Independent 529 Plan is covered by a consortium of private colleges and currently includes 274 private colleges. If your child decides not to attend college, decides to attend college out of state or receives a scholarship, you still have options for the plan. Some of your plan options may include transferring the plan to another beneficiary, using the money toward skills training programs or overseas education programs that are eligible for federal student financial aid, or withdrawing from the plan altogether. If your child decides not to attend college or a skills training program, the plan may be transferred to fund the education of another beneficiary who is related to the original beneficiary (e.g. a brother or sister, cousin, mother or father, son or daughter). Keep in mind that for most plans the child has up to ten years from the expected date of high school graduation to use the funds, and some plans allow an extension of up to 30 years upon written request. If your child receives a scholarship, you may withdraw an amount equal to the amount of the scholarship from the plan, to use for other than qualified college expenses, without penalty.

If your child decides to attend college out of state and you have purchased a prepaid plan, you can still use the prepaid plan to pay for college expenses. Depending on the prepaid plan you purchased, the amount you will receive for each unit or year of the in-state plan will be either the amount you put in plus a reasonable interest rate or plan performance, or the average in-state tuition. Some state prepaid plans will even pay the full tuition no matter what state you attend college in. If you are still not interested in keeping the plan, you may cash out of it. You will be responsible for taxes on any interest on contributions made, as well as a recapture of any tax deductions you've taken on the contributions. So even if your child does not make traditional use of the 529 Plan account you have opened for his/her benefit, you still have many options for the plan. Saving for a child's college education is a very important decision to make. The 529 Plan is only one of the options, and is the one briefly described above. To find out more about the 529 Plan and other options for saving for college, a good place to start is the savingforcollege.com website. Celebrating 529 College Savings Day, May 29, 2012, the website is currently offering a free download of its Family Guide to College Savings, written by Joseph Hurley, their "529 Guru." You should also discuss with your financial advisor your savings options and how much you could/should save.
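The "sooner you start, the less you contribute" point follows directly from the future value of an annuity. Here is a small illustrative sketch; the 6% annual return and the $100,000 goal are assumptions for the example, not advice or plan figures:

```python
def monthly_needed(goal, years, annual_rate=0.06):
    """Month-end deposit needed to reach `goal` (ordinary-annuity formula)."""
    r = annual_rate / 12
    n = years * 12
    return goal * r / ((1 + r) ** n - 1)

# Saving $100,000 by age 18, starting at different ages:
for start_age in (0, 6, 12):
    needed = monthly_needed(100_000, 18 - start_age)
    print(f"start at age {start_age}: ${needed:,.2f}/month")
```

Starting at birth requires only a few hundred dollars a month, while waiting until age 12 more than quadruples the required monthly contribution for the same goal.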
{ "dump": "CC-MAIN-2021-17", "language_score": 0.879138708114624, "language": "en", "url": "https://gsdhelp.info/equimarginal-principle-40/", "token_count": 447, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.057373046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:56597e38-007c-4c7a-9bbe-6920b1f9e828>" }
This article discusses the equimarginal principle in economics, its formula and assumptions. It is applicable when limited resources are to be allocated.

The Equimarginal Principle. At this point, you may think we have exhausted all the insights we can get from the hamburger-shirt problem. We have not. The table …

Equimarginal principle: economics: Theory of allocation: particular examples of the "equimarginal principle," a tool that can be applied to any decision that …

Law of Equi-Marginal Utility (With Diagrams)

The equimarginal principle states that consumers will choose a combination of goods to maximise their total utility. This will occur where the marginal utility per unit of money spent is the same for every good.

Marginal utility and diminishing marginal returns

For most goods, we expect to see diminishing marginal returns. This means the marginal utility of the fifth good tends to be lower than the marginal utility of the first good. The more we buy, the less total utility increases.

The consumer will consider both the marginal utility (MU) of goods and the price. We divide the MU by the price; this is known as the marginal utility of expenditure on each item of good. Then the optimum combination of goods would be a quantity of 4. The theory assumes that goods can be split up into small units.

Limitations of marginal utility theory

- Difficulty of evaluating utility.
- Consumers are not always rational. Instead, they often purchase out of habit or gut feeling. For example, we often see over-consumption of demerit goods (goods which give very low marginal benefit), or consumers may be influenced by advertising and purchase on impulse.
- In the real world, consumers have fluctuating incomes and innumerable goods to choose between. This makes even rough calculations difficult.
Many goods are related: the utility of a video recorder depends on the quality of video cassettes.
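The principle can be illustrated numerically: spend each successive unit of money on whichever good offers the higher marginal utility per unit of price, until the budget runs out. The utility schedules, prices and budget below are hypothetical:

```python
# Hypothetical marginal-utility schedules for successive units of two goods.
mu_a = [40, 32, 24, 16, 8]   # good A costs 4 per unit
mu_b = [22, 18, 14, 10, 6]   # good B costs 2 per unit
p_a, p_b = 4, 2
budget = 20

qty = {"A": 0, "B": 0}
while budget > 0:
    # Marginal utility per unit of money for the *next* unit of each good.
    mu_per_a = mu_a[qty["A"]] / p_a if qty["A"] < len(mu_a) else 0.0
    mu_per_b = mu_b[qty["B"]] / p_b if qty["B"] < len(mu_b) else 0.0
    if mu_per_a >= mu_per_b and budget >= p_a and mu_per_a > 0:
        qty["A"] += 1
        budget -= p_a
    elif mu_per_b > 0 and budget >= p_b:
        qty["B"] += 1
        budget -= p_b
    else:
        break

print(qty)  # -> {'A': 3, 'B': 4}: MU per unit of money is roughly equalised
```

At the chosen bundle, the last unit of each good bought delivers a similar MU/price ratio, which is exactly the equimarginal condition.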
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9515418410301208, "language": "en", "url": "https://nextgenedition.com/canadians-see-wage-gains-even-as-job-growth-slows-2/", "token_count": 1027, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0498046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c7bd2467-066f-4d47-b29a-23a8cdad2bd4>" }
In May, the Ontario government rolled out a wide-ranging labour plan which will increase the minimum wage to $15 per hour in 2019, as well as provide extra vacation time and equal pay for workers. In September, average hourly wages rose as a result of tightening in the labour market. This lesson plan will examine the implications of both economic events.

Appropriate Subject Area(s): Economics, labour economics.

Key Questions to Explore:
- How will a $15 minimum wage affect regions outside Toronto?
- What is the relationship between a minimum wage and an average wage?
- What age group will be most affected by the $15 minimum wage?
- What factors led to the wage increases in September?

Key terms: labour force, wage growth
Materials: a copy of the article

Introduction to lesson and task:

In September, the average hourly wage was up 2.2% year over year, after a prolonged period of slow wage growth. This growth is likely due to Canada's historically low unemployment rate, which has led to tight labour market conditions.

Potential impact of average hourly wage growth: An increase in hourly wages could lead to higher prices as Canadians start spending their additional discretionary income. If this occurs, there is a high chance that inflation in the Canadian economy will meet the Bank of Canada's 2% target, and as a result the Bank will increase its policy interest rate, which is currently 1%, before the end of 2017.

In May, Ontario announced its plan to increase the minimum hourly wage to $15 in 2019, and also to provide extra vacation time and equal pay for workers. A recent study conducted by the Fraser Institute concluded that the minimum wage hike would likely threaten jobs outside the Toronto area. This lesson plan will explore the findings of the Fraser Institute, as well as the impact faster wage growth could have on the broader Canadian economy.

Action (lesson plan and task):

- Ask your students to state the factors that led to increased wage growth in Canada.
Hint: Wage growth has largely been due to a tightening job market, with historically low unemployment.

- Ask your students to state the potential impacts of wage growth on the Canadian economy.
Hint: Answers could include any of the following:
  - An increase in Canadians' standard of living, as individuals will gain higher discretionary income which they can put towards either spending or saving.
  - Increased productivity, as employers will be incentivized to maintain efficiency.
  - An increase in inflation as a result of rising demand for goods and services.
  - An increase in interest rates by the Bank of Canada.

- Ask your students to think critically about some reasons why the Ontario government decided to increase the minimum wage to $15 per hour.
Hint: Answers could include the following:
  - To improve working conditions.
  - To increase the standard of living in Ontario.
  - To maintain fair workplace practices.

- Ask your students to state how a $15 minimum wage will affect regions outside Toronto.
Hint: The increase in the minimum wage will shrink the gap between average wages and minimum wages in regions outside Toronto, especially in Northern Ontario and regions within the rust belt. Generally, the closer the minimum wage in a given region is to the average wage, the more likely it is that fewer employment opportunities will be available. The $15 minimum wage could lead to the loss of 50,000 jobs outside Toronto.

- Ask your students to explain the relationship between the minimum wage and the average wage.

- Ask your students to state the age group that will be most affected by the $15 minimum wage.
Hint: Young adults (i.e. 16-24 year olds) are most exposed to the negative effects of the increase, as they are more likely to hold minimum-wage jobs.

- Ask your students to explain how the rise in the minimum wage will affect employers.
Hint: It will lead to an increase in employers' payroll costs and may affect their profitability.

- Ask your students to explain how employers are likely to respond to the rise in labour costs.
Hint: All things being equal, an increase in the minimum wage will lead to an increase in salaries and a reduction in profit for employers. In order to maintain profit, employers may explore the following courses of action:
  - A reduction in staff, either through firing or a hiring freeze.
  - Automation, that is, the process of utilizing machinery as a substitute for human labour.
  - Raising the prices of goods and services.

Consolidation of Learning:

- Ask your students to explain how automation could impact employees.
Hint: It could lead to job losses but also an increase in the productivity of employees.

- After completing this lesson plan your students should be able to understand the impact that an increase in the minimum wage, and faster wage growth generally, will have on the economy.

- Ask your students to explain why wage growth is a positive sign for the Canadian economy.
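The payroll-cost hint can be quantified with a quick back-of-the-envelope calculation. The employer below is hypothetical, and the $11.60 starting wage approximates Ontario's general minimum wage in late 2017; treat all figures as illustrative:

```python
# Hypothetical small employer: five staff working 40 hours/week at the
# minimum wage, before and after the planned increase to $15/hour.
staff, hours_per_week = 5, 40
old_wage, new_wage = 11.60, 15.00  # illustrative Ontario figures

weekly_increase = staff * hours_per_week * (new_wage - old_wage)
pct_raise = (new_wage - old_wage) / old_wage * 100

print(f"extra payroll: ${weekly_increase:,.2f}/week ({pct_raise:.1f}% wage increase)")
```

A roughly 30% jump in the wage bill for minimum-wage staff is the pressure that motivates the responses listed above: staff reductions, automation, or higher prices.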
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9583471417427063, "language": "en", "url": "https://www.ecosystemmarketplace.com/articles/verified-conservation-areas-br-a-real-estate-market-for-biodiversity/", "token_count": 2975, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.353515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:cf4f1430-c78f-4b41-9884-1c1c884f96bc>" }
A power company in Germany can use forest-carbon from Brazil to offset emissions because carbon offsets are standardized units, but an American city that damages the habitat of endangered species in Arizona has no such option – in part because habitat is as varied and localized as land itself. Frank Vorhies says VCAs are part of the solution.

21 August 2014 | There are markets for silver and there are markets for houses, and it doesn't take a genius to see the difference between the two: an ounce of silver is an ounce of silver, interchangeable with any other ounce of the same quality, but the value of a house or any piece of property can fluctuate with the color of the flooring. Carbon markets resemble silver markets because a ton of carbon dioxide has the same impact on the environment regardless of whether it comes from a smokestack in Germany or a forest fire in Brazil. That made it possible to create a transparent global marketplace designed to support sustainable development and identify the most efficient ways to reduce greenhouse gas emissions. Biodiversity markets, however, have always been local because habitat is often unique and irreplaceable. A road project that damages a bit of sage grouse habitat in the United States might be able to make good by restoring or preserving habitat of equal or greater environmental benefit in the same ecosystem, but even that approach has only a narrow band of effectiveness. "You can't offset an extinction," as Joshua Bishop of WWF Australia once said. As a result, most biodiversity banking is confined to the developed world, which has the resources, if not always the political will, to balance development with conservation. Most degradation, however, is taking place in the developing world, which has massive development needs and few resources for conservation.
That got Frank Vorhies thinking: While we can’t offset biodiversity loss in one part of the world by saving habitat in another, could we somehow introduce the elements of transparency and accountability that work so well in carbon into conservation? And if we do, might this free up more capital for proactively supporting environmentally valuable areas, regardless of their location? These questions, posed in 2008, launched an evolutionary process that drew on expertise from across the biodiversity spectrum and led to the formulation of something called “Verified Conservation Areas, which are areas with specific conservation needs that have been identified and specific conservation actions that have been defined. As envisioned, many will be areas that haven’t yet been degraded, but that are under some sort of threat that can be identified and then either avoided or minimized through a process that is audited and transparent. The areas and their action plans will be listed on the VCA Platform, much as houses are listed on a real estate board. Nearly 20 VCAs are currently being considered, and the first one is expected to be approved later this year. Real Estate and Habitat Vorhies, who set up the economics and business programs at the International Union for Conservation of Nature (IUCN), says that to understand VCAs, you have to look at the real estate market. “People will tell you what the going rate is for apartments to rent or to buy but each has got a different storyline, a different location, and that’s what biodiversity is like, he says. “Every bit of nature, every landscape on the planet, has a different set of issues and perspectives and legacies and threats and challenges. 
Intuitively, we all know this, and the conservation community has long funneled money into protected areas around the world, but that money hasn’t flowed in a standardized way that makes it possible to determine its impact, and it rarely finds it way to areas that are environmentally important but unprotected. Contrast this with carbon, where there are extensive rules both guidelines and methodologies that must be followed, starting with establishing a baseline to measure any changes over time, and where the targets are explicitly those areas that aren’t already protected by law, in the case of forest carbon. Where’s the Guidance? “Nobody’s providing practical guidance on area-based biodiversity assessment, says Vorhies, explaining that to improve the conservation status of areas, we need to know baselines on ecosystems and their services, species and their habitats, and both the conservation and sustainable use of an area’s biodiversity. “CI (Conservation International) produced a rapid biodiversity assessment tool, but it only looks at wild species, he explains. “CI, IUCN, FFI (Fauna & Flora International) and others are helping companies with biodiversity baselines, but these studies are generally not public. What’s missing, he says, are publicly-available tools for developing conservation baselines that a critical mass of people can agree on. 2008: Why Reinvent the Wheel? When the initiative first launched in 2008, the carbon markets were in full swing. The Clean Development Mechanism (CDM), the first global trading platform for environmental credits, was backed by the auspices of the United Nations, and Europe’s compliance emissions trading program meant that companies were eager to participate. “So the folks over in the biodiversity world were saying, Look at those guys in the carbon world they’re getting a stack of money. Why can’t we create a Green Development Mechanism (GDM) for biodiversity financing? 
Thus the idea of a GDM was born, but it was a name without structure; and, as Vorhies later learned, that name was as much of a hindrance as a help in securing finance. What’s in a Name? When he approached different countries and investors for support of the project, Vorhies encountered two types of people: those who liked the CDM and those who didn’t. On top of that, he found that both camps read too much into the acronym and, for better or worse, they both saw it as more akin to the CDM than it was. “So we had to change the name, he explains ruefully, “After the 10th Conference of the Parties to the Convention on Biodiversity (CBD) in 2010, we changed it to the Green Development Initiative, or GDI, to get rid of the CDM-GDM association because it was driving us nuts. 2010: Refining and Redefining That letter change effectively stopped all comparisons between the two, but the initial problem remained: what would the initiative stand for? All Vorhies knew at the time was that he didn’t want it to be like the CDM. “It was quite clear that it wasn’t a commodity market; biodiversity isn’t a commodity, he says. “The best market we could use was a property market to think of biodiversity as something that you would recognize, trade and indeed celebrate like you do in property management. With a property market, such as apartments, each location has unique attributes: some might be close to public transportation; others may have a pool on the rooftop; and others might have a view. But aside from these additional features, all apartments can be described in terms of size, number of bedrooms, and other constant features. Similarly, every landscape will have characteristics that can’t be replicated just as they will also have basic qualities, like size and ecosystem, which can be described anywhere around the world. Taken as a sum of these descriptors, every conservation hectare has a story and a price. 
This holistic approach led to another key difference between the GDI and CDM, at a time when the latter began to crash in the carbon world. The initiative wouldn’t be limited to offsets, although offsets could be one of many options in a developer’s landscape management plan. “The offset’s only there for when you’ve gotten to the point of irreparable damage and can’t do anything else, he explains. “But to get to that point, you have to do a whole lot of good things: like avoid, minimize, and restore. And that’s the stuff that needs to be recognized, celebrated and financed through making conservation visible. Good Deeds Unrewarded Vorhies spoke from experience, having previously consulted Yemen LNG, a natural gas company building a new harbor to export gas over a coral reef ecosystem. The company tried to minimize its impact, and it even contacted IUCN to review its decision to relocate the coral nearby, away from where the piers needed to be. Vorhies says they spent large sums on this innovative technique but received no recognition for their efforts. With nothing of value to show their shareholders and no external driver to conform to, the company couldn’t justify its costs. “Do you see the coral reefs? asked the company’s environment manager in 2011, explaining his conundrum. “No. Just leave them. We’ve now got to get on with our business. Vorhies believes that if the company had to do a performance report every year, and had an accountable action plan, that would at least give the environment manager an opportunity to fundraise inside of the company for a biodiversity budget. Indeed, they had already spent a large amount on relocation, and it would not take nearly as much to manage and monitor the conservation of the corals. The company and its investors, could also be recognized publically for their in-situ conservation efforts. 
2013: Visibility, Accountability and Marketability By now, Vorhies had a solid set of criteria for a biodiversity mechanism that he thought would work, but the GDI acronym didn’t quite capture it. “You try to do an elevator speech with the initials GDI and people say, that sounds really good but what is it? Thus, the Verified Conservation Area (VCA) Platform rose from its rejected predecessors to become the final name of the initiative for now and it came with the elevator pitch that fit the name. The elevator pitch is this: the VCA Platform will provide visibility, accountability and marketability to project areas, but the specific improvements are up to the project developer. A verified conservation area may then focus on carbon, water, or any other “benefit while, ideally, the central focus would be a cohesive landscape approach much as the landscapes approach that’s evolving in the carbon world, where carbon sequestration is seen as a proxy for good land management. But how do you create a methodology that’s applicable in any ecosystem? A Wing and a Toolkit Recognizing this challenge, the VCA Platform instead relies on making innovation as it goes by only requiring those involved with the project on the ground to have quantifiable metrics and present them publicly and transparently. Armed only with the standard and a basic toolkit approach, VCA hopes to develop best practice guidelines in this way. “When it comes to actually measuring performance, we don’t have any agreed metrics to do a baseline assessment, let alone performance measurements, Vorhies explained. Instead, the toolkit provides the basic building blocks for designing a management plan requiring a baseline assessment, SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis, and a concrete action plan. The latter goes onto the VCA Platform, with yearly updates including independent audits. Every year, an annual performance report will detail what exactly has happened in the area. 
Similar to an annual financial report, the audit will provide transparency about detailed activities in the area. Investors or donors could then go online and look at actual projects before contributing. 2014: Building up the Brand Despite the long journey from GDI to VCA Platform, the brand still needs greater recognition. For companies to buy into a new standard, they need assurance that the standard itself is credible. The VCA Platform doesn’t have that yet. Currently, the platform has started a pilot program and has a mandate from two government agency donors, the Swiss and the Dutch, to coalesce all of these ideas into a solid business plan for scaling up. This business plan is now being presented to potential investors in the platform itself: seed capital to establish a new marketplace for verified conservation. Already, there are a few protected areas (PAs) on the waiting list; even though those areas traditionally have a government mandate for conservation, they see the VCA as a way to state what they are delivering and as a way to raise funds. There are also areas on the other end of the spectrum: both private biodiversity restoration areas, including a rainforest in Brazil and a savannah wilderness in Mozambique, and projects linked to commodity supply chains or traditionally suspect sectors like mining and oil and gas. Yemen LNG, for example, has recently proposed to register its industrial harbor as a VCA. Regarding working with extractive industries, Vorhies says, “I don’t see myself why mining can’t be just as responsible as the tourism which we run in our national parks in the U.S., with all the roads and hiking trails and the campgrounds and facilities required for tourists. With mining, they could come into a conservation area for 20-30 years and leave an endowment; whereas with tourism, when do we get rid of these people and what do they leave behind?”
Similarly with agriculture, a field is often seen as having “destroyed” conservation areas, yet Vorhies remains optimistic about agriculture’s inclusion. This is evidenced by the growing use of sustainability standards for various commodities including coffee, cocoa, soy, and palm oil. The VCA Platform, however, brings a landscape-level focus to sustainable agriculture, which is of real interest to major food companies like Unilever. “The VCA in that sense is not about recognizing that we’ve totally damaged this part of the world and therefore must pay. It’s more like saying this is where we are today and this is what we can do to make it better. It isn’t a conservation story; it’s a process of improvement. That’s the idea. We’ve tried to move the language from compensation to good practice. If we want to conserve our planet, we need to create a market for delivering conservation.” Please see our Reprint Guidelines for details on republishing our articles.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9383441209793091, "language": "en", "url": "https://www.fortnightly.com/fortnightly/2018/06/beneficial-electrification-all-incomes", "token_count": 317, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.119140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0de4325f-9c87-4d48-af04-c86275c83c85>" }
Perspective from Cooperative Power Keith Dennis is Senior Director for Strategic Initiatives at the National Rural Electric Cooperative Association. He previously worked at the U.S. Department of Energy and White House Council on Environmental Quality, and in the private sector as a third-party verifier of energy projects. At NRECA, he addresses key energy efficiency issues in industry forums and political arenas on behalf of NRECA’s nine hundred national electric co-op members and their forty-two million customers. Consumers with low household incomes bear the heaviest burden of energy costs, yet are those who can least afford them. Traditional energy efficiency programs remain an important and essential way to save consumers money while improving health and the environment. However, according to the Lawrence Berkeley National Lab, traditional conservation programs targeted to low-income households cost more, at 14.2 cents per kilowatt-hour, as opposed to an average of 4.6 cents per kilowatt-hour across all sectors. This data raises the question: Is there a better way to reduce energy costs for low-income households while making progress towards environmental goals? Research suggests the answer may lie with new, strategic uses of electricity, known as beneficial electrification. There is increasingly strong agreement that in order to meet aggressive greenhouse gas reduction goals, vast numbers of consumers will need to switch from directly using fossil fuels (like fuel oil, gasoline, diesel, propane and natural gas) to electricity, which can be a low greenhouse gas power resource thanks to renewable energy and nuclear energy.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.8940014243125916, "language": "en", "url": "https://www.radicaltechnologies.co.in/blockchain-training-course-pune-with-certification/", "token_count": 1008, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.04052734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:22ce512e-7fe6-4cde-aa1f-1fe150bdfa9c>" }
A blockchain is a digitized, decentralized, public ledger of all cryptocurrency transactions. Constantly growing as ‘completed’ blocks (the most recent transactions) are recorded and added to it in chronological order, it allows market participants to keep track of digital currency transactions without central recordkeeping. Each node (a computer connected to the network) gets a copy of the blockchain, which is downloaded automatically. Master in Blockchain in 3 Months – Blockchain | Bitcoin | Cryptocurrency | Ethereum Based Smart Contract | Ethereum Developer | Solidity | Solidity Security | Smart Contract Development & Deployment | Hyperledger Duration : 3 Months – Weekends 2 Hours Daily Real-time projects, assignments and scenarios are part of this course. Installations, development, interview preparation, certification preparation, and the option to repeat sessions for up to 6 months are all attractions of this particular course. Trainer: Certified Blockchain Developer Want to be a future Blockchain developer? Introduction: Blockchain Certification Training will help you understand the underlying mechanisms of Bitcoin transaction systems, Ethereum, Hyperledger, smart contracts and Solidity. Together with learning to set up your own public/private blockchain environment, you’ll also master concepts like cryptography and cryptocurrency, blockchain networks, Bitcoin mining and security, Multichain, and developing smart contracts on the Ethereum and Hyperledger platforms. This Blockchain course is designed to introduce you to the concept of blockchain and explain the fundamentals of blockchain and Bitcoin, from beginner to advanced level. The course will provide complete knowledge of the structure and mechanism of blockchain. As a beginner, you will learn the importance of consensus in transactions, how transactions are stored on the blockchain, the history of Bitcoin and how to use Bitcoin. Furthermore, you will be taught about the Ethereum platform and its programming language.
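The chained, chronological block structure described above can be sketched in a few lines of Python. This is a toy illustration only (a real blockchain adds peer-to-peer networking, consensus and mining, none of which appears here), and all names and transactions are invented for the example:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (everything except its own hash field)."""
    payload = json.dumps(
        {k: block[k] for k in ("index", "transactions", "prev_hash")},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(index, transactions, prev_hash):
    """Bundle transactions into a block that points at its predecessor."""
    block = {"index": index, "transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    """Every block must hash correctly and link to the previous block's hash."""
    return all(
        b["hash"] == block_hash(b)
        and (i == 0 or b["prev_hash"] == chain[i - 1]["hash"])
        for i, b in enumerate(chain)
    )

# A tiny three-block ledger, grown in chronological order.
genesis = make_block(0, ["genesis"], "0" * 64)
block1 = make_block(1, ["alice -> bob: 5"], genesis["hash"])
block2 = make_block(2, ["bob -> carol: 2"], block1["hash"])
ledger = [genesis, block1, block2]

print(chain_is_valid(ledger))                   # True
block1["transactions"] = ["alice -> bob: 500"]  # try to rewrite history...
print(chain_is_valid(ledger))                   # False: tampering is detectable
```

Because each block embeds the hash of its predecessor, editing any historical transaction invalidates every later link — which is why every node can hold its own copy and still agree on the same history.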
You will set up your own private blockchain environment using Ethereum. In addition, you will develop a smart contract on a private Ethereum blockchain and deploy the contract from both web and console. Subsequently, you will learn to deploy a business network using Hyperledger Composer, and to set up a private blockchain using the Multichain platform. Towards the end of the course we will discuss various practical use cases of blockchain to enhance your learning experience. What am I going to get from this course? After completing this course, you should be able to: Blockchain Certification Training can be beneficial for the profiles mentioned below; however, anyone with the zeal to learn new technology can take up the course. Students and professionals aspiring to make a career in blockchain technology should opt for the course. The window into any blockchain network is the node. This course teaches students how to run a node and how to install, configure and use the most common Ethereum clients. The toolkit to aid development of decentralised applications is growing. This course introduces the two most currently relevant tools and covers everything from installation and setup to custom configuration and scripting. The most prominent language used for the development of smart contracts is Solidity. The course covers all aspects, from value types and inheritance to more exotic features and optimisation. DataQubez University creates meaningful Blockchain certifications that are recognized in the industry as a confident measure of qualified, capable Blockchain experts. How do we accomplish that mission? DataQubez certifications are exclusively hands-on, performance-based exams that require you to complete a set of tasks. Demonstrate your expertise with the most sought-after technical skills. Blockchain success requires professionals who can prove their mastery of the tools and techniques of the Blockchain stack.
However, experts predict a major shortage of advanced development skills over the next few years. At DataQubez, we’re drawing on our industry leadership and an early corpus of real-world experience to address the Blockchain talent gap. How to Become a Certified Blockchain Developer Certification Code – DQCP – 703 Certification Description – DataQubez Certified Professional Blockchain Developer For exam registration, click here: The trainer for the Blockchain course has 11 years of experience in these technologies and is an industry expert. He is an IBM and DataQubez Certified Blockchain Developer, and is also a certified data scientist from the University of Chicago. In the classroom we solve real-time, real-world problems, and we push students to create at least a demo model and push their code to Git. At Radical Technologies, we believe that the best way to learn job skills is from industry professionals. So, we are building an alternate higher education system, where you can learn job skills from industry experts and get certified by companies. We complete the course in classroom mode, with 85% practical scenarios and complete hands-on work on each and every point of the course, and if a student faces any issue in the future, he/she can also join the next batch. These courses are delivered through a live interactive classroom platform. Blockchain Senior Solution Architecture
{ "dump": "CC-MAIN-2021-17", "language_score": 0.961398184299469, "language": "en", "url": "https://www.splitsuit.com/poker-elasticity-vs-inelasticity", "token_count": 1806, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.008544921875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e0ce0dcf-92c9-431c-b784-1de39deea687>" }
Many economic concepts carry over nicely to poker, and one of the big ones I talk about often is “elasticity”. Understanding what an elastic range is, what an inelastic range is, and how to exploit players with different levels of elasticity can give you a huge edge in the bet sizing war. This concept is so important I actually have a section dedicated to it in chapter 9 of my full ring poker book. In this video I show you what being elastic is and why fish tend to be inelastic when facing bet sizes. We also look at an example and show how we can exploit a player with an inelastic range by making an overbet shove. As always, if you prefer reading, the script for the video can be found below. Enjoy! Hello, and welcome to today’s Quick Plays video on Elasticity in poker. Elasticity is an economic concept that measures how changing one economic variable affects others. This concept carries over nicely to poker when looking at bet sizing and continuance frequencies. In this video you’ll learn what elastic and inelastic ranges are, how they are useful, and how this knowledge can help you exploit players in the future. First off, what is elasticity? When analyzed in the context of pricing, elasticity measures how a change in pricing will influence the demand. Put another way, if we change the price of a product, how will sales and profits change? To ensure this doesn’t become a brutal economics lecture let’s take a very simple example: You sell rubber ducks. You use your sales data and determine that if you sell each rubber duck at $2 you will sell 1 million ducks. Let’s plot that on a graph with: - X axis: number of sales - Y axis: price We then consider what would happen if we raise the price to $50, which of course is significantly larger. We think by raising the price to $50 we would only sell 50k ducks. If we do the math here, at $2 we would make $2M in revenue, and at $50 we would make $2.5M in revenue.
Given these numbers we’d be better off selling our ducks at $50 each…this is basic economics. Now in this example our customers are elastic, meaning that they would purchase differently as the price changes. If our customers were inelastic it would mean that they would purchase the same amount regardless of the price. So they would buy, for example, 1M units whether the price were $2 or $50. If our customers are inelastic then we should simply price our product as high as possible. I’m sure you can think of a company or two that does this. Ok, now I’m sure you are really thankful for the mini-economics lesson, but how does this all apply to poker? Well instead of charting the axes of price and sales, could we change them to bet size and number of calls? They are essentially saying the same exact thing, just with a slightly different nomenclature. If we view poker as a business, when we choose a bet size we are selling a product. We are either vying for our opponent to continue (when we have the best hand), or vying for our opponent to fold (when we have the worst hand). It’s a bit over-simplistic, but it’s a powerful way of visualizing what we do on the tables. Think about yourself for a moment. Say you are on the river and the pot is $50 with effective stacks of $200. If you face a bet from villain, would you call a $2 bet more often than a $200 bet? I know I sure would. I’m going to call the $2 bet much more often and with a much wider range than I would a $200 bet. A large chunk of this reason is because against a $2 bet we are getting 26:1 and against a $200 bet we are only getting 1.25:1 on a call. But regardless we are very likely to be elastic in this situation and give different prices different frequencies of action. What about a fishy opponent? A fish is more likely to be inelastic. They make decisions based upon absolute strength rather than good players like ourselves who make decisions based upon relative strength.
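The arithmetic in the duck example and the pot-odds figures above can be checked with a short sketch (all numbers are the hypothetical ones from the text):

```python
def revenue(price, units_sold):
    """Revenue at a given price point."""
    return price * units_sold

# Elastic customers: raising the price from $2 to $50 cuts sales
# from 1,000,000 ducks to 50,000, yet total revenue still rises.
print(revenue(2, 1_000_000))   # 2000000  -> $2M
print(revenue(50, 50_000))     # 2500000  -> $2.5M

def pot_odds(pot, bet):
    """Odds offered on a call: (pot + bet) to bet."""
    return (pot + bet) / bet

print(pot_odds(50, 2))    # 26.0  -> the "26:1" from the text
print(pot_odds(50, 200))  # 1.25  -> the "1.25:1" from the text
```

An inelastic caller ignores the second function entirely, which is exactly the mistake a bet-sizing strategy can exploit.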
Absolute hand strength is like saying “TPTK is always good” and relative hand strength would say “how strong is my TPTK relative to my opponent’s range?” Fish also don’t use poker math, whereas good players do. So a fish isn’t going to look at pot odds when he faces different bet sizes. He will simply make decisions based upon his absolute hand strength and/or feelings…and thus is more likely to be inelastic and call regardless of the actual bet size. Good players don’t do that and thus good players tend to be elastic in general. That being said, there are some common exceptions that we should discuss quickly. First is bet sizing in the absolute sense. If the pot were $150 it wouldn’t be uncommon to see a bet size of $100 (a typical 2/3 pot sized bet). However, some players are scared of that monetary threshold, so the absolute value of the bet size influences their decision. Really, there should be no difference between facing a $100 bet into a $150 pot or a $20 bet into a $30 pot. They both offer the same odds and the bet is 2/3 relative to the pot. Many players, especially in small live games like $1/$2, get irrationally fearful of $100+ bets, so while they may be inelastic to any bet size up to $99, once the size is $100+ they begin to give action at a much lower frequency. This is an example of bad players making decisions based upon absolute dollar value rather than relative pot odds. Another common exception is elastic players who become inelastic due to absolute hand strength. We all know the bad player who can’t fold top pair to any bets or sizes because his absolute hand strength is too strong in his eyes to ever fold. Sometimes elastic players do this same thing, although it’s oftentimes with hands like flushes and full houses. Take an example like this: Here we open AA from EP, the CO calls and we see a HU flop of A76. We bet, he calls. Turn is a 6s, we continue value betting and he calls.
The river is a Th, filling both the straight draw and flush draw and we decide we are going to bet. But what do we bet? If he were inelastic with straights or better then we should choose a very large bet or even shove to punish him for the times he can’t fold the absolute strength of his hand. Sure straights and flushes are usually the best hand, but if they are relatively weak compared to villain’s range, then they can be folded. But lesser players don’t think like that. Lesser players think “well, I have X hand and thus I can’t fold”. When really the thought should be “well, I have X hand, but how does that compare to my opponent’s range and the math?” If the CO is inelastic with straights or better, make a big bet…even consider an overbet or possibly a shove. If the CO is elastic and shoving would only get you looked up by exact quads and straight flushes, why bother shoving? As always, good players ask themselves what their bets would accomplish and choose lines that exploit ranges and inelastic mistakes! The concept of elasticity is very powerful when choosing your exact sizes at the table. Consider which players are elastic versus inelastic and choose sizing strategies that exploit them appropriately. Also consider your own elasticity levels, if you may be inelastic with absolute hand strengths, and understand real factors when deciding how to react to certain bet sizes. As a final note, you can also use the concept of elasticity when bluffing. For instance, if a player is inelastic and would fold regardless, why not choose a smaller bluff size? Similarly, if a player is very elastic, why not size your bluff large enough to create those extra folds? A little bit of thinking and reflection on this concept can revolutionize your bet sizing strategy and help you find extra spots to interject edge. If you have any questions please don’t hesitate to ask, otherwise good luck and happy grinding!
{ "dump": "CC-MAIN-2021-17", "language_score": 0.975848376750946, "language": "en", "url": "https://www.twig-world.com/film/fractional-reserve-banking-1802/", "token_count": 454, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.236328125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:a792bfa9-6d90-4c81-8ec4-a0a2490dcab6>" }
Before banks, people held most of their wealth in gold. Then, when banks took over, they followed suit. Philip Shaw, Chief Economist, Investec – "Central banks tend to keep gold in their reserves because of the traditional association between gold and the store of value." But today, banks don't just depend on gold reserves to fund their business – they use your money as well. When you deposit money into a bank, the banks don't simply store it. They lend it out. At one time, banks would only lend out money to the value of the gold reserves that they held. But today, banks can lend out far more money than they hold back. The reserve that they must hold is related to their size. In America, banks with deposits of up to a limit of $71 million must hold back 3% of their funds. These smaller banks are required to keep a little over $2 million in reserve. That's a reserve ratio of three dollars for every hundred deposited. But if a bank has more than $71 million of deposits, it must hold 10% in reserve. So, a bank receiving 200 million dollars keeps back every tenth dollar, or $20 million in cash. Fractional reserve banking This is known as fractional reserve banking. It allows a controlled increase of money in circulation, which is good for a country's economy. However, fractional reserve banking has a flaw. If you want to withdraw your money, the bank might not have it. In times of economic crisis, lots of customers might want to withdraw their money all at once, as happened to British bank Northern Rock in 2007. With a bank holding only a fraction of its deposits, demand could outstrip its reserves. Ultimately, the bank could collapse – and people could lose their savings. Which is why some people still choose to put their trust in gold instead. 
Sandra Conway, Managing Director, ATS Bullion – "It has sort of held its value over the years really, and I think people are starting to realise that when the bank has your money, they might not actually physically have it if there's a problem."
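The tiered reserve arithmetic described above ("a little over $2 million" on $71 million, versus "every tenth dollar" on $200 million) can be expressed as a small function. The $71 million threshold and the 3%/10% rates are the figures given in the text; actual Federal Reserve rules are more detailed and are revised over time:

```python
def required_reserve(deposits, threshold=71_000_000,
                     low_rate=0.03, high_rate=0.10):
    """Cash a bank must hold back under the simplified two-tier rule."""
    rate = low_rate if deposits <= threshold else high_rate
    return deposits * rate

# A smaller bank at the $71 million limit keeps about 3%:
print(required_reserve(71_000_000))    # about $2.13 million
# A larger bank with $200 million of deposits keeps every tenth dollar:
print(required_reserve(200_000_000))   # $20 million
```

The flaw the text describes follows directly: the other 90–97% of deposits is lent out, so simultaneous withdrawals can exceed what the bank actually holds.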
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9271535277366638, "language": "en", "url": "http://negative-emissions.info/implications-of-carbon-dioxide-removal-for-the-sustainable-development-goals/", "token_count": 1441, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.041015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:bd995ad1-f528-4337-8e80-78a60b4b0e7e>" }
This blog is based on a new research paper published in the journal Climate Policy. The 17 Sustainable Development Goals (SDGs), adopted in 2015 by all United Nations Member States, are the best expression of a commonly desired future for humanity. Action to fight climate change (SDG13) should contribute to broader societal objectives, and thus align with the other SDGs. In the past, some climate change mitigation efforts have had negative impacts on society and the local environment. Mitigation practitioners, policymakers and civil society have learned from these experiences, and emissions reductions efforts now increasingly provide multiple sustainable development co-benefits. When it comes to relatively novel mitigation approaches that seek to remove CO₂ from the atmosphere (Carbon Dioxide Removal, or CDR), experience regarding potential co-benefits and negative impacts is limited. Implications – both good and bad – may arise both directly and indirectly because of insufficiently-understood operational level outcomes. It is therefore very difficult to make informed choices and decisions that mobilize CDR in a way that supports – rather than undermines – sustainable development overall. In a new research paper published in Climate Policy, we mapped the potential implications that the scientific literature has identified to date, and where knowledge gaps remain. More than 30 experts from around the world helped in this endeavour. Our article aims to trigger interest in much more rigorous and specific examination of the possible implications of large-scale CDR implementation, informing the design of policy instruments aiming at CDR promotion. In the following, we address the importance of sound policy design, early action, and careful ‘on-the-job learning’ involving all stakeholders. We further outline the responsibilities of national governments and the international community, and underline the need for collaborative research and policy impact assessments. 
Implications of specific CDR options depend on policy design There is a multitude of different CDR types, ranging from well-known practices in agriculture or forestry (that enhance natural carbon reservoirs) to high-tech equipment which directly removes CO₂ from the air and stores it underground. The implications of these different approaches vary significantly given their different costs and requirements for land, water, energy, and labour. Beyond technological differences, potential implications vary based on the way CDR options are introduced, funded and regulated in different countries and regions. Whether impacts are positive or negative depends on physical, social, economic, and political circumstances. None of the CDR options is universally ‘good’ or ‘bad’. Our table of possible effects of the different CDR options for each SDG can serve as a tool for guiding choices. Unfortunately, there are no simple answers when it comes to anticipating the impacts of real-world CDR implementation. The upside is that good governance and sound policy design may go a long way to generate benefits, based on an understanding of specific local circumstances across social, economic, cultural, political and environmental dimensions. To reliably result in a contribution to SDG 13 (urgent action on climate change and its impacts), there is a need for generally applicable international rules on how CO₂ removal is measured, reported and verified, and CDR needs to be robustly accounted for in national reports under the Paris Agreement. This is crucial to ensure that CDR results are consistently and transparently communicated and thus credible in the eye of the stakeholders. Careful but determined steps to gradually mobilize CDR Immediate decisions for drastic emissions reductions are a precondition for stabilization of the global climate in the second half of the century. 
Each passing day of insufficient emissions reductions increases our reliance on CDR against the goal of keeping global temperature rise under 2 °C or 1.5 °C. Early applied learning and iterative improvement may allow for the gradual scaling of CDR in a socially accountable and robust manner, with fewer uncertainties about potential harms and co-benefits. Such a gradual development seems crucial: public acceptance, participation and broad-based decision-making are impossible in case of a late start and precipitated scale-up. Careful, ‘on-the-job’ learning involving all stakeholders is necessary. Domestic and international responsibilities for robust CDR policies Much of the responsibility for sound CDR support policy design and implementation lies with national or even sub-national governments. International collaboration, however, is key to both empower and encourage positive synergistic outcomes. The Kyoto Protocol’s flexibility mechanisms offer an example of where both national priorities and international guidance influenced which activities to pursue. It was up to each recipient country to judge the sustainability performance of proposed mitigation projects. While this approach initially seemed insufficient, experience and voluntary guidance for these judgments grew over time. Under the Paris Agreement and SDG 17 (revitalize the global partnership for sustainable development), international cooperation is expected to play an increasingly important role. However, its role is expected to change: less and less geared toward funding relative reductions in emissions and more toward funding CDR – in line with achieving net-zero greenhouse gas emissions. To achieve long-term credibility transparent assessment criteria, articulation and procedures for judging the performance of specific CDR policies, programs or projects are urgently needed. 
International cooperation agencies, climate finance providers, and CDR practitioners would be well advised to collaboratively work toward the establishment of such criteria and procedures. Transdisciplinary research and policy impact assessments Countries will increasingly need to demonstrate how their mitigation policies and actions align with their pledges to achieve net-zero emissions. International assessment principles or metrics for CDR policies could help to evaluate their expected co-benefits or risks consistently across regions, differentiated by the national circumstances. Previous experience in climate governance (e.g. for land-use, carbon capture and storage in industry, biofuels, and international carbon markets) offers important lessons, but only imperfectly applies to CDR policy proposals. Academic research and policy impact assessments should therefore work hand in hand to advance theoretical and practical understanding, drawing on experiences from widespread pilot activities. Such collaboration will help in understanding the relative contribution of CDR for climate action and sustainable development in general, and will become increasingly nuanced and locally rooted. We hope our article offers a starting point for this endeavour. Matthias Honegger is Senior Research Associate with Perspectives Climate Research and the Institute for Advanced Sustainability Studies, and PhD candidate at Utrecht University. Axel Michaelowa is Senior Founding Partner at Perspectives and researcher at the University of Zurich. Axel was a lead author of the chapter on international agreements in the 5th Assessment Report (AR) of the Intergovernmental Panel on Climate Change (IPCC) and wrote on mitigation policies in the 4th AR. Joyashree Roy is Bangabandhu Chair Professor at the Asian Institute of Technology, Thailand and is on lien from the Department of Economics, Jadavpur University. 
Her many roles include Coordinating Lead Author of the IPCC’s AR4, AR5, AR6 and Special Report on Global Warming of 1.5 °C.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.8629785776138306, "language": "en", "url": "https://ceopedia.org/index.php/Domestic_demand", "token_count": 598, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0732421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f228ba1b-fab5-49f5-b052-d7aa92b26a54>" }
Domestic demand – (also internal demand) an economic term that refers to the total quantity of money that is spent on products and services by the firms, people and government within a specific country, or that would be spent if the services and manufactures were accessible. Demand from within a particular country, not from abroad. - Final domestic demand is private consumption plus gross fixed investment plus government consumption. Total domestic demand is the final domestic demand plus stock building. - Economists sometimes refer to total final expenditure. This is the final domestic demand plus exports of manufactures and services (G. Tsagkarakis, 2015). Factors that impact demand The demand for home goods depends on three dimensions. The first is world consumption, which affects not only the scale of external demand for home produce but also domestic demand and consumption. The second variable is the world relative price of home manufactures. This relative price affects foreign demand for home manufactures and can also affect domestic demand, depending on the real exchange rate, which is the third factor. Note that, given the world relative price of home commodities, the real exchange rate may fall or rise (R. Chang and L.A.V. Catão 2010, pages 18 and 19).
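The definitions above translate directly into arithmetic. Here is a minimal sketch with made-up illustrative figures (in billions of currency units); the formulas follow the definitions given in this entry:

```python
def final_domestic_demand(private_consumption, fixed_investment, gov_consumption):
    """Final domestic demand = private consumption + gross fixed investment
    + government consumption."""
    return private_consumption + fixed_investment + gov_consumption

def total_domestic_demand(final_dd, stock_building):
    """Total domestic demand adds stock building (inventory change)."""
    return final_dd + stock_building

def total_final_expenditure(final_dd, exports):
    """Per the definition above: final domestic demand plus exports."""
    return final_dd + exports

fdd = final_domestic_demand(1200, 400, 300)  # 1900
tdd = total_domestic_demand(fdd, 50)         # 1950
tfe = total_final_expenditure(fdd, 600)      # 2500
print(fdd, tdd, tfe)  # 1900 1950 2500
```

The figures are purely illustrative; in practice each component comes from a country's national accounts.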
Ribas 1999 page 7). - Bakker B. B., Gulde A. M., (2010) The Credit Boom in the EU New Member States: Bad Luck or Bad Policies?, International Monetary Fund - Catao L.A.V., Change R. (2010), World Food Prices and Monetary Policy, International Monetary Fund - Clark P. B., Bayoumi T., Bartolini L., (1994), Exchange Rates and Economic Fundamentals: A Framework for Analysis, International Monetary Fund - Kalter E., Ribas A. P. (1999) The 1994 Mexican Economic Crisis: The Role of Government Expenditure and Relative Prices , International Monetary Fund, Buenos Aires - Soares Esteves P., Rua A. (2013), Is There a Role for Domestic Demand Pressure on Export Performance?, European Central Bank - Tsagkarakis G. (2015), Domestic Demand and Network Management in a User-inclusive Electrical Load Modelling Framework, University of Edinburgh, Edinburgh Author: Edyta Pach
creditors (accounts payable)

The money owed to individuals or firms because they have supplied goods, services or raw materials for which they have not yet been paid (trade creditors), or because they have made LOANS. Amounts falling due for payment within one year are counted as part of a company's CURRENT LIABILITIES in its BALANCE SHEET, while amounts falling due after more than one year appear as part of long-term liabilities.

Some creditors, called secured creditors, are offered collateral or security for their loans by means of a fixed charge on a specific asset owned by a debtor, which they could legally claim in the event of default on the loan. Other secured creditors are offered security by means of a 'floating charge' on the debtor's assets, which would offer them priority in claiming the proceeds from the sale of these assets in the event of default. Unsecured creditors such as trade creditors have less security in the event of default.

See DEBTORS (ACCOUNTS RECEIVABLE), CREDIT, CREDITORS RATIO.
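The one-year cut-off described in the entry can be sketched as a simple classification. The amounts and the `months_until_due` field below are hypothetical, purely for illustration:

```python
# Split amounts owed to creditors into current and long-term liabilities
# using the one-year rule described in the entry. Data are hypothetical.
creditors = [
    {"name": "trade creditor A", "amount": 12_000, "months_until_due": 2},
    {"name": "bank loan",        "amount": 80_000, "months_until_due": 36},
    {"name": "trade creditor B", "amount": 5_500,  "months_until_due": 11},
]

# Due within one year -> current liabilities; later -> long-term liabilities.
current_liabilities = sum(c["amount"] for c in creditors
                          if c["months_until_due"] <= 12)
long_term_liabilities = sum(c["amount"] for c in creditors
                            if c["months_until_due"] > 12)

print(current_liabilities)    # 17500
print(long_term_liabilities)  # 80000
```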
Topic: General Studies 2:
- Bilateral, regional and global groupings and agreements involving India and/or affecting India's interests.
- Effect of policies and politics of developed and developing countries on India's interests.

COVID-19: Opportunity for India to deepen its engagement with Africa

Context: Africa Day is observed every year on May 25 to commemorate the founding of the Organisation of African Unity (now known as the African Union). India has been closely associated with it on account of its shared colonial past and rich contemporary ties.

Significance of Africa
- Africa's rich natural resources become important in light of a growing global population.
- Trade and investment opportunities, including in energy, mining, infrastructure and connectivity.
- Long-term economic potential due to a huge market and rising purchasing power.
- The youthful demography of the region provides much-needed human resources.
- Political significance: Africa as a bloc of 54 countries in multilateral organisations can play a decisive role in international politics.

Impact of the COVID-19 pandemic on Africa
- Recession: The COVID-19 outbreak has sparked off the Sub-Saharan Africa (SSA) region's first recession in 25 years.
- Unemployment: Growth is expected to plummet to between -2.1 and -5.1 per cent in 2020, from a modest 2.4 per cent in 2019, which leads to more job losses.
- Deepens health crisis: With high rates of HIV, malaria, diabetes, hypertension and malnourishment prevalent in Africa, the COVID-19 pandemic will further deepen the health and economic crisis.
- Impact on the economic model: The steep decline in commodity prices has spelt disaster for the commodity-dependent economies of Nigeria, Zambia and Angola.
- Possibility of increased public debt: According to the World Bank, the SSA region paid $35.8 billion in total debt service in 2018, 2.1% of regional GDP. This figure is set to increase due to falling revenues and the precarious fiscal position of African nations.
- Forced to seek aid from the international community: Together, African countries have sought a $100 billion rescue package, including a $44 billion waiver of interest payments by the world's 20 largest economies.

India-Africa relationship
- India-Africa trade reached $62 billion in 2018 compared to $39 billion during 2009-10.
- After South Asia, Africa is the second-largest recipient of Indian overseas assistance, with Lines of Credit (LOC) worth nearly $10 billion (42% of the total) spread over 100 projects in 41 countries.
- 40% of all training and capacity-building slots under the ITEC programme have traditionally been reserved for Africa.
- Approximately 6,000 Indian soldiers are deployed in UN peacekeeping missions in five conflict zones in Africa.
- To develop closer relations, India launched the first-ever India-Africa Defence Ministers conclave in February 2020 on the margins of Defence Expo 2020.
- India provides about 50,000 scholarships to African students each year.

In the wake of the pandemic, what can India do to improve its relationship with Africa?
- China's engagement with Africa is huge (annual trade of roughly $208 billion) but is increasingly regarded as predatory and exploitative (for example, defective PPE gear supplied by China during the pandemic). This provides an opportunity for India to increase its strategic space in Africa.
- India could consider structuring a series of virtual summits with African leaders that could provide a platform for a cooperative response to the pandemic.
- The Aarogya Setu app and the E-Gram Swaraj app for rural areas, used for mapping COVID-19, are technological achievements that could be shared with Africa.
- Since the movement of African students to India for higher education has been disrupted, India may expand the e-VidyaBharti (tele-education) project to establish an India-Africa Virtual University.
- India could also create a new fund for Africa and adapt its grant-in-aid assistance to reflect current priorities.
- India could direct new investment projects by Indian entrepreneurs in Africa, especially in the pharmaceutical and healthcare sectors.
- Quad Plus – US, India, Japan and Australia – can exchange views and propose cooperation with select African countries abutting the Indian Ocean.

The pandemic is a colossal challenge, but it may create fresh opportunities to bring India and Africa closer together.

Connecting the dots:
- European Union
- Asia-Africa Growth Corridor
The word bond often conjures up images of someone paying a bondsman to get out of jail, or an investment strategy. Bonds also apply to business situations, and people in the business world can use bank bonds, also called performance or surety bonds, to protect themselves from financial loss. Bank guarantees refer to another type of protection in the business sector. While bonds and guarantees offer similar features, it's imperative to understand key differences between the two.

Definition of Bank Bond

Bank, surety or performance bonds involve an agreement between an owner, a contractor and the institution that issues the bond. Contractors may agree to complete a project to the owner's satisfaction. However, the contractor may stop the project before completion or take the owner's money and disappear. Owners can protect themselves from such mishaps by acquiring a bank bond. This is a performance bond, and this type of bond is different from investment bonds obtained from a bank. With a performance bond, the bank compensates owners for their financial loss if the contractor doesn't fulfill his obligation.

Definition of Bank Guarantee

Bank guarantees, also called letters of credit, involve a bank or other financial institution promising or guaranteeing cash to owners if a project isn't completed. If the project owner demands cash, the bank pays this cash using the letter of credit, and the contractor then acquires an interest-bearing loan and repays the lending institution.

While bank bonds and bank guarantees have similarities, differences include how an institution qualifies a contractor for coverage. Before issuing a surety, bank or performance bond, the bank closely evaluates the contractor's performance record, financial history, workload and experience to assess whether the contractor is capable of completing the project.
Banks require collateral with a letter of credit, and they assess the condition of such collateral to ensure that its worth supports the monetary value of the guarantee.

Bonds and letters of credit also affect a contractor's borrowing capacity differently. Because bank bonds are based on a contractor's strong credit record and do not require collateral, the contractor's financial standing can strengthen or improve. Letters of credit are quite the opposite: these guarantees are viewed as a liability on a contractor's financial statement, and such a letter can negatively impact the contractor's ability to acquire future funding.

Valencia Higuera is a freelance writer from Chesapeake, Virginia. She has contributed content to print publications and online publications such as Sidestep.com, AOL Travel, Work.com and ABC Loan Guide. Higuera primarily works as a personal finance, travel and medical writer. She holds a Bachelor of Arts degree in English/journalism from Old Dominion University.
recapture

- the act of taking something back
- a legal seizure by the government of profits beyond a fixed amount
- capture again; "recapture the escaped prisoner"
- When soldiers recapture an area of land or a place, they gain control of it again from an opposing army who had taken it from them. Recapture is also a noun: "an offensive to be launched for the recapture of the city"
- A provision in a contract that allows one party to recover (recapture) some degree of possession of an asset, such as a share of the profits derived from some property
- Amount of depreciation or section 179 deduction that must be reported as ordinary income when property is sold at a gain
- take back by force, as after a battle; "The military forces managed to recapture the fort"
- A clause in a lease agreement providing for the lessor's retaking or recovering possession of the premises, usually by cancellation of the lease under certain conditions
- The act of retaking or recovering by capture; especially, the retaking of a prize or goods from a captor
- To recapture a person or animal which has escaped from somewhere means to catch them again: "Police have recaptured Alan Lord, who escaped from a police cell in Bolton." Recapture is also a noun: "the recapture of a renegade police chief in Panama"
- When you recapture something such as an experience, emotion, or a quality that you had in the past, you experience it again. When something recaptures an experience for you, it makes you remember it: "He couldn't recapture the form he'd shown in getting to the semi-final"
- The inclusion of a previously deducted or excluded amount in gross income or tax liability. Recapture may be applicable to accelerated depreciation, cost recovery, amortization, and various credits
- A tax policy which ensures that back taxes are paid on the true market value of land when it is developed
- The NMTC will be recaptured if, at any time during the 7-year period following a qualified equity investment
- That portion of the gain from the sale of real estate that is taxed at ordinary income tax rates. Calculated as the difference between the accelerated depreciation taken and the straight-line depreciation that would have been allowed
- take up anew; "The author recaptures an old idea here"
- experience anew; "She could not recapture that feeling of happiness"
- The undoing of a tax benefit if certain requirements are not met in future years. For example: (1) The low-income housing credit may be recaptured or added back to tax if the credit property ceases to be used as low-income housing for a minimum number of years. (2) The alimony deduction may be retroactively lost or recaptured if payments do not continue at the requisite level for a minimum number of years
The Stockholm Institute has released a provocative report that examines the longer-term implications of relying on natural gas as a "bridge technology" to a lower-carbon economy. The environmental think tank's report addresses the question of whether, by turning to natural gas to replace coal-fired generation in the near term, we might be limiting our ability to minimize carbon emissions further into the future.

At a fundamental level, the carbon advantage that natural gas offers over coal is immediate and clear. Natural gas has half the carbon content of coal per unit of energy. In parallel, natural gas combined cycle plants are more efficient than even the most modern supercritical coal generators. As "Natural Gas: Guardrails for a Potential Climate Bridge" points out, the best CCGT plants have an energy conversion efficiency of 60%, versus 44% tops for supercritical coal, giving gas an additional advantage in terms of carbon emissions. In the United States, where gas undercuts coal on price, the switch to the cleaner fuel resulted in a 12% drop in power sector emissions from 2007 to 2012.

But the report highlights a downside in this fuel revolution, which is the potentially unhealthy relationship between CCGT and zero-carbon generation:

"For the United States, natural gas substitution for non-fossil fuel energy (mainly nuclear, biomass and wind) is greater than for coal, and similar to that for coal plus oil substitution. Similar results are reported for Africa, with some differences in the mix of non-fossil fuels replaced by natural gas (especially wind post-2030)."

In other words, natural gas doesn't just displace coal; it acts as a substitute for cleaner forms of generation to a very significant degree. This effect isn't universal, and in China natural gas is expected to substitute largely for coal, with a much smaller impact on wind power. Nevertheless, in the longer term natural gas does not appear to be a carbon panacea.
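The two figures quoted above (half the carbon content per unit of energy; 60% vs. 44% conversion efficiency) combine multiplicatively, which is why best-case gas generation emits well under half as much CO2 per unit of electricity as coal. A rough sketch:

```python
# Relative CO2 emissions per unit of electricity, combining fuel carbon
# content with plant efficiency. Coal's carbon content is normalized to 1.
coal_carbon, gas_carbon = 1.0, 0.5             # per unit of primary energy
coal_efficiency, gas_efficiency = 0.44, 0.60   # supercritical coal vs. CCGT

# Emissions per unit of electricity = carbon per unit of fuel energy,
# divided by the fraction of fuel energy converted to electricity.
coal_emissions = coal_carbon / coal_efficiency  # ~2.27
gas_emissions = gas_carbon / gas_efficiency     # ~0.83

ratio = gas_emissions / coal_emissions
print(round(ratio, 2))  # 0.37: best-case CCGT emits roughly a third
                        # of supercritical coal's CO2 per unit generated
```

This back-of-the-envelope comparison ignores upstream methane leakage, which the article flags later as critical to the overall greenhouse gas accounting.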
In countries where natural gas is cheap, namely the US, lower energy prices may drive higher energy demand, a phenomenon known as the "scale effect":

"Several studies using energy-economy models to reflect interactions between energy supplies, prices and consumption have suggested that more abundant, inexpensive gas supplies would lead to increased energy consumption, partly or fully offsetting the GHG benefits from substitution of other fuels."

Stockholm quotes a number of published reports from the likes of the EIA and others to arrive at a blunt, sobering conclusion: "As a whole… more abundant and less expensive natural gas supplies are, on their own, unlikely to deliver a significant climate benefit."

So, when it comes to the embrace of natural gas, are we damned if we do, damned if we don't? Not necessarily. Here the potential role of strong climate policy becomes clear. Unencumbered, the utility sector might build ever more natural gas generation based purely on natural gas' strong economics. That urge, however, can be balanced through the passage of stronger renewable portfolio standards to ensure that more renewables make it onto the grid. Carbon pricing will limit the cost advantage of natural gas and play to the advantage of nuclear power. Policy can also help to ensure that much of the natural gas supply is directed toward power generation, where its climate impact is greatest, rather than to the transportation sector, where natural gas' advantages over gasoline and diesel are relatively smaller. In addition, strict limits on methane leakage in natural gas production and LNG transport will be critical to reducing potential GHG impact. Methane is more than 80 times more potent as a greenhouse gas than CO2. Finally, the long-term impact of the natural gas market on emissions has to be taken into account.
Rising global demand for natural gas has been enabled in large part by the rising liquidity of the global LNG market, which should expand dramatically over the next few years as new supply comes online. LNG project developers, who invest many billions of dollars in gasification plants and ocean-going LNG tankers, favor long-term supply contracts that commonly lock customers in for 20 years. This in turn incentivizes buyers to vigorously promote LNG demand, potentially to the detriment of wind, biomass and nuclear.

It's clear that the adoption of natural gas has had a positive impact on CO2 emissions in the US over the past decade. Nevertheless, as Stockholm points out, that advantage may diminish with time unless policy is used to ensure that investments in lower-carbon forms of power generation are supported.

This article is published in collaboration with The Energy Collective. Publication does not imply endorsement of views by the World Economic Forum.

Author: Andy Stone is an energy, media and communications professional who offers perspective that spans clean and conventional energy.

Image: A person lights a bio-gas stove. REUTERS.
Within this, people's wages buy less, as government has already spent the wages, and an economic crisis arises. At the most trivial level, Carlyle's target was not Malthus, but economists such as John Stuart Mill, who argued that it was institutions, not race, that explained why some nations were rich and others poor…. The name I should have preferred as the most descriptive, and on the whole least objectionable, is that of CATALLACTICS, or the "Science of Exchanges."…. Jacob Hollander addressed the charges in a 1916 essay, arguing that scientific inquiry involves uniformity and sequence. Yong was recently entangled in a controversy over the failure of researchers to replicate a highly-cited and influential psychology study.
Alfred Marshall’s Definition of Economics: Alfred Marshall pointed out in 1890 that Adam Smith’s … The conversation closes with a discussion of career advice for those aspiring to work in quantitative finance…. But he also argues that economics is unable to make precise predictions about the effects of various changes in policy and behavior. Real life physics experiments can’t always be set up to test the key hypotheses. Do disagreements suggest that economics is an exciting, viable academic discipline or a perpetually unresolvable dispute? This leads us to ask how we define progress. Notify me of follow-up comments by email. Critics of “economic sciences” sometimes refer to the development of a “pseudoscience” of economics, arguing that it uses the trappings of science, like dense mathematics, but only for show. Chris Freiman, a philosophy professor at the College of William and Mary, describes the phenomenon of “confirmation bias”: how people look for evidence to confirm their existing beliefs. If means government collects the budget and spends it and recollects it, and respect it several times within a calender year. After a discussion of the incentives facing scientists, the conversation turns to the challenges facing science journalists when work that is peer-reviewed may still not be reliable. This question lives on today. Economic phenomena do not have the same intrinsic fascination for economists as the internal resonances of the atom because hardly any contemporary economist understands it. Without verification, he argued, “speculation is an intellectual gymnastic, not a scientific process.”. Manzi on Knowledge, Policy, and Uncontrolled. Type above and press Enter to search. It is now, I conceive, too late to think of changing it. The discussions starts with the issue of growth–measurement issues and what economists have learned and have yet to learn about why some nations grow faster than others and some don’t grow at all. 
Save my name, email, and website in this browser for the next time I comment. At that same interface government coffers usually run dry as all the GDP is consumed hence supplementary budgets and taxes move in. Nosek argues that these incentives create a subconscious bias toward making research decisions in favor of novel results that may not be true, particularly in empirical and experimental work in the social sciences. After all, if economics truly was based on impartial evidence then it would have long since dropped many of its ideas that have been since debunked. In other words there is a limit to government taxation that economists cannot grasp. He surveys the changes in economics over the last 25 years–the rise of experimental economics and behavioral economics–and argues that economics has become more scientific and that economists have become more aware of flaws in economic theory. What does economics mean? It seems as though economics is fighting for its right to stay in the exclusive group of fields deemed worthy enough to be called “science,” where … The nature of economics The nature of economics. However, certain economists argue that a non-market mechanism has developed to correct the problem of indefinable property rights, such that scientists are incentivized to produce knowledge in a socially responsible way. He discusses the issues behind the failed replication and the problem of replication in general in other fields, arguing that replication is under-appreciated and little rewarded. How to use economics in a sentence. Everyone knows that economics is the dismal science. Ed Yong, science writer and blogger at “Not Exactly Rocket Science” at Discover Magazine, talks with EconTalk host Russ Roberts about the challenges of science and science journalism. Indeed, economics is an important subject because of the fact of scarcity and the desire for efficiency. 
And he discusses whether the internet is making us smarter or stupider, and the costs and benefits of being able to tailor information to one’s own interests and biases. EconTalk Podcast. EconTalk podcast, July 30, 2007. He also said that economics is a science of production, distribution, and consumption of wealth. EconTalk Podcast. It is like printing money under Keynesian economics but recycling. Physics can send a satellite to orbit Jupiter, tell you exactly … If budget “X” represents 30 percent of the GDP then the monthly tax rate is 2.5 percent. If progress means increase of happiness, the question arises that are we, the modern man possessing overflowing wealth and gadgets, more happier than the foragers? Vernon Smith on Rationality in Economics, EconTalk podcast. Everyone also recognizes economics–a “social science”– is somehow not quite the same as physics in its ability to be science-like. So what is economics, really? Also, economists as physicists, biologists, and others do not do math for the sake of math, so econ is not … If economics is based on subjective values, how can it be considered universal? The discipline of economics was charged with unsound methods. Harry Truman longed for a one-armed economist, one willing to go out on a limb and take an unequivocal position without adding “on the other hand…”. Everyone also recognizes economics–a “social science”– is somehow not quite the same as physics in its ability to be science-like. It is just that economists just don’t know it yet! $\begingroup$ Many mathematicians that have become economists have defined appropriately aggregate demand, economic growth is a loosely defined term but true economists not use growth loosely, rather they refer to the growth of some economic variable and growth is a simple notion. Examining the scientific nature of economics, John F. 
Henry, an economist at the Levy Economics Institute, explains that neoclassical economics holds a position of influence in society because of its universal and abstract nature. September 26, 2011. Economics is the scientific study of the ownership, use, and exchange of scarce resources – often shortened to the science of scarcity.Economics is regarded as a social science because it uses scientific methods to build theories that can help explain the behaviour of individuals, groups and organisations. An economy (from Greek οίκος – "household" and νέμoμαι – "manage") is an area of the production, distribution and trade, as well as consumption of goods and services by different agents. You can find his published work on Academia. Rosenberg, a philosopher of science talks about whether economics is a science. Isn’t economics nicknamed the “dismal science” because it is all about running out of resources and the inevitable decline of life as we know it? It is the economic way of … Where does this desire to be ‘scientific’ come from, and why is it so important for economics to be considered scientific? By the time you touch the fourth month or cycle it is 30 percent taxes facing 10 percent of the GDP which means in real terms the 30 percent tax is 300 percent. Economics as the science of money introduces a veneer of scientific credibility by focusing on measurable quantities. Should economists continue making ‘progress’ toward a more scientific structure of knowledge? Political viewpoints and the everyday language used in economics make unbiased statements or interpretations of results, or the understanding of ideas, imprecise and easily misinterpreted. The data revolution of the past decade is likely to have a further and profound effect on economic research. It seems unproductive to continue asking such questions. March 12, 2012. 
Is economics a science? Everyone recognizes that physics is a science, yet economics is somehow not quite the same as physics in its ability to be "scientific." Both come from the same scientific revolution, and both are influenced by values. But what is a science, and how is economics different? Economics is a science in some ways but not others (or, as one quip has it, economics is a science; it is just that economists don't know it yet). There is no end to this debate: is economics an exciting, viable academic discipline, or a perpetually unresolvable dispute?

Definitions of the field have changed over time. Adam Smith designated his work a treatise on the "Wealth of Nations," but this supplies a name only for the subject matter, not for the science itself; like other classical writers, he supported the wealth definition of economics, which treated the discipline as the science of wealth, focused on how to increase it. J. B. Say defined economics as the "science which deals with wealth." Economics is also sometimes called catallarchy or catallactics, meaning the science of exchanges, though critics note that this arbitrarily limits economics to the study of particular institutional environments (those that use money). Economics has likewise defined itself as "the science of the efficient allocation of scarce resources," or as a science that deals with the making, distributing, selling and purchasing of goods and services. The most influential formulation is Lionel Robbins's: "Economics is the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses." This definition rests on the fact of scarcity and the desire for satisfaction; economics concerns one aspect of human behaviour, that of maximising satisfaction from scarce resources. Robbins's most famous book, An Essay on the Nature and Significance of Economic Science, is one of the best-written prose pieces in economics (see his biography in the Concise Encyclopedia of Economics). The closest thing to Robbins in the English-language textbook literature of the time seems to be the definition offered by Fairchild et al., who, having identified "the insatiability of man and the niggardliness of nature" as "the foundation stones upon which rests the structure of economics" (p. 8), define economics as the science of man's activities devoted to obtaining the material means for the satisfaction of his wants. Modern textbooks describe economics as a branch of social science concerned with the production, distribution, and consumption of goods and services, or of wealth; Samuelson and Nordhaus provide some insights into the role of economists in Chapter 1 of their book. Others hold that economics is defined less by the subjects economists investigate than by the way in which economists investigate them: economists have a way of looking at the world that differs from the way scholars in other disciplines look at the world. Marshall, Pigou, Hawtrey, Frazer and other economists, for their part, do not agree that economics is only a positive science; as a normative science of "what ought to be," economics is also concerned with the evaluation of economic events from an ethical viewpoint.

Origin of the phrase "dismal science." Who coined the phrase, and where did the term first come from? It has been around since the mid-19th century and was coined by historian Thomas Carlyle. At the time, the skills required for writing poetry were referred to as the "gay science," so Carlyle decided to call economics the "dismal science" as a clever turn of phrase. Almost everyone knows the story that Carlyle was inspired by T. R. Malthus's gloomy prediction that population would always grow faster than food, dooming mankind to unending poverty and hardship. While this story is well-known, it is also wrong, so wrong that it is hard to imagine a story farther from the truth (see "The Secret History of the Dismal Science: Economics, Religion, and Race in the 19th Century," Econlib, January 22, 2001). In the 19th century, economics was the hobby of gentlemen of leisure and the vocation of a few academics; economists wrote about economic policy but were rarely consulted by legislators before decisions were made. Richard Whately, in Lecture I of his Introductory Lectures on Political Economy, even stated his objections to the name "Political Economy" in order to put readers on their guard against the prejudices the name created.

Is economics like an idealized science? At first glance, a science is a way of thinking that emphasizes putting forward basic hypotheses and then doing controlled experiments that are set up to distinguish in stark relief whether each hypothesis is right or wrong. Progression in science relies on the formation of hypotheses, which may at some point become "laws"; observation and inference are the first steps toward the creation of hypotheses, and scientific inquiry involves uniformity and sequence. The final step in the scientific process is verification, which is required before we move from theory to law. At second glance, though, even the most fundamental scientific aspects of physics are more complicated than the ideal: the notion that scholars in the natural sciences simply "pursue truth" is a flawed assumption (Henry maintains that we should reexamine this assumption of universality), the ideal of creating a physics hypothesis before looking at the evidence is often more of an art than depicted in physics textbooks, and experimental results in physics are never 100% conclusive. Even the choice of what to study involves values, since a scholar must value one research project more than another. A recurring argument is that the distinctions between the social and natural sciences are not clear.

Economics struggles with precisely the verification step. Economists cannot usually do controlled experiments in a laboratory, and are often stuck with using historical or cross-country evidence to tease out what might merely suggest a result. Because of the complexity of social environments, even narrow experiments are unlikely to have the wide application that can be found in the laws uncovered by experiments in the physical sciences. And even if we could run a controlled experiment, it may not matter in the long run, for society changes; as a result, economics is unable to make precise predictions about the effects of various changes in policy and behavior, and questions remain subject to dispute even centuries after the fact. In a 2013 opinion piece for the New York Times, Stanford economist Raj Chetty argues that science is no more than testing hypotheses with precision, while conceding that the primary limitation of economics is economists' limited ability to run controlled experiments for theoretical macroeconomic conclusions: large macroeconomic questions such as the cause of recessions or the origin of economic growth "remain elusive," Chetty writes. In a 2016 essay, economist Duncan Foley added to the discussion. A century earlier, in a 1916 essay, economists had been accused of using the deductive method without the necessary level of precision; in Jacob Hollander's words, "speculation is an intellectual gymnastic, not a scientific process." Hollander's work reveals one of the questions at the heart of this debate: is verification required, and even possible, given the complexities of economic phenomena? Scholars have a disposition to rely on the works of previous thinkers, Hollander argued, without endeavoring to move beyond familiar perspectives.

Rather than debating whether economics is or is not a science, perhaps we should shift the discussion toward questions that ask why economics needs to be a science in the first place; perhaps the real issue is the determination to make economics a science. One useful distinction, which matches many economists' own practice, is between two meanings of "economics": a social science devoted to understanding the economy, and a way of doing social science. On this view, economists are good at making models but poor at navigating among them. Meanwhile, economic science has evolved over several decades toward greater emphasis on empirical work: increasingly, economists make use of newly available large-scale administrative data, or private-sector data that often are obtained through collaborations with private firms. Science itself can be understood as the production of a public good, and can be studied within the framework of public economics.

How can economists keep their own biases in check, and should they? Confirmation bias plays an important role in citizens' voting decisions (see "A Philosopher's Take on Political Bias" on YouTube), and readers should not blithely believe every science report they read. Truman's view is often reflected in the public's belief that economic knowledge is inherently ambiguous and that economists never agree on anything. Economists these days deal with nothing but policies, which are of immediate interest to politicians or businesses, because that is what pays one to be an economist outside the mundane profession of teaching. Whether we are progressing or regressing is a big question today, and how we define progress matters. Largely, science means to know the unknown, but we usually mistake any revolution for progress. For example, we now live in a plastic age: plastic made our lives more comfortable, but it has become a devastating man-made material that threatens human civilisation. Today's economy is not beneficial to all the people of the world, so research must be done into which type of economy will bring more happiness to more people.

One illustrative argument in the original discussion concerns the limits of taxation. Take a budget "X" that represents a certain percentage of the GDP to be produced over a year: if budget X represents 30 percent of GDP, the implied monthly tax rate is 2.5 percent. If instead taxes are 30 percent every month, then in the first month 30 percent of the GDP is taken, leaving 70 percent; in other words, there is a limit to government taxation. The government collects the budget, spends it, and collects it again several times within a calendar year; at each new budget, price hikes occur, which in turn affect household spending, and a government worker's wages buy less because the government has already spent those wages, so an economic crisis arises. It is like printing money under Keynesian economics, but recycled.

Related EconTalk episodes and further reading:
- Rosenberg on the Nature of Economics: Alex Rosenberg of Duke University, a philosopher of science, talks with EconTalk host Russ Roberts about the scientific nature of economics. Does mathematical modeling make economics closer to being a science than, say, psychology? The conversation closes with a discussion of the role the philosophy of science can play in the evolution of economics.
- Leamer on Macroeconomic Patterns and Stories (May 4, 2009): Ed Leamer of UCLA, author of Macroeconomic Patterns and Stories, talks about how we should use patterns in macroeconomic data, and stories about those patterns, to improve our understanding of the economy. He discusses various patterns in the recessions and recoveries in the United States since 1950, and argues that economics is not a science but rather a way of thinking, and that economic models are neither true nor false, but either useful or not useful.
- Derman on Theories, Models, and Science: Emanuel Derman of Columbia University, author of Models. Behaving. Badly., talks about theories and models and the elusive nature of truth in the sciences and social sciences. A former physicist and Goldman Sachs quant [quantitative analyst], Derman contrasts the search for truth in the sciences with the search for truth in finance and economics, critiques attempts to make finance more scientific, and applies those insights to the financial crisis.
- Henderson on Disagreeable Economists (May 21, 2007): David Henderson, editor of the Concise Encyclopedia of Economics and a research fellow at Stanford's Hoover Institution, talks about when and why economists disagree. Are there some ideas about which all economists agree? Henderson claims that there is substantial agreement among economists on many scientific questions, while Roberts wonders whether this consensus is getting a bit frayed around the edges. The conversation highlights the challenges the everyday person faces in trying to know when and what to believe when economists take policy positions based on research.
- Manzi on Knowledge, Policy, and Uncontrolled: Jim Manzi, author of Uncontrolled, talks about the reliability of science and the ideas in his book. Manzi advocates a trial-and-error approach using randomized field trials to verify the usefulness of many policy proposals, and argues for humility and lowered expectations when it comes to understanding causal effects in social settings related to public policy.
- Nosek on Truth, Science, and Academic Incentives: Brian Nosek of the University of Virginia talks about how incentives in academic life create a tension between truth-seeking and professional advancement. In the second half of the conversation, Nosek details practical innovations occurring in the field of psychology to replicate established results and to publicize unpublished results that are not sufficiently exciting to merit publication but that nevertheless advance understanding and knowledge; these include the Open Science Framework and PsychFileDrawer.
- Yong on Science, Replication, and Journalism: on efforts to replicate highly-cited results and on how to read science reporting.
- Weinberger on Too Big to Know (February 27, 2012): David Weinberger of Harvard University's Berkman Center for Internet & Society, author of Too Big to Know, talks about how knowledge, data, and our understanding of the world around us are being changed by the internet. He argues the internet has dispersed the power of authority and expertise.
- Vernon Smith on Markets and Experimental Economics (March 3, 2008): Vernon Smith, Professor of Economics at George Mason University and the 2002 Nobel Laureate in Economics, talks about experimental economics, markets, risk, behavioral economics, and the evolution of his career. In a later conversation, Smith, of Chapman University and George Mason University, discusses the ideas in his book Rationality in Economics: Constructivist and Ecological Forms.
- Diane Coyle on the Soulful Science: Diane Coyle talks with host Russ Roberts about the ideas in her book The Soulful Science: What Economists Really Do and Why It Matters.
- Branko Milanovic on the big questions in economics: author and economist Branko Milanovic of CUNY argues that the Nobel Prize Committee is missing an opportunity to encourage more ambitious work by awarding the prize to economists tackling questions like the rise of China's economy and other challenging but crucial areas of scholarship.
- A comprehensive theory of a system of cities is an essential component of economists' efforts to understand and model economic growth and international trade (a LearnLiberty video).
- Excerpt from the Economics, Sociology and Statistics Work Stream definition: "The Economics and Social Science Services Group comprises positions that are primarily involved in the application of a comprehensive knowledge of economics, sociology or statistics to the conduct of economic, socio-economic and sociological research, studies, forecasts and surveys; the research, analysis and evaluation of the economic …"

About the Author: Johnny Fulfer received his M.A. in Economics and B.S. in History from Eastern Oregon University. Johnny is interested in U.S. history during the Gilded Age and Progressive Era, monetary history, political economy, the history of economic thought, and the history of capitalism.
The ChoosaBroker Trading Academy

6.5. Financial Intermediaries

Suppose you are a great cook and want to start a home-made food outlet. At around the same time that you are planning to step into the food business, a woman named Vivian, who lives two states away, has saved a considerable sum and wants to invest in a start-up. If somehow you and Vivian could cross paths, she could invest in your outlet and you could fulfil your dream of entrepreneurship. But in reality, since you would probably never find Vivian on your own, a process called financial intermediation ensures that both of your goals can still be met. In this lesson, we will discuss what financial intermediation is, highlight the key players in the process, and examine their most notable advantages and disadvantages.

FINANCIAL INTERMEDIARIES DEFINED

Broadly speaking, a financial intermediary is an individual or institution that facilitates the channelling of funds from people who have surplus capital to those who need funds to undertake a desired activity. Common examples of financial intermediaries include commercial banks, savings banks, investment banks, stock brokers, and stock exchanges.

The financial needs of the different participants in an economy are diverse. For example, households may need money to buy a car, companies may need money to buy new equipment, and the government may need money to construct new roads. This demand arises because these economic agents need more money than they have available. At the other end of the spectrum lie economic agents whose earnings exceed their expenditures. The reasons underlying their savings can be manifold: individuals may save for retirement, while companies may save to cushion a business downturn. But whatever the reason for saving, this money typically sits idle for a certain period of time, remaining unproductive for the saver until it is put to use.
The financial intermediary enters the scene here. Lenders (savers) transfer their excess funds to an intermediary institution (such as a bank or a stock broker), and that institution forwards those funds to borrowers (spenders). This may take the form of debt, equity or mortgage lending. Borrowers therefore do not have to delay their investment decisions, while savers earn a return on their otherwise idle savings. Such a system, facilitated by the presence of financial intermediaries, ensures that resources are used efficiently.

FUNCTIONS OF FINANCIAL INTERMEDIARIES

The role of financial intermediaries can be outlined under three major headings:

- Facilitating Flow of Funds – Financial intermediaries enable the flow of funds from surplus economic units to deficit economic units. Without robust financial intermediaries, the savings of the ultimate lenders would not become available to the ultimate borrowers. In a large number of underdeveloped countries, individuals still prefer to keep their savings in the form of notes and coins rather than as deposits with financially unsound banks.
- Efficient Allocation of Funds – Financial intermediaries have the expertise needed to keep the flow of funds efficient. Intermediaries, particularly commercial banks, are alert to the twin dangers of adverse selection and moral hazard. Adverse selection means that borrowers with a higher risk profile are more likely to seek loans than good-risk borrowers. Moral hazard refers to a situation in which, once a loan is granted, the borrower becomes inclined to take risks with the money that were not disclosed in the loan application. Banks are keenly aware of these two real-life risks, and typically allocate funds to borrowers who are expected to use them prudently.
- Transformation of Risk – Financial intermediaries help convert risky investments into relatively low-risk ones by lending to multiple borrowers.
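The risk-transformation point can be made concrete with a small simulation. This is an illustrative sketch, not taken from the lesson: the 5% default probability, the all-or-nothing loss rule, and the 50% loss threshold are all assumptions chosen for the example.

```python
# Sketch: how splitting the same total amount across many independent
# borrowers reduces the chance of a catastrophic loss for the saver.
import random

def large_loss_probability(n_borrowers, p_default=0.05,
                           loss_threshold=0.5, trials=20_000, seed=1):
    """Estimate the probability of losing more than `loss_threshold`
    of the portfolio when 1 unit is split evenly across n_borrowers,
    each defaulting independently with probability p_default."""
    random.seed(seed)
    bad = 0
    for _ in range(trials):
        defaults = sum(random.random() < p_default for _ in range(n_borrowers))
        if defaults / n_borrowers > loss_threshold:
            bad += 1
    return bad / trials

# Lending everything to one borrower: roughly a 5% chance of losing it all.
print(large_loss_probability(1))
# Spreading over 100 borrowers: losing more than half is practically impossible.
print(large_loss_probability(100))
```

The expected loss is the same 5% in both cases; what diversification changes is the probability of a large loss, which is the sense in which the intermediary "transforms" risk.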
From a saver's point of view, rather than lending to just one individual, depositing money with a financial intermediary that lends to a variety of borrowers lowers the saver's risk.

TYPES OF FINANCIAL INTERMEDIARIES

Based on the type of asset transformations they undertake, financial intermediaries can be classified into four broad categories:

01. Depository Institutions
02. Insurance Companies
03. Investment Banks
04. Brokers

Brokers are agents who facilitate the exchange of both equity and debt securities by linking buyers and sellers, either through a regulated exchange or through over-the-counter marketplaces, in return for a fee or commission. Advances in connectivity have given rise to discount brokers that allow small investors to buy and sell securities at fees much lower than those of a full-service broker. If you are just starting to look at trading, a number of brokers run excellent ongoing education projects, making them ideal brokers for beginner trading.

BENEFITS AND DISADVANTAGES OF FINANCIAL INTERMEDIARIES

Financial intermediaries provide three key benefits:

- Reduction in Transaction Costs – Compared with direct lending and borrowing, a financial intermediary can reduce transaction costs by reconciling the conflicting preferences within its large pool of lenders and borrowers.
- Risk Diversification – If a borrower defaults on a loan, the individual saver is not directly affected, as the loss on account of the default is charged to the financial intermediary rather than to its depositors, thereby reducing the saver's risk.
- Economies of Scope – Thanks to their inherent financial expertise, intermediaries can concentrate on the demands of both lenders and borrowers. This enables them to design products and services that cater to the diverse needs of the various economic groups.
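The transaction-cost benefit can be illustrated with a stylized count (a toy model, not from the lesson): with direct lending, every saver would in principle have to arrange terms with every borrower, whereas an intermediary needs only one relationship per participant.

```python
def contracts_needed(savers, borrowers):
    """Stylized count of bilateral arrangements needed to match everyone."""
    direct = savers * borrowers          # every saver negotiates with every borrower
    intermediated = savers + borrowers   # each party deals only with the bank
    return direct, intermediated

# With 100 savers and 100 borrowers: 10,000 direct arrangements
# versus just 200 when a bank sits in the middle.
direct, via_bank = contracts_needed(100, 100)
print(direct, via_bank)
```

The gap grows multiplicatively with market size, which is one reason intermediation dominates direct matching in practice.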
As regards their disadvantages, academics have often criticized financial intermediaries for the reasons listed below:

- Lack of Transparency – The opacity of investments made through financial intermediaries remains an important concern.
- Reliance on Tax Havens – To reduce their tax liabilities, financial intermediaries often base themselves in countries where taxes are levied at relatively low rates. These so-called "tax havens" have often been at the centre of illicit fund flows that are easily disguised and hidden among legitimate transactions.
- Social and Environmental Concerns – Financial intermediaries can also pose social and environmental risks through the businesses and projects they choose to finance.

Since direct lending between savers and borrowers is inefficient, the process of financial intermediation plays a very important role in any modern economy. Most economic agents need resources that they can seldom generate on their own. Financial intermediaries keep the economic engine running by channelling resources from surplus to deficit units.
Daniel Kaufmann grew up in Solothurn and studied in Bern. He worked at the Swiss National Bank (SNB) in a macroeconomic forecasting group and then joined the KOF Swiss Economic Institute, where he researched price and wage fluctuations in the Swiss economy. Currently an assistant professor at the University of Neuchâtel, he focuses his research mainly on monetary policy. He has recently published articles on economic history, in particular on the relationship between deflation and economic activity, and he develops forward-looking indicators such as the fever curve for the Swiss economy.

The fever curve in brief

Because macroeconomic data are published with considerable delay, it has been difficult to assess the health of the economy since the beginning of the rapidly evolving Covid-19 crisis. The fever curve for the Swiss economy was developed using daily financial market data and publicly available news. The indicator can be calculated with a delay of only one day. Furthermore, it is highly correlated with macroeconomic data and survey indicators of Swiss economic activity. It therefore provides reliable and timely warning signals if the health of the economy deteriorates.

Unemployment and GDP forecast figures for 2020-2021 are constantly changing. How do you explain this?

These rapid changes in the forecasts are due to two different factors:
- The closer we get to the forecast date, the more data are available and therefore the more refined the forecast can be. A bit like the weather.
- Officially released forecasts influence the behaviour of consumers and investors, as well as the government. Initially, it was announced that the COVID-19 crisis would hit the economy harder than the Great Depression of the 1930s. As a result, the government put in place aid packages to mitigate the impact of containment on the Swiss economy.
The latter produced its effects and absorbed part of the shock, so the crisis turned out to be less severe than expected. Forecasts therefore allow decision-makers to act proactively rather than reactively.

For the fever curve, how did you come up with the idea of using daily data rather than, say, quarterly data?

The COVID-19 crisis was a trigger. The Federal Council had to take weekly decisions to manage the health crisis, whereas such decisions are usually taken at most quarterly, and the FOPH accordingly published daily statistics. In macroeconomics it is still rare to use high-frequency series, even though large quantities of data are available on a daily basis, such as credit card transactions for all Swiss households. It is important to point out that high-frequency data have the disadvantage of producing false signals, temporary fluctuations that often have to be ignored. In general, we watch them over several weeks to ensure that real information is emerging. In addition, our indicator combines several sub-indicators from different databases to reduce the probability of a false signal.

With the speed at which information is disseminated, do you think this frequency of decision-making is increasing?

It is indeed possible. For example, the SNB announced the introduction of the exchange-rate floor outside the institution's official quarterly reporting dates, which came as a surprise. During the COVID-19 crisis, the government had to adapt very quickly, and in this context indicators such as the fever curve become very appropriate.

Who has access to the fever curve?

For the moment, the data and code are available open source on GitHub, a development platform. The aim of this project is to use only freely available data, so anyone can improve the code or combine parts of these data with other sources.
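The two safeguards mentioned above, watching a noisy daily series over several weeks and averaging several standardized sub-indicators, can be sketched roughly as follows. This is a simplified illustration; the actual fever-curve code published on GitHub uses a more elaborate statistical model.

```python
def rolling_mean(series, window=7):
    """Smooth a daily series to filter out short-lived false signals."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def standardize(series):
    """Put a sub-indicator on a common scale (mean 0, unit variance)."""
    n = len(series)
    mean = sum(series) / n
    sd = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    if sd == 0:
        sd = 1.0  # constant series: leave deviations at zero
    return [(x - mean) / sd for x in series]

def combine(sub_indicators):
    """Average several standardized sub-indicators into one composite,
    so that no single noisy source drives the overall signal."""
    cols = [standardize(s) for s in sub_indicators]
    return [sum(vals) / len(vals) for vals in zip(*cols)]
```

Averaging over sources and over time both reduce variance, which is exactly why a composite of about ten variables gives fewer false alarms than any one daily series on its own.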
It is possible to go to GitHub, download the indicator and use it in a forecasting model, or simply look at its link with GDP growth or the unemployment rate. The advantage of this indicator is that it can be calculated on a daily basis, which is quite rare in the world of economic forecasting. We have received many comments from our users, and the indicator has been mentioned in various reports, such as those from Credit Suisse and SwissLife.

When the curves are above zero, GDP increases; below zero, it decreases (note that the fever curve is inverted here to facilitate comparison with GDP). During the Lehman Brothers crisis in 2008 there was a sharp fall in GDP, i.e. a recession, and the fever curve index followed this trend, although the correlation is not perfect (0.57). The purpose of the fever curve is not necessarily to track a particular statistical concept but rather to signal cyclical trends on the basis of higher-frequency data. During the COVID-19 crisis, the curve signalled a sharp fall in GDP as early as March, although the fall in first-quarter GDP (published by SECO in June) was even larger because of the containment measures. Only about ten variables are used to calculate the fever curve, which also explains why it can sometimes give imprecise signals. In general, however, the fever curve follows the same trends as the rise and fall of GDP growth.

How does the sentiment score, which is used to calculate the fever curve, work?

The sentiment score is based on Swiss newspaper articles, in German for the moment, but we are in the process of extending the selection to French-language articles. We take into account the lead text, because important information is often relayed in this part. Moreover, it is often freely accessible, which keeps the fever curve accessible to everyone. We are currently analysing whether it is worth taking into account more than just the lead paragraph.
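The reported correlation of 0.57 with GDP growth is an ordinary Pearson correlation computed after averaging the daily curve to quarterly frequency. A minimal sketch follows; the two series here are made-up placeholders, not SECO or fever-curve data.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical quarterly GDP growth vs. the sign-flipped fever curve,
# averaged over each quarter so the two series share a frequency.
gdp_growth = [0.5, 0.7, -0.2, -2.1, 1.3, 0.6]
fever_avg  = [0.3, 0.4, -0.5, -1.8, 0.9, 0.2]
print(round(pearson(gdp_growth, fever_avg), 2))
```

With real data the fit is looser (hence 0.57 rather than something near 1), which is consistent with the authors' point that the curve signals cyclical trends rather than tracking GDP exactly.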
Concretely, we count the number of words with positive and negative connotations (based on existing lexicons). We then subtract the number of negative words from the number of positive words and divide by the total number of words used. This approach is used in marketing to evaluate a brand’s reputation.

Is this new in economics?

In economics, this approach is similar to business surveys, for example the KOF survey, in which companies are asked to judge their situation in a qualitative questionnaire. Professor David Ardia has worked on sentiment-based indicators, but rather for the United States. Other researchers have examined indicators based on users’ search-engine queries. The disadvantage of the latter approach is that this information has only been available since 2006. For our indicator, however, we needed to check our variables against several recessions. We have therefore calculated the fever curve since 2000, and we are currently evaluating whether we can extend it even further back. Currently, our sentiment score is still too volatile because it is based on a limited amount of data, but we are working on extending it. That is why we combine these sentiments with financial market data in the fever curve.

According to your model, is the Swiss economy affected by foreign or domestic factors?

In order to answer this question, we have categorised the articles according to their country of origin. Specifically, we looked for words like “Switzerland”, “Europe” or “Germany” in the articles before calculating sentiment. Our indicator uses not only sentiment but also financial market variables (e.g. risk premiums or stock price volatility). For financial market variables, we have, for instance, the Swiss franc interest rate on Swiss corporate bonds, and we gathered the same variables abroad.
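The word-counting computation described above can be sketched in a few lines. The tiny English lexicons here are placeholders for the German-language lexicons actually used:

```python
# Sketch of the lexicon-based sentiment score described above:
# (positive count - negative count) / total word count. The tiny English
# lexicons are placeholders for the German-language lexicons actually used.

POSITIVE = {"growth", "recovery", "gain", "strong"}
NEGATIVE = {"recession", "crisis", "loss", "weak"}

def sentiment_score(text):
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment_score("strong recovery despite weak exports"))  # (2 - 1) / 5 = 0.2
```

With a short lead text the score is coarse, which is consistent with the volatility the authors mention; averaging over many articles per day is what makes it usable.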
With this categorisation, our statistical approach makes it possible to attribute the fluctuations of the indicator to foreign and domestic variables. In the graph, it can be seen that the contributions of foreign and domestic variables were roughly equivalent at the peak of the crisis (note that here the fever curve is not inverted; an increase means a rise in “fever”, i.e. a worsening of the economic situation). Recently, the domestic contribution has been greater. Indeed, the crisis has affected both foreign demand for Swiss products and domestic demand, as the climate of insecurity (especially around employment) has pushed the Swiss to save more. The impact of a full lockdown on economic growth is often questioned. Attempts have been made to combine pandemic and economic models, and it has been shown that even without a lockdown, the population would have protected itself by limiting social activities outside the home. At the very least, a national government directive has the merit of establishing a uniform protection policy, while reassuring people that they can count on a government capable of making decisions in an emergency.

As you continue to work on this model, do you have one or two points in particular that you would like to improve for future versions?

My doctoral student Marc Burri, who is co-author of this article, is working on improving the model for his doctoral thesis. We are gathering more data from French-speaking newspapers and newspapers in Ticino to see whether the information received is different. In a second step, we would like to move beyond the sentiment score, which is a somewhat naive approach. With the use of machine learning, we could envisage more segmented fever curves, by field of activity or by region for example, and no longer only for the Swiss economy as a whole.
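The keyword-based domestic/foreign tagging discussed above can be sketched as follows; the keyword lists are illustrative, not the authors' actual lists:

```python
# Minimal sketch of the keyword-based domestic/foreign tagging described
# above: an article is tagged before its sentiment is computed, so that the
# indicator's fluctuations can later be attributed to domestic or foreign
# variables. The keyword lists are illustrative only.

DOMESTIC = {"switzerland", "swiss"}
FOREIGN = {"europe", "germany", "france", "italy"}

def categorize(article):
    words = set(article.lower().split())
    tags = set()
    if words & DOMESTIC:
        tags.add("domestic")
    if words & FOREIGN:
        tags.add("foreign")
    return tags or {"uncategorized"}

print(categorize("Swiss exports to Germany fell sharply"))
```

An article can carry both tags, as in the example above, which is why the decomposition speaks of contributions rather than a hard split.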
What Crypto If Hot – A cryptocurrency, as defined by Wikipedia, is "a digital currency designed to work as a medium of exchange" for the transfer of digital assets. It was created as an alternative to traditional currencies such as the United States dollar, British pound, euro and Japanese yen. Nowadays, more businesses and people are recognising the potential of using a cryptocoin as a payment method. A fine example of such a service is the online payments company PayPal, which has now integrated cryptocoin payments into its web-based payment system. No central bank is involved in the management of these currencies. The distribution of a cryptocoin is normally done through a process called "minting", in which a certain amount of the digital asset is created, increasing the supply. In the case of a cryptocurrency ledger, transactions are validated cryptographically, producing the proofs of validity required for a transaction to take place. While most cryptocurrencies are open-source software projects developed by any number of private contributors, some are proprietary. The creator of Litecoin, Charlie Lee, set out to create a secure and safe alternative to existing cryptocurrencies. By creating this coin, which has a much lower trading volume than Bitcoin, he hoped to provide a reliable yet secure form of cryptocurrency. One of the most appealing applications for the future of cryptocurrency is the concept of the "blockchain." A blockchain is simply a large collection of cryptographically linked records that are recorded and maintained on computers all over the world.
Each block of information is protected by cryptographic hashes, so any tampering breaks the chain and is immediately evident. The cryptography used in the chain is also mathematically secure, which allows transactions to be seamless and private. Because each transaction is secured by a strong cryptographic algorithm, there is little possibility of impersonating property owners, hacking into computers, or leaking information to third parties. All transactions are recorded and encoded using complex mathematics that protects information while ensuring that it is available only to authorised participants in the chain. A major problem with conventional ledgers is that they are vulnerable to tampering, which can allow someone to take control of a business's funds. By using cryptographic technology, a business's ledger can be secured while keeping the details of each transaction private, ensuring that only the parties involved know where the money has gone. Another popular use for cryptocurrency is in the area of virtual currencies. A "virtual currency" is simply a stock or digital product that can be traded like a stock on the exchanges. All elements of the virtual currency exist online only, meaning that no exchange of physical goods takes place. Virtual currencies can be traded online just like any other stock on the standard exchanges, and the benefit of this is that the same incentives and rules that apply to real markets also apply to this type of cryptocurrency transaction. As more cryptocurrencies are created and offered to consumers, the benefits become clear. Instead of being limited to small niches on the exchanges, many enter the mainstream market, which offers greater flexibility and accessibility. In this way, many more people can enter the market and benefit from what cryptocurrencies have to offer.
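The tamper-evidence property described here rests on each block referencing a hash of the previous one. A toy sketch (not a real cryptocurrency ledger) shows how altering one record invalidates the rest of the chain:

```python
# Toy hash chain illustrating the tamper-evidence idea above: each block
# stores the hash of the previous block, so altering an earlier record
# invalidates every later link. This is a sketch, not a real ledger.

import hashlib

GENESIS = "0" * 64

def make_block(data, prev_hash):
    digest = hashlib.sha256((data + prev_hash).encode()).hexdigest()
    return {"data": data, "prev": prev_hash, "hash": digest}

def chain_is_valid(chain):
    prev = GENESIS
    for block in chain:
        expected = hashlib.sha256((block["data"] + prev).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain, prev = [], GENESIS
for record in ["alice->bob:5", "bob->carol:2"]:
    block = make_block(record, prev)
    chain.append(block)
    prev = block["hash"]

print(chain_is_valid(chain))          # True
chain[0]["data"] = "alice->bob:500"   # rewrite history
print(chain_is_valid(chain))          # False
```

Real systems add consensus and proof-of-work on top of this linking, but the hash chain alone is what makes silent edits to old records detectable.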
There are currently several successful tokens being traded on the major exchanges, and as more enter the market, competition will strengthen the existing ones. Basically, if you purchase cryptographic currencies, you're purchasing cryptocurrency; it's essentially just like trading in shares. Now, if you're not familiar with how to trade and buy cryptocurrencies, this can seem quite intimidating, but it really isn't that frightening. There are certain precautions you need to take. You will want to get a broker, either a full-service FX broker or a discount broker that charges a small fee. They will then provide you with an interface for your application and software. You will also want to set up a "mini account". When you trade in the open market with real money, there is no such thing as a mini account, but since you're trading in the crypto market with cryptocoins, it's perfectly acceptable. The MegaDroid goes one step further and allows you to start trading with your favourite coins at any time. It also lets you do things like set your buy or sell limits. Some people may be a little wary of this feature. It does give you the ability to make some "fast" trades, but that's about the limit, and perhaps you should be cautious if you're leery of fast trades. If this were the only advantage of using the MegaDroid, it would still be useful, but it's not. What traders really love about this robot is that it gives them full control over their campaigns. Some traders still claim that it's a hassle to manage a campaign manually, and it is certainly easier than manually handling numerous campaigns on your PC. Traders can deposit funds into their account and immediately use them to trade, or they can manage their funds using their own wallets.
Since all transactions are held digitally, you don't need to deal with brokers or trading exchanges directly; everything is kept strictly within your own computer. This means that you will have to download and set up the software on your own computer if you want to trade on these two big exchanges. All you have to do is visit their sites and you'll be able to see their price quotes. This might not seem crucial to somebody new to the market, but it is extremely important if you are thinking about using cryptos for day-to-day trading. When you do choose to trade, you need to understand how the market will move so that you can be prepared. This is done by watching the short-term charts on these two major exchanges. If you do this correctly, you will know exactly when to enter and leave the market, and so you can make better decisions with your trades. Now that we've gone over the pros and cons, let's take a look at some technical analysis methods. If you are a technical analyst and are familiar with market patterns, this shouldn't be an issue. With this information, you should be able to analyse the price action on the two exchanges easily and make good trades. As I said before, the major difference between the two exchanges is the method of buying and selling coins through private keys. There are many different ways to execute a buy or sell, so you'll want to pick one that you're comfortable with. Usually this is the same for both the Cryptocurrency Xchange and the CryptoAMEX.
What is Robotic Process Automation (RPA)?

Robotic process automation (RPA) may be explained as using software with artificial intelligence (AI) and machine learning capabilities to handle high-volume, repetitive manual tasks. Most of these tasks require a worker to raise queries, perform calculations and take care of maintenance activities. RPA technology, often referred to as a software robot or bot, is capable of performing all the manual tasks performed by a human worker: logging into applications, entering data, calculating, completing tasks and logging out. RPA, which traces back to the 2000s, evolved from three key technologies: screen scraping, workflow automation and artificial intelligence. Screen scraping is the process of gathering screen-display data from a legacy application so that the data can be displayed by a more modern user interface. Workflow automation software, which has been in use for a few decades, eliminates the need for manual data entry and optimizes workplace productivity with higher speed, efficiency and accuracy. Artificial intelligence, the last and most important of the three, refers to the capability of performing tasks that normally require human intelligence.

Why is RPA so important?

RPA plays a pivotal role in workplace automation by helping organizations cut operational costs while ensuring an optimized customer service experience and faster market interactions. Let us find out how. Robotic process automation can help you save 50% to 80% of current operating costs. The average cost of a robot is about one-third of the cost of a full-time employee (Full Time Equivalent, or FTE). With RPA, organizations can use intelligent robots that work 24x7x365, and these bots can also be scheduled to work in the most efficient manner. It is also noteworthy that robots can perform better and faster than their human counterparts, without the need for manual intervention or breaks.
In this way, a robot can be equivalent to between 2 and 5 FTEs. RPA also improves overall operational efficiency by reducing the resolution time of any type of incident or operation, and it improves service delivery by increasing output and accuracy.

RPA for Healthcare: Efficiency + Higher Savings

The healthcare segment makes particularly good use of robotic automation for cost savings, and healthcare payer BPO buyers are increasingly seeking automation solutions. Recent research from Everest Group states that Robotic Process Automation (RPA) can yield incremental cost reductions in healthcare payer business process outsourcing (BPO) ranging from 15% for offshore operations to 47% for onshore operations. The cost reduction achieved through RPA depends upon the existing state of healthcare payer BPO operations and can reach 10% to 19% for balanced shoring operations. Note that these savings do not include labor arbitrage. Policy servicing and management, network management and claims management are the three primary areas where RPA has had the greatest impact. Healthcare service providers are building capabilities in RPA, and RPA adoption is on the rise, as an increasing proportion of newly signed contracts include RPA in their scope: that percentage has increased from 7% in 2012-2013 to 14% in 2014-2015. According to AIIM reports, about 96% of businesses report that Business Process Automation (BPA) improves their business processes. RPA improves work efficiency by automating tasks, improving accuracy, and saving money by reallocating humans to higher-value activities. We expect to see more and more organizations turning to automation to streamline processes and save money.
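As a rough illustration of the savings arithmetic above (one bot priced at about one-third of an FTE's cost doing the work of several FTEs), with hypothetical dollar figures:

```python
# Back-of-the-envelope sketch of the savings arithmetic above: a bot priced
# at roughly one-third of one FTE's annual cost, replacing work equivalent
# to several FTEs. The dollar figures are hypothetical.

def annual_savings(fte_cost, ftes_replaced, bot_cost_ratio=1 / 3):
    bot_cost = fte_cost * bot_cost_ratio
    return fte_cost * ftes_replaced - bot_cost

# A $60,000 FTE and a bot doing the work of 3 FTEs:
print(round(annual_savings(60_000, 3)))  # 180,000 - 20,000 = 160000
```

Even at the low end of the 2-to-5 FTE range, the bot's cost is recovered many times over in a year, which is what drives the 50% to 80% figures quoted above.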
The U.S. Department of Homeland Security defines “critical infrastructure” as assets that provide “the essential services that underpin American society and serve as the backbone of our nation’s economy, security and health. We know it as the power we use in our homes, the water we drink, the transportation that moves us, the stores we shop in, and the communication systems we rely on to stay in touch with friends and family.” Overall, the DHS considers 16 sectors to be critical infrastructure: chemical; commercial facilities; communications; critical manufacturing; dams; defense industrial base; emergency services; energy; financial services; food and agriculture; government facilities; healthcare and public health; information technology; transportation; water and wastewater; and nuclear reactors, materials and waste. These sectors, several of which contain or intersect with public utilities, face an ever-growing list of security threats, ranging from copper theft to financial crimes to terrorism, that spans both traditional physical security and cybersecurity. Each has a sector-specific government agency assigned to it; for most it is DHS, but, for example, the financial services sector is assigned to the Department of the Treasury, while the Environmental Protection Agency covers water and wastewater. These sectors are making significant progress individually in building resilience against cyber-attacks and other hazards; however, cross-sector vulnerabilities haven’t received nearly enough attention, says Paul Stockton, managing director of Sonecon LLC and a former assistant secretary of defense for homeland defense, whose firm recently submitted a report to Homeland Security Secretary Jeh Johnson on the topic. For example, Stockton says, if the electricity grid were attacked and taken out across several states, the communications sector would go down quickly, but the electric power industry needs communications to be able to restore itself.
“They need to be able to send out crews and be able to communicate where the power is out,” he says. “These [cross-sector vulnerabilities] are ubiquitous and very poorly understood compared to what’s required to restore power within a sector.” From his vantage point as CSO at DTE Energy in Detroit, Michael Lynch has observed the same set of strengths and vulnerabilities. “Within the sector, there’s good sharing of information,” says Lynch, who handles physical security for the electricity and natural gas company that serves more than 2 million customers. “If something were to happen today in an electricity or gas site in El Paso, Texas, I would know about it pretty much in real time.” But, he adds, “If something happened locally, at a chemical facility, I wouldn’t know about it at all. Which in my mind is just as important because if you’re a bad guy, you might look at multiple attack scenarios.” And while information sharing between private companies and the government has greatly improved, Lynch would like to know more from public agencies about what they perceive the top threats to be at any one time. “We can speculate that the threat is the lone offender, highly motivated and acting independently,” he says. “But we’re not really given that guidance, so each company is on its own to figure it out. That may not be the best approach.” Devon Streed, security department head for PacifiCorp, notes that unlike some other critical infrastructure sectors, utilities must protect a wide variety of environments that range from 20-story office buildings in central cities, to small distribution substations in the middle of the mountains. “Taking into consideration and coming up with security philosophies and strategies that you can implement in all of those environments is an interesting challenge,” he says. 
As a diverse industry, utilities face a dizzying array of threats from low-level copper thefts, to an active shooter taking out expensive transformers with a rifle, Streed says. As a result, companies need to adapt to a range of threats and attack vectors, and to prioritize the potential threats and resulting remedies. “I tend to shy away from measures that only address one threat,” he says. “I like to look at overlapping security measures, like intrusion detection [that works] whether for copper theft or terrorism.” Streed agrees that information sharing and collaboration is key. “It’s amazing how often I go to meetings or have conferences where I meet my colleagues and other utilities, and somebody will bring up a conflict, and everybody else at the table has gone through it,” he says. “You start trading best practices, or [information about] emerging threats.” Lynch has developed a program at DTE to deal with multiple-site attacks that cascade from one facility to the next in a planned pattern. If a bad actor sabotages one facility and then moves on to the next, or has a partner in crime waiting to do so, DTE has developed memoranda-of-understanding (MOUs) with local law enforcement that identify their most critical facilities so that if a suspicious event occurs at one of them, there is an automatic, preventative response at the others. Companies must prioritize and identify “just a handful out of thousands of facilities,” Lynch notes. “I see this as very powerful because that would prevent the scenario I described with a serial attack,” he says. “Imagine an emergency management group in a county that identifies a dozen facilities that are critical – a bridge, a tunnel, a communication building, a power plant. You’d have to have some rigor and discipline because you have to keep the number of facilities small. And then get law enforcement to agree to respond to the other ones.” There’s little additional cost to such a scenario, he says. 
“It just requires robust communication and a willingness to work together as a team. Right now, we have like 8- to 10-year-old soccer players. The whole team runs toward the ball instead of playing positions.” The DHS is drafting a cyber-incident response plan that will clarify and update the government’s role in combating potential cross-sector attacks and will also cover how industry and government can work better together, Stockton says. “This is a much needed and long overdue initiative,” he says. “It’s going to be a very important and valuable initiative if it’s done right.” Individual sectors have been gradually improving their readiness to combat both physical and cyber attacks, Stockton says, but the capabilities of adversaries also continue to become more sophisticated. “We need to accelerate progress first of all for prevention and secondly, to restore service,” he says. “Weapons keep getting more sophisticated, and there’s a greater number of actors, including potentially terrorist threats. … It’s not time to rest on our laurels.” Physical and cybersecurity threats are increasingly interlinked in a way that companies must account for and defend against, says Paul Koebbe, senior systems consultant with Faith Group in St. Louis, which mostly works with airports (about 80 percent) but also has clients in the utility and healthcare sectors. Until about a decade ago, the two were completely separate, but now physical security systems run over the network, which means they have “the same vulnerabilities as data,” he says. “The people running and maintaining those systems have to be aware of those vulnerabilities,” he adds. “If [bad actors] have the desire to penetrate into a facility, they can use a data-network vulnerability to penetrate into the security system. Whereas if they want intellectual property, they can use the security system as a front door into that.
It depends on the threat vector.” Among other angles, that means security personnel cannot simply leave in place the generic username and password for their security devices, Koebbe says. “You can go out and Google the user’s manual for any camera, any security device out there, and get the default username and password,” he says. “If I want to bust into your system, I’m going to start there and try that.” Grant Christians, CIP-physical security specialist for Georgia System Operations, which is owned by 38 electric distribution cooperatives in Georgia, works in a collaborative environment where information and best practices regarding critical assets are shared and implemented among its affiliated companies and member-owners. The new federal CIP rules that went into effect on July 1 mean that companies need to tighten their security procedures to stay ahead of fines that can be as high as $1 million per day per occurrence, Christians says. “There’s obviously a tremendous incentive to comply,” he says. “The last thing we want to do is explain to our board of directors why we were hit with a large fine due to noncompliance on our part.” Employees also sometimes let their guard down when offline, Koebbe says. Phishing sometimes takes the form of phone calls. “It would not be at all unusual to expect somebody would call a command center and say, ‘This is Jimmy Bob in IT, we need administrator rights to your security system,’ and somebody would give it to them,” he says. “Or you meet somebody at a bar, and you’re tipping a few with them, and they say, ‘Oh, yeah, I’m a network engineer, too. How do you guys do this?’ And all of a sudden, the cat’s out of the bag.” On the physical security side, to help ensure the proper safeguards are in place, Georgia System Operations has 65 manuals that cover the policies, procedures and plans for physical and cybersecurity attacks, and Christians speculates that large public utilities likely have hundreds of such documents. 
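The default-credential risk Koebbe describes can be audited with a simple inventory check; the device names and default credential pairs below are made up for illustration:

```python
# Hedged sketch of an inventory audit for the default-credential risk
# described above: flag any device still configured with a factory-default
# username/password pair. Device names and default pairs are made up.

DEFAULT_CREDS = {("admin", "admin"), ("admin", "1234"), ("root", "root")}

def flag_default_credentials(devices):
    return [d["name"] for d in devices
            if (d["user"], d["password"]) in DEFAULT_CREDS]

inventory = [
    {"name": "lobby-cam-01", "user": "admin", "password": "admin"},
    {"name": "gate-ctrl-02", "user": "ops", "password": "X9!vt2q"},
]
print(flag_default_credentials(inventory))  # ['lobby-cam-01']
```

In practice the default pairs would come from each vendor's documentation, which is exactly the point Koebbe makes: if an attacker can look the defaults up, so can the defender.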
“Georgia System Operations has a rigorous training program in place designed to familiarize employees with the latest in safety and security standards,” Christians says. “Employees who fail to go through training in the required amount of time may find their access privileges revoked.” In addition, Georgia System Operations has been putting in place cybersecurity and physical security analytics tools that will better help the company see the big picture. The company has upgraded its access control platforms through Honeywell and has multiple products with multiple video platforms. “Our efforts depend on the site we’re supporting and what we’re trying to accomplish there,” says Christians. “Some platforms use video analytics to detect personnel on the move, for instance.” PacifiCorp works to ensure that its employees are well trained and on high alert at all times, Streed says. The company developed an active shooter awareness response course last year that has been offered as a “brown bag lunch” session that more than 2,000 employees have attended. “Having people thinking about what they would do in the event of an incident gives you that edge, not just security people but the entire company,” he says. “What you can’t do is say, ‘I’m going to work in an unsafe manner, and it’s OK because there’s a safety department that’s going to protect me.’ … It’s about getting people to realize they have a stake in this.” In terms of equipment, Streed says the company has rolled out biometric badges from Zwipe at certain locations and access points, which have worked with PacifiCorp’s existing servers. “That was a pretty big selling point for us,” he says of the system’s adaptability. PacifiCorp uses ground-based radar and is investigating newly developing thermal video intrusion detection functionality, he says. But the diversity of environments requires a range of solutions, Streed says.
“Technologies that we’ve used to great effect that work well in a rural environment, where we can look outside the fence for hundreds of yards and see somebody approaching, don’t work so well in an urban environment with joggers on the sidewalk, and cars, and people,” he says. “It’s not a one-size-fits-all. And then you have to get it all to integrate and come back to a monitoring center, so security personnel can respond if there’s an incident.” DTE undertakes regular employee trainings, as well as tests and exercises that consider not just preventative security but also resiliency when an attack does occur, Lynch says. For example, a company could protect a critical facility with expensive, hard-to-acquire pumps with guns, guards, gates and other access controls – but then ensure resiliency by having such pumps strategically placed so they can be moved from one location to another, or potentially shared between facilities, in the event of a successful attack. Whatever equipment a company deploys, security personnel need to accept the fact that there will always be false alarms but need to ensure that the nuisance rate is under control, Koebbe says. “You can stack systems in such a way that you minimize the nuisance,” he says. “But it would be poorly advised for the owner to think that they’re going to get away from all nuisance alarms.” To keep them under control, Koebbe advises making sure that fence areas are unencumbered by vegetation that might set off alarms. This is more important for the utility sector than, say, airports because of the widely dispersed facilities and resulting reliance on local law enforcement, he says. “Aviation is going to have a security force relatively available within five to 10 miles, as opposed to a utility environment with hundreds of square miles,” he says.
“Local law enforcement is not going to be happy to be responding to false alarms on a regular basis.” Cameras and other equipment are necessary to track these incursions until a human can arrive on the scene, Koebbe says. “To have to roll an officer on an immediate need basis to some place that’s four miles away – there is no such thing as ‘immediate’ unless the officer happens to be on his rounds and in the vicinity of the event,” he says. Fifteen years ago, utility and other critical infrastructure companies mostly concerned themselves with nuisance issues like vandalism or trespassing, or occasionally workplace violence, Lynch says. Terrorism might not even have appeared on the list. That’s all changed. “I don’t think any of us can afford to think we have a check mark when it pertains to security,” he says. “We’ve made great strides, but there’s much more work to do.” Christians believes utility and other critical infrastructure companies are becoming increasingly aware of the array of threats, and they’re becoming more open to exploring new solutions. “The topic most recently discussed at our board meeting and at the board meetings of our sister companies concerns the protection of cyber infrastructure,” he says. “You can never get too confident about what you’re doing, but we think we’re in a pretty good place. At the same time, we remain vigilant. Someone is always trying to figure out how to get around what we’ve done. We just have to try to stay ahead of them.”
Chapter 3, titled “The Metamorphoses of Capital,” examines how the nature of wealth has changed over the past couple of centuries. I broke the chapter into two parts, where part 1 focuses on private wealth, while part 2 focuses mainly on public wealth and debt. Thomas Piketty starts off by defining public wealth as falling into two categories: assets which are owned by the government and used by itself or by the public (buildings, roads), and “financial” assets of the type that individuals also often own (for example, partial ownership of private corporations, or foreign assets). The line between these categories is blurry, as government-owned firms can be privatized; similarly, it can be extremely difficult to price a road or a park precisely. However, Thomas Piketty’s key point is that net public wealth (assets minus debts) is very small compared with private wealth. “At present, the total value of public assets (both financial and non-financial) is estimated to be almost one year’s national income in Britain and a little less than 1.5 times that amount in France. Since the public debt of both countries amounts to about one year’s national income, net public wealth (or capital) is close to zero.” (page 124) Since, as the table above shows, net private wealth is almost six years of national income, “Regardless of the imperfections of measurement, the crucial fact here is that private wealth in 2010 accounts for virtually all national wealth in both countries: more than 99% in Britain and roughly 95% in France, according to the latest available estimates. In any case, the true figure is greater than 90%.” Thomas Piketty looks at British and French debt over the past few hundred years. I’ll focus on the British case, as it’s more extreme and clearer than that of the French, though they follow a similar pattern.
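The accounting identity behind these figures is simple, and can be checked with the chapter's rough numbers (expressed in years of national income; the values are approximations, not Piketty's exact estimates):

```python
# The accounting identity behind the figures above, with everything expressed
# in years of national income. The inputs are the chapter's rough numbers for
# Britain around 2010, not precise estimates.

def net_public_wealth(public_assets, public_debt):
    return public_assets - public_debt

def private_share(private_wealth, public_assets, public_debt):
    national = private_wealth + net_public_wealth(public_assets, public_debt)
    return private_wealth / national

# Public assets ~1 year of income, public debt ~1 year, private wealth ~6 years:
print(net_public_wealth(1.0, 1.0))   # 0.0
print(private_share(6.0, 1.0, 1.0))  # 1.0: private wealth is essentially all of it
```

With net public wealth near zero, private wealth mechanically accounts for virtually all national wealth, which is the "more than 99% in Britain" result quoted above.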
Britain took on enormous debt in the late 18th and early 19th centuries, financing its wars (the Seven Years’ War, the American Revolution, the Napoleonic Wars) primarily through borrowing. Although no country has sustained debt levels as high as Britain’s for a longer period of time, Britain never defaulted on its debt. Indeed, the latter fact explains the former: if a country does not default in one way or another, either directly through simply repudiating its debt or indirectly through high inflation, it can take a very long time to pay off. (page 129) It is quite clear that, all things considered, this very high level of public debt served the interests of the lenders and their descendants quite well, at least when compared with what would have happened if the British monarchy had financed its expenditures by making them pay taxes. (page 130) The central fact–and the essential difference from the twentieth century–is that the compensation to those who lent to the government was quite high in the nineteenth century: inflation was virtually zero from 1815 to 1914, and the interest rate on government bonds was generally around 4-5 percent: in particular, it was significantly higher than the growth rate. Under such conditions, investing in public debt can be very good business for wealthy people and their heirs. (page 131) The taxes-vs-borrowing tension has always existed. During World War I, socialists criticized U.S. War Bonds, saying that the war should be funded through taxation. The distinction is quite simple: when a government borrows money from the wealthy, it gets the money now but pays them back with interest. Taxation means the government gets money right away, but does not have to pay anyone back. 
Note, however, that historically taxation doesn’t directly hit wealth (it is focused on income, plus the property tax, which only affects land and buildings and ignores other sources of wealth)–which is a larger figure than yearly income, as we have seen–so unless that is done, borrowing gives the government access to more funds than taxation. One could also argue that borrowing is efficient, insofar as those with liquid assets are more likely to buy bonds while those whose assets are less liquid can hold off. In the twentieth century, a totally different view of public debt emerged, based on the conviction that debt could serve as an instrument of policy aimed at raising public spending and redistributing wealth for the benefit of the least well off. The difference between (this and the former view of public debt helping the wealthy) is fairly simple: in the nineteenth century, lenders were handsomely reimbursed, thereby increasing private wealth; in the twentieth century, debt was drowned by inflation and repaid with money of decreasing value. (page 132) Keynesianism, which I teach about in my economics class, reflects the twentieth-century view of debt. I’d also add that it’s viewed as a way to increase economic stability. Beyond that, the logic of the above statement is quite self-evident. The final bit of the chapter offers a good critique of “Ricardian equivalence,” or the theory that public debt doesn’t affect national wealth if held by citizens of that country because the people literally owe it to themselves. Since the 1970s, analyses of the public debt have suffered from the fact that economists have probably relied too much on so-called representative agent models, that is, models in which each agent is assumed to earn the same income and to be endowed with the same amount of wealth (and thus to own the same quantity of government bonds). 
Such a simplification of reality can be useful at times in order to isolate logical relations that are difficult to analyze in more complex models. Yet by totally avoiding the issue of inequality in the distribution of wealth and income, these models often lead to extreme and unrealistic conclusions and are therefore a source of confusion rather than clarity. In the case of public debt, representative agent models can lead to the conclusion that government debt is completely neutral, in regard not only to the total amount of national capital but also to the distribution of the fiscal burden. This…fails to take into account the fact that the bulk of public debt is in practice owned by a minority of the population (as in nineteenth-century Britain but not only there), so that the debt is the vehicle of important internal redistributions when it is repaid as well as when it is not. (page 136) Another reason debt matters, which Thomas Piketty does not go into here, is that there’s definitely an upper limit on how much tax revenue, at least tax revenue on income, governments can collect without making the economy function less efficiently as increased efforts are directed toward tax avoidance. In that sense, too high a public debt means that, after interest has been paid, there is less money for all the dynamic responses to challenges which governments need to rise to.
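A quick back-of-the-envelope sketch (my own illustration, not Piketty’s) shows why bond yields above the growth rate mattered so much: compounding at the quoted 4-5 percent for a few decades, against near-zero growth, multiplies bondholder wealth several times over relative to national income.

```python
# Illustrative only: compare a bondholder's compounding wealth with a
# slowly growing national income. The 4.5% rate is the midpoint of the
# 4-5% range quoted above; the 1% growth rate is an assumed figure.
def compound(principal, rate, years):
    """Grow a quantity at a fixed annual rate."""
    return principal * (1 + rate) ** years

r = 0.045   # assumed nineteenth-century bond yield
g = 0.01    # assumed growth rate of national income (illustrative)
years = 50

bond_wealth = compound(100.0, r, years)       # index starting at 100
national_income = compound(100.0, g, years)

print(round(bond_wealth))      # → 903
print(round(national_income))  # → 164
```

With these assumed numbers, bondholder wealth grows roughly ninefold over fifty years while national income grows by about two-thirds — the mechanism behind “very good business for wealthy people and their heirs.”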
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9418638348579407, "language": "en", "url": "https://cities-today.com/montreal-to-apply-a-climate-test-to-all-city-decisions/", "token_count": 840, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.01953125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2925a4fc-9478-4689-885a-da2e866d71a4>" }
Montreal’s new climate plan to 2030 outlines 46 actions around cleaner transport, urban greening and more energy-efficient buildings. It includes a commitment to apply a ‘climate test’ to all city decisions, from finance and infrastructure to public policy. The city will also allocate 10 to 15 percent of the ten-year capital expenditures programme budget to climate change adaptation. The Climate Plan 2020-2030 aims to support the city to reduce its greenhouse gas (GHG) emissions by 55 percent (from 1990 levels) within ten years and become carbon neutral by 2050. The plan states: “The health crisis triggered by COVID-19 has highlighted the importance of resilience to ensure the wellbeing of cities’ residents and the vitality of businesses and infrastructure. “The ultimate goal of this plan is to increase the community’s resilience and capacity to adapt to climate hazards, environmental disruptions and potential pandemics that could once again cause havoc in our society.”

Mobility and buildings

Road transportation is the largest source of GHG emissions in Montreal, accounting for around 30 percent. According to the city, integrating the Climate Plan’s targets into urban and mobility planning to inform neighbourhood policy decisions could contribute to a 50 percent reduction in GHG emissions from road transportation. Plans include converting parking lots in some areas into open spaces and planting 500,000 trees. To meet the goal of reducing the share of solo car trips by 25 percent, Montreal will develop public and active transport in all districts and promote car-sharing, taxi use and carpooling. It has also proposed a zero-emissions zone downtown. Buildings generate 28 percent of the emissions in Montreal. To reduce this, the city will shift to renewable energy sources and eliminate the use of heating oil in buildings, which it says would reduce emissions by five percent. 
Other measures include: adapting bylaws and support programmes to boost the energy efficiency of buildings; designing a funding programme for property owners to support environmentally friendly renovation work; and bringing in a system of rating and disclosure for the energy consumption and GHG emissions of buildings.

Starting at city hall

The city has pledged to lead the way by converting all its municipal-owned buildings so they produce net zero carbon emissions, starting with the renovation of city hall. It will also “decarbonise the business travel of city staff” and encourage the use of sustainable transport modes for employee commutes. The city has 28,000 employees and said that municipal activities account for less than two percent of overall emissions. The plan also highlights strategies to get citizens and businesses on board, such as supporting companies to adopt emission-free delivery services – as is being piloted in the Colibri project, for example. Montreal will work to stimulate the circular economy by creating networks among businesses and community organisations, with a particular focus on reducing food and textile waste. Canadian environmental non-profit Équiterre called the plan “innovative”. “There are a lot of interesting measures in the plan, but we are particularly pleased to see that a climate test will be imposed on all of the city’s decisions. It sets a standard for all public administrations,” commented Marc-André Viau, Équiterre’s Director of Government Relations. Équiterre also said that dedicating budget to the programme “sends a signal that this is not only a political but also a budgetary priority”. The organisation noted, though, that it is concerned about Montreal’s ability to meet its reduction targets. “Its plan for reductions up until 2030 is unclear and the roadmap to 2050 remains to be defined,” it said in a statement. Montréal will publish an annual report about progress on the actions outlined in the plan. 
Climate Action network C40 Cities recently listed Montreal among 54 cities on track to keep global heating below 1.5°C as per the Paris Agreement. Global cities’ climate plans were analysed by C40’s Deadline 2020 programme.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9187518954277039, "language": "en", "url": "https://essaysamurai.co.uk/paper-4/", "token_count": 3020, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.01513671875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5fd386fb-74a9-4065-9259-b9eef2025c36>" }
ABSTRACT: WidgeCorp’s management team had a lack of understanding of some of the key multivariate statistical techniques used by many companies to measure how variables react with one another. This paper will discuss how three of these techniques are commonly used and provides a recommendation for the company to use as it moves forward with research and development of new products. This paper also compares and contrasts the different multivariate techniques. KEYWORDS: multivariate techniques, Chi-Square Test, multidimensional scaling

There are many different multivariate techniques commonly used in businesses across the world. This paper will compare three commonly used techniques: factor analysis, multidimensional scaling, and cluster analysis. Additionally, I will provide my recommendation for WidgeCorp to follow as we move forward and dive into the cold beverage market. To begin, it is important to have a clear understanding of why and how a company will use multivariate techniques as part of its research. The term multivariate technique is somewhat of a blanket term which includes many different techniques used by statisticians and researchers in many different fields, (Dayton, 2012). Multivariate techniques allow companies to perform research on more than one variable to determine if there is a relationship between them. For many companies, multivariate techniques are used to effectively measure quality and safety, (Yang, 2010). WidgeCorp will be able to use each of the techniques as we move forward with our new business ventures into the cold beverage market.

Factor Analysis: Factor analysis is one of the many techniques that can be used in different types of research projects. Factor analysis is most often used to compare variables which have a correlation to other confounding variables, (Dayton, 2012). 
Factor analysis will prove helpful after we have developed our products and are testing the new beverages in different markets. As an example, we could test the hypothesis that WidgeCorp’s new line of cold beverages burns more calories than our competitor Gatorade’s line of cold beverages. The observed variable would be whatever ingredient in the beverage helps to burn calories. The confounding variable could be the level of activity of those participating in the study. As part of my research for this project, I looked into several companies who use factor analysis as part of their research efforts. Companies like Twitter, Facebook, and other social media outlets have been using factor analysis to help them find the hottest trend, (Du, 2012). These companies generally use a five-step process to help them find the hottest trends. The first step is initial research used to gather data. The second step involves finding key trends or factors. The third step involves defining and interpreting the latest trends, (Du, 2012). The fourth step involves turning the trends/factors into variables. The final step entails projecting how successful the trends will become. By using the factor analysis method, social media outlets are able to successfully be a part of the most trendy new products and services used by consumers across the globe.

Cluster Analysis: Cluster Analysis is another technique that WidgeCorp will likely use as part of our cold beverage research. Cluster Analysis lumps groups of related characteristics together, (Dayton, 2012). Cluster Analysis would be most helpful to WidgeCorp in the beginning stages of the research process. Cluster Analysis uses many different mathematical methods to help determine statistical significance. WidgeCorp will be able to use Cluster Analysis as we dive into market research. We will use Cluster Analysis to determine which populations of people we should focus our marketing efforts on. 
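As a hedged illustration of the idea — the purchase data and the tiny one-dimensional k-means routine below are invented for this paper, not drawn from any of the cited studies — grouping consumers by purchasing behaviour might look like this:

```python
# A minimal k-means sketch in plain Python (no ML library assumed),
# clustering hypothetical consumers by weekly beverage purchases so
# that each group can be marketed to separately.
def kmeans_1d(values, k, iters=20):
    """Cluster 1-D values into k groups around evolving centroids."""
    # Seed centroids by sampling the sorted values at even intervals.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

purchases = [1, 2, 2, 3, 9, 10, 11, 24, 25, 26]  # drinks bought per week
centroids, clusters = kmeans_1d(purchases, k=3)
print(sorted(round(c) for c in centroids))  # → [2, 10, 25]
```

With these invented numbers, the routine recovers three natural groups — occasional, regular, and heavy buyers — which is exactly the kind of segmentation the marketing discussion above relies on.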
When researching Cluster Analysis for this presentation, I came across a few examples of companies that used the Cluster Analysis technique, (Downes, 2012). The most impressive example came from a market research firm which used Gmail to advertise and market to its subscribers. Gmail, a subsidiary of Google, is able to track consumer data with every click that a consumer makes with their mouse as they cruise the internet. Market research firms collect data daily about consumers. They then make note of the buying and internet surfing trends of consumers. They use cluster analysis by putting the clusters, or groups of consumers with similar trends, together and then marketing new products or services to them, (Downes, 2012).

Multidimensional Scaling: Multidimensional Scaling is another multivariate technique WidgeCorp could use while doing research. Multidimensional Scaling is the most abstract of the multivariate techniques. While abstract, it was the easiest for me to comprehend. Multidimensional Scaling has two main objectives: to find a pattern somewhere in the data collected, and to present it visually for all to understand, (Wilkes, 1977). To visually display the data, Multidimensional Scaling places the data retrieved onto a three-dimensional plane. It is particularly useful when dealing with many different variables and allows the reader to see a visual representation of how they relate to one another. Multidimensional Scaling is often used to test both the quality and safety of consumer products, (Yang, 2010). When researching the different multivariate techniques, I found some practical applications of the Multidimensional Scaling method. The most interesting application I found was relating to international bank failures. Researchers collected data about 66 different Spanish banks and used Multidimensional Scaling as a predictor of their financial stability, (Cinca, 2001). 
The research measured the financial liquidity of banks and compared it between the banks that failed and the banks that were still in business. Another interesting Multidimensional Scaling application I found involved the testing of air fresheners: Multidimensional Scaling was used to compare some of the features of the different air fresheners separately while seeing whether there were commonalities between different brands. Our group decided that multidimensional scaling would be the best method for WidgeCorp to use as we move forward and dive into the cold beverage market. When testing the safety, quality, and consumer likability of a product, it makes the most sense to use the multidimensional scaling technique. Not only will the technique allow us to see visually how the variables relate to one another, but it also allows additional variables to be tested. We can keep the dependent variables constant and change the independent variables as our research evolves. One of the main reasons Multidimensional Scaling should be used is that it will be easier to understand by people who have not been exposed to statistical research. For many members of our management team, statistics is a foreign concept. By using the Multidimensional Scaling technique, we will be able not only to research the statistical significance but to present it in a manner which will be easily understood by our management team. We can then compare the results of the different tests we have conducted to see which has stronger statistical significance. For example, we can keep the same basic ingredients in our cold beverages, but just slightly change the color, the flavor, or both, and measure the consumers’ response to the slight changes. While conducting the research, we can collect consumer data such as age, gender, occupation, education, and how often consumers purchase cold beverages. 
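To make the idea concrete, here is a minimal sketch — all beverage names and scores below are invented. Multidimensional Scaling starts from a matrix of pairwise dissimilarities between the items being compared; an MDS routine would then embed that matrix onto the plane described above, placing similar items close together.

```python
# Hypothetical sketch: build the dissimilarity matrix that feeds MDS.
# Each invented beverage is scored (1-10) on sweetness and colour
# intensity; Euclidean distances between the profiles form the matrix
# an MDS routine would then embed in two or three dimensions.
import math

profiles = {
    "WidgeCorp A": (7, 4),   # (sweetness, colour) -- invented scores
    "WidgeCorp B": (6, 5),
    "Competitor": (3, 8),
}

names = list(profiles)
dissimilarity = {
    (a, b): math.dist(profiles[a], profiles[b])
    for a in names for b in names
}

# The most similar pair should sit closest together in the MDS map.
closest = min(((a, b) for a in names for b in names if a < b),
              key=lambda p: dissimilarity[p])
print(closest)  # → ('WidgeCorp A', 'WidgeCorp B')
```

The two WidgeCorp variants end up nearest each other, so in the visual map management would see them plotted side by side, with the competitor’s product further away.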
We will be able to create three-dimensional planes to see how different combinations of the consumer data we have collected affect how much consumers like the color, flavor, or color and flavor of our new line of cold beverages. By adding a visual component to our research, we will potentially be able to visualize new relationships between the variables we are testing.

Hypothesis Testing and Multidimensional Scaling: When in the research and development stage of our new cold beverage line, it is important that we are able to successfully test our hypothesis. We will test our initial hypothesis, which states that WidgeCorp’s new line of cold beverages burns more calories than a competitor’s line of cold beverages, by using the Chi-Square Test. We can develop this research further by creating a multinomial experiment, testing data in more than two categories, (Bowerman, 2012). We will do many studies to determine how successful our new beverages are in burning calories. We will test them amongst many populations including children, teenagers, and young adults. Our study will also compare results based upon gender, education, and occupation. We will also factor in levels of activity: no activity, moderate activity, and extreme activity. Essentially, we will be testing how effective our new beverages are in burning calories amongst many different populations. To effectively test our hypothesis, it is important that we have a significant number of willing participants in the study. We will need to find equal numbers of people willing to participate in the different categories we are testing. If we have too few test subjects, any statistical significance found will not be taken seriously by members of our research team. Additionally, it could harm the integrity of our company and could tarnish our reputation with the general public. The best way to test our hypothesis will then be to use the Chi-Square Test. 
The Chi-Square test begins with a contingency table with as many rows and columns as there are variables to test. We can use contingency tables to test many different variables. Below, I have created a basic contingency table which will compare the total additional calories burned after drinking the WidgeCorp and Gatorade beverages. The contingency table below shows females only and would be repeated with males. The data could be combined on the same contingency table or placed on a separate one. In this scenario, we are testing age, gender, and activity level and comparing the results between our beverage and that of our competitor. The Chi-Square test will examine the difference between the calories burned by the WidgeCorp beverage and the Gatorade beverage to determine if it is statistically significant, (Berenson, 2010). The statistical significance will signify two important pieces of information: which of the beverages helps burn the most calories, and whether the difference in calories burned is statistically significant.

| Group | Calories: WidgeCorp | Calories: Gatorade |
| --- | --- | --- |
| Female age 6-12 (sedentary) | | |
| Female age 6-12 (moderate) | | |
| Female age 6-12 (extreme) | | |
| Female age 12-16 (sedentary) | | |
| Female age 12-16 (moderate) | | |
| Female age 12-16 (extreme) | | |
| Female age 16-20 (sedentary) | | |
| Female age 16-20 (moderate) | | |
| Female age 16-20 (extreme) | | |
| Female age 20-24 (sedentary) | | |
| Female age 20-24 (moderate) | | |
| Female age 20-24 (extreme) | | |

If we are able to prove that our new cold beverage line actually does burn more calories than our competitor Gatorade’s, we will likely see our competitor attempting to test our hypothesis. They will likely claim our hypothesis is false and thus test the null hypothesis. The null hypothesis will state that WidgeCorp’s line of new cold beverages does not burn more calories than Gatorade’s. The subsequent alternative hypothesis will state that WidgeCorp’s line of new cold beverages does burn more calories than Gatorade’s. 
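The mechanics of the Chi-Square test of independence can be sketched in a few lines. The counts below are invented purely to show the calculation — they are not results from any actual study:

```python
# A hand-rolled chi-square test of independence on an invented 2x2
# table: rows are the two beverages, columns count participants who
# did / did not burn extra calories. Data are made up for illustration.
def chi_square(table):
    """Chi-square statistic for a 2-D contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            expected = rows[i] * cols[j] / total
            stat += (obs - expected) ** 2 / expected
    return stat

observed = [[60, 40],   # WidgeCorp: burned / did not burn (invented)
            [45, 55]]   # Gatorade
stat = chi_square(observed)
print(round(stat, 3))  # → 4.511
# df = (2-1)*(2-1) = 1; the critical value at alpha = 0.05 is 3.841,
# so with these invented counts we would reject the null hypothesis.
print(stat > 3.841)    # → True
```

In practice a library routine (e.g. one that also returns the p-value) would be used, but the expected-count arithmetic above is the whole of the test.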
Our competitors will use the Chi-Square method to determine if our original hypothesis is false. Once our data has been collected and analyzed, it can be presented in a three-dimensional model to help present and organize the data in a visual manner. We will be able to see additional relationships when plotting all of the data together on the same three-dimensional plane.

Commonalities of the Different Techniques: The main commonality among the different techniques discussed in this paper is that they all deal with multiple variables and thus are all multivariate techniques. Each method has its place within the realm of research, and it is likely that WidgeCorp will use all three. All three techniques can use the Chi-Square test to test the validity of the hypothesis being examined.

Differences of the Multivariate Techniques: The major difference to note about the different techniques is how they look at relationships between variables. Multidimensional Scaling differs from the other techniques the most in how the data is presented visually: it uses a three-dimensional plane to display the relationships between variables. The cluster analysis method looks to see if there are “clusters,” or groups of data which are clumped together, to denote any commonalities between the results. Factor Analysis looks to compare how two different types of variables relate to one another. Multidimensional Scaling focuses mainly on commonalities, but looks to define the commonalities on a three-dimensional plane. To conclude, while the different multivariate techniques each have a valuable place, for our purposes the Multidimensional Scaling technique will prove the most beneficial. While all techniques are similar in that they work with multiple variables, the approaches differ. Upon reading this paper, the management team at WidgeCorp should have a sound understanding of the different multivariate techniques.

References: Berenson, M.
, Krehbiel, T., & Levine, D. (2010). Business Statistics: A First Course. Upper Saddle River, NJ: Prentice Hall.
Borgatti, S. (1997). Multidimensional Scaling. Retrieved 09-09-2012 from http://www.analytictech.com/borgatti/mds.htm.
Cinca, C., & Molinero, C. (2001). Bank failure, a multidimensional scaling approach. European Journal of Finance. 7(2)18.
Dayton, D. (2012). Multivariate statistics. Retrieved 09-23-2012 from https://campus.ctuonline.edu
Downes, L. (2012). Customer intelligence, privacy, and the “creepy factor”. Harvard Business Review. Retrieved 09-09-2012 from http://blogs.hbr.org/cs/2012/08/customer_intelligence_privacy.html.
Du, R., & Kamakura, (2012). Qualitative trend spotting. Journal of Marketing Research. 49(4)22.
Keough, M., & Quinn, G. (2001). Design and Analysis for Biologists. Retrieved from http://bio.classes.ucsc.edu/bio286/MIcksBookPDFs/QK18.PDF.
Yang, Z., & Yingwei, Z. (2010). Process monitoring, fault diagnosis and quality prediction methods based on multivariate statistical techniques. IETE Technical Review. 27(5)14.
Wilkes, R. (1977). Product positioning by multidimensional scaling. Journal of Advertising Research. 17(4)5.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9457066059112549, "language": "en", "url": "https://fillyourplate.org/blog/what-factors-impact-food-prices-part-2/", "token_count": 644, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.365234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:65c11f4b-6341-4a5e-ad99-4fabead2b991>" }
In today’s world of 24-hour news and possible calamities around every corner, consumers need to understand how different factors actually affect food prices so that they can make good decisions. Misinformation and hype can lead to bad decision making at every level of our society. For example, let’s pretend that all the news channels report tomorrow that the price of chicken is going to skyrocket if the drought continues this year. Hearing this, people run out and buy more chicken than they normally would, which causes a sharp increase in demand. Higher demand equates to higher prices in our market economy and, as predicted, the price of chicken increases. But this increase had nothing to do with the drought, because decreases in available supply resulting from the drought won’t actually be seen for months. If you don’t understand which factors impact the prices you pay for food, you might find yourself filling your freezer with chicken, or bacon, or corn in order to avoid price increases that never come. In the first part of this series, we talked about how production and weather impact the price of our food. Now, let’s look at another of the factors that we don’t hear about as much but that can actually have a more substantial impact on how much it will cost to fill our plates. The food marketing system is how the food our farmers grow gets from their farm to our plate. It is believed to be the largest non-government sector of employment in the country and encompasses all the activities that transform, transport, and package the food we eat. Depending on the product, this system can involve a wide range of companies, numerous hand-offs, and considerable expense. The food marketing system includes manufacturers, wholesalers, and retailers, accounting for more than 85% of the price we pay at the grocery store. There are several different pieces to the food marketing puzzle that contribute to the overall cost of our food. 
These pieces include manufacturing, processing, packing, transportation, energy, and sales. Across all of these sections, labor costs account for the largest percentage of the price we pay at the store or restaurant, about 38%. Packing and transportation account for another 12%, and the remainder goes towards energy, advertising, business expenses like rent and depreciation, and company profits. If you consider that food marketing, which is everything that happens after a food product leaves the farm, accounts for such a significant amount of our food prices, it is easier to understand why increased costs at the farm level don’t necessarily equate to significant increases in the prices we pay at the store. The price of gas is also often blamed for rising food costs. But, when looked at in this context, the price of gas could double and that would only increase the cost of food by about 4%. This complexity is why we cannot use one or even two factors to predict price volatility.
- What Factors Impact Food Prices- Part 1 (fillyourplate.org)
- Classic Thanksgiving Dinner Cost Decreases 5% in 2012 (fillyourplate.org)
- Good News for Holiday Food Shopping: Food Prices Down (fillyourplate.org)
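The back-of-envelope arithmetic behind that roughly 4% figure can be sketched as follows. The 12% packing-and-transportation share comes from the article above; the fuel share of transport cost is my own assumption for illustration, not a number from the article:

```python
# Back-of-envelope pass-through sketch: if only fuel costs change, the
# retail food price moves by (transport share) x (fuel share of
# transport) x (fuel price change). The 0.35 fuel share is an invented
# assumption chosen for illustration.
transport_share = 0.12            # packing + transportation share of retail price
fuel_share_of_transport = 0.35    # assumed, not from the article

def retail_price_increase(fuel_price_multiplier):
    """Fractional rise in the retail food price if only fuel costs change."""
    extra_fuel_cost = (fuel_price_multiplier - 1) * fuel_share_of_transport
    return transport_share * extra_fuel_cost

# Doubling the price of fuel:
print(round(retail_price_increase(2.0) * 100, 1))  # → 4.2 (percent)
```

Under these assumptions a doubling of fuel prices moves retail food prices by only a few percent — the same order of magnitude the article describes — because fuel is a small slice of a small slice of the final price.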
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9531048536300659, "language": "en", "url": "https://www.ribaj.com/products/can-we-make-zero-carbon-in-use-a-reality", "token_count": 1675, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f507a346-b7f6-44ee-97c0-7ff8839d2390>" }
Despite umpteen regulations and rulings, the performance gap on sustainable buildings remains too large. Is there a solution? In recent years we have seen a plethora of legislation and incentives to improve the energy performance of buildings and it is hardly surprising that built environment professionals can find it challenging just to keep track of compliance requirements. Are these regulations achieving the expected reductions in energy use – and what is the cost? We’ve had changes in Building Regulations; the Green Deal; boiler replacement schemes; the CRC Energy Efficiency scheme; the Renewable Heat Incentive; Enhanced Capital Allowances, and Feed-In Tariffs and more. How effective have they been? Are the 2020 ‘nearly zero’ targets for new buildings realistic, and how do built environment professionals feel about them? A fundamental feature of current legislation is the lack of a feedback mechanism at both building and stock level. Determining the impact of legislation on building performance is not straightforward and is the subject of several PhD and post-doctoral research studies. Early results are not encouraging. Recent studies, such as Innovate UK’s Building Performance Evaluation (BPE) programme and UCL research into the impact of large-scale fabric and boiler improvements in housing, have gathered important evidence about lower than expected performance improvements in use from typical efficiency measures. The BPE programme offers insights into excessive costs and potential productivity losses associated with the performance gap that dwarf the cost of excessive energy bills. Using crowd-sourced data, the RIBA/CIBSE CarbonBuzz platform has demonstrated a 1.5-2-fold difference between calculated and achieved energy use in the education and office sectors. This is worrying enough for the RIBA Sustainable Futures Group and Architects’ Council of Europe to make the ‘building performance gap’ a key priority. 
The European Commission is also looking to study further how low impact buildings, certified according to existing schemes, perform in reality.

Legislative drivers and change

Finding ways to achieve drastic improvements in building performance is imperative. In 2008 the UK government signed up to a legally binding target of an 80% reduction in CO2 emissions by 2050 compared to 1990 levels (34% by 2020). Its 2011 Carbon Plan aims to reduce emissions from all UK buildings to ‘close to zero’ by 2050 – a reduction of 24-39% from 2009 levels by 2027. The EU Energy Performance of Buildings Directive (EPBD) and Energy Efficiency Directive (EED) have been updated, requiring the EU to implement a 40% reduction in emissions below 1990 levels by 2030 and for nation states to increase energy efficiency by at least 27%. Buildings account for around 45% of our total annual emissions, with 25% of these coming from homes. Energy efficiency improvements in buildings offer the most promising area for regaining growth in the construction sector. But achieving these goals is a challenge. The EPBD requires Energy Performance Certificates (EPCs) for all buildings. From 2018 no building with a rating below E can be let, and there are indications that this is a major driver for landlords. However, growing scepticism surrounds the relationship of EPCs to actual performance. The cost of improving on an EPC rating is relatively low, but several recent studies have found no relationship between EPCs and operational energy use (JLL, TSB BPE, etc.). The 2010 recast of the EPBD requires member states to ensure that after 31 December 2018 all new buildings occupied and owned by public authorities are nearly-zero energy buildings (nZEB), a demand covering all new buildings by 31 December 2020. 
The definition of 'nearly zero' is up to member states, and in the UK the debate around what constitutes an nZEB has been led by the Zero Carbon Hub for housing and the UKGBC for non-domestic buildings. The UK government has committed to meeting nZEB targets for new housing by 2016 and for non-domestic buildings by 2019. Yet there is still no final conclusion on what the 'minimum on-site carbon emissions threshold' might be, or on the definition of allowable solutions which permit remaining emissions to be 'tackled through nearby or remote measures'. As countries like Belgium mandate all new buildings to comply with the highly credible Passivhaus standard, British lawmakers appear to be lowering standards. The right legislative framework is essential to improve building performance – and building regulations have repeatedly been shown to be the most effective way to deliver a step change in construction practice. What's missing is embedding feedback from completed buildings – the disclosure of predictions as well as performance in use.

A way forward

Seven detailed studies undertaken by AHR as part of Innovate UK's BPE programme have highlighted the unintended consequences of the existing EPBD-driven building regulations compliance process. As Part L only requires the performance evaluation of a building under standardised conditions, risk factors relating to construction and building operation cannot be considered and addressed at design stage, nor will solutions be incorporated into contracts and specifications. Likewise, a compliance calculation cannot act as a basis for comparing design-stage predictions with actual performance, as a full energy use forecast is required to diagnose any problems post-completion. Having studied the consequences, BPE participants have started to build on the lessons learned. AHR's freshly completed design for the Bath and North East Somerset council offices and civic centre targeted operational energy use from the outset.
Gaining the Display Energy Certificate (DEC) A rating was part of the client's brief, and the team came up with a novel approach. A building 'energy budget' was developed early on, accompanied by a risk register listing every aspect of the design that contributed to the energy rating. Updated at key RIBA Plan of Work stages, the energy budget and the risk register were incorporated in the contractor's prelims. The contractor (Willmott Dixon) in turn signed up to the delivery of the DEC A rating along with the requirement to measure and benchmark the building's energy use on a monthly basis, following handover, during the first year of the building's operation. This not only delivered innovative architecture but helped eliminate many of the usual problems that arise from the value engineering of critical elements or poor commissioning. Building features relating to the long-term resilience of the building were retained, such as passive ventilation, floor-to-floor heights, vent voids, thermal mass, window specification, etc. Significantly, the project team agreed that, as the design exceeded BREEAM energy-related criteria, certification was not needed. The process set out by the team facilitated collaborative working to share all energy-related data and to recover situations that might in different circumstances have led to adversarial action. If the project performs to expectations after year one, it will exceed building regulation requirements and operate over and above nZEB targets. It would also demonstrate that setting the right KPIs and opting for measurement, verification and disclosure could achieve better than nZEB performance in use and significantly reduce regulatory burden. With EU member states actively seeking a lighter legislative touch, this project has attracted the attention of organisations such as the Architects' Council of Europe, the European Commission and UK government departments.
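The measure-and-benchmark routine described above can be sketched in a few lines of code. This is an illustrative sketch only: the function, the monthly figures and the 40,000 kWh budget are assumptions for the example, not AHR's actual method or data.

```python
def performance_gap(predicted_kwh, measured_kwh):
    """Return the ratio of measured to predicted annual energy use.

    A ratio of 1.0 means the building performs exactly to its energy
    budget; CarbonBuzz-style findings correspond to ratios of 1.5-2.0.
    """
    return sum(measured_kwh) / sum(predicted_kwh)

# Hypothetical design-stage budget vs. first-year meter readings (kWh).
budget = [40_000] * 12
metered = [52_000, 50_000, 48_000, 45_000, 43_000, 41_000,
           40_000, 41_000, 44_000, 47_000, 50_000, 53_000]

gap = performance_gap(budget, metered)
print(f"Measured use is {gap:.2f}x the design-stage energy budget")
```

Tracking this ratio monthly, as the project team did, lets problems be diagnosed during the first year of operation rather than discovered in the energy bills later.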
To satisfy Article 11(9) of the EPBD recast, significant EU effort has gone into developing a voluntary common certification scheme for the energy performance of non-residential buildings. This would harmonise the rating system across member states and align the reporting of as-designed and in-use performance. If it is successful, it may be possible to extend this approach to the reporting of resource consumption in buildings too. At a recent Construction 2020 workshop the benefits of mandating disclosure, as opposed to detailed regulations, were discussed constructively. Given the simplicity of the scheme and the appetite of big business to adopt it, the detailed regulatory framework might just be superseded.

• Dr Judit Kimpian is director of sustainable architecture and research at AHR, chair of the Architects' Council of Europe sustainability group, and leads CarbonBuzz
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9511300325393677, "language": "en", "url": "https://catchshareindicators.org/discarding/", "token_count": 363, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.00927734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:1a092d60-12da-4619-9c65-43b3c9f40ec1>" }
What does this indicator measure?

This indicator measures the fraction of caught fish that are discarded by fishing vessels for each species. Access the West Coast Shorebased IFQ Program Interim Results and the Northeast Multispecies Sector Program Interim Results for Ecological Indicators.

Why is this indicator important?

Some catch share programs limit the amount of fish that can be brought back to shore, while others may consider discards as part of the total catch share. Either case may create incentives to discard fish that exceed the catch share, or to retain only those fish of a given species that are most valuable ("high-grading"). However, fishers can lease catch shares or join risk pools to cover their overages, which may reduce discarding. Many discarded fish die as a result of the physiological stress of being caught or from handling damage during the fishing process, so that discards are considered both economically wasteful and ecologically harmful.

How is this indicator measured?

Federal observers on board fishing vessels record the species and amount of fish caught and discarded. A simple calculation of discarded weight divided by total catch gives the annual discard fraction, or rate, in each year. This fraction can be compared from before and after the catch share programs were implemented.

What are the strengths and limitations of this indicator?

This indicator is a good measure of the amount of discarding. However, it should be noted that some management regulations are designed to ensure that total mortality (retained catch plus discard deaths) is sustainable. In other words, if discards are high, regulations could require lower amounts of landings. Where observer coverage is limited, the estimated discard fractions will be more uncertain, and the data may be biased if fishers change their fishing behavior when an observer is on board.
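The "discarded weight divided by total catch" calculation can be written out directly. The tonnage figures below are invented for illustration; real values come from the observer records described above.

```python
def discard_fraction(discarded_weight, total_catch):
    """Annual discard fraction: discarded weight divided by total catch."""
    if total_catch <= 0:
        raise ValueError("total catch must be positive")
    return discarded_weight / total_catch

# Hypothetical before/after comparison for one species (tonnes).
before = discard_fraction(120.0, 1000.0)   # pre-catch-share year
after = discard_fraction(45.0, 900.0)      # post-catch-share year
print(f"before: {before:.1%}, after: {after:.1%}")
```

Comparing the fraction across years, rather than raw discarded weight, controls for the fact that total catch also changes between years.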
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9514793753623962, "language": "en", "url": "https://thehubforstartups.com/2013/10/31/failure-is-a-part-of-entrepreneurial-journey/", "token_count": 339, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.076171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6ddc1f5e-6b93-43e7-afe1-225a7a702e66>" }
The general rule is that out of 100 new ventures, perhaps 50-60 will shut down by year two. Maybe 20-30 will survive with their heads above water, or at a lower scale than the aspiration was. Maybe 8-10 will be reasonably successful, and maybe one or two of these 100 startups will be 'very' successful. Just because a venture is not successful or shuts down does not mean that the entrepreneur has failed. It just means that this particular venture did not succeed. Simple. Of course, aspire for success. But remember, there is no shame in having tried and not succeeded. Just as everyone will advise you not to let success go to your head, remember not to let failure deter you. Understand and evaluate your appetite for risk. Not just financial risks, but opportunity costs as well. Evaluate what the upside of success is and measure it against the risks. See if it makes sense. More importantly, DO NOT start up on the basis of just your enthusiasm. Validate the concept with your potential customers/consumers, seek mentors who can guide you, seek advice and guidance in building a good business plan and see if the concept has a good business case. Remember, entrepreneurs are NOT people who take unnecessary or unplanned risks. Good entrepreneurs make efforts to evaluate all the risks associated with a venture and take necessary steps to mitigate them. Yet, you can fail. And it is all right. Plan for how you will deal with failure too. Failing or shutting down is not the end of your professional or your entrepreneurial journey. It just means that there could be a diversion from the originally intended path.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9561331868171692, "language": "en", "url": "https://www.businesschief.asia/technology/how-bright-future-solar-power-australia", "token_count": 615, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0191650390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5b61cded-20b3-484e-9ee8-e0a6c9afe8f5>" }
Today, Australians have access to some of the least expensive electricity in the industrialized world, and a major reason for that is rooftop solar. However, the country's grid electricity rates aren't low: Aussies pay a lot for the electricity they purchase from their utilities, due to deregulation, infrastructure costs and network charges, as well as the carbon tax. But the nation is in a good position to reap the benefits of residential energy storage in a way few other countries are, as it receives more solar radiation per square foot than anywhere else on the globe. All it may take is the right technology to begin the energy-storage boom that the industry has been waiting for.

RELATED TOPIC: Solar Panel Breakthrough Could Mean Huge Energy Savings

For years, Australia has gone through stages of solar incentives and solar market jitters, even though about 1.4 million households – or nearly one in six – currently have rooftop solar. Despite all the criticism the nation has taken over the years for being far behind on renewable energy, Australia leads the world by far in the amount of rooftop solar installed, while the Aussie company Pollinate Energy continues to supply solar lamps in India. In South Australia and Queensland, the share of homes with rooftop solar is over 20 per cent, and 40 per cent for owner-occupied homes.

Still, Australia's passion for rooftop solar doesn't translate into large-scale solar power generation, as most other countries may believe. In fact, it only adds up to about four gigawatts – or three per cent of total electricity – according to the Energy Supply Association of Australia. This is because Australia's renewable energy target mainly promotes the use of large-scale wind power, since wind farms are still much cheaper to build than solar energy plants.
It's a lot harder for expensive solar technology to compete on cost, especially with a new report from the Grattan Institute claiming that the total price of installing and maintaining rooftop solar – after accounting for the emissions and the electricity use avoided – is about $9 billion. That's why many experts view rooftop solar as more of an expensive and unfair policy than a victory. But changes in technology could be on the way to fix this problem.

As was previously written in Business Review Australia, advances in battery storage technology such as Tesla's Powerwall should help address the shortfall of solar power in the peak early evening hours, when electricity is most needed and most expensive. Specifically, storage helps guarantee reliability during peak hours and when the sun isn't shining.

The view among many is that solar photovoltaic (PV) panels plus a battery will encourage people to move away from the electric grid completely, but that is certainly incorrect, given the cost and complications of complete independence from it. Instead, a true future for rooftop solar will only happen if Australia gets its prices and regulations right. However, the cost of getting that done is anyone's guess.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9425667524337769, "language": "en", "url": "http://solarpanelssale.com/check-out-these-great-tips-about-solar-energy/", "token_count": 1106, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.00848388671875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f00093ae-5970-4aef-9173-c339e528002e>" }
Saving money starts with switching to solar power. Not to mention, you are about to find out that it also impacts the environment in more ways than you might think. Keep reading to learn more about how solar energy can change your life and save you money.

Choose several panels that are efficient to maximize your energy generation. You will need to do a little math to calculate the number of panels you will need. It may be more cost effective to purchase more efficient panels.

There are two major types of photo-voltaic panels: poly-crystalline and mono-crystalline. Poly-crystalline panels tend to be cheaper, but they are not as efficient as mono-crystalline solar panels. Try to get the most efficient product possible for your home.

Start out small when you begin using solar power. For example, solar path lights are a great start. Low-voltage outdoor solar lighting is available at most home improvement stores. Installing them means nothing more than pushing them into the soil.

The solar panels' density can determine their efficiency rates. Panels featuring higher levels of density typically cost more, but their expense is worthwhile, as you will have greater energy production ability. Before you select your solar panels, you should consider panel density.

Your solar power system will function wonderfully if you take proper care of it. The panel surfaces must be cleaned, and all equipment should be inspected monthly. If you are not able to do so yourself, you should have a trained professional come to your home. Tackling the maintenance on your own could save a great deal of money, however.

You can make a solar system installation more affordable by looking into grants and rebates. Your solar energy system can cost quite a bit to get started with, but help is usually available. Do your research and you may find great programs that offer rebates, grants or other incentives to help you get the solar power equipment you need to get started.
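The "little math" mentioned above for working out how many panels you need can be sketched like this. The consumption, panel wattage, sun-hour and loss figures are assumptions for illustration, not recommendations for any particular system.

```python
import math

def panels_needed(daily_kwh, panel_watts, sun_hours, derate=0.8):
    """Rough count of panels needed to cover a daily load.

    daily_kwh   -- household consumption per day
    panel_watts -- rated output of one panel
    sun_hours   -- average peak-sun hours per day at the site
    derate      -- assumed system losses (wiring, inverter, dirt)
    """
    per_panel_kwh = panel_watts / 1000 * sun_hours * derate
    return math.ceil(daily_kwh / per_panel_kwh)

# e.g. a 30 kWh/day household, 300 W panels, 5 peak-sun hours
print(panels_needed(30, 300, 5))
```

This is also where panel efficiency shows up in the cost calculation: fewer, more efficient panels can beat many cheap ones once roof space and mounting hardware are counted.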
The cost savings can be substantial. You can even get some deductions at tax time.

Solar Energy System

There are many different things to factor in when deciding whether or not to install a solar energy system in your home. Solar panels might not be a good option if you cannot count on optimal exposure to sun rays in your area or if you use more power than a regular solar energy system can generate. Do your homework to determine if these panels are right for you.

There are many tax credits, rebates and incentives to help you offset the initial costs of solar power. You could receive a rebate of around 30 percent. You should do some research on the Internet or get in touch with your local government to find out more about the incentives and programs you are eligible for.

Your heating bill will be lower if you install photovoltaic solar panels on your house or use solar water heating. For photovoltaic panels to be effective, you will need a minimum of five hours of direct sunlight per day. You can benefit immensely by using a solar water heater to help heat your swimming pool.

If you want to start using solar power in your home, look for areas that can be easily converted. Start with smaller appliances, one at a time. You can convert gradually, which will allow you to focus on a long-term commitment. If you have purchased your own home, consider investing in a complete solar energy system. If you are currently making payments, you are just adding an additional monthly cost, which could put you in serious financial trouble.

Regardless of the system you choose, the panels should face the sun. You get the energy from the sun, so it's important for the panels to be located in an area where they can get all the solar energy possible.

Be grounded in your expectations of what you can get out of solar water heating. Solar water heaters are typically only 30% more efficient than other forms of water heater. Have no fear!
Your early morning shower will be nice and warm with solar heating. Usually, water heated from a solar system will still be warm for about one day. Preserve your solar panels by having twice yearly maintenance performed on them. During a check, the technician can check connections, make sure panels are angled properly and make sure the inverter stays on and works right. Change solar panel angles during the seasons, or four times per year. As the seasons change, the amount of sunlight hitting your home, as well as the direction of that sunlight, will change as well. Changing the angle on your solar panels lets you optimize them to catch the most energy, and be much more cost effective. Look at a company’s financial background prior to buying anything from them. Ensure that the business you choose is reputable and doing well. Also, find a company that offers a quality coverage plan. You don’t want to only consider price when selecting your solar panels. These panels vary in size, brand, wattage, warranty, performance and quality of materials. Therefore, it is important to research each solar panel and base your decision on quality rather than cost. Get the best solar panels that fit in your budget. Everything you’ve learned about solar energy should be useful after choosing and installing panels at home or work. Remember all the tips you learned here about the benefits of solar energy. Go ahead and start working on your personal solar energy plans.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9802170991897583, "language": "en", "url": "https://goldmineapp.com/general/the-reference-price-for-gold-in-london-has-been-determined-for-more-than-100-years/", "token_count": 416, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.306640625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:31a82173-2b2c-4e24-b5fe-e34f9d63714b>" }
Although the gold price has been determined on the London Bullion Market since the 17th century, the market structure as it is still used today was only introduced on 12 September 1919 at the request of the Bank of England. The aim was to create a much more transparent gold market which would allow tighter trading margins for buyers and sellers. However, "transparent" was to be seen in relative terms, as in the beginning only three banks participated in the pricing process. Today the London Gold Fix has 15 members.

Central indicator for over-the-counter gold transactions

The London gold fixing was carried out at Bankhaus Rothschild every trading day at 11:30 a.m. CET, with bankers proposing a gold price to their clients – institutional investors, mining companies and large gold traders. If the majority of respondents subsequently wanted to sell gold, the proposed price was too high; if most customers wanted to buy gold, it was too low. The gold price was then adjusted until a balance of buying and selling requests was reached. This pricing principle has not changed to this day.

Constant updates and optimizations characterize the process

In the course of 100 years not only the markets but also the London gold price fixing have developed further. The original reference currency, the British pound, was replaced by the US dollar in 1968. A second price-fixing round was also introduced at 6 p.m. CET in order to provide a current reference rate in time for the opening of the US market. The monitoring of the London Gold Fix, which was renamed the LBMA Gold Price in 2015, and its protection against price manipulation were also improved again and again. Initially the Bank of England itself was responsible for regulation, but today there is an independent body under the auspices of the UK Financial Services Authority. Although the London Gold Fix has faced allegations of manipulation since its inception and has often been the subject of investigations, there is only one proven case to date.
In 2012, a trader at Barclays Bank manipulated the price of gold and the bank was fined $44 million.
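The balancing principle described above — propose a price, then nudge it until buying and selling interest match — can be mimicked in a toy simulation. The interest curves, starting price and step size below are invented for illustration; the real fixing is conducted between member banks and their clients, not by a formula.

```python
def fix_price(buy_interest, sell_interest, start=1900.0, step=0.5,
              tolerance=5.0, max_rounds=10_000):
    """Adjust a proposed price until buying and selling interest balance.

    buy_interest / sell_interest: functions mapping a price to the
    number of ounces participants would buy or sell at that price.
    """
    price = start
    for _ in range(max_rounds):
        imbalance = buy_interest(price) - sell_interest(price)
        if abs(imbalance) <= tolerance:
            return price          # balance of buy and sell requests
        # More buyers than sellers: the price is too low, so raise it;
        # otherwise lower it. This mirrors the fixing principle.
        price += step if imbalance > 0 else -step
    raise RuntimeError("no balancing price found")

# Toy linear interest curves: buyers want less as the price rises,
# sellers offer more. They cross near $2,000/oz.
buyers = lambda p: max(0.0, 5000 - 2.0 * p)
sellers = lambda p: max(0.0, 1.0 * p - 1000)

print(f"fixed at about ${fix_price(buyers, sellers):,.1f}/oz")
```

With these curves the simulation settles within the tolerance of the crossing point at $2,000, just as the historical fixing iterated toward a balance of buying and selling requests.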
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9438708424568176, "language": "en", "url": "https://jcarettrealestate.com/qa/quick-answer-what-is-difference-between-branch-and-department.html", "token_count": 839, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0242919921875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:d833474d-99d5-4746-b714-0014e4aa1b40>" }
- What is an independent branch?
- What are the 7 branches of accounting?
- What is the function of the branches?
- What is dependent and independent branch?
- What is meant by departmental accounts?
- What are the different types of branches?
- What are the advantages of departmental accounts?
- What are two types of department account?
- What are the 8 branches of accounting?
- What are the types of branch account?
- What is the need for branch accounting?
- What does department branch mean?

What is an independent branch?
Independent branches are those which make purchases from outside, get goods from the Head Office, supply goods to the Head Office and fix their selling prices themselves. Thus an independent branch enjoys a good amount of freedom, like an American son.

What are the 7 branches of accounting?
The famous branches or types of accounting include: financial accounting, managerial accounting, cost accounting, auditing, taxation, AIS, fiduciary, and forensic accounting.

What is the function of the branches?
Legislative—Makes laws (Congress, comprised of the House of Representatives and Senate). Executive—Carries out laws (president, vice president, Cabinet, most federal agencies). Judicial—Evaluates laws (Supreme Court and other courts).

What is dependent and independent branch?
Dependent branch accounting: when the policies and administration of a branch are totally controlled by the head office, which also maintains its accounts, the branch is called a dependent branch. Independent branch: when the size of the branches is very large, their functions become complex.

What is meant by departmental accounts?
Departmental accounting refers to maintaining accounts for one or more branches or departments of the company. Revenues and expenses of the department are recorded and reported separately. The departmental accounts are then consolidated into accounts of the head office to prepare financial statements of the company.

What are the different types of branches?
Branches can be classified into two types.
- Dependent branches: The term dependent branch means a branch that does not maintain its own set of books. …
- Independent branches: An independent branch means a branch which maintains its own set of books.

What are the advantages of departmental accounts?
The most significant advantages of departmental accounts are:
- Individual results of each department can be known, which helps to compare the performances among all the departments, i.e., the trading results can be compared.
- Departmental accounts help to understand or locate the success, failure, rates of profit, etc.

What are two types of department account?
There are two methods of keeping departmental accounts:
- Independent basis: In this method, accounts of each department are maintained separately. Each department prepares a Trading and Profit and Loss Account. …
- Columnar basis: In this method, there is a single set of books.

What are the 8 branches of accounting?
- Financial accounting: involves recording and categorizing transactions for business. …
- Cost accounting. …
- Auditing. …
- Managerial accounting. …
- Accounting information systems. …
- Tax accounting. …
- Forensic accounting. …
- Fiduciary accounting.

What are the types of branch account?
In other words, these branches are operated and controlled by the Head Office. Dependent branch: dependent branches are those which do not maintain separate books of account and wholly depend on the Head Office. The result of the operation, i.e., profit or loss, is ascertained by the Head Office.

What is the need for branch accounting?
The need for branch accounting arises: to ascertain the profitability of each branch separately for a particular accounting period; to ascertain whether the branch should be expanded or closed; to ascertain the requirement of cash and stock for each branch; and to ascertain the quantity of stock held by each branch at …

What does department branch mean?
A department is a technical area of an office which is under the same premises, while a branch is an extension of the office with more or less the same features.
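The columnar idea — a single set of books, with each department's figures kept in its own column and then consolidated for the head office — can be sketched as follows. The departments and figures are hypothetical.

```python
# Hypothetical columnar departmental accounts: one set of books,
# each department's trading result in its own column.
departments = {
    "Clothing": {"sales": 50_000, "cost_of_goods": 30_000, "expenses": 8_000},
    "Footwear": {"sales": 20_000, "cost_of_goods": 14_000, "expenses": 4_000},
}

def department_profit(figures):
    """Net result for one department's column."""
    return figures["sales"] - figures["cost_of_goods"] - figures["expenses"]

for name, figures in departments.items():
    print(f"{name:10} profit: {department_profit(figures):>7,}")

# Consolidation into the head-office financial statements.
consolidated = sum(department_profit(f) for f in departments.values())
print(f"{'Total':10} profit: {consolidated:>7,}")
```

Keeping the columns separate is what lets management compare the departments' trading results before everything is rolled up into the company-level statements.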
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9497830867767334, "language": "en", "url": "https://texturetranscribed.com/qa/what-are-the-most-common-sources-of-debt-financing.html", "token_count": 384, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.142578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ddb76347-3853-488a-bbd8-0b886295c9a5>" }
What are the most common sources of equity funding and debt financing?
On this page you'll find some common sources of debt and equity finance. These include:
- business loans
- lines of credit
- overdraft services
- invoice financing
- equipment leases
- asset financing

Where does debt financing originate?
Debt financing is when the company gets a loan and promises to repay it over a set period of time, with a set amount of interest. The loan can come from a lender, like a bank, or from selling bonds to the public. Selling bonds may at times be more economical, or easier, than taking a bank loan.

Why is debt financing bad?
Debt is a lower-cost source of funds and allows a higher return to the equity investors by leveraging their money. … Because all debt, or even 90% debt, would be too risky to those providing the financing. A business needs to balance the use of debt and equity to keep the average cost of capital at its minimum.

What are examples of debt financing?
All of the following are examples of debt financing:
- loans from family and friends
- bank loans
- personal loans
- government-backed loans, such as SBA loans
- lines of credit
- credit cards
- real estate loans

What are sources of debt financing?
Debt finance – money provided by an external lender, such as a bank, building society or credit union. Equity finance – money sourced from within your business.

What are the major types and uses of debt financing?
Term loans, equipment financing, and SBA loans are common examples, and they may be secured or unsecured loans. … Business lines of credit and credit cards are types of revolving loans. Cash flow loans: like installment loans, cash flow loans typically provide a lump-sum payment from the lender after you're approved.
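Repaying a loan "over a set period of time, with a set amount of interest" usually means a fixed-rate installment loan, whose payment comes from the standard amortization formula. The loan figures below are illustrative only.

```python
def monthly_payment(principal, annual_rate, months):
    """Standard amortization formula for a fixed-rate installment loan."""
    r = annual_rate / 12          # periodic (monthly) interest rate
    if r == 0:
        return principal / months # interest-free edge case
    return principal * r / (1 - (1 + r) ** -months)

# e.g. a $10,000 term loan at 6% annual interest over 36 months
print(f"${monthly_payment(10_000, 0.06, 36):.2f} per month")
```

The same formula is what makes term loans predictable for budgeting, in contrast to revolving products like lines of credit and credit cards, where the balance (and so the payment) moves around.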
{ "dump": "CC-MAIN-2021-17", "language_score": 0.98276287317276, "language": "en", "url": "https://www.danhugger.com/2019/07/the-phillips-curve-is-still-dead.html", "token_count": 135, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.41015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f550c7b5-e42c-4f90-a7ef-0bcc5836a0a6>" }
"The puzzle and promise of the Phillips curve is the idea that tighter labor markets, traditionally measured by the unemployment rate, correlate with higher wages and prices. That takes more doing. Typically, you have to think that workers are fooled into working for what they think are higher real wages, and only later discover that prices have gone up too. And you have to think that firms rather mechanically raise prices passing on higher labor costs, and keep selling things when they do. Despite the intuitive appeal of tight markets leading to rising prices and wages, that simple intuition is wrong to describe a correlation between tight markets and both prices and wages, which is what the Phillips curve is and was."
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9716925024986267, "language": "en", "url": "https://www.educationquizzes.com/gcse/geography/modern-changes-in-industry-in-the-uk/", "token_count": 530, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.103515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:bf1de777-294d-44c3-a9ff-458a215ab780>" }
This GCSE Geography quiz takes a look at modern changes in industry in the UK. The economic geography of the UK has undergone some dramatic changes since the 18th century. Before the industrial revolution, maps would have shown that the UK economy was based on agriculture and cottage industries. All that changed during the industrialisation of Britain (the Industrial Revolution) as heavy industry and factory mass production of textiles, ceramics and many other goods began. The nature of the British economy started to change once again after the 1939-45 war as cheap foreign textiles, raw materials and other products became available through increased globalisation, making manufacturing in the UK less economic. At the same time, the increased mechanisation of agriculture and in factories (e.g. using robots to build cars) gradually increased unemployment, forcing people to look for work in different industry sectors. There are three key industry sectors - primary, secondary and tertiary. The primary sector obtains the raw materials e.g. agriculture, mining and quarrying. The secondary sector is the manufacturing and assembly process that converts the raw materials into components and produces items for sale such as computers, smartphones, vehicles, ready meals, houses and bridges. The tertiary sector refers to the commercial services that take these items, selling and distributing the manufactured products. Also included in this sector are things like transport, teaching, advertising and health care. There is another sector to industry, a relatively new addition and often included with the tertiary sector - the quaternary sector. This refers to the information services that support industry and commerce and includes research and development (R&D), ICT and consultancy (companies and individuals who advise businesses). The tertiary and quaternary sectors account for almost 80% of employment in the UK in the 21st century. 
Heavy industry in the UK declined rapidly after the 1970s. At the time, foreign industries had become more competitive and transport costs were lower, so it started to become cheaper to import products such as coal and steel. A large part of the UK's economy depended on both coal and steel, in industries like shipbuilding and heavy engineering. There were a lot of workers' strikes during the 1960s and 70s - workers were demanding better wages and working conditions. Gradually the UK-based heavy industries in the primary and secondary sectors became less and less economic to run and closed down. Since the start of the 21st century, China has built up a strong base of industry in the primary and secondary sectors and is a major supplier of raw materials and goods to the rest of the world, much like the UK used to be.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9709353446960449, "language": "en", "url": "https://www.moneygorounds.com/content/sitemaps/menu-item/7762070814178811199", "token_count": 855, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1591796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:37895c24-44d5-401b-a502-b7eddd02aef1>" }
A lending circle is when people get together to form a group loan. Everyone in the group contributes money to the loan, and everyone gets a chance at taking the loan out. People across the world organize loans between friends or family without a financial institution all the time. This practice is known by many different names across the world: Susus throughout Africa, Paluwagan in the Philippines, Lun-hui in China, and Tandas in Mexico. Participants may not have access to a bank, may not be a good candidate for a traditional loan, or may simply prefer getting loans from people they know and trust. How do informal lending circles work? A small group of people come together, agree on how much they will put into the general fund each month, and hold each other to it. Let's say a group of five people agreed to contribute $10 per month for five months; at the end of each month, one person from the group gets $50. You keep going until everyone has had a chance at the $50. This is a popular way for people who don't have bank accounts, who want to save money, or who can't get approved for mainstream loans to get access to capital. By relying on our neighbors, everyone benefits. These groups have an organizer who makes sure everyone makes their contributions on time and collects the contributions, keeping them somewhere at home or placing them in a private bank account. The organizer also distributes payments to the members and keeps track of whose turn it is to receive a distribution. Many users come to us with a bad taste in their mouth from a prior experience with an informal lending circle. We have heard heartbreaking stories about how one person in the group walked away with the loan and never repaid it, leaving everyone else to lose money. Besides this risk, the other downside to an informal lending circle is that it doesn't do anything to build your credit.
While you may be able to get loans from your neighbors, you still can't get one from a bank. Without credit, it's hard to find apartments, build a business or even get a loan for school. Here's an example of how it works: Alex, John, Robert and Carl have a lending circle together and each puts in $100 per month. Each month, they take turns getting $400 until everyone has had a turn. Alex needs $400 to buy equipment for her business. John needs supplies for school. Robert has credit card debt he is paying down. And Carl is expecting a tax bill. With financial classes, they can better manage their money and meet financial goals. Every time they make a payment on time, it's reported to the credit bureau. The result? Everyone gets and pays back a loan of $400 over the four months. And all the participants see an average credit score increase of 49 points in just six months. A Lending Circle (Rotating Saving Group) is an informal association of participants who make regular contributions to a common fund which is given in whole or in part to each contributor in turn. Members of a Rotating Saving Group may decide to contribute the pre-determined sum every day, week or month. During each round, the pot is given to one member. Once a member has received the collected money, he must continue to contribute but will not receive the lump sum until all the members have had a chance to receive it once. When the last member has received the lump sum, the group may decide to start a new cycle. At each round, all the members of the Saving Group contribute $100; during the first round John will receive the pot. During the second round the pot will go to Carl, Robert will receive the pot during the third round and Alex during the fourth round. At each meeting, the money is collected and given to one member using PayPal.
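The mechanics above are easy to verify with a short simulation. This is an illustrative sketch using the names and amounts from the example, not part of any real lending-circle platform:

```python
def lending_circle(members, contribution):
    """Simulate one full cycle of a rotating savings group: each round,
    every member pays in a fixed contribution and one member, in turn,
    receives the whole pot."""
    pot = contribution * len(members)
    net = {m: 0 for m in members}      # running cash position per member
    for recipient in members:          # one round per member
        for m in members:
            net[m] -= contribution     # everyone contributes
        net[recipient] += pot          # this round's recipient takes the pot
    return pot, net

pot, net = lending_circle(["Alex", "John", "Robert", "Carl"], 100)
print(pot)  # 400 -- each member receives $400 exactly once
print(net)  # every balance is 0: each pays in exactly what they take out
```

The zero final balances show why the arrangement is a loan rather than a windfall: early recipients are effectively borrowers, and late recipients are effectively lenders.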
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9036083221435547, "language": "en", "url": "https://www.thebalance.com/oil-price-history-3306200", "token_count": 1439, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.13671875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:18c041c7-1dda-4306-9ea7-9f1e2c858966>" }
Oil Price History—Highs and Lows Since 1970
What Makes Oil Prices So Volatile?
Historically, oil prices in the 20th century remained stable, in real terms, until the 1970s. Since then, political, economic, and other changes have rocked the oil landscape. In 2020, the coronavirus pandemic sent prices plummeting in April. Prices have recovered as of March 2021.
- Traders' market perceptions influence oil prices more than actual global supply and demand do.
- With shale oil extraction, the United States became the largest oil producer in the world.
- In 2020, oil prices plunged to a negative value in the wake of an abrupt drop in worldwide demand due to the COVID-19 pandemic.
- Prices have returned to pre-pandemic levels as of March 2021.
Oil Prices in the 1960s and 1970s
Global oil prices in the 20th century generally ranged between $1.00 and $2.00 per barrel (/b) until 1970. That's about $20/b to $40/b when adjusted for inflation. The United States was the world's dominant oil producer at that time. It regulated prices. Domestic oil was plentiful. Cheap oil and gas made the expansion of interstate highways, interstate trucking, and auto ownership part of the American Dream. But multiple changes have occurred since then. In 1960, Saudi Arabia and other foreign oil-exporting nations formed OPEC. They wanted more control over their most valuable natural resource. In 1971, regulators allowed U.S. companies to pump as much oil as they wanted. They began using up surplus reserves. As supply fell, prices rose. America became vulnerable to future shortages. OPEC didn't really begin to flex its pricing muscle until President Richard Nixon effectively took the U.S. dollar off of the gold standard in 1971. The value of the dollar plummeted, taking oil revenues down with it. All oil contracts are traded in U.S. dollars, so oil prices follow the value of the dollar. OPEC halted oil exports to the United States in 1973. Its primary goal was to boost oil prices.
It also wanted to punish America for its support of Israel in the Yom Kippur War. Congress created the Strategic Petroleum Reserve to ensure an adequate supply of petroleum products and prevent future shortages.
Why Oil Prices Are Volatile
Since the 1970s, oil prices have become more volatile. They're affected by more than the laws of supply and demand. Oil prices are determined in the short run by oil futures contracts on the commodities markets. This means that in the short run, commodities traders can also affect oil prices. They can drive prices up even if they only think there will be a surge in demand, such as during the summer driving season. They can lower prices if they think there will be a dropoff in demand. That usually occurs as demand falls in the winter.
US Shale Oil Production
In 2015, new U.S. production of shale oil increased global oil supply. By Jan. 19, 2016, the addition to supply had driven global oil prices down to a 13-year low of around $27/b. By November, OPEC had had enough. It cut production to revive prices. By April 2019, global prices topped $71/b. They remained in that range until early 2020. Today's oil prices fluctuate due to constantly changing conditions. In January 2020, many governments began restricting travel and closing businesses to stem the coronavirus pandemic. Demand for oil began falling. In the first quarter of 2020, oil consumption averaged 94.4 million barrels per day (b/d), down 5.6 million b/d from the prior year. Through the first quarter, OPEC and its members were abiding by an agreement to limit production. That agreement expired on March 31, 2020. At the March 6, 2020, meeting, Russia refused to lower production. OPEC responded by announcing it would increase production. As storage facilities filled, prices plummeted into negative territory. No one wanted delivery of oil, since there was hardly any place to store it.
In April 2020, prices for a barrel of oil fell to an unprecedented negative price in the United States: around -$37/b for West Texas Intermediate (WTI) at Cushing, while Brent oil fell to $9/b internationally. On April 12, 2020, OPEC and Russia agreed to lower output to support prices. At its most recent April 1, 2021, meeting, OPEC decided to continue limiting oil production.
Oil Prices by Year: Average, Low, High, and Events
The following table shows the nominal value for imported crude oil according to the U.S. Energy Information Administration. The first column shows the average annual price, followed by the monthly low and high oil prices for that year. The last column shows the reasons and accompanying events for the price variations.
|Year|Average|Low|High|Events|
|1974|$12.52|$9.59|$13.06|OPEC oil embargo ended|
|1977|$14.53|$14.11|$14.76|Fed raised and lowered rates|
|1978|$14.57|$14.40|$14.94|Fed raised and lowered rates|
|1979|$21.57|$15.50|$28.91|Iran-Iraq War, fed rate 20%|
|1980|$33.86|$30.75|$35.63|Iran oil embargo|
|1981|$37.10|$35.43|$39.00|Reagan cut taxes|
|1982|$33.57|$32.78|$35.54|Recession ends inflation|
|1987|$18.14|$16.45|$19.32|OPEC added to supply|
|1991|$18.73|$17.17|$22.30|SPR released oil|
|1994|$15.54|$12.90|$17.52|NAFTA allowed cheap oil from Mexico|
|2001|$21.99|$15.95|$24.97|Recession and 9/11|
|2006|$59.05|$52.70|$67.99|Bernanke becomes Fed chair|
|2012|$101.09|$92.18|$108.54|Iran threatened Straits of Hormuz|
|2014|$89.63|$57.36|$100.26|The dollar rose 15%|
|2015|$46.34|$33.16|$58.89|U.S. shale oil increased|
|2017|$48.98|$44.03|$57.44|OPEC cut oil supply to keep prices stable|
|2020|$37.24|$16.74|$53.96|Pandemic reduced demand|
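The "adjusted for inflation" comparison used earlier in the article (a $1 to $2 barrel around 1970 being roughly $20 to $40 in today's dollars) is a simple consumer price index ratio. A minimal sketch; the index values below are illustrative placeholders, not actual CPI data:

```python
def real_price(nominal, cpi_then, cpi_now):
    """Convert a historical nominal price into present-day dollars
    using the ratio of consumer price indexes."""
    return nominal * cpi_now / cpi_then

# Placeholder CPI values chosen so the ratio is 10x -- substitute real index data.
print(real_price(2.00, 30.0, 300.0))  # 20.0
```

Swapping in the actual CPI series for the relevant years reproduces the article's real-terms comparison.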
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9219843745231628, "language": "en", "url": "https://dev.tinkerfcu.org/five-steps-to-help-create-your-first-budget/", "token_count": 232, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.00506591796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f45ea92f-4819-4273-9d64-72f0bfb92a08>" }
Five Steps To Help Create Your First Budget
Consider Budget Methods
There are many ways to divide a budget, but the 50-30-20 method is the most commonly used. This budget type assumes you spend 50 percent of your monthly income on necessities, 30 percent on discretionary items and 20 percent on savings and debts.
Determine Monthly Income
Calculate your take-home pay after taxes and other deductions. Remember to add passive or irregular income, such as bonuses, dividends, etc.
Calculate Monthly Expenses
Break your expenses out into two categories: necessary and discretionary expenses. Bills that must be paid each month are necessary expenses, while eating out, vacations and entertainment are considered discretionary expenses.
Establish Savings Priorities
Here is where you decide which goals are most valuable to you. You should have short-, mid- and long-term priorities and consider dividing each goal into separate accounts.
Create a Savings Plan
Determine how much to allocate toward your savings goals each month. Set dollar amounts and time frames, keep track of your progress and revisit these to determine whether you need to drop a goal or downscale.
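The 50-30-20 split described above is simple arithmetic. A minimal Python sketch (the income figure is hypothetical):

```python
def budget_50_30_20(take_home):
    """Allocate monthly take-home pay: 50% to necessities,
    30% to discretionary spending, 20% to savings and debt."""
    return {
        "necessities": round(take_home * 0.50, 2),
        "discretionary": round(take_home * 0.30, 2),
        "savings_and_debt": round(take_home * 0.20, 2),
    }

print(budget_50_30_20(3000))
# {'necessities': 1500.0, 'discretionary': 900.0, 'savings_and_debt': 600.0}
```

The same function works for any take-home figure; only the 50/30/20 percentages are fixed by the method.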
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9556542634963989, "language": "en", "url": "https://pubmed.ncbi.nlm.nih.gov/21172799/?dopt=Abstract", "token_count": 339, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1865234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:cd730880-cb35-4854-be22-06771d5f88f3>" }
Background: In order to support the case for inter-sectoral policies to tackle health inequalities, the authors explored the economic costs of socioeconomic inequalities in health in the European Union (EU). Methods: Using recent data on inequalities in self-assessed health and mortality covering most of the EU, health losses due to socioeconomic inequalities in health were calculated by applying a counterfactual scenario in which the health of those with lower secondary education or lower (roughly 50% of the population) would be improved to the average level of health of those with at least higher secondary education. We then calculated various economic effects of those health losses: healthcare costs, costs of social security schemes, losses to Gross Domestic Product (GDP) through reduced labour productivity and the monetary value of total losses in welfare. Results: Inequality related losses to health amount to more than 700 000 deaths per year and 33 million prevalent cases of ill health in the EU as a whole. These losses account for 20% of the total costs of healthcare and 15% of the total costs of social security benefits. Inequality related losses to health reduce labour productivity and take 1.4% off GDP each year. The monetary value of health inequality related welfare losses is estimated to be €980 billion per year or 9.4% of GDP. Conclusion: Our results suggest that the economic costs of socioeconomic inequalities in health in Europe are substantial. As this is a first attempt at quantifying the economic implications of health inequalities, the estimates are surrounded by considerable uncertainty and further research is needed to reduce this. If our results are confirmed in further studies, the economic implications of health inequalities warrant significant investments in policies and interventions to reduce them.
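The counterfactual described in the abstract can be illustrated with a stylized calculation: compare the lower-education group's actual mortality with what it would be at the higher-education group's rate. All inputs below are hypothetical, chosen only to reproduce the order of magnitude of the quoted 700 000 figure; the study's actual method pools many countries, ages, and outcomes.

```python
def excess_deaths(population, rate_low_edu, rate_high_edu):
    """Deaths attributable to the education gap, under the counterfactual
    that the lower-education group experienced the higher-education
    group's mortality rate. Rates are deaths per 100,000 per year."""
    return population * (rate_low_edu - rate_high_edu) / 100_000

# Hypothetical inputs: 250M people with lower secondary education or less,
# mortality of 1,100 vs 820 deaths per 100k per year.
print(excess_deaths(250_000_000, 1100, 820))  # 700000.0
```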
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9625968337059021, "language": "en", "url": "https://termpapernow.com/samples/impact-of-globalization-on-income-inequality-term-paper-3571/", "token_count": 1222, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.396484375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ba3d237e-6dff-4e4a-9294-3ceda064a483>" }
Problem 1: Find a research topic that you are most interested in, something related to international trade but not "international trade" as a topic, and tell me the population and how you select a sample to do the research.
Wage inequality refers to the gap or income disparity among a population. The degree to which income is distributed amongst a country's population can vary depending on the country's level of growth and other political factors (Pettis, 2016). Over the past three years, globalization has been a major topic of interest because it not only led to economic growth in countries, but also resulted in the development of trade policies, regulations, and trade relations. Global trade relations might have been developed and redefined, but the impact of international trade or globalization has been positive and negative in equal measure. Apart from environmental pollution, globalization has contributed to a heightened level of income inequality across the world. Income inequality in America is not the same as income inequality in its trading partners. For example, income inequality in countries such as Mexico is higher than income inequality in America. While globalization has negatively affected Mexico, it has also led to some potential benefits, as many Mexicans realized economic growth that the country could not have achieved had it not traded with America. Mexico got a ready market for its products and, in the same way, America got products that were not produced locally while also exporting its own products. Nevertheless, the globalization and trade between the two countries did not benefit everyone, as only a few people involved in the supply chain benefited. For example, only a few people were employed in the trade and only a few people benefited from increased income, increased household disposable income, and improved quality of life. On the other hand, the improvement in the economy led to an increase in the cost of living.
The gap between the rich and everyone else grew as the rich became richer and wealthier while the poor became markedly poorer. The income gap was worsened further by economic changes, economic recession, and climate change. To collect data from the population, it is important to use a systematic random sampling method that will guarantee both objectivity and reliability. To ensure that the entire population is adequately represented, the sample must be randomly selected from the population. Additionally, an adequate sample must be selected that is fully representative of the entire population; this means that the researcher will have to ensure that the sample size is large enough to reflect the population's features. The confidence level will be determined based on the interval level and the population size.
Steps to do the research
Step 1: Write the introduction and develop the research hypothesis, research aims and objectives, as well as the research questions.
Step 2: I will conduct a literature review by analyzing past literature on the topic of globalization and the impact of globalization on income inequality. There is a myriad of research, both economic theories and business perspectives, developed to explain the cause-and-effect relationship between globalization and income inequality.
Step 3: I will develop my research instrument; in this case, I will use a questionnaire that will be filled in by the respondents.
Step 4: I will select my sample using the random sampling method stated above; each participant must meet specific criteria to be included in the study or to be allowed to participate. I will select a sample of 60 respondents from a population of employees who have been affected by globalization.
Step 5: I will distribute the questionnaire to the targeted sample; the sample group will fill in the questionnaire and return the completed survey.
It is important to note that there are ethical considerations in relation to research involving participants, to avoid infringing on their freedom and rights. Anonymity should be upheld, as should privacy and the freedom to opt out of the study.
Step 6: After data collection, it is time to analyze the data to find the trends, averages, and skewness. Descriptive statistics or other statistical techniques such as regression analysis will be used to determine the causality model or interrelationship between variables.
Step 7: Write the conclusion. After data analysis, the researcher will write a research report with the conclusion either confirming or rejecting the hypothesis, answering the research question, and comparing the findings with the arguments proposed by other researchers. The researcher then makes recommendations for future research and notes the managerial implications.
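Step 4's "systematic random sampling" can be sketched in a few lines of Python: pick a random starting point, then take every k-th member of the sampling frame. The population frame and size below are hypothetical stand-ins:

```python
import random

def systematic_sample(population, n, seed=None):
    """Systematic random sample: choose a random start,
    then take every k-th member, where k is the sampling interval."""
    k = len(population) // n                    # sampling interval
    start = random.Random(seed).randrange(k)    # random start in [0, k)
    return [population[start + i * k] for i in range(n)]

# Hypothetical sampling frame of employees affected by globalization.
population = [f"employee_{i}" for i in range(600)]
sample = systematic_sample(population, 60, seed=1)
print(len(sample))               # 60
print(len(set(sample)) == 60)    # True -- no respondent selected twice
```

With a sampling frame of 600 and a sample of 60, the interval k is 10, so every tenth employee after the random start is invited to complete the questionnaire.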
There is a significant correlation between globalization and income gap between the poor and the rich. The numbers of people who become richer in developing countries remain the same but the level of wealth increased significantly. On the other hand, in the developed countries, the income gaps decreased as more people become richer. It is therefore healthy to conclude that globalization mainly increased the income gap in developing countries but decreases the income gap in the developed countries. Pettis, M. (2016). How Trade Can Reinforce Income Inequality. Carnegie Endowment for International Peace. Retrieved 28 December 2016, pub-55531
{ "dump": "CC-MAIN-2021-17", "language_score": 0.8757807016372681, "language": "en", "url": "https://thesaurus.yourdictionary.com/assets", "token_count": 294, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1728515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9bb0a289-c669-4507-9004-ab73a46c51eb>" }
Part of speech: All of the rights of ownership, including the rights of possession, to enjoy, to use, and to dispose of a chattel or a piece of land. The value of property beyond the total amount owed on it in mortgages, liens, etc. The totality of an individual’s ownership of money, real and personal property. Recognition or approval for an act, ability, or quality: The sentimental value of something; emotional value. (Mathematics) The plus sign (+). Treasure is a valuable person or thing such as a collection of money, jewelry or other valuables. An arrangement for deferred payment of a loan or purchase: Distinction is defined as the act of separating people or things into different groups, or the feature that differentiates or a special recognition. The action of helping; assistance: Benefit or profit; gain: Extensive amounts of material possessions or money; wealth. (Mathematics) A number that typifies a set of numbers, such as a geometric mean or an arithmetic mean. Loss; detriment; hindrance. Find another word for assets. In this page you can discover 26 synonyms, antonyms, idiomatic expressions, and related words for assets, like: property, wealth, equity, estate, belongings, holdings, possessions, capital, credit, goods and money.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.918380618095398, "language": "en", "url": "https://www.rural21.com/english/news/detail/article/reviving-the-western-indian-ocean-economy.html", "token_count": 370, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.00555419921875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:34ff3370-7a73-4da3-9c2c-b2362733b4b8>" }
Analysis of ‘Reviving the Western Indian Ocean Economy: Actions for a Sustainable Future’, a recently released report, shows that the leaders of the Western Indian Ocean face a clear and urgent choice: to continue with business as usual, overseeing the steady decline of ocean assets, or to seize the moment to secure the natural ocean assets that will be crucial for the future of fast-growing coastal communities and economies. The report, released in January 2017, is the result of an in-depth, joint assessment by The Boston Consulting Group (BCG), CORDIO East Africa and WWF. It combines a new economic analysis of the region’s ocean assets with a review of their contribution to human development. The Western Indian Ocean region described in this report includes Comoros, France, Kenya, Madagascar, Mauritius, Mozambique, Seychelles, Somalia, South Africa and Tanzania – a mix of mainland continental and island states. The report shows that the region’s most valuable assets are fisheries, mangroves, seagrass beds and coral reefs. Adjacent coastal and carbon-absorbing assets are also central to the wellbeing of communities and the health of the ocean economy. The analysis finds that the region is heavily dependent on high-value ocean natural assets that are already showing signs of decline. The report offers a set of priority actions required to secure a sustainable, inclusive ‘blue economy’ for the region, and thus to provide food and livelihoods for growing populations. The report also points to the likelihood that much of the actual fishing in the region is for local, domestic consumption via small-scale fishing, which is not adequately monitored or measured in economic terms, so the actual extent of fishing and its importance to local communities is likely to be far greater than economic analyses indicate.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9452037215232849, "language": "en", "url": "https://metronews.co.nz/article/new-article-79", "token_count": 118, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.10400390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:504c8b2b-5e7c-4b59-829e-31bd0f149255>" }
How financially literate are Millennials? Why are so many of us in personal debt? And what does GDP mean!?? This week's episode looks at where young people's gap in financial knowledge comes from and how our generation can take back control of our personal finances. Tune in for a lighthearted approach to business education! Disclaimer: GDP = Gross Domestic Product. The Gross Domestic Product measures the value of economic activity within a country. Strictly defined, GDP is the sum of the market values, or prices, of all final goods and services produced in an economy during a period of time.
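That "sum of market values" definition can be made concrete with a toy economy (all prices and quantities below are made up):

```python
# Final goods and services in a tiny economy: (price, quantity) pairs.
final_output = {
    "bread loaves": (2.50, 1_000),
    "haircuts": (20.00, 50),
    "cars": (30_000.00, 2),
}

# GDP = sum of price x quantity over all final goods and services.
gdp = sum(price * qty for price, qty in final_output.values())
print(gdp)  # 63500.0
```

Note that only final goods count: the flour sold to the bakery is excluded, because its value is already embedded in the price of the bread.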
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9494934678077698, "language": "en", "url": "https://svo1905.com/2013/03/25/economy-in-action-at-the-federal-reserve-bank-of-dallas/?replytocom=189", "token_count": 948, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0400390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:8f66052f-637b-4549-a6ff-37db98fe83bf>" }
The recent openings of the Winspear, the Perot Museum and the brand new Klyde Warren Park usher in a new era for downtown Dallas. Just down the street from the Perot, and catty-corner from KWP, is a more inconspicuous update to this thriving area: The Economy in Action Exhibit at the Dallas Federal Reserve Bank, open since October. Alexander Johnson, Media Coordinator at the Dallas Fed, explains that Economy in Action "explores the roles and responsibilities of the Federal Reserve." To clarify, the 12 District locations of the Federal Reserve Bank are part of the economic policy making structure, while the Bureau of Engraving and Printing in Fort Worth prints money. To drool over page after page of crisp new hundreds rolling off the presses, you would need to be 37 miles away at the Fort Worth facility. Behind the visitor's desk at the Dallas Fed is a nicely done collage introducing the 11th district, composed of southern New Mexico, northern Louisiana and all of Texas. Throughout the lobby is a rather plain, but still interesting, history of both US and Texas currency replete with monetary factoids. Did you know that "HAWAII" was printed on all Hawaiian currency during WWII so that if Japan invaded Hawaii the currency could be voided? The Economy in Action exhibit starts in the 18th century with Alexander Hamilton and Thomas Jefferson squaring off pro and con (respectively) about whether it is necessary to establish a central bank. Hamilton, the first US Secretary of the Treasury, ultimately wins the argument, with The First Bank of the United States being established in 1791. Fifty years later President Andrew Jackson succeeded in disbanding The Second Bank of the United States, eliminating central banking from the US economy. After the central bank was quashed, banking shenanigans ensued and the United States lurched from one recession to another. Another surprising fact: between the Civil War and WWI, the US was in recession half the time.
In 1913, the modern Federal Reserve was established and will celebrate its centennial on December 23. There is an interesting exhibit about how the Fed cities were chosen. There is a legendary story about the elaborately orchestrated "accidental" train meeting that led to Dallas being named as a location for one of the Fed's district offices – sorry, New Orleans. The exhibition glosses over the 20th century economic history of the US with a mural and moves on to explain the Fed's mission and how it functions. This is the most interesting part of the exhibition. However, this presentation of the modern Federal Reserve seems to use the obscure economic history explored in the first part of the exhibit as a didactic foil to explain the Fed's role in the modern economy. Don't like the Fed? Here's what the economy looked like without us. Compared to the state-of-the-art museums in the neighborhood, the Economy in Action exhibit feels clunky. Unlike the currency display in the lobby (walk and read), the Economy in Action exhibit is highly interactive. There are loads of short videos, sound clips, fun facts, etc., that require the push of a button or the lift of a panel. There isn't much sizzle to these devices, but they do unlock loads of knowledge. My biggest criticism is the theoretical nature of the exhibit. Of course, there is a disparity between the long-term orientation of the Fed and the short-term reaction of the markets. But the exhibit passes over modern issues like Quantitative Easing, federal debt, the global economy, technology and even the Great Depression. In short, Economy in Action feels like it is in a bubble. As trivia goes, the entire Economy in Action experience is pretty unbeatable, from the big picture perspective, to the little tidbits presented in the exhibit, to the elaborate security system just to get in.
It is almost impossible not to take America's role as an economic superpower for granted, but the Economy in Action exhibit is a humbling reminder that America had to find its way economically, just as it did (and does) with more top-of-mind issues. Chances are pretty good that eventually we will all find ourselves at the Winspear, the Perot, and the new park. But seeing the Economy in Action exhibit will take some planning, as it is only open from 9-3 Tuesday through Friday (it will not be affected by The Sequester) – admission is free. Put it on your to-do list, and be ready to roll up your sleeves, push some buttons and come away from the experience more knowledgeable than when you went in.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9701981544494629, "language": "en", "url": "https://www.argolimited.com/changes-in-environmental-policy/", "token_count": 436, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.279296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:8afa8937-df8e-4b93-a781-9bfc5695043d>" }
Federal standards influence state-level standards and become a yardstick against which all industries are measured. Environmental insurance providers must recognize when new standards are higher than what an industry was previously held accountable to. If a company finds itself out of compliance, that increases its risk exposure to third-party claimants. Take perfluorooctane sulfonic acid (PFOS), for instance: a man-made chemical used in manufacturing – packaging and products – beginning in the 1950s. In recent years, PFOS has been discovered in ground water and drinking water, including in the lake and wells near a small municipal airport in Wisconsin where firefighters trained using foam that contained PFOS, which ran off and contaminated the water. PFOS is currently not regulated at the federal level, but there are initiatives in place to make it so – a change that might incentivize manufacturers of products containing the chemical, as well as industries or businesses that use those products, to obtain pollution coverage. Environmental insurers, including Argo Environmental, must be aware of policyholders’ hazards and exposures. Mid-size to large manufacturing companies, for example, often have exposures related to air pollution, solvents and paints. Those companies should also have an environmental manager (one of the factors that makes them more attractive to insure) whose job it is to stay up to date on changes in environmental policy and ensure that the company meets the requirements. As environmental regulations begin to take effect, insurers will be looking more closely at clients’ exposures and tailoring coverage to meet their needs as well as to protect the insurance companies’ risks for liability. Tax incentives could spur environmental industry investments The environmental industry tries to be resilient, looking at the long-term benefits of their investments. 
If incentives offered during one administration have a four- to eight-year shelf life, it often may not be worthwhile for companies to put in the up-front investment just to receive benefits for a limited time. Alternative energy investments may be the exception. Over the last few decades, we’ve seen an increase in climate awareness, investment in renewables and fossil fuel brands reinventing themselves as energy companies.
Source: https://www.designorate.com/disruptive-innovation/
As people who continuously pursue creative change, designers need to understand the type of change they are pursuing as well as how to achieve it. Change can be achieved through two types of innovation processes: sustaining innovation and disruptive innovation. Sustaining innovation is unlikely to create new markets or values; it includes evolutionary innovation, which improves existing products, and revolutionary innovation, which makes a change within an existing market. Unlike sustaining innovation, disruptive innovation tends to create a new market by using a different set of values. While sustaining innovations are less able to build business success, many examples demonstrate disruptive innovation's ability to dramatically open new opportunities for companies, since it creates a new market.

Designers are at the heart of the innovation process, and their ideas contribute to it from the research stage through to the analysis of consumer feedback and experience regarding new products. Companies such as Apple depend on a dual team of CEOs and designers to innovate new products and services for their consumers. So it is important for designers to understand what type of innovation is required, and how to apply this to the innovation process.

Characteristics of Disruptive Innovation

Many managers find it difficult to define disruptive innovation, and this confusion may extend to the production process and subsequently lead to failure; 60%-75% of companies fail because they depend on sustaining innovations, which lead to sophisticated products that are too expensive or specialized for consumers' needs.
While this unwittingly leads to disruptive innovation, a clear understanding of the characteristics of disruptive innovations includes the following:

Build a new market – One of the main characteristics that distinguishes the disruptive innovation model from sustaining innovation is the ability to build a new market and include new consumers who were not part of the previous company strategy. The iPhone is an example of a disruptive innovation that allowed the company to open a new market, although it can't be considered a disruptive innovation within the mobile industry itself.

Changing how performance is measured – Disruptive innovation creates a dramatic impact not only on the market but also on how the process or performance is measured. It can change how we evaluate a product's performance; products that were satisfying us become obsolete and are replaced under the new measurements introduced by the innovation. Kickstarter is an example of a disruptive innovation that changed the way we think about funding projects. It replaced the old rules and obstacles that small startups face in funding their projects, changing the ordinary funding process through the crowd-funding model.

Adapt new business models – At the core of disruptive innovation, a number of innovation and business models are adapted, including open innovation, closed innovation, and open and closed business models. Combinations of those four models result in new methods that can lead to concerted disruptive innovation. Uber implemented an open business model by using services from drivers who are not actually hired by the company and then evaluating the quality of service by reviewing feedback from both drivers and consumers. Uber is an example of disruptive innovation in which all three main characteristics have been applied.
Achieving Disruptive Innovation

Observing disruptive innovation examples such as Airbnb, Skype, and Twitter provides clues and a number of tips that can contribute to achieving this disruption. These tips have changed the way designers and innovation managers think during the entry phase of product development. They include the following:

- Investigate the small problems rather than the big ones. Most large companies focus either on obvious problems that have already been solved by other companies, or on altering existing products and services provided by the company, which halts the company's ability to create new markets. Small startups were able to achieve disruption because they think from the perspective of consumer needs and build new products that can fulfill these needs, away from the intense competition between big companies.
- Put yourself in the shoes of the consumer. Thinking as the consumer and observing how they live their daily life, how they use different products, and what they love and hate about a product helps in building a vision of a product design that fits exactly with their needs.
- Focus on the consumer rather than the industry. As mentioned, focusing on current production limits companies' ability to focus on the needs of the consumer and then fill these needs with new products or services.
- Connect the disconnected. This method can drive innovative ideas and is one of the oldest creative thinking methods, used by Leonardo da Vinci to think of new inventions. It can help build new products that people may not have thought of, such as merging the phone and the camera to create a new service for mobile users who would like to take photos.
- Focus on a very narrow market segment rather than large or multiple business segments.
Focusing on this segment contributes to reducing costs and risks while also providing a better ability to observe the product and its behavior, and to deliver innovative ideas based on these observations.

Although disruptive innovation carries high risk and costs more resources compared with the sustaining innovation process, it has the capacity to change the market and open new opportunities for companies. Examples such as Apple, P&G, Uber, Airbnb, Skype, and others were able to change the game by adopting disruptive innovation. However, many managers are still confused about the definition of disruptive innovation, and this is reflected at the production level, causing a high potential for failure. Disruptive innovation is characterized by its ability to 1) build a new market, 2) change how the business measures its performance, and 3) adapt new business models. Along with these characteristics, designers and innovation managers can apply a number of tips to help them reach disruptive ideas. To minimize the risks associated with disruptive innovation, focusing on small problems and narrow market segments should be considered to overcome the barriers to disruptive innovation.
Source: https://www.gschambers.com/repudiation-contract-termination
A contract can be brought to an end in a number of ways. The concept of terminating a contract encompasses different actions a party could perform contrary to the parties' intent when they signed the contract: breach of an essential term, serious breach of a non-essential term, mutual agreement to end the contract, a contractual term providing for termination, and finally repudiation. A party repudiates a contract when he shows an unwillingness to perform the contract or an intention not to be bound by the contract's terms. The party could tell other parties to the contract that it does not intend to perform its obligations, or it could simply act in such a way that shows an intention not to perform. Usually repudiation is anticipatory, meaning it happens before the party is due to perform an obligation under the contract. If one party to a contract anticipatorily repudiates it, other parties may be entitled to terminate the contract if they wish. Termination of the contract effectively discharges the other parties' obligations to perform the contract terms. Before termination, it is a good idea for the other parties to notify the repudiating party and request that it comply with the contract terms (this will be discussed in the next blog). Doing so could avoid a dispute if the repudiating party then begins to perform their obligations under the contract. Sometimes, a party will indicate that it refuses to perform only part of the contract. In that case a court would examine whether only partial performance would still breach a material term of the contract or deprive the other party of the majority of the contract's benefits. This may be considered a breach of an essential or non-essential term, rather than a full repudiation of the entire contract. Sometimes, one party misunderstands the terms of a contract. 
For example, a contract for the sale of goods between two companies could say that payment will be made by the 15th day of each month for shipments received on the 1st. If one company repeatedly asserts that it will make payment “as soon as it has the money”, the other company could interpret this statement as a repudiation. The first company is mistaken about the contract's terms. In this situation, the second company should attempt to correct the first company's mistaken impression and remind it what the contract says. Otherwise, if the second company does not perform its obligations because it views the statement as a repudiation, a court could interpret the second company's actions as a repudiation instead. In short, when dealing with potential repudiation of contracts, clear communication between parties is essential to preventing disputes. If you believe that a party to one of your contracts has anticipatorily repudiated it, consult an attorney before you take further steps to terminate the contract. To find out more about repudiating and terminating contracts, visit Gonsalves-Sabola Chambers online or call the office at +1 242 326 6400.
Source: https://www.lexology.com/library/detail.aspx?g=1588fe0d-64fd-4b84-b3b4-ef75efc72d08
“Big Data” refers to datasets whose size is beyond the ability of typical database software tools to capture, store, and analyze. In a recent speech delivered at the Canadian Institute for the Administration of Justice, Patricia Kosseim, Senior General Counsel and Director General, Legal Services, Policy, Research and Technology Analysis Branch at the Office of the Privacy Commissioner of Canada, remarked that while Big Data is not a new technology, it is a new technological trend that allows for the “(processing) of huge volumes of data across varying sources, using much more powerful algorithms, to identify underlying patterns and correlations that can predict future outcomes.” It comes as no surprise, therefore, that businesses, advertisers, policy-makers, and researchers are increasingly using Big Data to spot and exploit trends. This rise in the collection and use of Big Data has led many to question whether current Canadian law can meet the need to regulate an industry where private information can be exposed and capitalized upon. Companies and technology professionals are also keen to learn how and to what extent they can protect and use this increasingly valuable economic asset. This article is the first of a three part article which will address the areas where the law and Big Data intersect – intellectual property, regulatory law, and contract. IP Ownership & Big Data Copyright in Databases It is an established principle of Canadian copyright law that copyright cannot exist in ideas or data alone. However, it can apply to certain forms that data takes such as a tables, graphs, or databases. The Copyright Act (the “Act”) is the governing statute for copyright law in Canada. Under the Act, copyright exists “in every original literary, dramatic, musical and artistic work”. After the 1993 North American Free Trade Implementation Act, the Act was amended to protect “compilations”. 
The definition of “compilations” includes works that “(result) from the selection or arrangement of data.” Therefore, assessing the originality of the compilation of data is key to determining whether or not copyright exists in a database. This question was addressed at the Federal Court of Appeal in the 1997 case Tele-Direct (Publications) Inc. v. American Business Information, Inc. Tele-Direct claimed copyright in respect of the organization of subscriber information and the collection of additional data contained in “Yellow Pages” directories published by Tele-Direct. Two of the main issues before the court were: (1) what was the correct approach for assessing the originality of a compilation, and (2) whether the compilation involved a sufficient degree of skill, judgment, or labour to qualify for copyright protection. The Court held that the selection or arrangement of data results in a protected compilation only if the end result qualifies as an original intellectual creation. For a compilation of data to be original, it must be a work that was independently created by the author, and display at least a minimal degree of skill, judgment and labour in its overall selection or arrangement. In 2004, the issue of originality in the context of copyright reached the Supreme Court in the landmark case CCH Canadian Ltd. v. Law Society of Upper Canada. While the case was not directly about databases, it dealt with the threshold for “originality”. In the case, the Court rejected both the “sweat of the brow” test for originality, as well as the U.S. favoured test that originality requires a work to be independently created and possess some minimal degree of creativity. Instead, the Court held that for a work to be considered “original”, it must be the product of an author’s exercise of skill and judgment. Furthermore, the skill and judgment required to produce the work must not be so trivial that it could be characterized as a purely mechanical exercise. 
The Act entitles the copyright owner to “the sole right to produce or reproduce the work or any substantial part thereof in any material form whatever”. If a copyright is infringed, the owner of the copyright is also entitled to all remedies by way of injunction, damages, accounts, delivery up and otherwise that are or may be conferred by law for the infringement of a right. While certain protections for database owners exist under the Canadian copyright regime, claims for copyright infringement of databases raise practical problems. If copyright infringement only occurs when a “substantial” part of a database is copied (as outlined in the Act), what if only some information is copied? Furthermore, merely using or accessing a database is unlikely to garner protection under the Act either. Copyright in Software While information in a database cannot be copyrighted, data-integration software, database-management systems, and data analytics software can be copyrighted as “computer programs” under the Act. A “computer program” is considered a “literary work” for purposes of the Act, and is defined as “a set of instructions or statements, expressed, fixed, embodied or stored in any manner, that is to be used directly or indirectly in a computer in order to bring about a specific result.” Therefore, infringement would occur where a computer program is copied without authorization from the owner of the copyright. The Act provides some exceptions for use of computer programs that would otherwise be infringement under the Act. This includes copying for purposes of a backup, as well as copying a program once for the purposes of making it compatible with a computer that is solely for personal use. The term “software” has also been given a broad meaning to include data files. It is important to note that the same standards of originalism apply when determining whether a computer program is subject to copyright protection. For example, in Delrina Corp. v. 
Triolet Systems Inc., the Ontario Court of Appeal held that computer programming that is dictated by the operating system or reflects common programming practices is not original expression and will not receive copyright protection. The Federal Court has also held that as a general principle, the owner of the copyright in a computer program does not have copyright in the user’s data, unless there is an agreement stating otherwise. No Database Right in Canada There is no database right in Canada – meaning additional protections are not afforded to the original creator of a database. This is a marked difference from European countries that adopted the EU Database Directive. The Directive provides that databases which “by reason of the selection or arrangement of their contents constitute the author’s own intellectual creation” are protected by copyright, and any temporary or permanent reproduction is prohibited. While some lobbying efforts have occurred to institute a similar policy in Canada, the strongest statutory IP protections for databases continues to flow from the Copyright Act and surrounding case law. Patentability of Software Canada’s Patent Act does not specifically mention “software” and is generally considered by the Patent Office to be an “abstract scheme” and consequently not an invention that can be patented. However, if software does more than just calculations, it may be patentable. Specifically, computer programs integrated with hardware could receive patent protection, as could programs that produce an outcome based on recovered data.
Source: http://congressionalresearch.com/RS20896/document.php?study=Wool+and+Mohair+Price+Support
Wool and Mohair Price Support

Technical Information Specialist, Knowledge Services Group

Price support for wool and mohair first became mandatory through legislation enacted in 1947 and in 1949. The National Wool Act of 1954 (P.L. 83-690) established direct payments for wool and mohair producers. The act's stated purpose was to encourage production of wool because it was considered an essential and strategic commodity. No similar purpose was stated for the mohair program. Subsequent legislation extended the wool and mohair support programs several times, until a provision in P.L. 103-130 required a phase-out, ending with the 1995 marketing year.1 Subsequently, assistance was provided on an ad hoc basis for marketing years 1999 and 2000. Wool and mohair were not funded during marketing year 2001. The 2002 farm bill (P.L. 107-171, the Farm Security and Rural Investment Act of 2002) authorized marketing assistance loans and loan deficiency payments for wool and mohair producers for crop years 2002-2007. Most recently, the 2008 farm bill (P.L. 110-246, the Food, Conservation, and Energy Act of 2008) re-authorized marketing assistance loans and loan deficiency payments for wool and mohair producers for crop years 2008-2012.

Support Program History

The Agricultural Adjustment Act of 1938 authorized a non-mandatory price support loan program for wool and mohair. In 1947, price support became mandatory for wool, followed by mohair in 1949. The National Wool Act of 1954 (P.L. 83-690) provided wool and mohair support authority funded through the United States Department of Agriculture's (USDA's) Commodity Credit Corporation (CCC) from 1955 through 1959. Subsequent legislation extended the authority. The support program offered direct payments for wool and mohair, but differed from other commodity programs because incentive payments were higher for producers who received higher market prices.
This was supposed to encourage the production of higher quality wool and mohair. The program also provided payments for unshorn lambs equal to payments received from shorn lambs. The Secretary of Agriculture had discretion to set the support price for shorn wool. While the act linked wool and mohair support spending to 70% of tariffs collected on imported wool and other textile products, these tariffs did not directly finance the program.2

1 A marketing year is the year in which a crop is marketed and usually begins with harvest. The wool and mohair marketing year is January 1-December 31.

Amendments to the National Wool Act (P.L. 103-130, November 1, 1993) reduced wool and mohair producers' subsidies for 1994 and 1995, and made the 1995 crops the last to be supported under the act.

The FY1999 Omnibus Consolidated and Emergency Supplemental Appropriations Act (P.L. 105-277, Section 1126) authorized interest-free recourse loans for mohair produced during or before FY1999. Recourse loans provide producers with interim financing to assist them in marketing their crop in an orderly manner and must be repaid within a certain term. Producers could borrow $2 for each pound of mohair placed under loan.

The FY2000 USDA Appropriations Act (P.L. 106-78, Section 801) authorized a recourse loan program for mohair produced during or before FY2000. The loan rate was again $2 per pound and the interest rate was equal to 1% over the CCC interest rate.

The Agricultural Risk Protection Act of 2000 (P.L. 106-224, Section 204) authorized direct payments to wool and mohair producers through the CCC for the 1999 marketing year. Wool producers received 20¢ per pound and mohair producers received 40¢ per pound.

The FY2001 USDA Appropriations Act (P.L. 106-387, Section 814) authorized loan deficiency payments of 40¢ per pound to both wool and mohair producers for the 2000 marketing year. Total CCC payments were not to exceed $20 million.

The Crop Year 2001 Agricultural Economic Assistance Act (P.L.
107-25, Section 5) authorized direct payments through the CCC for wool and mohair producers who received prior payments under Section 814 of P.L. 106-387 for the 2000 marketing year. The Secretary determined the payment rate, and total CCC payments were not to exceed a set limit.

The Farm Security and Rural Investment Act of 2002 (P.L. 107-171, subtitle B) authorized nonrecourse marketing assistance loans and loan deficiency payments for crop years 2002-2007 for wool and mohair producers.

Current Program Provisions

The 2008 farm bill (P.L. 110-246, Title I, subtitle B) provides wool and mohair producers with nine-month nonrecourse marketing assistance loans and loan deficiency payments for crop years 2008-2012.3 Producers who obtain nonrecourse loans pledge their crop as collateral and can forfeit their crop in full payment of the loan. The loan rate is $1.00 per pound for graded wool, 40¢ per pound for nongraded wool, and $4.20 per pound for mohair for crop years 2008 and 2009. The loan rate for graded wool increases to $1.15 per pound for crop years 2010-2012. USDA determines the loan repayment rate as the lesser of the loan rate plus interest, or a rate that will limit loan forfeitures, stock accumulation, and storage costs, and will allow competitive marketing of the commodity.4 Producers who agree not to take out a loan can receive loan deficiency payments instead. The loan deficiency payment rate is the difference between the loan rate and the repayment rate.

2 Collected tariffs went to the U.S. Treasury, then the CCC borrowed funds from the Treasury for the wool program. Each year Congress appropriated funds to reimburse the CCC, then the CCC reimbursed the U.S. Treasury for the funds it borrowed the preceding year. Whenever the program cost exceeded 70% of the tariffs, it was carried over to the next fiscal year.
3 See CRS Report RL34594, Farm Commodity Programs in the 2008 Farm Bill, by Jim Monke, for information on marketing assistance loans and loan deficiency payments.
4 As of September 2008, according to USDA, Commodity Credit Corporation (CCC) estimated net outlays for wool and mohair together are $7 million in both FY2008 and FY2009.

Production and Imports. In 2007, shorn wool production amounted to 34.5 million pounds (from 4.7 million sheep and lambs) with a market value of $30.3 million. Shorn wool (greasy wool) is cleaned and the natural oils removed to yield clean raw wool. Sheep producers are influenced by both the price of meat (lamb and mutton5) and the price of wool. Producers sell lamb and mutton when meat prices are high, thereby reducing the size of their flocks; producers sell wool when wool prices are high, increasing the size of their inventory. Some of the issues involved in sheep production include predator losses, hired labor costs, labor shortages, the cost of treating sheep for hoof and skin problems, and competition with cattle producers for grazing land, labor, water, and marketing and transportation facilities.

Figure 1. U.S. Wool Production, Imports, and Total Supply, 1950-2007 [figure omitted]

As shown in Figure 1, clean raw wool production stayed near 120 million pounds until the late 1960s, afterward trending downward to 18.2 million pounds in 2007, its lowest point. Wool imports totaled 14.3 million pounds in 2007. Imports long have been the primary source of wool for U.S. carpet and textile manufacturers. The major suppliers of wool to the United States are Australia, New Zealand, and the United Kingdom. Wool exports historically have been much smaller than imports--less than 10 million pounds annually. In 2003, wool exports rose above 10 million pounds to 11 million pounds.

5 Meat is called lamb if sheep are slaughtered between 8-14 months of age and mutton if it is slaughtered after 14 months of age.
In 2007, wool exports increased to 17.1 million pounds, in part, because of increased global demand for wool.

Wool and Lamb Production Legislation. In 1999, the U.S. International Trade Commission (USITC) (15 CFR 2014) ruled in favor of the United States in a section 201 trade case6 on lamb meat. The case stated that increased imports of lamb meat from Australia and New Zealand caused the threat of injury to U.S. producers. In light of the USITC's ruling, the Clinton Administration established tariff-rate quotas (TRQs) and increased duties on imports of fresh, frozen, and chilled lamb meat. In 2001, the Bush Administration ended the TRQs on lamb meat imports to settle a World Trade Organization (WTO) dispute Australia and New Zealand brought against the United States.

In 2000, the Lamb Meat Adjustment Assistance Program, a four-year program, was established to provide direct payments to lamb producers to help stabilize the U.S. lamb market. In 2001, the Ewe Lamb Replacement and Retention Payment Program was established to provide direct payments to producers to replace and retain ewe lamb breeding stock. Both programs were implemented administratively by USDA under Section 32 of the Agricultural Adjustment Act Amendment of 1935 (P.L. 74-320), as amended.

P.L. 106-200, the Trade and Development Act of 2000, authorized the Wool Research, Development, and Promotion Trust Fund. The Trust's purpose was to assist wool producers to improve wool production, disseminate information on improvements to wool production, and to help them develop and promote the wool market. This Trust is funded by the Treasury from duties on articles under chapters 51 and 52 of the Harmonized Tariff Schedule. A sunset provision in P.L. 106-200 abolished the Trust in 2004, but subsequent legislation has extended the Trust. Most recently, Section 325 of the Emergency Economic Stabilization Act of 2008 (P.L. 110-343) extended it to 2015.

Wool Prices. The average price of shorn wool increased from $0.68 per pound in 2006 to $0.88 per pound in 2007. From 1954 until the 1970s, the average market price of wool remained stable at around $0.50 per pound and the national average federal program payment rate for wool remained near $0.20 per pound, so wool producers received revenue of approximately $0.70 per pound. From 1982 to 1986 and 1990 to

6 Under section 201 of the Trade Act of 1974, domestic industries seriously injured or threatened with serious injury by increased imports may petition the U.S. International Trade Commission (USITC) for import relief. The USITC determines whether an article is being imported in such increased quantities that it is a substantial cause of serious injury, or threat thereof, to the U.S. industry producing an article like or directly competitive with the imported article. If the Commission makes an affirmative determination, it recommends to the President relief that would prevent or remedy the injury and facilitate industry adjustment to import competition. The President makes the final decision whether to provide relief and the amount of relief.

wool (and mohair) program temporarily ended after the 1995 crop (by mandate of P.L. 103-130). Support was restored for marketing years 1999 and 2000. For marketing year 1999, Section 204 of P.L. 106-224 directed that payments of $0.20 per pound be made to producers, and for marketing year 2000, Section 814 of P.L. 106-387 directed payments of $0.40 per pound (compared to the historically low average market prices of $0.38 and $0.33 per pound, respectively). There was no funding for the 2001 marketing year. For crop years 2002-2007, the Farm Security and Rural Investment Act (2002 farm bill) defined the payment rate as the difference between the loan rate and the repayment rate. The 2008 farm bill continues this definition for crop years 2008-2012.

Farm Structure. According to the 2002 Census of Agriculture, there were 46,255 farms with sheep and lambs used for wool production. There are two types of wool: territory and fleece. Territory wool is used to make better quality apparel and is produced in "territory wool states," which include Texas, South Dakota, the Rocky Mountains, and the Pacific Coast states. The flock size for territory production typically ranges from 150 to 400 sheep, although some producers may have several thousand sheep. According to USDA, approximately 70% of all U.S. sheep are located in "territory wool states." Fleece wool is used to make coats, blankets, and sweaters. It is produced in "fleece wool states," which include Virginia, West Virginia, Pennsylvania, states north of the Ohio River, and the Great Plains area. The flock size for fleece production ranges from 20 to 50 sheep and typically is only a small part of a farm that may also raise cattle, hogs, and field crops.

The demand for wool is affected by fashion, relative fiber prices, price variability, and the economy. Consumer acceptance of manmade fibers began in the mid-1950s. Manmade fibers, which are sometimes mixed with wool, are fashionable and offer conveniences such as drip-dry washing, no shrinking, and no moth damage. The U.S. textile industry started using noncellulosic manmade fibers (such as nylon, polyester, and acrylic) because of their relative price stability and durability. U.S. sheep and lamb prices and foreign supply and demand cause price variability because the United States has a small share of the wool market and textile mills import over half of the wool they use. Manmade fiber production has minimal price variability and does not depend on biological lags and annual shearings. Also, the quality does not vary, and since it is manufactured domestically, foreign supply and demand have little effect on U.S. prices.

Production and Exports.
According to the 2002 Census of Agriculture, there were 46,255 farms with sheep and lambs used for wool production. There are two types of wool: territory and fleece. Territory wool is used to make better quality apparel and is produced in “territory wool states,” which include Texas, South Dakota, the Rocky Mountains, and the Pacific Coast states. The flock size for territory production typically ranges from 150 to 400 sheep, although some producers may have several thousand sheep. According to USDA, approximately 70% of all U.S. sheep are located in “territory wool states.” Fleece wool is used to make coats, blankets, and sweaters. It is produced in “fleece wool states,” which include Virginia, West Virginia, Pennsylvania, states north of the Ohio River, and the Great Plains area. The flock size for fleece production ranges from 20 to 50 sheep and typically is only a small part of a farm that may also raise cattle, hogs, and field crops. The demand for wool is affected by fashion, relative fiber prices, price variability, and the economy. Consumer acceptance of manmade fibers began in the mid 1950s. Manmade fibers, which are sometimes mixed with wool, are fashionable and offer conveniences such as drip-dry washing, no shrinking, and no moth damage. The U.S. textile industry started using noncellulosic manmade fibers (such as nylon, polyester, and acrylic) because of its relative price stability and durability. U.S. sheep and lamb prices and foreign supply and demand cause price variability because the United States has a small share of the wool market and textile mills import over half of the wool they use. Manmade fiber production has minimal price variability and does not depend on biological lags and annual shearings. Also, the quality does not vary, and since it is manufactured domestically, foreign supply and demand have little effect on U.S. prices. Production and Exports. 
In 2007, 1.1 million pounds of mohair with a value of $4.3 million was clipped from 185,000 Angora goats and kids. Mohair production was 1.4 million pounds in 2006, an increase from 1.3 million pounds in 2005. According to the 2002 Census of Agriculture, there were 2,434 farms with mohair sales. The three major mohair-producing states in 2007, accounting for 90% of production, were Texas (79%), Arizona (7%), and New Mexico (4%). As shown in Figure 2, mohair production rose sharply in the 1950s, then peaked at 32.4 million pounds in 1965.

Mohair exports were 0.91 million pounds in 2007, a decrease from 1.3 million pounds in 2006. Over the past 25 years, about 75% of U.S. mohair production was exported. The United Kingdom, the world's major importer of raw mohair, processes mohair and then re-exports it. In 1972, 1975, 1999, 2000, and 2002-2005, U.S. mohair export demand exceeded production, and inventory stocks were drawn down to meet demand. Since most mohair is exported, domestic use depends on available supply, mohair prices, and fashion. The United States and South Africa have historically been major mohair producers and exporters.

Figure 2. Mohair Production and Exports, 1950-2007
Source: Economic Research Service, U.S. Department of Agriculture. *1950-1954 and 1971-1987 data are from Texas only. 1955-1970 data are from Arizona, New Mexico, Missouri, California, Oregon, Utah, and Texas. 1988-1994 data are from Texas, Arizona, New Mexico, Michigan, and Oklahoma. 1995-2003 data are from Arizona, New Mexico, and Texas. 2004-2007 data are the U.S. total.

Mohair Prices.
In 2007, the average market price of mohair increased to $3.78 per pound from $3.70 per pound in 2006. From 1955 until the mid-1960s, the average market price of mohair remained near $0.75 per pound. In the mid-1960s the average market price of mohair dropped to nearly $0.45 per pound.
The national average payment rate under the federal support program remained near $0.30 per pound, which kept total revenue received by producers at approximately $0.75 per pound. Both the average market price and the national average payment rate then became variable from year to year. During the 1980s, the mohair national average payment rate exceeded the market price; the average payment rate remained near $2.50 per pound, which raised total revenue received by producers to approximately $5.00 per pound. Along with wool, the mohair program was temporarily ended with the 1995 crop (by amendments to the National Wool Act, P.L. 103-130). However, subsequent legislation (the Agricultural Risk Protection Act of 2000, P.L. 106-224, Section 204, and the FY2001 USDA Appropriations Act, P.L. 106-387, Section 814) was adopted that mandated mohair payments of $0.40 per pound in marketing years 1999 and 2000. There were no mohair payments in 2001. Under the 2002 farm bill, the payment rate for crop years 2002-2007 was the difference between the loan rate and the repayment rate. The 2008 farm bill continued this formula for 2008-2012.
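The support arithmetic used throughout this report — a per-pound federal payment added on top of the market price, with the payment since the 2002 farm bill defined as the loan rate minus the repayment rate — can be sketched as a quick calculation. The figures below are the report's approximate historical numbers, used purely for illustration:

```python
def support_payment(loan_rate, repayment_rate):
    """Payment rate under the 2002/2008 farm bill formula: loan rate minus
    repayment rate (no payment when the repayment rate meets the loan rate)."""
    return max(0.0, loan_rate - repayment_rate)

def producer_revenue(market_price, payment_rate):
    """Total per-pound revenue: market price plus the federal payment."""
    return market_price + payment_rate

# Historical wool illustration: ~$0.50 market price + ~$0.20 payment ≈ $0.70/lb.
print(round(producer_revenue(0.50, 0.20), 2))  # 0.7
```

The same two-step arithmetic reproduces the mohair figures as well (for example, a market price near $2.50 plus a payment near $2.50 gives the roughly $5.00 per pound revenue cited for the 1980s).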
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9456920027732849, "language": "en", "url": "https://futuredistributed.org/social-housing-uk/", "token_count": 1337, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0076904296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4fe12f08-8bad-4efa-89be-ddcbb65dfc3c>" }
Social Housing in the United Kingdom

Social housing in the UK has a long and winding history stretching back as far as the 11th century. Of course, since then the stock of public housing and the policies (and organizations) that govern it have changed dramatically. Today, social housing remains a vitally important segment of the UK housing estate. This article unpacks the details about the past, present and future state of social housing in the UK. It is hoped that it will be useful both for people involved in the design and management of public housing schemes (anywhere in the world) and for those looking to make use of social housing in the UK.

What is Social Housing (UK meaning)?

In the UK context, social housing refers to homes provided by a Housing Association or a Local Council, rather than a private landlord. Note, though, that renting social housing from a Housing Association and from a Local Council are slightly different experiences, as explained in more detail in the section below. The defining feature of social housing is lower rental rates compared with a comparable property on the private market. Shelter [a leading UK housing charity] explains it as follows:

Social homes are provided by housing associations (not-for-profit organisations that own, let, and manage rented housing) or a local council. As a social tenant, you rent your home from the housing association or council, who are your landlord.

What's the difference between Council Housing, Social Housing and Affordable Housing?

It can be a little confusing to unpick the minor differences between terminology in the UK on the matter of public, social, council and affordable housing. I have provided a brief description of each below:

- Public Housing: a broad umbrella term used to refer to housing that is not owned by a private corporation or private landlord. Public housing can be either council housing or social housing.
- Council Housing: refers to homes that are owned and managed by a UK local council. Until 2011, this was more common in the UK (see below for more historical detail).
- Social Housing: refers to homes that are owned by Housing Associations and charities. Since 2011, most public housing has been sold off to housing associations and charities, who now manage the UK public housing stock.
- Affordable Housing: can be owned by either private developers or housing associations/charities. In 2020, the UK Government introduced a change in the law meaning private developers now have to provide less affordable housing in their schemes. Affordable housing is not the focus of this article.

Evolution of UK Social Housing through history

Council flats in Leamington Spa - owned and managed by Warwick District Council

Before 1914 (pre-war): the Social Housing History blog provides a great overview of this chapter of history, so I recommend you review that site for more information. For an in-depth look at the history of public housing in the UK since the First World War, Wikipedia is a good resource.

Current situation in the UK

Growing waiting lists

As a result of many factors, there is currently an official waiting list for social housing of over 1 million households. In September 2020, the National Housing Federation estimated the actual number is 500,000 households higher. Then, the Local Housing Association released a report declaring the coronavirus pandemic had increased the waiting list to record levels - over 2 million households. The situation is complex, and there doesn't seem to be any obvious commitment from the UK Government to help these households at present.

List of Social Housing providers in the UK

The UK Government maintains a list of registered providers of Social Housing in the UK, which can be accessed here.

Who is eligible for Social Housing in the UK?
In general, to gain a place on the waiting list, you must be either:
- a British citizen, settled in the UK, and over the age of 18 (although some councils accept applications from younger residents), or
- a citizen of another country with a permanent right to remain in the UK.

Most councils and housing associations have adopted a points-based system to assess and prioritize applicants for social housing. The criteria that can affect your points (scoring systems vary) include:
- whether or not you are currently homeless,
- whether or not you currently live in cramped accommodation,
- whether or not your existing housing has caused some form of medical illness,
- how long you have been living in the area,
- whether you are working in the area,
- your income level.

Commitment to house key workers in London

Key workers in London will gain priority access to Social Housing

In March 2021, the Mayor of London confirmed plans to update planning guidance with a list of key workers to be given priority access to new or rented affordable homes in the capital - read more about this amendment in Inside Housing.

Social Housing near me (UK)

Whether you're looking for social housing in London, Liverpool, Birmingham, Leeds - in fact, anywhere in the UK - this list of resources should have you covered:
- The UK Government website for Council Housing is a good place to start; here you can search by postcode for local authority housing in your area.
- HomeFinderUK is another good resource for finding social housing in the UK. You can filter by number of bedrooms, property type and accessible housing categories.
- Climate Just provides a free map of social tenant properties (requires login).
- HousingNet provides a paid solution to businesses that want to conduct analysis on the UK's housing stock (including Social Housing).

Apply for social housing

Visit this link to apply for social housing through the UK Government portal. You can only apply to one housing authority.
Normally, you must already be living in the area in which you are applying for social housing. Of course, you should remember that waiting lists are incredibly long, and you may be waiting years or even decades until you are given access to a social home in the UK.

Screenshot of the UK Government portal to apply for social housing

Social Housing Decarbonisation Fund (UK)

The Social Housing Decarbonisation Fund (SHDF) was introduced as a demonstrator project by the UK Government in 2020. The Department for Business, Energy and Industrial Strategy (BEIS) commissioned IFF Research to conduct research into the decarbonisation of social housing stock in the UK. This research is seen as a precursor to guide the UK Government's proposed £3.6bn SHDF programme.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9680579304695129, "language": "en", "url": "https://intelligence.wundermanthompson.com/2014/12/data-point-millions-rising-out-of-extreme-poverty-globally-but-no-progress-for-americas-low-income-families/", "token_count": 287, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.32421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6731d2c9-082c-4c81-b1de-89fb0af96a58>" }
Global poverty has dropped significantly while middle- and low-income American families have lost gains in wealth. Some good news to close out the year comes from the World Bank: Global poverty has dropped significantly over the last few decades. In the recent report Ending Poverty and Sharing Prosperity, the World Bank says the number of people in extreme poverty (living on less than $1.25 a day) has halved since 1990, according to the latest figures. From 2008 to 2011 alone, China and India combined saw an estimated 232 million people rise out of extreme poverty. The World Bank is working toward a goal of reducing extreme poverty to less than 3 percent of the global population by 2030. The World Bank report makes an interesting contrast with last week’s Pew analysis finding that middle- and low-income American families have lost gains in wealth growth made over the past few decades. Among low-income households, in fact, median net worth was lower in 2013 than in 1983 (calculated in 2013 dollars). With upper-income families now holding substantially more wealth than three decades ago, America’s wealth gap has reached a record high. (A recent Quartz essay notes that as of 2012, wealth inequality in San Francisco, as measured by a city agency, was more pronounced than that in Mumbai.) Developed and developing markets are coming to look more alike than different, for the better and also for the worse.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9744433760643005, "language": "en", "url": "https://sabew.org/2019/04/college-connect-spring-2019-college-budgeting-taking-it-one-step-at-a-time/", "token_count": 711, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.1435546875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:1eda44fe-5897-4596-891e-3cd9237c2a66>" }
By Ellie Bramel

Kelsey Snelgrove was in the sixth grade when the Great Recession happened. The crash hit close to home, and she watched her parents lose the business they had worked to build. "My dad literally came to me one day and was like, okay, so we have a bag of money. It says for groceries. That's it. We have no other money," Snelgrove recalled. She said the experience gave her a deeper understanding of money as she learned how to stretch her family's dollar.

Now a junior at the University of Georgia, she uses that understanding to budget her paychecks, account for weekly expenses and set long-term savings goals. "I do my budget mentally in my head," said Snelgrove. "I should probably record it, but it's hard and scary to think about all the money and seeing exactly where it goes."

Snelgrove is not alone in her budgeting style. Kristy Archuleta, an associate professor of financial planning at the University of Georgia, has found that many students do not make a formal budget. Instead, she said, many conceptualize how much they spend and try not to go over that amount. "It's important to know why you have a budget and know where your money is going," said Archuleta. "When you know exactly where your money is going, you have an idea of what you can do differently to improve your budget and reach your goals."

Budgeting as a college student can seem challenging, given that, as Archuleta observes, most college students have a limited income. She said, however, budgeting can be a good way to build and practice life skills. "When you know how much money you take home," she said, "and where it is going, you have better control over your financial future."

But budgeting can pose some challenges for college students, which Snelgrove understands firsthand. Striking a balance between treating yourself and enjoying college without overspending on unnecessary things is challenging, she said.
She had to learn how to say no to dinners out with friends, putting her money into her savings account instead. Because of her discipline with money, Snelgrove was able to afford a summer study abroad program in Argentina. She said budgeting for something specific, like a trip, was a good way to set financial goals and stick to them. Her advice for other students when budgeting for something big, like a trip, is to set small goals and take it one step at a time. She said taking it day by day can make money stretch a lot further than you would think.

"Set short-term and long-term financial goals," she said. "Track your spending. Build a budget based on what you expect to spend, then track how much you actually spend."

Both Archuleta and Snelgrove agree that while budgeting can seem like a daunting task, it is a very important skill for college students to develop.

"It's never too early to start budgeting," said Archuleta. "Building budget skills and habits now will help students budget in the future."

Ellie Bramel is a journalism student at the University of Georgia.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9367188215255737, "language": "en", "url": "https://sectigostore.com/blog/what-is-pki-a-laymans-guide-to-public-key-infrastructure/", "token_count": 2250, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06787109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4637daf8-01e5-4c7e-a84a-0ceec248277e>" }
If you’ve wondered what public key infrastructure (PKI) is, you’ve come to the right place. It’s something that protects our money, privacy and so much more. The crazy part, most people don’t even know what it is… So, you’re finally ready to ask. You’ve put this off, we know. We understand, it’s a big topic. It’s okay though. We’ll walk you through this one. And before you know it, you’ll be chatting with your friends at work and dropping knowledge when someone asks, “What is PKI?” Getting Straight to the Point: What Is PKI? Before we dive into a deeper, more layered explanation, here is a quick definition of PKI: Public key infrastructure is something that establishes and manages public key encryption and digital signature services. For public key encryption to work, digital keys and certificates need to be created, stored, distributed, managed, revoked, used and so on. PKI allows for encryption to do all of these things with software, hardware, protocols, policies, processes and services. If you want a deeper, more layered explanation of PKI, keep on reading! Answering “What is Public Key Infrastructure?” — A 100,000-Foot Perspective Like we said, we get it, this topic is tough. Just so we don’t undermine the complexity of PKI, let’s start with a riddle. What’s something that no can see but helps other’s see what can’t be seen? You guessed it, PKI! In a way, that’s quite the accurate description. Think of public key infrastructure as the almighty helping hand. The helping hand is there for anything and everything. PKI is no different. It’s there no matter what is asked of it. PKI is the helping hand that makes online banking, paying taxes online, shopping on Amazon and so much more safe and secure! It’s there to help in anyway it can! It’s a facilitator of sorts. Now, maybe that didn’t help entirely (maybe we’re not as helpful as PKI), but let’s bring you down from 100,000 feet to 50,000 feet. 
We can do this with a little backstory about encryption, keys, and Julius Caesar.

The PKI Flashback: The PKI Story Starts Almost 4,000 Years Ago

Yes, we're doing a flashback like Marty McFly. Or was it a flash forward? Hard to say when he's going BACK to the future… I digress. To truly understand PKI, you need to know some backstory. There have been signs of encryption dating all the way back to 1900 BC. Maybe the most famous example in encryption history is that of Caesar's Cipher around 40-50 BC. Caesar used a sort of shift cipher that scrambled letters by jumping ahead a fixed number of places in the alphabet. This proved to be an excellent tactic for protecting his messages from enemies who intercepted them.

This brings us to conventional (aka symmetric) encryption. Sticking with what Caesar did — basically, he knew what the key was to decode his messages, which means the person who was receiving the message needed to know what the key was as well. That's how conventional encryption works.

Here's another example of conventional encryption. Let's say Daffy Duck and Yosemite Sam didn't want Bugs Bunny to know they're working together. How could they pass secret messages without letting Bugs find out? There would need to be a way to encrypt messages and a key to decrypt them, but the problem Daffy and Yosemite would run into is that they'd need a way to pass the key. They can't meet in person because that would ruin their plan of secrecy. They also couldn't pass the key with the message because that would make the whole process utterly useless. Looks like Bugs got them again.

The problem Daffy and Yosemite face is figuring out how two people in different places can agree on an encryption key to exchange encrypted messages. Now, if Daffy and Yosemite had PKI at their disposal, then they just might be able to pull one off this time.
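A shift cipher like the one attributed to Caesar above is easy to sketch in a few lines. This is a toy for illustration only (the three-place shift is just an example), and it makes the symmetric-key problem concrete: the same fixed key both encrypts and decrypts, so it somehow has to be shared.

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def shift_cipher(text, key):
    """Encrypt by jumping ahead `key` places in the alphabet, Caesar-style."""
    out = []
    for ch in text.upper():
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + key) % 26])
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return "".join(out)

def shift_decipher(text, key):
    """Decrypt by shifting back the same fixed number of places."""
    return shift_cipher(text, -key)

secret = shift_cipher("ATTACK AT DAWN", 3)
print(secret)                      # DWWDFN DW GDZQ
print(shift_decipher(secret, 3))   # ATTACK AT DAWN
```

Notice that both parties need the same `key` value, which is exactly the sharing problem Daffy and Yosemite can't solve on their own.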
To Answer, “What is PKI?” You Need to Understand Public Key Encryption As opposed to conventional encryption, which uses one key, PKI enables what we call public key encryption (aka asymmetric encryption) to be able to use two keys. One key encrypts while the other decrypts. The two keys used are the public key and the private key. The keys are aptly named as one key is available to the public and the other one is private. Using the public key encryption method, Daffy could encrypt his message using Yosemite’s public key. That way, only the person with the private key (Yosemite) could decrypt the message. Even if it was intercepted, Bugs couldn’t get anything valuable out of the message because the ciphertext would look like gibberish without the decryption key. An impressive method for sure. Maybe the only thing more impressive is that I somehow turned Bugs Bunny into a hacker. Now take this concept and apply it to two computers trying to communicate securely. With PKI, these two computers can basically speak to each other, agree, share keys and ultimately decrypt the message that was in transit. So, I think that covers PKI from the 50,000-foot perspective. We’re going to bring you to about 10,000 feet for the rest of the way through, which will leave you with a thorough understanding of PKI, and ultimately answer the question “what is PKI?” Who Are the Key Players Involved in PKI? There are three main elements to PKI: - The key pair, which we just covered is one of them. - Certificate authorities (CAs) are another. CAs are trusted third-party bodies that develop and manage digital certificates. Trusted is the key word there as CAs hold the prestigious honor of being trusted to issue certificates by meeting ultra-strict criteria established by the CA/Browser Forum (CA/B Forum), an independent group largely made up of representatives from the world’s largest browsers. - Digital certificates, which are created by the CAs, are the final element. 
A digital certificate acts as the passports of PKI. Just as you need a passport to travel internationally, you need a digital certificate to travel through PKI. That’s because a PKI digital certificate carries documentation that details information about the key and its owner. It also comes with a signature from the CA, similar to a passport coming with a signoff from the traveler’s government. These three elements (or “players,” as the title says) make up the inner workings of this infrastructure. Five Ways That PKI Helps Us in Our Everyday Lives As we said in the opener, PKI is something that protects our money, privacy, and so much more. It touches our lives nearly every day. So, how does this invisible infrastructure help so many people? It facilitates and supports safety and security in nearly every facet of digital communications. Here are five specific areas PKI does this: In today’s digital world, it’s vital we are able to interact with websites without having our interactions recorded or intercepted. PKI allows for HTTPS to happen. The secure HTTPS protocol allows for browsers and web servers (aka websites) to safely and securely communicate. To have an HTTPS website, you need an SSL/TLS certificate. By installing an SSL certificate on your website, you receive the aforementioned public and private key pair. The private key is securely housed in the web server, so that a user’s browser can identity a website (server) as legitimate. This allows for users to safely shop, submit personal information, and pay while browsing websites. Email is another key area that PKI touches. PKI provides the framework for emails to safely travel from one person to another. This process is known as secure/multipurpose internet mail extension (S/MIME). S/MIME certificates are used to encrypt the email message and digitally sign it, so that the sender and their message can be authenticated. This also helps to prevent bad guys from tampering with emails. 
Imagine getting on WhatsApp and feeling like you can’t send a message to your friends without someone intercepting and reading it. It’s a scary thought. PKI makes it more secure to use messaging services like WhatsApp with the use of encryption. So, PKI covers secure website communications, email and messaging. What else does it cover? Well, let’s say you download an app or software, and once you download it, it asks for you to make an account, put your credit card number on there to buy additional services and for more private information. But how can you trust this? By software developers and publishers using code signing certificates, that’s how! These certificates ensure the developer/publisher of the file is who they say they are. PKI enables the code signing certificates to authenticate who the publisher is using public key encryption. It also helps to prevent tampering once the software or application is signed. In today’s digital world, it would be completely inefficient to physically sign every document that requires you to do so. That is what brings us to document signing. PKI enables users to electronically sign documents with the ability to prove to the receiver that the signed document is coming from a legitimate source. This happens with document signing certificates. And PKI isn’t just offering a secure way to digitally sign documents, it’s also saving you from a lot of hand cramps. A Final Answer to the Question “What is PKI?” From Bugs Bunny to Marty McFly and Julius Caesar to HTTPS, we’ve reached the end of our story. PKI is so much more than a software or product. PKI is a fully functional everything that allows for all of us to safely and securely operate in the digital world. With encryption and authentication, it governs this world allowing messages to travel, documents and downloads to be trusted and above all us, PKI allows us to enjoy the beautiful world behind our computer screen. 
For something so largely unknown, it’s a pretty remarkable thing. So, next time someone asks, “what is PKI,” make sure you don’t skip a single “key” detail.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.974435567855835, "language": "en", "url": "https://www.infobloom.com/what-are-goods-and-services.htm", "token_count": 578, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0947265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ebf93802-e662-44de-8618-5b40bd918eef>" }
In business, products that are sold, traded or otherwise provided to consumers or other companies can be classified as either goods, which are tangible, or services, which are intangible. Most countries measure their economies on the production and consumption of both physical goods and intangible services. Some companies provide both goods and services, and others provide only one or the other. Some goods are consumed, which means that they are gone, ineffective or unusable after they have been used once. Food is one example of a good that is consumed and must be replaced. Toothpaste, hairspray and deodorant are other examples of goods that are not practical for consumers to use more than once — the amount in the container is reduced as it is used, until it is gone and must be replaced. Products that are not consumed as they are used will last longer before needing to be replaced. Some wear out after a few weeks, months or years, and others might be replaced by improved products before they wear out. Clothing can be worn many times but eventually can wear out or become out of style. Electronic devices might stop working after a few years, but consumers often upgrade to newer or better devices before then. Other goods are more long-term in nature and might last for many years or even decades. Furniture, dishware and houses are examples of durable goods that are intended to be used for extended periods of time. Some products, such as automobiles, can last for a very long time if they are maintained properly. Services are intangible products — those that cannot be seen or touched — that are provided to consumers or other companies. A physician provides healthcare to patients. Communications companies provide services such as Internet access, television programming and the ability to make local or long-distance telephone calls. Banks provide a range of financial services to customers, such as checking accounts and investment opportunities. 
Other companies provide services such as lawn care, plumbing, home repair, business consulting or transportation. Often, goods and services are provided as a unified package, providing a well-rounded and attractive option for the consumer. For example, a restaurant can sell food items, which are goods, as well as provide services — the food is prepared and served, the table is set and cleared, and entertainment might be provided during the meal. Other companies sell goods and provide maintenance or repair services for those goods, or they might offer classes to teach consumers how to use those goods. Most countries depend on the production of goods and services to power their economies. They also often impose taxes on goods and services to create revenue for their governments. Taxes sometimes are different for goods and services that are exported than for those that are consumed domestically. There also usually are taxes on goods and services that are imported from other countries. Some countries impose higher taxes on imports so that domestic goods and services will be more attractive to consumers.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9328107833862305, "language": "en", "url": "https://www.smartshaped.com/post/not-only-cryptos-alternative-blockchain-applications", "token_count": 817, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0033416748046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:87eb15f4-5cd5-4992-b611-70a3f725fe1d>" }
Not only cryptos: alternative blockchain applications

Updated on: 5 Aug 2020

This article is the second of a series revolving around blockchain technology. In the first we introduced the technology by briefly illustrating its nature and functioning. In this article we will describe some of its most relevant applications other than cryptocurrencies.

Impervious to the difficulties faced by cryptocurrencies, blockchain has established itself as one of the most interesting emerging technologies, finding different applications in many unrelated fields. Wherever information needs to be recorded in a safe, permanent, and verifiable way, blockchain can deliver a quality solution that is relatively easy to adopt. In the following paragraphs, we are going to detail some of the most promising of these solutions.

Food traceability: many large food retailers chose a blockchain solution to quickly and reliably trace food across the supply chain. Knowing the origin of a food item and being able to use a shared database to transparently follow its path up until it is purchased is for obvious reasons greatly beneficial to both retailers and consumers.[1,2]

Goods traceability: another similar application of the technology concerns the traceability of goods shipped by sea in containers. This is a very relevant application, as over 80% of all goods are shipped that way. With respect to the faster but less flexible technologies currently used to this end, the use of blockchain is able to aggregate different stakeholders in the same ecosystem, facilitating their interaction and thus greatly improving the efficacy and reliability of the process.

Document certification: the non-falsifiability of information stored in a blockchain allows its use in the field of certifications. A digital signature of a document or certificate can be stored in the chain immutably and permanently, making its legal verification easy and trustworthy.
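The traceability and certification applications above all lean on the same core mechanism: each block stores the hash of its predecessor, so altering any past record breaks every later link. A minimal sketch of that hash-chain idea (deliberately ignoring consensus, signatures and networking, and using made-up supply-chain entries) could look like this:

```python
import hashlib
import json

def make_block(data, prev_hash):
    """A block binds its payload to the hash of the previous block."""
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def is_valid(chain):
    """Recompute every hash and check each link back to its predecessor."""
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("olive oil bottled at farm A", "0" * 64)]
chain.append(make_block("shipped in container 42", chain[-1]["hash"]))
chain.append(make_block("received by retailer B", chain[-1]["hash"]))

print(is_valid(chain))                        # True
chain[1]["data"] = "shipped in container 99"  # tamper with history...
print(is_valid(chain))                        # False: the altered block no longer matches its hash
```

Rewriting one shipment record invalidates its own hash and every block after it, which is why a shared chain makes tampering evident to all participants.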
Gaming: in one of the sectors traditionally open to the adoption of new technologies, blockchain was introduced as a system to trace and exchange so-called game assets, objects that define a player's identity such as skins, weapons, and special powers. Their storage in a blockchain allows for a better representation of their uniqueness and scarcity, and at the same time grants the opportunity to use the same infrastructure to handle their exchange in a way that is not dissimilar to other rare objects such as art pieces or precious stones.[5,6]

Digital identity protection: another promising blockchain application consists of the management and protection of individuals' and businesses' digital identities. Such sensitive information is usually stored in centralized databases susceptible to both external attacks and internal manipulations. A blockchain solution is able to avoid these issues thanks to the immutability, verifiability, and ease of attribution of the information stored in the chain.[7,8]

To be continued: blockchain types

Thanks to its guarantees of immutability and safety, blockchain was able to establish itself as an adequate technological solution in many fields that are often very different from one another (and from cryptocurrencies). This allowed the technology to evolve and take various forms according to the context of use. In the next article, we will delve deeper into this aspect and illustrate the functional necessities behind such evolutions.

References:
[1,3,4,5,7] Spagnuolo, E. (2019). Wired. Retrieved on 16/04/2019 from https://www.wired.it/economia/finanza/2019/03/15/blockchain-applicazioni/.
[2] Sayer, P. (2019). CIO. Retrieved on 16/04/2019 from https://www.cio.com/article/3323073/carrefour-modernizes-food-traceability-with-blockchain.html.
[6] Curran, B. (2019). Blockonomi. Retrieved on 16/04/2019 from https://blockonomi.com/blockchain-games/.
[8] IBM Blockchain (2019). IBM. Retrieved on 16/04/2019 from https://www.ibm.com/blockchain/solutions/identity.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9429106712341309, "language": "en", "url": "https://www.startupdonut.co.uk/business-planning/write-a-business-plan/business-plan-layout", "token_count": 256, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0174560546875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:09529c28-ec57-4fee-824a-99b1ae396599>" }
It is essential to have a realistic, working business plan when you're starting up a business. A business plan is a written document that describes your business, its objectives, its strategies, the market it is in and its financial forecasts. It has many functions, from helping you secure external funding to measuring success within your business. How do I write a business plan? Your start-up business plan should be based on detailed information but should focus on the information the reader needs to know. It should not be a long document. Before you start, you will need your financial information, market research backing up the assertions you are making, information about your team, and detailed product literature or technical specifications. Your business plan will have six sections: - an executive summary - your business - an overview of what you sell and who to - your marketing and sales strategy - your management team and personnel - your set up - what facilities and IT you have and how they help you deliver your products or services - your financial plans and projections This YouTube video will show you how to prepare a high-quality business plan using a number of easy-to-follow steps, and includes a template business plan.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9421499967575073, "language": "en", "url": "https://www.tendersontime.com/rfp-request-for-proposal/", "token_count": 3251, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.031005859375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:74537da1-4031-49cd-b414-10e3776193a9>" }
RFP: Secrets of Writing a Winning Request for Proposal We often hear the term RFP when dealing with projects, tenders, and third-party agreements. Most of us do not understand what an RFP is or how to write one. This article will help you understand what an RFP is, the nuances of writing a good RFP, the RFP process and how to respond to an RFP. Understanding the RFP Definition and its Contents An RFP, or request for proposal, is a document released by business organizations, government organizations or not-for-profit agencies to call for proposals for new projects or for outsourcing existing operations. This document contains details about the project and invites interested parties to respond with their proposals to take up the project. RFPs are published in national newspapers, trade journals and websites, and can be distributed to potential bidders. The major reason for publishing RFPs is to attract multiple bids for the project. The organization publishing the RFP can receive bids from multiple parties interested in taking up the project. Each bid presents a unique perspective on handling the project, as the bidders will include their action plan to handle the requirements. For instance, if a business organization is planning to automate its business operations and wants to hire an IT services company for the job, it can float an RFP inviting companies interested in applying for the project. Once the organization receives bids from different companies interested in IT project management, it can go through each bid and learn the action plan of different companies. It can then finalize an IT service company that has a comprehensive action plan and quotes the most competitive rates.
A Request for Proposal generally consists of the following information: an introduction to the company publishing the RFP; the scope of the project; the nature of the project; how to format and present the bids; the information required to be provided in the bids; the evaluation criteria used to finalize the contract; the timeline for delivering the finished work; incentives and penalties; and the last date for submitting bids. The Purpose of Writing a Request for Proposal (RFP) There are three major reasons to start the RFP process. Finding a Suitable Vendor - Publishing an RFP in trade journals and national newspapers helps business organizations find suitable vendors to handle the project. Improve Accountability and Transparency - Publishing an RFP helps an organization solicit bids from familiar as well as unknown companies. The organization can then compare the bids and select the most competitive bid with a suitable plan of action to handle the project. This process encourages transparency and eliminates corruption. Government Regulations - In some cases, government regulations require certain companies to float RFPs and attract bids. Understanding the RFP Process The steps in the RFP process can vary from one organization to another. However, these are some of the common steps in the RFP process. Identify the Stakeholders - Writing an RFP is a time-consuming process, which requires a detailed understanding of your organization, the nature of the project and the unique requirements of the project. It is essential to identify the stakeholders who can guide the RFP writer in identifying the project requirements and constraints, receiving and evaluating the bids and finalizing them. Understand the Project Requirements and Boundaries - The first step in writing an RFP is to understand the project requirements and major constraints. Speak to all the stakeholders and learn about the budget of the project, non-negotiable deadlines, and compulsory technical requirements.
Determine the Scoring Criteria - You have to determine the scoring criteria used to evaluate the prospective bidders. You can take advice from the stakeholders of the project and other advisors to draft the scoring criteria, which mostly depend on the priorities of the organization in selecting the bidders. For example, you can select prior experience, location, price quote, and size of the company as the scoring criteria. Writing the RFP - The next step involves writing the request for proposal. The RFP document should contain information about the organization, a short description of the project, project objectives, budget, milestones and deadlines to achieve them, the information required in the bid and the deadline for submission of the bid. You can either create the RFP from scratch or use the RFP templates available online if you are not confident about how to write an RFP. Publishing the RFP - Once the RFP is ready, the next step is to publish and circulate the RFP so that potential bidders can learn about the opportunity. You can publish the RFP document in the classifieds sections of local and national newspapers or in the trade journals relevant to the project. For example, if your project involves construction, you can publish in the trade journals related to the construction and real estate industry. Review the Bids - After the deadline for submission of bids, review all the bids. Pay attention to the plan of action, price quotation, experience, size of the company and the other scoring criteria you drafted. Research New Technologies - If a bidder mentions a new technology or a unique solution to handle the project requirements, take your time to research the new idea or technology. Technology is changing at a lightning pace, and advanced technologies provide cost-effective solutions to complex problems. Research the technology and its benefits and how it is suitable for your project.
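The scoring-criteria step described above is often implemented as a simple weighted matrix. The criteria, weights, and vendor ratings below are illustrative assumptions rather than figures from this article; a minimal sketch in Python:

```python
# Hypothetical weighted scoring matrix for comparing RFP bids.
# Criteria and weights are illustrative; adapt them to your project's priorities.
WEIGHTS = {
    "prior_experience": 0.30,
    "price": 0.30,
    "location": 0.15,
    "company_size": 0.25,
}

def score_bid(ratings):
    """Combine per-criterion ratings (0-10) into one weighted score."""
    return sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items())

# Two made-up bids, rated by the evaluation team on each criterion.
bids = {
    "Vendor A": {"prior_experience": 8, "price": 6, "location": 9, "company_size": 7},
    "Vendor B": {"prior_experience": 6, "price": 9, "location": 5, "company_size": 8},
}

# Rank bids from highest to lowest weighted score.
for vendor, ratings in sorted(bids.items(), key=lambda kv: score_bid(kv[1]), reverse=True):
    print(f"{vendor}: {score_bid(ratings):.2f}")
```

Adjusting the weights is how an organization encodes its priorities; for a budget-constrained project, for example, the weight on price would be raised.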
Research the Track Record of the Selected Bidders - You can shortlist a few bidders with impressive proposals. Research the track record of these shortlisted companies. You can ask them to provide a list of references or past clients. You can also find feedback about companies on internet forums and in local business directories. Schedule a Meeting - After the background check, shortlist a few bidders and schedule in-person meetings. Discuss the various issues and challenges related to the project and how the vendor plans to handle them. Score the responses and finalize a vendor for the project. Negotiation and Contract - The last step is to negotiate with the vendor and arrive at final terms agreeable to both parties. Finalize details regarding deliverables, milestones, and deadlines, and document these details in the contract. How Long Does the Entire RFP Process Take? There is no hard and fast rule about the duration of the RFP process. However, it takes approximately three months to complete the entire process. The duration of the RFP process depends on the size of the company, the nature of the company, the number of stakeholders, compliance regulations and the scope of the project. How to Write a Request for Proposal? An RFP allows organizations to search for the best vendors for a project. It is important to write a clear RFP that provides all the required information but does not give away too much information. You can look at RFP examples and RFP templates online to help you through the process of writing an RFP. However, if you intend to write the RFP from scratch, here are some tips on how to write a good request for proposal.
Research and Define Your Project Requirements The first step in writing an RFP is to research well and define your project requirements. You will have to provide a lot of project-related information in the RFP. Moreover, once the RFP document is published, bidders will contact you for further information, so you have to be prepared to answer all their questions. Doing some homework and researching the project will help you be ready to answer the questions of your potential bidders and also to evaluate the bids that suit your requirements. Decide How You Are Going to Publish the RFP Earlier, organizations used to publish the RFP in local or national newspapers and trade journals. Today's businesses are tech-savvy, so it also makes sense to have a website or a dedicated project page for the RFP. The webpage should consist of the project overview, the contact information of the organization, the RFP download link, and any other information you need to share with potential bidders. Your RFP should include a link to this project website. Use a Standard Format There are many standardized formats for RFPs to refer to before you start drafting. However, it is important to note that you need not follow the format as it is. Look at the specific needs of the project and determine the elements to include in your RFP. Include the Right Questions It is crucial to ask the right questions to elicit the right responses from the bidders. Include these questions in your RFP. You can also frame additional questions as per the unique requirements of your project. The answers you get from the bidders will help you understand their perspective on the project and shortlist the suitable bidders. Use the Right Tools You can invest in good RFP software that helps you create good RFPs and manage all the RFP process steps. You can also approach RFP consulting firms that provide RFP writing services.
They have a huge RFP database suitable for different types of organizations, covering government RFPs, website RFPs, marketing RFPs and so on. There are multiple types of RFPs to suit the requirements of different industries. RFPs can be classified by industry or by the nature of the project. Here are some examples of different types of RFPs. Marketing RFP - Every organization requires marketing to build brand awareness and attract customers. A marketing RFP is created to attract marketing companies when outsourcing the marketing function. For example, more and more organizations are trying to improve their brand reach through digital marketing. If a company decides to outsource its digital marketing strategy, it will come out with a marketing RFP to attract bids from digital marketing companies. Website RFP - Having a website has become critical for organizations to connect with their existing customers and attract their target audience. However, it is not possible for every organization to employ website developers for this purpose. Such organizations can publish a website RFP, soliciting bids from website developers and IT support companies. Government RFP - Government RFPs are released by government organizations to outsource work to private bidders or to get competitive bids for new projects. For example, if a government organization wants to outsource its housekeeping function, it will publish a government RFP to attract bids from interested parties. Healthcare RFP - A healthcare RFP is created either by private healthcare facilities or by governments to find vendors for different projects such as staffing services, construction, technology implementation, regulatory compliance, website design and development, medical and surgical equipment, maintenance and so on. Construction RFP - If an organization wants to take up new construction work, it can publish a construction RFP.
Some examples of construction RFPs include RFPs for road construction and for the construction of new buildings for offices, schools and so on. Where to Find Requests for Proposals? The answer to this question depends on what type of RFPs you are searching for. Government RFPs - Government RFPs can be found on the website of the organization or department issuing the RFP. Moreover, they are also published in leading newspapers and trade journals. You can check the procurement or purchasing section of the website to find RFPs for various projects. You can also find information about government RFPs from private services that provide the information on a daily, weekly, fortnightly or monthly basis. RFPs from Private Companies and Not-for-profit Organizations - These organizations are not governed by public procurement laws and may not post their RFPs on their website. They notify the registered or approved vendors of their organization about the RFP. You have to contact the organizations and find out the terms to be inducted into their approved vendors' list to get access to the RFPs. How to Respond to an RFP? Organizations shortlist the bidders based on their responses to the RFP. It is very important to take care while drafting an RFP response to ensure that you provide all the relevant information and convince the other party to hire you. Follow the guidelines provided in the RFP and provide specific information in the requested format. Here are some tips to create a convincing RFP response. Go through the RFP and understand the requirements of the project. Evaluate whether you meet the required criteria to submit an RFP response. Think of all the possible solutions for the challenges faced by the client. Devise unique solutions using the latest technology at your disposal. Create a letter of proposal outlining your perspective on solving the problem and your experience in handling such issues.
Prepare a checklist of all the questions asked in the RFP and provide answers to all of them in your RFP response. Prepare a proposal. You have to follow the format if one is mentioned in the RFP. If no format is mentioned, you can use a standard RFP format. You can also search for RFP response templates online and build your proposal. Submit the proposal well before the last date of submission and make sure to address it to the designated person. How to Write a Proposal That Convinces the Client The organizations that float an RFP receive hundreds of bids in response. It is important to create an attractive business proposal or bid that stands apart from the competitors' and makes the client take note. Let the client know that you understand their challenges and requirements. Write a short description of the problem before you start with the solution. This will help reassure the client that you understand their needs and problems. Create the proposal in the client's language. Each organization uses a specific language to communicate within the organization and with vendors and third parties. Using this language will help you fit into the organization's culture and gain easy acceptance. Refrain from using industry jargon and use easy-to-understand language. Share your success stories to let the client know that you have already worked with other clients on similar projects. If the client sees that you have helped companies in their industry overcome challenges, they are more likely to trust you with the project. You can also include references to your previous clients. Follow the basics. Your response to an RFP should consist of the following vital details. An RFP is an important document that helps business organizations get bids from prospective vendors. Creating an RFP is a time-consuming process and requires attention to detail. If you want expert guidance in drafting an RFP, get in touch with our expert RFP consultants.
We will help you to curate a well-designed RFP to suit the unique requirements of your project. Finding RFPs is a tough task. You have to visit the websites of different government organizations and private companies and scour through lots of newspapers and journals to find RFPs. You can save time by subscribing to our services. We have a huge RFP database and can help you find RFPs from government, private and not-for-profit organizations. Drafting a response to an RFP requires skill and deep understanding. You must convince the prospective client that you have the expertise and experience to handle the project. Moreover, you have to communicate with the client in their corporate language and let them know that you understand their culture and that working with your business will add value to their project. Consult our experienced RFP response writers to get a detailed idea of how to create a proposal in response to an RFP, or to get one written for you. Contact us for further information about RFPs and RFP responses. Our experienced professionals can guide you through each step of the RFP process.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9494094848632812, "language": "en", "url": "http://scenicviewdairy.com/news/congress-considers-tax-credits-for-wte-facilities/", "token_count": 939, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.11279296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0d149962-5129-45cd-b86c-1c44eae0773a>" }
Nov. 1 — A bill in Congress would allow for a 30% energy tax credit for qualified waste-to-energy projects, but opponents say the tax break is unnecessary and should go to "cleaner" renewable projects. H.R. 66, filed in January, would give the tax credit for systems that use municipal solid waste or sewage sludge as feedstock for producing solid, liquid or gas fuel. The bill excludes landfills from approved projects. A similar bill was introduced in 2010, but it did not leave committee. The bill is sponsored by six Democrats and is in the House Committee on Ways and Means. "With the right incentives, we can tackle our unsustainable waste management and our unsustainable energy sources at the same time," said Rep. Lloyd Doggett, D-Texas, in an emailed response to questions. "This credit ensures that we find the most environmentally sound solutions." Doggett is the main author of the bill. Ananda Lee Tan, spokesperson for the Global Alliance for Incinerator Alternatives, said subsidies to support waste-to-energy facilities are a drain on the economy. "We'd like to see public support and incentives go toward real alternative energy options like wind and solar," he said. The bill does not pinpoint specific types of waste-to-energy technology, leaving applicants a broad range of eligible facilities. The U.S. EPA would be tasked with evaluating applications for the credit based on environmental and energy criteria. Only projects that receive the highest scores, based on cleanliness and high energy content, would get a tax credit, Doggett's office said. To be considered for the credit, applicants must prove that their lifecycle greenhouse gas emissions will result in a net climate benefit, Doggett's office said. In addition, the tax credit would be capped.
"While this incentive faces an uncertain future in committee … we are continuing to build support for it among my colleagues and to make it a part of the conversation about waste management and alternative energy," Doggett said. Patrick Serfass, executive director of the American Biogas Council, said the group is pushing for passage, as it is open to allowing various waste-to-energy facilities besides incinerators. "We think that favors anaerobic digestion because it's a viable, competitive technology that creates energy," he said. Tan agreed that anaerobic digestion is the most promising. "You want the highest end use of the material," he said. "So [with anaerobic digestion], you're making a fertilizer that you can put back into the soil instead of just burning the waste." There are other types of technologies that the bill would include, though. The United States lags far behind Europe in biogas technology, Serfass said, with about 10,000 operating biogas projects in Europe and only about 2,000 in the U.S. Serfass said this country could support about 11,000 biogas projects. "The industry is very much just beginning in the United States," he said. "But only with technology that is known. We're trying to grow quickly and that means creating new business, new jobs and doing all that by converting waste to energy." Incentives are needed to level the playing field with other energy policy, Serfass said. "The place that we're behind right now is developing policies that keep us from wasting our waste and would instead incentivize folks to use it to make domestic, renewable base load energy," he said. Tan said his group opposes additional, more traditional WTE facilities because burning waste can be highly toxic. "Burning plastics, paper bags, glass and metal and all that consists in municipal solid waste produces some really complex toxic compounds into the air that we breathe," he said.
"There's also a huge problem with the disposal of the ash that is the end result." Recycling and composting are more effective, he said, and offer job growth. "If we are going after the highest end use of these materials, we can't be sending it to burn facilities and incentivizing that process any further," Tan said. Contact Waste & Recycling News reporter Jeremy Carroll at [email protected] or 313-446-6780.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9610985517501831, "language": "en", "url": "http://worldheritage.org/articles/eng/Pay_it_forward", "token_count": 1753, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.06884765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:53eac077-52f5-4c72-b3a0-7cb1a4820803>" }
Pay it forward Pay it forward is an expression for describing the beneficiary of a good deed repaying it to others instead of to the original benefactor. The concept is old, but the phrase may have been coined by Lily Hardy Hammond in her 1916 book In the Garden of Delight. "Pay it forward" is implemented in the contract law of loans in the concept of third-party beneficiaries. Specifically, the creditor offers the debtor the option of paying the debt forward by lending it to a third person instead of paying it back to the original creditor. This contract may include the provision that the debtor may repay the debt in kind, lending the same amount to a similarly disadvantaged party once they have the means, and under the same conditions. Debts and payments can be monetary or in good deeds. A related type of transaction, which starts with a gift instead of a loan, is alternative giving. Pay it forward was used as a key plot element in the denouement of a New Comedy play by Menander, Dyskolos (a title which can be translated as "The Grouch"). Dyskolos was a prizewinning play in ancient Athens in 317 BC; however, the text of the play was lost and it was only recovered and republished in 1957. The concept was rediscovered and described by Benjamin Franklin in a letter to Benjamin Webb dated April 25, 1784. Ralph Waldo Emerson, in his 1841 essay Compensation, wrote: "In the order of nature we cannot render benefits to those from whom we receive them, or only seldom. But the benefit we receive must be rendered again, line for line, deed for deed, cent for cent, to somebody." In 1916, Lily Hardy Hammond wrote, "You don't pay love back; you pay it forward." Woody Hayes (February 14, 1913 – March 12, 1987) was a college football coach who is best remembered for winning five national titles and 13 Big Ten championships in 28 years at The Ohio State University.
He misquoted Emerson as having said, "You can pay back only seldom. You can always pay forward, and you must pay line for line, deed for deed, and cent for cent." He also shortened the (mis)quotation to "You can never pay back; but you can always pay forward" and variants. An anonymous spokesman for Alcoholics Anonymous said in the Christian Science Monitor in 1944, "You can't pay anyone back for what has happened to you, so you try to find someone you can pay forward." Also in 1944, the first steps were taken in the development of what became the Heifer Project, one of whose core strategies is "Passing on the Gift". Robert Heinlein's contribution: Heinlein both preached and practiced this philosophy; now the
Inspired by On April 5, 2012, WBRZ-TV, the American Broadcasting Company affiliate for the city of Baton Rouge, Louisiana, did a story on The Newton Project, a 501(c)(3) outreach organization created to demonstrate that regardless of how big the problems of the world may seem, each person can make a difference simply by taking the time to show love, appreciation and kindness to the people around them. It is based on the classic pay-it-forward concept, but demonstrates the impact of each act on the world by tracking each wristband with a unique ID number and quantifying the lives each has touched. The Newton Project’s attempt to quantify the benefits of a Pay It Forward type system can be viewed by the general public at their website. The Pay it Forward Movement and Foundation was founded in the USA helping start a ripple effect of kindness acts around the world. The newly appointed president of the foundation, Charley Johnson, had an idea for encouraging kindness acts by having a Pay it Forward Bracelet that could be worn as a reminder. Since then, over a million Pay it Forward bracelets have been distributed in over 100 countries sparking acts of kindness. Few bracelets remain with their original recipients, however, as they circulate in the spirit of the reciprocal or generalized altruism. In 2000, Catherine Ryan Hyde's novel Pay It Forward was published and adapted into a film of the same name, distributed by Warner Bros. and starring Kevin Spacey, Helen Hunt and Haley Joel Osment. In Ryan Hyde's book and movie it is described as an obligation to do three good deeds for others in response to a good deed that one receives. Such good deeds should accomplish things that the other person cannot accomplish on their own. In this way, the practice of helping one another can spread geometrically through society, at a ratio of three to one, creating a social movement with an impact of making the world a better place. 
Some time in 1980, a sixteen-page supplemental Marvel comic appeared in the Chicago Tribune entitled "What Price a Life?", and was subsequently reprinted as the backup story in Marvel Team-Up #126, dated February 1983. This was a team-up between Spider-Man and The Incredible Hulk, in which Spider-Man helps the Hulk escape from police who mistakenly thought that he was attacking them. Afterwards, they meet in their secret identities, with Peter Parker warning Bruce Banner to leave town because of the Hulk's seeming attack on police. But Banner is flat broke, and cannot afford even bus fare. As a result, Parker gives Banner his last $5 bill, saying that someone had given him money when he was down on his luck, and this was how he was repaying that debt. Later, in Chicago, the Hulk confronts muggers who had just robbed an elderly retired man of his pension money, all the money he had. After corralling the muggers, the Hulk turns towards the victim. The retiree thinks that the Hulk is about to attack him as well, but instead, the Hulk gives him the $5 bill. It transpires that the very same old man had earlier given a down-on-his-luck Peter Parker a $5 bill. The mathematician Paul Erdős heard about a promising math student unable to enroll in Harvard University for financial reasons. Erdős contributed enough to allow the young man to register. Years later, the man offered to return the entire amount to Erdős, but Erdős insisted that the man instead find another student in his situation and give the money to him. Heinlein was a mentor to Ray Bradbury, giving him help and quite possibly passing on the concept, made famous by the publication of a letter from Bradbury to Heinlein thanking him. Bradbury has also advised that writers he has helped thank him by helping other writers. In Bradbury's novel Dandelion Wine, published in 1957, the main character Douglas Spaulding reflects on his life being saved by Mr.
Jonas, the Junkman. Spider Robinson made repeated reference to the doctrine, attributing it to his spiritual mentor Heinlein.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9638527631759644, "language": "en", "url": "https://accountingcoaching.online/how-can-a-company-with-a-net-loss-show-a-positive/", "token_count": 1608, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.11962890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e2e4279d-7627-4f7e-a51f-c7447e72e941>" }
- On May 18, 2020 - By Travis L. Palmer How can a company with a net loss show a positive cash flow? What Is Retained Earnings? Cash flow is the net amount of cash and cash-equivalents being transacted in and out of a company in a given period. If a company has positive cash flow, it means the company's liquid assets are increasing. If a company is liquid, it has a higher probability of paying off its debts, paying dividends to shareholders, and paying its operating expenses. Cash flow is reported on the cash flow statement, which shows where cash is being received from and how cash is being spent. This is the amount of money that the company can save for a rainy day, use to pay off debt, invest in new projects, or distribute to shareholders. Many people refer to this measurement as the bottom line because it generally appears at the bottom of the income statement. Retained earnings represent the portion of net income or net profit on a company's income statement that is not paid out as dividends. What can occur if a company reports a net loss? A common explanation for a company with a net loss to report a positive cash flow is depreciation expense. Depreciation expense reduces a company's net income (or increases its net loss) but it does not involve a payment of cash in the current period. It is a good idea to get comfortable reading the statement of cash flows. It should be included with a corporation's income statement and balance sheet. Net income, also called net profit, is a calculation that measures the amount by which total revenues exceed total expenses. In other words, it shows how much revenue is left over after all expenses have been paid. A negative amount suggests the business is using its cash flow from operating activities to pay dividends and pay off its outside financing.
Cash payment of dividends leads to cash outflow and is recorded in the books and accounts as net reductions. A company's income statement for a recent year reported revenues of $2,000,000 and expenses of $2,075,000 for a net loss of $75,000. A comparison of the company's balance sheets reveals that its accounts receivable decreased by $10,000 and its accounts payable increased by $7,000 during the same year. To keep our illustration simple, let's assume that except for cash, the reported amounts for the other current assets and current liabilities remained the same. A cash flow statement shows a company's cash inflows and outflows and the overall change in its cash balance during an accounting period. There are some general signs to look for in a business's cash flow statement that suggest it has strong financial health. Retained earnings are often reinvested in the company to use for research and development, replace equipment, or pay off debt. Both revenue and retained earnings are important in evaluating a company's financial health, but they highlight different aspects of the financial picture. Revenue sits at the top of the income statement and is often referred to as the top-line number when describing a company's financial performance. Since revenue is the total income earned by a company, it is the income generated before operating expenses and overhead costs are deducted. In some industries, revenue is called gross sales, since the gross figure is before any deductions. Retained earnings are accumulated and tracked over the life of a company. What this means is that as each year passes, the beginning retained earnings are the ending retained earnings of the previous year. Retained earnings are leftover profits after dividends are paid to shareholders, added to the retained earnings from the beginning of the year. Instead, the corporation likely used the cash to acquire additional assets in order to generate additional earnings for its stockholders.
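The reconciliation described above can be sketched in a few lines of code. This is a minimal illustration of the indirect method using the article's example figures (net loss of $75,000, receivables down $10,000, payables up $7,000); the function and parameter names are my own, not from any accounting library.

```python
def operating_cash_flow(net_income, depreciation=0.0,
                        decrease_in_receivables=0.0,
                        increase_in_payables=0.0):
    """Indirect method: start from net income (or a net loss, passed as a
    negative number) and add back non-cash expenses such as depreciation
    plus favorable working-capital changes."""
    return (net_income + depreciation
            + decrease_in_receivables + increase_in_payables)

# The article's example: a $75,000 net loss, but receivables fell $10,000
# and payables rose $7,000, so cash decreased by less than the loss.
cash_used = operating_cash_flow(-75_000,
                                decrease_in_receivables=10_000,
                                increase_in_payables=7_000)
print(cash_used)  # -58000

# Adding back a large non-cash depreciation charge (the $300,000 example
# later in this piece) shows how a net loss can coexist with positive cash flow.
print(operating_cash_flow(-75_000, depreciation=300_000))  # 225000
```

Note how a company reporting a loss can still show positive operating cash flow whenever the non-cash add-backs exceed the loss.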
In some cases, the corporation will use the cash from the retained earnings to reduce its liabilities. As a result, it is difficult to identify exactly where the retained earnings are presently. The cash flow from the financing activities section shows cash flows from issuing and paying off outside financing, such as stock and debt, and from paying dividends. Depreciation expense reduces a company's net income (or increases its net loss) but it does not involve a payment of cash in the current period. For example, if a company purchased equipment last year for $2,100,000 and depreciates the equipment over seven years, its depreciation expense this year might be $300,000. This year's $300,000 entry involves a debit to Depreciation Expense and a credit to Accumulated Depreciation. A corporation must report its expenses as they are incurred, and that is often before the corporation pays the invoice. Such items include sales revenue, cost of goods sold (COGS), depreciation, and necessary operating expenses. Retained earnings are the portion of a company's profit that is held or retained and saved for future use. Retained earnings could be used for funding an expansion or paying dividends to shareholders at a later date. Retained earnings are related to net (as opposed to gross) income since it's the net income amount saved by a company over time. Positive profits give a lot of room to the business owner(s) or the company management to utilize the surplus money earned. Often this profit is paid out to shareholders, but it can also be re-invested back into the company for growth purposes. Since Aaron's revenues exceed his expenses, he will show a $132,500 profit.
If Aaron had made only $50,000 of revenues for the year, his expenses would have exceeded his revenues and he would have shown negative earnings; strictly speaking, a negative result is called a net loss rather than a "negative profit." As the company loses ownership of its liquid assets in the form of cash dividends, it reduces the company's asset value on the balance sheet, thereby impacting RE. By definition, retained earnings are the cumulative net earnings or profits of a company after accounting for dividend payments. It is also called earnings surplus and represents the reserve money which is available to the company management for reinvesting back into the business. When expressed as a percentage of total earnings, it is also called the retention ratio and is equal to (1 – dividend payout ratio). Retained Earnings Formula and Calculation Retained earnings (RE) is the amount of net income left over for the business after it has paid out dividends to its shareholders. A business generates earnings that can be positive (profits) or negative (losses). A common explanation for a company with a net loss to report a positive cash flow is depreciation expense. You can monitor your company's cash flows by reviewing your current and past cash flow statements. Alternatively, a company paying dividends large enough to exceed its earnings can also cause retained earnings to go negative. Any item that impacts net income (or net loss) will impact the retained earnings. For example, a corporation with an accounting year ending December 31 might have a huge expense at the end of 2012, but the invoice is not due until January 2013. The 2012 net income was reduced, but the corporation's cash is not reduced until 2013. A corporation might receive a deposit from one of its customers in December 2012, but will not earn the revenues until 2013. In that case, the corporation's cash increased in 2012, but the corporation's revenues and net income will not increase until 2013.
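The retained earnings relationship described above reduces to simple arithmetic. The sketch below is illustrative only: the $132,500 figure is Aaron's profit from the article, while the beginning balance and dividend amounts are hypothetical numbers I chose for the example.

```python
def ending_retained_earnings(beginning_re, net_income, dividends):
    """RE rolls forward each period: last period's ending balance is this
    period's beginning balance. The result can go negative when cumulative
    losses and dividends exceed cumulative profits."""
    return beginning_re + net_income - dividends

def retention_ratio(net_income, dividends):
    # Retention ratio = 1 - dividend payout ratio.
    return 1 - dividends / net_income

# Hypothetical beginning balance of $50,000 and dividends of $20,000,
# with Aaron's $132,500 profit from the article:
print(ending_retained_earnings(50_000, 132_500, 20_000))  # 162500

# A loss plus dividends can push retained earnings negative:
print(ending_retained_earnings(0, -10_000, 5_000))  # -15000
```

The second call shows the "negative retained earnings" case the text mentions: a new business with no accumulated earnings that pays dividends during a loss year.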
{ "dump": "CC-MAIN-2021-17", "language_score": 0.963262140750885, "language": "en", "url": "https://accountingcoaching.online/notes-receivable-definition/", "token_count": 1709, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:68154849-114a-4915-9fae-a38904d4a2a5>" }
- On July 2, 2020 - By Travis L. Palmer Notes Receivable Definition Customers with overdue credit accounts may sign notes promising to pay all or part of the balance due, with interest, by a specific date. The supplier debits the amount of the note, excluding interest, to the notes receivable account and credits the same amount to the accounts receivable account. Alternatively, if the note is signed in exchange for goods, the supplier debits the notes receivable account and credits the sales account. Accounts receivable and notes receivable that result from company sales are called trade receivables, but there are other types of receivables as well. For example, interest revenue from notes or other interest-bearing assets is accrued at the end of each accounting period and placed in an account named interest receivable. Although credit customers owe $20,000 to TechCom, only $18,500 is expected in cash collections from these customers. (TechCom continues to bill its customers a total of $20,000.) In the balance sheet, the Allowance for Doubtful Accounts is subtracted from Accounts Receivable and is often reported as shown in Exhibit 7.6. When a company directly grants credit to its customers, it expects that some customers will not pay what they promised. The direct write-off method usually does not best match sales and expenses because bad debts expense is not recorded until an account becomes uncollectible, which often occurs in a period after that of the credit sale. The debit in this entry charges the uncollectible amount directly to the current period's Bad Debts Expense account. The credit removes its balance from the Accounts Receivable account in the general ledger (and its subsidiary ledger). Many companies allow their credit customers to make periodic payments over several months. For example, Harley-Davidson reports more than $2 billion in installment receivables.
What is the difference between Accounts Receivable and Notes Receivable? The accounts of these customers are uncollectible accounts, commonly called bad debts. The total amount of uncollectible accounts is an expense of selling on credit. Why do companies sell on credit if they expect some accounts to be uncollectible? Specifically, each receivable is classified by how long it is past its due date. Then estimates of uncollectible amounts are made assuming that the longer an amount is past due, the more likely it is to be uncollectible. After the amounts are classified (or aged), experience is used to estimate the percent of each class that is uncollectible. These percents are applied to the amounts in each class and then totaled to get the estimated balance of the Allowance for Doubtful Accounts. This computation is performed by setting up a schedule such as Exhibit 7.11. The second is based on the balance sheet relation between accounts receivable and the allowance for doubtful accounts. The allowance method estimates bad debts expense at the end of each accounting period and records it with an adjusting entry. TechCom, for instance, had credit sales of $300,000 during its first year of operations. Other receivables can be divided according to whether they are expected to be received within the current accounting period or 12 months (current receivables), or after more than 12 months (non-current receivables). Customers frequently sign promissory notes to settle overdue accounts receivable balances. If D. Brown signs a six-month, 10%, $2,500 promissory note after falling 90 days past due on her account, the business records the event by debiting notes receivable for $2,500 and crediting accounts receivable from D. Brown. Notice that the entry does not include interest revenue, which is not recorded until it is earned. Both accounts receivable and notes receivable are vital for organizations, especially from a liquidity point of view.
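The interest on the $2,500 promissory note above is straightforward simple-interest arithmetic, which can be sketched as follows (function names are my own; this is an illustration of the textbook calculation, not a library API):

```python
def simple_interest(principal, annual_rate, months):
    """Simple interest for a note held for a fraction of a year."""
    return principal * annual_rate * months / 12

def maturity_value(principal, annual_rate, months):
    """Total amount due at maturity: principal plus accrued interest."""
    return principal + simple_interest(principal, annual_rate, months)

# D. Brown's six-month, 10%, $2,500 note from the example above:
print(simple_interest(2_500, 0.10, 6))   # 125.0
print(maturity_value(2_500, 0.10, 6))    # 2625.0
```

The $125 of interest is exactly the revenue that, per the text, is not recorded at signing but accrued as it is earned over the six months.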
What is the difference between an account receivable and a note receivable? Key Difference – Accounts Receivable vs Notes Receivable The key difference between accounts receivable and notes receivable is that accounts receivable is the funds owed by the customers whereas notes receivable is a written promise by a supplier agreeing to pay a sum of money in the future. These are two principal types of receivables for a company and will be recorded as assets in the statement of financial position. Accounts receivable and notes receivable play an important role in deciding the liquidity position of the company. The percent of accounts receivable method assumes that a percent of a company's receivables is uncollectible. This percent is based on past experience and is impacted by current conditions such as economic trends and customer difficulties. The answer is that companies believe that granting credit will increase total sales and net income enough to offset bad debts. Receivables can be classified as accounts receivable, notes receivable and other receivables (loans, settlement amounts due for non-current asset sales, rent receivable, term deposits). Is a note receivable a current asset? notes receivable definition. An asset representing the right to receive the principal amount contained in a written promissory note. Principal that is to be received within one year of the balance sheet date is reported as a current asset. The difference between accounts receivable and notes receivable is mainly decided based on the ability to receive interest and the availability of a legally binding document. Percent of sales, with its income statement focus, does a good job at matching bad debts expense with sales.
The accounts receivable methods, with their balance sheet focus, do a better job at reporting accounts receivable at realizable value. The aging of accounts receivable method uses both past and current receivables information to estimate the allowance amount. The chart here is from a survey that reported estimates of bad debts for receivables grouped by how long they were past their due dates. Each company sets its own estimates based on its customers and its experiences with those customers’ payment patterns. The expense recognition principle requires expenses to be reported in the same period as the sales they helped produce. This means that if extending credit to customers helped produce sales, the bad debts expense linked to those sales is matched and reported in the same period. The total dollar amount of all receivables is multiplied by this percent to get the estimated dollar amount of uncollectible accounts—reported in the balance sheet as the Allowance for Doubtful Accounts. The estimated bad debts expense of $1,500 is reported on the income statement (as either a selling expense or an administrative expense). A contra account is used instead of reducing accounts receivable directly because at the time of the adjusting entry, the company does not know which customers will not pay. TechCom’s account balances (in T-account form) for Accounts Receivable and its Allowance for Doubtful Accounts are as shown in Exhibit 7.5. The Allowance for Doubtful Accounts credit balance of $1,500 reduces accounts receivable to its realizable value, which is the amount expected to be received. At the end of the first year, $20,000 of credit sales remained uncollected. Based on the experience of similar businesses, TechCom estimated that $1,500 of its accounts receivable would be uncollectible and made the following adjusting entry. Accounts receivable are amounts that customers owe the company for normal credit purchases. 
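The aging computation described above can be sketched in a few lines. The age classes and uncollectible percents below are hypothetical stand-ins, since the article's Exhibit 7.11 is not reproduced here; only the mechanics (multiply each aged balance by its percent, total the results, then adjust the existing allowance to that target) follow the text.

```python
# (age class, receivable balance, estimated fraction uncollectible) -- illustrative
aging_schedule = [
    ("not yet due",            10_000, 0.02),
    ("1-30 days past due",      5_000, 0.05),
    ("31-60 days past due",     3_000, 0.10),
    ("over 60 days past due",   2_000, 0.40),
]

# Target ending balance for the Allowance for Doubtful Accounts:
required_allowance = sum(balance * pct for _, balance, pct in aging_schedule)

# The adjusting entry records whatever expense brings the existing
# (unadjusted) allowance up to the target balance.
unadjusted_allowance = 300  # hypothetical existing credit balance
bad_debts_expense = required_allowance - unadjusted_allowance

print(required_allowance)   # $1,550 with these illustrative figures
print(bad_debts_expense)    # $1,250 adjusting entry
```

This mirrors the balance-sheet focus the text describes: the method estimates the allowance balance first, and bad debts expense falls out as the plug.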
Notes receivable are amounts owed to the company by customers or others who have signed formal promissory notes in acknowledgment of their debts. Accounts Receivables Turnover Come of Age Unlike wine, accounts receivable do not improve with age. The longer a receivable is past due, the less likely it is to be collected. Wage advances, formal loans to employees, or loans to other companies create other types of receivables. If significant, these nontrade receivables are usually listed in separate categories on the balance sheet because each type of nontrade receivable has distinct risk factors and liquidity characteristics. Under the allowance method only do we estimate bad debts expense to prepare an adjusting entry at the end of each accounting period. One is based on the income statement relation between bad debts expense and sales.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.959355354309082, "language": "en", "url": "https://accountingcoaching.online/what-is-the-difference-between-equity-and-assets/", "token_count": 1643, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:1661a8a7-6ccf-4048-9ecf-e6f102ba4749>" }
- On June 10, 2020 - By Travis L. Palmer What is the difference between equity and assets? Williams can cover their liabilities if the company has to hypothetically pay all its liabilities right now. Although the current ratio has improved, it covers all the current assets, while the quick ratio, which has declined, covers only the liquid assets. A company must ideally maintain its current ratio at 1.5 and quick ratio at 1, but this may vary according to the relevant business or industry. The financial accounting term quick asset is used to describe a subset of current assets used in the calculation of the quick ratio, also known as the acid test. Cash, short-term debt, and the current portion of long-term debt are excluded from the net working capital calculation because they are related to financing and not to operations. A business may have a large amount of money as accounts receivable, which may bump up the quick ratio. Quick assets include those assets that can reasonably be used to pay current liabilities. The quick ratio is an indicator of a company's short-term liquidity position and measures a company's ability to meet its short-term obligations with its most liquid assets. Since it indicates the company's ability to instantly use its near-cash assets (assets that can be converted quickly to cash) to pay down its current liabilities, it is also called the acid test ratio. An acid test is a quick test designed to produce instant results—hence, the name. In finance, the quick ratio, also known as the acid-test ratio, is a type of liquidity ratio which measures the ability of a company to use its near-cash or quick assets to extinguish or retire its current liabilities immediately. The quick ratio can also be contrasted against the current ratio, which is equal to a company's total current assets, including its inventories, divided by its current liabilities.
The quick ratio represents a more stringent test of a company's liquidity in comparison to the current ratio. Prepayments are subtracted from current assets in calculating the quick ratio because such payments can't be easily reversed. The quick ratio's independence from inventories makes it a good indicator of liquidity in the case of companies that have slow-moving inventories, as indicated by their low inventory turnover ratio. The quick ratio is a stricter measure of liquidity of a company than its current ratio. While the current ratio compares total current assets to total current liabilities, the quick ratio compares cash and near-cash current assets with current liabilities. How do you calculate quick assets? Quick assets are assets that can be converted to cash quickly. Typically, they include cash, accounts receivable, marketable securities, and sometimes (not usually) inventory. Analysts most often use quick assets to assess a company's ability to satisfy its immediate bills and obligations that are due within a one-year period. This ratio allows investment professionals to determine whether a company can meet its financial obligations if its revenues or cash collections happen to slow down. The quick ratio considers only assets that can be converted to cash very quickly. The current ratio, on the other hand, considers inventory and prepaid expense assets. In most companies, inventory takes time to liquidate, although a few rare companies can turn their inventory fast enough to consider it a quick asset. Prepaid expenses, though an asset, cannot be used to pay for current liabilities, so they're omitted from the quick ratio. The higher the quick ratio, the more favorable it is for the company, as it shows the company has more liquid assets than current liabilities.
Since near-cash current assets are less than total current assets, the quick ratio is lower than the current ratio unless all current assets are liquid. The quick ratio is most useful where the proportion of illiquid current assets to total current assets is high. However, the quick ratio is less conservative than the cash ratio, another important liquidity parameter. Quick Assets Versus Current Assets Quick assets include cash and equivalents, marketable securities, and accounts receivable. Companies use quick assets to calculate certain financial ratios that are used in decision making, primarily the quick ratio. Liquidity The term working capital is used to describe the current items of the balance sheet. Working capital includes current assets such as cash, accounts receivable, and inventory, and current liabilities such as accounts payable and other short-term liabilities. Net working capital is defined as non-cash current operating assets minus non-debt current operating liabilities. Liquid current assets are current assets which can be quickly converted to cash without any significant decrease in their value. Liquid current assets typically include cash, marketable securities and receivables. With a quick ratio of 0.94, Johnson & Johnson appears to be in a decent position to cover its current liabilities, though its liquid assets aren't quite able to meet each dollar of short-term obligations. Procter & Gamble, on the other hand, may not be able to pay off its current obligations using only quick assets, as its quick ratio is well below 1, at 0.51.
To calculate the quick ratio, locate each of the formula components on a company's balance sheet in the current assets and current liabilities sections. A ratio of 1 indicates the company has just sufficient assets to meet its current liabilities, whereas a ratio of less than 1 indicates the company may face liquidity concerns in the near term. The figures for both current assets and liquid assets are used to calculate the liquidity ratios of a business, which are used to assess the ability of a business to meet its short-term cash needs. The quick ratio is a liquidity ratio that compares quick assets to current liabilities. A quick ratio of .5 means that the company has twice as many current liabilities as quick assets. This means that in order to pay off all the current liabilities, this company would have to sell off some of its long-term assets. A quick ratio lower than the industry average might indicate that the company may face difficulty honoring its current obligations. Alternatively, a quick ratio significantly higher than the industry average highlights inefficiency, as it indicates that the company has parked too much cash in low-return assets. A quick ratio in line with the industry average indicates availability of sufficient good-quality liquidity. Quick ratio (also known as acid-test ratio) is a liquidity ratio which measures the dollars of liquid current assets available per dollar of current liabilities.
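The contrast between the two ratios described above can be made concrete with a short sketch. The balance-sheet figures below are hypothetical, chosen only to show how excluding inventory and prepaids pulls the quick ratio below the current ratio.

```python
def current_ratio(current_assets, current_liabilities):
    """All current assets, including inventory and prepaid expenses."""
    return current_assets / current_liabilities

def quick_ratio(cash, marketable_securities, receivables, current_liabilities):
    """Only near-cash assets; inventory and prepaids are deliberately excluded."""
    quick_assets = cash + marketable_securities + receivables
    return quick_assets / current_liabilities

# Hypothetical balance-sheet figures:
cash, securities, receivables = 20_000, 10_000, 30_000
inventory, prepaids = 25_000, 5_000
liabilities = 60_000

total_current = cash + securities + receivables + inventory + prepaids
print(current_ratio(total_current, liabilities))              # 1.5
print(quick_ratio(cash, securities, receivables, liabilities))  # 1.0
```

With these numbers the company clears the rule-of-thumb benchmarks the text mentions (current ratio of 1.5, quick ratio of 1), while a firm carrying the same totals mostly as inventory would not.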
This may include essential business expenses and accounts payable that need immediate payment. Despite having a healthy quick ratio, the business is actually on the verge of running out of cash. The quick ratio measures the dollar amount of liquid assets available against the dollar amount of current liabilities of a company. This ratio goes one step ahead of current ratio, liquid ratio & is calculated by dividing super quick assets by the current liabilities of a business. It is called super quick or cash ratio because unlike other liquidity ratios it only takes into account “super quick assets”. What Are Quick Assets? Quick assets are defined as cash, accounts receivable, and notes receivable – essentially current assets minus inventory. The quick ratio is more conservative than the current ratio because it excludes inventory and other current assets, which are generally more difficult to turn into cash.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9379307627677917, "language": "en", "url": "https://accountinguide.com/imposed-budgeting/", "token_count": 463, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1357421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6333e47d-d2c6-4f62-91a8-7bd47d395773>" }
Imposed budgeting is an approach in which top management prepares the budget and imposes it on lower-level managers to implement. This budget is prepared and reviewed only by top management, with little or no input from middle and lower management. Most of the time, the budget follows the company's objective and mission. After the budget is approved, it is imposed on all departments to follow. Each department needs to prepare individual targets to support the main objective. The sales team needs to ensure that total revenue meets the target. Production managers have to keep costs under the standard cost in order to obtain the budgeted margin. Moreover, all departments must keep their own fixed costs under budget, so that annual profit will be equal to or higher than the budget.

Importance of Imposed Budget

| Advantages of Imposed Budget | |
|---|---|
| Working as a team | Everybody in each department knows their own role and responsibilities. They know their contribution to the company objective. If they fail to fulfill their department's objective, it will have a negative impact on the company's target. |
| Easy and cheap | As top management sets the target, it takes less time to process and approve. It reduces the time spent revising the budget to fit management's expectations. The bottom-up method requires every department to raise its own budget, which leads to inconsistency and requires a lot of work to put the pieces together. |
| Management in control | Top management stays in control of the company's objective. They set the target and allocate it to all staff, so everyone works on the same strategy. Management sets the top-level strategy and everybody follows. |

| Disadvantages of Imposed Budget | |
|---|---|
| Lack of motivation | Middle managers and staff may feel a lack of motivation, as they do not contribute any ideas to the budget. They simply follow top management, who pay attention to the bottom line and high-level strategies. |
| Not practical | Top management may lack technical knowledge of day-to-day operations. A target set this way may be too aggressive, existing only in the budget and not in reality. On the other hand, the target may be set far below actual production capacity. |
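The top-down mechanics can be illustrated with a toy sketch: management fixes a company-wide target and derived targets are pushed down onto departments. The department names, weights, and target figure below are entirely hypothetical.

```python
# Top-down (imposed) allocation of a company-wide revenue target.
company_revenue_target = 1_000_000
department_weights = {"north": 0.5, "south": 0.3, "online": 0.2}  # must sum to 1

department_targets = {dept: company_revenue_target * weight
                      for dept, weight in department_weights.items()}

# Each department then owns its slice of the imposed target:
for dept, target in department_targets.items():
    print(f"{dept}: {target:,.0f}")
```

The point of the sketch is directional flow: the weights come from the top, not from department submissions, which is exactly the trade-off (speed and control versus motivation and realism) the tables above describe.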
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9356163740158081, "language": "en", "url": "https://republicofmining.com/2018/02/15/crucial-to-find-cobalt-sources-outside-of-africa-by-rahul-verma-and-brent-a-elliott-my-san-antonio-com-february-14-2018/", "token_count": 310, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.052490234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:fdebbbbd-d899-49ba-8463-b7483af67910>" }
Rahul Verma is a research scientist associate in the Bureau of Economic Geology at the University of Texas at Austin. Brent A. Elliott is an economic geologist in the Bureau of Economic Geology at the University of Texas at Austin. As we move toward integration of renewable energy sources and electric vehicles, we need to pay greater attention to the cobalt supply chain and diversification of supply for cobalt sources. Cobalt plays an integral part in the common lithium-ion battery, and as battery-powered applications such as electric vehicles become ubiquitous, cobalt mining will need to grow proportionally to avoid supply bottlenecks. Industry projections show that if we reach 24.7 million cars by 2025, we will need the cobalt supply for a compound annual growth rate of about 8 percent from 2020 to 2025. If demand is higher, such as upward of 63.2 million cars by 2025, it will require a growth of about 14 percent from 2020 to 2025. Such growth rates hinge on a precarious supply chain. The foremost risk, and perhaps the most challenging to solve, is geopolitical. Sixty-two percent of the world’s cobalt comes from the Democratic Republic of Congo, and combined with production from Zambia, Madagascar, South Africa and Zimbabwe, the five countries mine more than 71 percent of the world’s cobalt. Companies process ore locally and export more than 90 percent of the total to China for further processing and refining to produce commercial cobalt compounds used in batteries.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9522843956947327, "language": "en", "url": "https://rivertonroll.com/news/2019/11/29/european-space-agency-gets-big-funding-boost.html", "token_count": 397, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0267333984375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:fffbfae8-acfd-42e4-996c-87d417052fb9>" }
The European Space Agency announced that it has been granted more money than it expected from the 22 member states that fund it. During its meeting in Seville, Spain, representatives from all the countries that make up ESA agreed to fund all of the projects the agency proposed over the next five years. Member states are allowed to choose which programs to invest in, based on what best meets their industrial, scientific, and strategic priorities. According to ESA director-general Jan Woerner’s live-streamed news conference, the member states have pledged 14.4 billion euros ($15.9 billion) in funding over the next five years. Germany will be largest single funder, providing 22.9 percent of the total. Its industry will get the bulk of the R&D contracts. The United Kingdom has committed to contribute £374 million ($483.4 million) per year over five years, its largest ever investment in the ESA. According to the announcement, more money than expected has been committed to expand Europe’s Copernicus Earth-observation satellites program. These satellites monitor the status of the planet in a mission that is scheduled to run until 2028. Funding has also been secured for the UK-led TRUTHS mission to help tackle climate change by creating a more detailed survey of the Earth’s climate. European participation in a new wave of lunar exploration will also be funded. Missions getting backing include sending the first European astronaut to the Moon and participating in building the NASA Lunar Gateway space station intended to orbit the Moon. ESA will also develop new space shuttle and rovers, along with other technologies. The agency also has Mars in its sights, revealing plans to retrieve the first samples from Mars. It also plans to develop technologies to remove space junk to prevent collisions in space and will send European astronauts to the International Space Station before 2024. A mission to intercept and study a comet in our solar system is also on the schedule.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9436783194541931, "language": "en", "url": "https://vertavahealth.com/blog/financial-cost-of-addiction/", "token_count": 1941, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.02490234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:3793f555-dab1-4720-b875-e8bc1f1b25f2>" }
Drug and alcohol addiction does not come without costs: costs to health, relationships, as well as financial cost. Addiction can cost a person thousands of dollars each year, depending on the type of substance abused, the amount, and other considerations such as healthcare costs. Financial barriers to seeking drug and alcohol rehab are a common concern. However, the value of seeking treatment, when and if affordable options are available, far exceeds its cost and can save you money down the road. Drug and alcohol treatment can also present more valuable benefits, such as skills-learning groups and access to a team of specialists capable of coordinating a personalized treatment plan. What Is The Total Cost Of Addiction In The U.S.? The financial cost of struggling with drug or alcohol addiction over time is more than just a personal problem. Substance abuse is estimated to cost the United States economy over $600 billion each year, according to the National Institute on Drug Abuse (NIDA). This number is staggering, and doesn't include the significant indirect costs of drug and alcohol addiction. These indirect costs, which include drug-related deaths and reduced quality of life, raise the total cost to over a trillion dollars each year. Areas of spending that are considered when calculating these total costs include:
- healthcare costs
- workplace productivity loss
- criminal justice costs
- research and prevention
- public assistance and social services
- traffic collisions
- intangible costs (e.g. decreased quality of life)
How Do Treatment Costs Affect The Total Cost Of Addiction? A 2011 report from the U.S. Department of Justice shares that treatment costs for substance abuse make up only 6 percent of the total financial cost of addiction nationwide.
Although millions of people struggle with drug or alcohol addiction in any given year, only a small percentage go on to seek treatment. There are many explanations for this that vary from person to person, and with every day, month, or year someone doesn't seek treatment, the cost of addiction grows.

Annual Financial Costs Of Addiction By Drug
Breaking the costs down further, below is a detailed look at how the annual financial cost of addiction compares across different types of substances.

Prescription opioid abuse, according to the CDC, has reached crisis levels in the United States. In 2017, more than 191 million opioid prescriptions were filled by patients in the United States – and this doesn't account for opioids purchased illegally. Although opioids like oxycodone and Vicodin can be an effective treatment for severe pain, they can also be highly addictive, leading many to a dangerous pattern of opioid misuse and addiction. Older adults especially are a population commonly receiving these prescriptions, and with a slower drug metabolism, they can be at greater risk for developing a problem. The economic cost of opioid misuse in 2015 was estimated at $504 billion. On an individual level, chronic opioid use can become very costly, with prices even higher for pills not received through a prescription. Common opioids of abuse and average costs per pill include:
- prescription: $6; street price: $50 to $80
- fentanyl patch: prescription $9; street price $40
- Vicodin (hydrocodone): prescription $1.50; street price $2 to $10
- prescription: $6; street price: $30 to $40

Taking opioids multiple times a day adds up, with higher doses costing even more. In addition, as a person develops a higher tolerance for opioids, higher doses will be needed to continue experiencing the same effects. The majority of opioid-related costs in the United States occur as a result of fatalities.
Non-fatal instances of opioid misuse make up only a fraction of the total cost, representing $72.7 billion of the total $504 billion.

Alcohol is the most commonly abused drug in the United States and contributes to a problem that kills 88,000 people each year. In 2016, the CDC reported an annual U.S. economic loss of $249 billion due to the costs of excessive alcohol use. This works out to about $807 per U.S. citizen each year. Binge-drinking drives the majority of this economic toll, accounting for 77 percent of the total cost. Underage drinking and drinking while pregnant also make up sizable portions of this total. On a societal level, excessive drinking hurts the economy through significant losses of workplace productivity, healthcare costs, and more. From a 2016 CDC report on heavy drinking, the total cost breaks down as:
- loss in workplace productivity (72 percent)
- health care expenses (11 percent)
- law enforcement and criminal justice (10 percent)
- motor vehicle crashes (5 percent)

The expense of buying alcohol for heavy drinkers can run to thousands of dollars each year. Adding in the expense of treating the medical and mental health consequences of alcoholism raises this cost even further.

Cocaine is a powerful and addictive illicit drug that first rose to popularity in the early 1980s and has generally declined in price since. Since the 1990s, cocaine prices have remained relatively steady, despite new legal barriers that make it more difficult for drug cartels to ship and distribute the drug. The price of cocaine today varies based on location, amount, and drug purity. On average, a gram of cocaine in the U.S. can cost between $93 and $163, depending on purity and location. This is equal to about ten lines, or twenty-five "bumps," of cocaine. Depending on the severity of a person's problem, cocaine addiction can result in costs of $8,000 to $10,000 a year.
People with a severe cocaine addiction may spend more.

Heroin is an illicit opiate that sells for between $5 and $20 a dose. For people with severe addictions, heroin costs can run from $150 to $200 a day. This can add up to $54,000 to $73,000 per year, without factoring in the financial burden of other heroin-related costs. Compared to the prices of prescription opioids, however, heroin is often seen as a cheaper and more easily obtainable alternative. It may sometimes be taken with, or mixed with, other addictive opioids such as fentanyl. Heroin is a powerful drug that can pose several dangers to physical and mental health. A heroin addiction can make it difficult to keep a job and can cause legal troubles, increased healthcare costs, and other serious consequences. Heroin addiction also raises costs on a societal scale through higher incarceration rates, theft, and other criminal justice costs.

Compare: The Cost Of Addiction Treatment
Many people struggling with addiction have concerns about whether or not they can pay for treatment. Treatment expenses for drug and alcohol addiction vary depending on the type of treatment being sought, location, and other factors. Detox services, for instance, can cost between $250 and $800 a day out-of-pocket, while residential rehab programs can cost between $2,000 and $25,000 depending on the length of stay and amenities offered. Depending on the substance and the severity of someone's addiction, it's not uncommon to spend hundreds or even thousands of dollars in a short time to maintain drug or drinking habits. In comparison, getting treatment can provide significantly more value, personally and financially. Most insurance companies are required by law to provide some level of coverage for mental and behavioral health services, which includes drug and alcohol rehab. This can reduce costs for people with insurance who are seeking treatment.
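The annualized figures quoted above for daily heroin spending can be checked with simple arithmetic. This is only a back-of-the-envelope sketch: the $150–$200 per-day range comes from the article, and the helper function name is illustrative.

```python
# Annualize the daily cost figures quoted in the article.
# $150-$200/day should land close to the article's $54,000-$73,000/year range.

def annual_cost(daily_cost: float, days_per_year: int = 365) -> float:
    """Annualize a daily spending habit (assumes daily use all year)."""
    return daily_cost * days_per_year

low = annual_cost(150)   # 54,750 -- roughly the article's $54,000 low end
high = annual_cost(200)  # 73,000 -- matches the article's $73,000 high end
print(f"${low:,.0f} to ${high:,.0f} per year")
```

Running this prints "$54,750 to $73,000 per year", which is consistent with the range in the text.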
Additional options for people without insurance may include:
- state-funded rehab programs
- facilities that offer sliding-scale services for low-income patients
- treatment scholarships

Seeking Treatment Helps The Economy
The U.S. economy also benefits from people seeking treatment for drug and alcohol problems. According to the National Institute on Drug Abuse (NIDA):
- for each dollar spent on addiction treatment, an estimated $4 to $7 is returned through reduced drug-related crime and theft
- when healthcare costs are added, total savings can exceed treatment costs by 12 to 1
- addiction treatment increases employment prospects by 40 percent and reduces arrests for drug-related crimes by 40 to 60 percent

Get Addiction Help Today
Getting help for an addiction is not a small decision. It can be life-changing in several ways, and it provides an opportunity to save money that would otherwise have been spent feeding a drug or alcohol habit. Treatment for a drug or alcohol addiction reduces the risk of relapse, connects people to a support system, and can put them on the path toward a more hopeful future. At Vertava Health, we strive to provide comprehensive treatment that helps each patient we treat discover their success story. Fighting addiction is not easy, and you don't have to face it alone. Our treatment specialists at Vertava Health are available 24/7 to offer confidential support in finding a treatment program that best suits the needs of you or a loved one struggling with addiction. Don't wait to seek help. Contact us to find drug and alcohol treatment options today.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9510819315910339, "language": "en", "url": "https://www.genpaysdebitche.net/how-many-users-on-crypto-kitties/", "token_count": 892, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.018310546875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:146a8584-2645-4136-97e2-8141dfc6910c>" }
How Many Users On Crypto Kitties – What is Cryptocurrency? Basically, cryptocurrency is digital cash that can be used in place of traditional currency. The word comes from the Greek kryptos, meaning hidden, combined with "currency." In essence, cryptocurrency is as old as blockchains, but unlike a conventional ledger there is no centralization: it is an open-source protocol based on peer-to-peer transaction technologies that run on a distributed computer network. As an open-source protocol, it is highly flexible, which means the community at large can modify the core of the protocol to fit its needs. As a result, a great deal of development has taken place around the world with the aim of providing tools and techniques that facilitate smart contracts. One particular way the Ethereum Project is trying to address the problem of smart contracts is through its Foundation. The Ethereum Foundation was created with the aim of developing software solutions around smart contract functionality, and it has released its open-source libraries under an open license. What does this mean for the wider community interested in participating in the development and application of smart contracts on the Ethereum platform? For starters, a major difference between the Bitcoin project and the Ethereum project is that the former does not have a governing board and is therefore open to contributors from all walks of life, whereas the Ethereum project operates in a much more regulated environment: anyone wishing to contribute is expected to follow a code of conduct.
As for the projects underlying the Ethereum platform, both are striving to offer users a new way to take part in decentralized exchange. The major differences are that the Bitcoin protocol does not use the proof-of-consensus (POC) process that the Ethereum project uses, while the Ethereum project has taken an aggressive approach to scaling the network and tackling scalability concerns. In contrast to the Satoshi Roundtable, which focused on increasing the block size, the Ethereum project aims to implement improvements to the UTX protocol that increase transaction speed and reduce fees. The larger difference between the two platforms comes from the operational model each team uses. The decentralized aspect of the Linux Foundation and the Bitcoin Unlimited Association represents a traditional model of governance that emphasizes strong community involvement and the promotion of consensus. By contrast, the Ethereum foundation is committed to building a system flexible enough to accommodate changes and add new features as the needs of users and the market change. This model of governance has been adopted by many distributed application teams as a way of managing their projects. Another difference is that the Bitcoin community is largely self-sufficient, while the Ethereum project expects the participation of miners to subsidize its development. The Ethereum network is open to contributors who will contribute code to the Ethereum software stack, forming what are known as "code forks." As with any other open-source technology, much debate surrounds the relationship between the Linux Foundation and the Ethereum project.
Although the two have adopted different points of view on how best to use the decentralized aspect of the technology, both have nevertheless striven to establish a positive working relationship. The developers of the Linux and Android mobile platforms have openly supported the work of the Ethereum Foundation, contributing code to secure functionality for its users. The Facebook team is supporting the work of the Ethereum project by providing its own framework and building applications that integrate with it. Both the Linux Foundation and Facebook see the Ethereum project as a way to advance their own interests by offering a cost-efficient, scalable platform for users and developers alike. In short, cryptocurrency is digital money that can be used in place of traditional currency: an open-source protocol based on peer-to-peer transaction technologies executed on a distributed computer network, with no central ledger.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9058380126953125, "language": "en", "url": "https://press.thebig5.ae/speaker-interview-suleiman-al-dabbas", "token_count": 1363, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.02734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c76d82da-42fe-4e1b-be34-f6774434d4f8>" }
Suleiman Al Dabbas will be speaking at the Windows, Doors and Facades Event next September, and he shared with us some insight on the latest blockchain technology. He is a Quality and Excellence Specialist at Smart Dubai Government Est. and will be talking on Quality in the Digital Government World at the Facade Seminar series.

What is a digital government?
Digital government explores how governments can best use information and communication technologies (ICTs) to embrace good government principles and achieve policy goals. It provides government services through digital channels (e.g. Internet, mobile, kiosk) or automated systems.

What is blockchain technology and how will it impact the construction industry?
A blockchain is a decentralized, distributed, public digital ledger used to record transactions across many computers so that a record cannot be altered retroactively without altering all subsequent blocks and gaining the consensus of the network. Construction brings together large teams to design and shape the built environment. With technology, and in particular Building Information Modeling (BIM), becoming more widespread, openness to collaboration and new ideas is increasing across the industry. This momentum could be leveraged to bring the use of blockchain technology to the fore. The four potential uses of blockchains are:
- Recording value exchange
- Administering smart contracts: a smart contract has instructions embedded in a transaction so that unless the instructions are fulfilled, there is no payment. This ensures all parties are satisfied and no one has to chase payment, virtually eliminating payment disputes. The smart contract is visible to all users in the blockchain, so there is no question about what the terms are and what instructions need to be followed.
- Combining smart contracts to form a Decentralised Autonomous Organisation (DAO)
- Certifying proof of existence for certain data (for instance, providing a securely backed-up digital identification)

What role does blockchain technology play in the UAE's climb towards a global digital government?
Take the Dubai blockchain strategy as an example: Dubai aims to be the first city fully powered by blockchain by 2020. Today, Dubai is amongst the world's leading smart cities in its adoption of new technology and pioneering of innovative smart pilots. Recognizing the potential impact of blockchain technology on city services, coupled with a worldwide adoption trend that saw $1.1 billion invested by the private sector in blockchain technology in 2016 alone, Dubai launched a city-wide blockchain strategy in October 2016 with the objective of becoming the first blockchain-powered city by 2020. Dubai's adoption of blockchain technology at a city-wide scale comes at a time when the technology is increasingly recognized as the ultimate trust machine. Blockchain eliminates the need for trusted third parties in transactions, an attribute which would contribute significantly to simplifying Dubai Government's evolving processes. A detailed roadmap organized around the strategy's three pillars has been developed, defining the way forward for Dubai's blockchain ambitions. For each pillar, the city has a plan with actionable initiatives:
1. Government Efficiency
2. Industry Creation
3. Thought Leadership

How is blockchain changing the world?
It's clear blockchain will change the world because of four key issues it addresses:
- Decentralization: The core advantage of blockchain technology is that it does not require a traditional centralized organization. A blockchain's distributed system does not depend on the coordination and collaboration of a central credit authority, thereby avoiding the prevailing problems of data security, coordination efficiency, and risk control found in centralized organizations.
The distributed system of blockchain does not depend on point-to-point transactions, coordination, and collaboration of a credit center in a distributed system, thereby avoiding the prevailing problems of data security, coordination efficiency and risk control of centralized organizations. - Transparency: By blockchain technology, data is difficult to tamper with. The database for recording transactions can be accessed by anyone. Through this transparent and open mode, everyone can act as a supervisor. What changes in data can be easy-to-read and are more secure than traditional Internet technologies. - Data security: The blockchain technology is connected with multiple nodes in different places. The nodes in the blockchain interact via a point-to-point communication protocol. Different nodes can be used by different developers in different programming languages and in different versions of full nodes under the conformance of communication protocols. Simply put, when a node encounters with network problems, hardware failures, software errors, or is controlled by hackers, the operation of other participating nodes and systems will not be affected. So blockchains are more reliable than traditional technologies. - Low cost: Because the blockchain is decentralized, whose systems are maintained by all the participants, there is no need to pay a certain cost for the central management and supervision of third-party agencies. Coupled with the support of a wide range of development cooperation in different places, blockchain technology is a new low-cost, high-efficiency collaboration model. What can visitors expect to learn at your session? Visitors can learn about quality in digital government, use and benift of blockchain and Smart Dubai initiatives. About Suleiman Al Dabbas - A Quality/Excellence specialist, with over than 10 years of proven track record of excellence practices and awards. 
I've demonstrated competence in a wide range of best practices in organizational excellence, quality management and smart transformation.
- Lead Auditor, ISO 9001:2015 Quality Management System
- EFQM Excellence Assessor
- Certified Training of Trainers (TOT)
- Lead Auditor, ISO 22301 Business Continuity System
- Smart Transformation Assessor
- Certified in Creative Leadership & Innovation

Areas of expertise:
- Manage excellence and quality projects, including assessment, drafting reports, implementation, and training.
- Update, manage, and control quality management systems according to ISO 9001:2015, including document control, procedures, quality awareness sessions, risk management, operational KPIs, and customer satisfaction.
- Smart transformation assessments for Dubai Government entities based on excellence models issued by Smart Dubai Gov. for websites, smart services, and applications.
- Facilitate workshops and training courses and prepare training materials delivered to governmental and non-governmental organizations in the UAE, Saudi Arabia, and Jordan, in the fields of management and organizational excellence, strategic planning, quality, financial management, and others.
- Awards: work with the concerned departments on preparing submission documents for national and international awards, final review of documents, and the submission process. Sample of winning awards: Stevie Award, .Gov Award, World Smart City & Expo Award, Smart Government Excellence Award, HR Excellence Award, GCC Gov. HR Award, Telecom Review Excellence Award.
- 4th Generation Excellence system implementation, assessment, training design and delivery.
- Conducted several audits as a Lead Auditor for ISO 9001.
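The smart-contract payment mechanism described earlier in this interview (funds released only once the embedded instructions are fulfilled) can be sketched as a toy object. This is a conceptual model only: plain Python, not code for any real blockchain platform, and all names and conditions are made up for illustration.

```python
# Toy model of a smart contract: payment is committed up front and released
# automatically only when every agreed instruction has been fulfilled.

class PaymentContract:
    def __init__(self, amount: float, instructions: list[str]):
        self.amount = amount
        self.pending = set(instructions)   # conditions still unmet
        self.paid = False

    def fulfil(self, instruction: str) -> None:
        self.pending.discard(instruction)
        if not self.pending:               # pay once all conditions are met
            self.paid = True

contract = PaymentContract(50_000, ["facade panels delivered", "inspection passed"])
contract.fulfil("facade panels delivered")
assert not contract.paid                   # partial fulfilment: no payment yet
contract.fulfil("inspection passed")
assert contract.paid                       # all conditions met: payment released
```

On a real blockchain the contract terms and their fulfilment would also be visible to all participants, which is what removes payment disputes in the scenario described above.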
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9538299441337585, "language": "en", "url": "https://usgreenchamber.com/new-california-reycling-law/", "token_count": 225, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0693359375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b1fb6c7a-19da-4d51-9dea-549e8397f3e7>" }
A new law, AB 341, went into effect this month in California that requires all businesses generating more than four cubic yards of solid waste per week, and all multi-family residential buildings (greater than five units), to recycle their waste. The policy will affect 470,000 businesses and apartment buildings statewide. While the law only affects 20% of businesses, this small percentage is responsible for 75% of the state's total commercial waste. CalRecycle expects the bill to save the state $40-60 million annually over the next 8 years. The law is part of California's goal to reduce disposal rates by 75% by 2020. "To achieve this goal, we are working with local governments and businesses to provide optimal solutions in their recycling and educational efforts," said David Tucker, director of public affairs for Waste Management of Alameda County. California businesses are embracing the new legislation. "Many of our commercial clients in the Bay Area have shown an eagerness to reduce the cost of their waste and, more importantly, what goes into the landfills," Tucker said.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9387944936752319, "language": "en", "url": "https://www.assignmentpoint.com/business/finance/concept-of-derivative-securities-and-underlying-assets.html", "token_count": 428, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.01177978515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:78a460c1-2b9d-4735-b606-44d17ee7c76a>" }
The Concept of Derivative Securities and Underlying Assets
A derivative is a financial security whose value is reliant upon, or derived from, an underlying asset or group of assets. "Underlying asset" is a term used in derivatives trading, and options are one example of a derivative. When you buy ice cream, money isn't created; you exchange money for the ice cream. The same goes for credit and derivative transactions. Suppose the price of agricultural product "X" is $10,000 a ton in the market, and you have contracted to purchase 100 tons at $9,500 per ton from the producer of that product. You can now make a $500 profit on the sale of every ton of X, for a total profit of $50,000; equivalently, the agreement with the producer is worth $50,000. In this example, the product X is the underlying asset and the agreement with the producer is the derivative security. In the same manner, a contract can be made to purchase a specified number of financial assets at a specified price within a predetermined time period. For example, you can promise your friend to purchase 100 shares of Standard Chartered Bank Limited at $3,000 a share by July 2010, or you can promise to sell 100 shares at $2,950. The value of the promise, to your friend and to yourself, depends on the market price of the share. Here, the share of the bank is the underlying asset and the promise you make is the derivative security (an asset derived from an underlying asset). We can now see that derivatives are assets derived from other assets, and in general the value of such an asset depends on the market price of the underlying asset. The underlying is a fundamental concept in derivatives trading because it allows investors to speculate on risk and purchase options to limit the downside risk of future price movements. Now we can define derivatives: they are financial contracts whose value is linked to the price of an underlying commodity, asset, rate, or index, or to the occurrence or magnitude of an event.
Thus, the term refers to a set of financial instruments that includes futures, forwards, options, warrants, convertibles, and swaps.
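The examples above can be worked through numerically. This is a minimal sketch using the figures from the text; the function name is illustrative, and it values only a simple forward-style commitment, ignoring time value and counterparty risk.

```python
# Value of a forward-style commitment to buy `quantity` units of an
# underlying asset at `contract_price`, given the current market price.

def forward_value(market_price: float, contract_price: float, quantity: float) -> float:
    """(market price - contracted price) * quantity."""
    return (market_price - contract_price) * quantity

# Product X: contracted at $9,500/ton, market price $10,000/ton, 100 tons.
print(forward_value(10_000, 9_500, 100))   # 50000 -- the $50,000 in the text

# The promise to buy 100 bank shares at $3,000 gains value only if the
# market price rises above $3,000, and loses value if it falls below.
print(forward_value(3_100, 3_000, 100))    # 10000
print(forward_value(2_900, 3_000, 100))    # -10000
```

This makes the key point concrete: the derivative has no value of its own; its value moves one-for-one with the market price of the underlying asset.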
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9521563053131104, "language": "en", "url": "https://www.exchange2007demo.com/small-business-suggestions-4-steps-to-far-better-stock-management-and-management/", "token_count": 2019, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.11474609375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0e2c6dd7-100e-4497-ba63-3e4fb50ce0aa>" }
Inventory – A Good or Bad Word?
The word "inventory," according to Merriam-Webster, is simply defined as a list of goods that are in a place, such as a business location or warehouse. But many business owners know that inventory can be a vastly more complex resource to manage and control successfully. Businesses often over-invest in inventory for the sole purpose of ensuring that they are never "out of stock" when a customer wants to buy, or a manufacturing process needs to build, products available for sale.

Cash – The Finite Resource
Over time, in addition to tying up valuable cash resources, poor inventory management often leaves businesses with too much of the inventory they do not need, and not enough of what they do need. This frequently results in buying more inventory in response to immediate requirements, without considering the wisdom or necessity of purchasing inventory on an emergency basis. For instance, it is not uncommon for purchases of materials to be made when the business already has those materials in stock. In environments with difficult inventory management problems, the business often does not know exactly what inventory is in the building, or warehouse staff cannot find the inventory they are trying to pick. This is a common problem with many variations, all of which are usually a waste of time and resources. Persistent overbuying is typically followed by under-utilization, devaluation, and eventual obsolescence of inventory the business probably should not have bought in the first place. Eventually, many businesses find they have so much cash tied up in useless inventory providing no "return on investment" that other parts of the business begin to suffer cash shortages.
While this pattern does not apply to every business with inventory, it is certainly a familiar story to many small and medium businesses, especially those that are struggling or go out of business due to cash flow problems.

The Quick Fix
Many business owners, faced with greater awareness of inventory management problems, immediately start searching for, and buying, quick-fix solutions. They often hire more people; purchase limited-function inventory control or bar-coding software; fire suppliers and hire new ones; and issue edicts about maximum inventory spending levels, all with the laudable goal of quickly fixing inventory management issues. But buying a solution before understanding the problem is a bit like buying shoes before knowing the required shoe size. Likewise, the odds of actually solving inventory control problems successfully with this approach are about the same as getting the right shoe size in such a scenario: about one in ten.

Cause & Effect
Before diving into inventory management solutions, it is important to have a thorough understanding of the causes and effects of inventory control issues within the business. Here is a step-by-step approach toward framing inventory problems in relatively simple, manageable increments. The results of these information-gathering steps (which should be formally documented) can later be used as input when evaluating and prioritizing potential remedies to inventory management and control issues. There will be a temptation to try to solve problems as they are encountered and discussed in these steps. But the key objective in this phase is to gather and quantify information, not to provide solutions. That will come later, once a full understanding of inventory-related issues and requirements has been thoroughly identified and vetted.
The Four Steps
Here are four steps that can be undertaken quickly by businesses ready to improve their inventory management and control systems:

1. Defining the Problems
The first step involves creating a list of inventory problems by department. This is a bold step, because it involves asking employees and managers the question: "what is wrong with this picture?". But even though they may not talk about it openly (without a little coaxing), employees are usually the best source of information regarding what works and what doesn't in small businesses. There may be a temptation for managers to "fill in the blanks" on behalf of their employees, or to marginalize their input entirely. While it is certainly the owner's prerogative to decide how to proceed here, the best information comes from the people who actually perform the work on a daily basis in each department. So, the best approach is to call a meeting (or meetings), bring a yellow pad, ask employees how inventory control problems affect day-to-day operations, and write down everything they say. Depending on the industry served by the business, comments such as the following will not be unusual:

Sales – "We are losing deals because we can't deliver what the customer is buying."
Marketing – "Our promotions are ineffective because customers get excited about, and take action on, specials, only to find the products we are promoting are not available."
Purchasing – "We're spending a fortune on freight because we buy so much inventory on an emergency basis.
We also routinely have suppliers drop-ship parts we actually have in stock, because the service techs can't find the parts they need before they leave for the customer site."
Warehouse – "We never know what we have and what we don't have, so we often think we can fill an order completely, only to find out at the last minute that we can't, because of unexpected inventory shortages. That requires us to start the pick/pack/ship process over again so the shipping paperwork is correct."
Production – "Our production plans are often a mess, because we'll plan and start a production run, only to have to take the run offline because we're missing a critical raw material. This stopping and starting of production jobs is killing us in unproductive labor cost and reduced efficiency."
Accounting – "Our invoices are being paid more slowly because we partial-ship most of our orders, and our customers have to take extra steps to reconcile multiple shipments against their purchase orders. Too often, our invoices wind up in the customer's research pile instead of being processed smoothly and quickly."

2. Quantifying Inventory Control Problems
This step involves quantifying and applying a dollar value to the inventory control problems identified in Step 1. It is a more difficult step, but it has to be done, and the results will help prioritize problems and (down the road) measure the value of potential remedies against the cost of the problems. It will also provide a reality check against management's perception of how inventory problems are really affecting the business. Appropriate questions to employees might include the following:

Sales – "How many deals have we lost in the last 90 days due to stock-outs, and what is the dollar value of those losses?"
Marketing – "How many promotions have missed their targets because of supply problems, and what is the cost of those promotions?"
Purchasing – "How much have we spent on emergency freight shipments due to raw material or finished goods shortages?"
Warehouse – "How many orders are we unable to ship on time, and in full, because of finished goods or packaging material shortages?"
Production – "How many production runs have been pulled offline because of unexpected raw material shortages? What is the value of labor and equipment downtime due to production interruptions related to inventory shortages? How is our production capacity being affected by inventory-related issues, and what is the value of that impact?"
Accounting – "How are payment delays related to inventory shortages affecting aged receivables, and what is the cost of those payment delays?"
3. Calculating Inventory Turnover Ratio
Although there are variations for different industries, the inventory turnover (or "turn") ratio provides a key indicator of how quickly inventory is being used or sold over time. The Inventory Turn Ratio is the number of times inventory is sold or otherwise consumed (i.e., used in production) relative to cost of goods sold for a specific accounting period. Optimal Inventory Turn Ratios are usually unique to particular industries and the nature of the products being sold. For instance, high-value inventory such as real estate properties or expensive medical equipment may not move (or turn) as quickly as goods characterized by lower dollar values and higher demand per capita. Still, the Inventory Turn Ratio is an important metric for any company investing in inventory.
The most common calculation for the Inventory Turn Ratio involves two variables: Cost of Goods Sold, and Average Inventory Carrying Value, both measured during a common reporting period. For instance, in order to calculate the Inventory Turn Ratio for an annual period, the total Cost of Goods Sold (from the Profit and Loss Statement) for that annual period must be determined first. Then, a calculation of the Average Inventory Carrying Value per month should be made. This can be done by averaging the Inventory Asset value on the balance sheet for each month in the same reporting period as the Cost of Goods Sold value from above.
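As a sketch of the calculation just described, the ratio can be computed from the period's Cost of Goods Sold and the month-end inventory values. All figures below are hypothetical, for illustration only:

```python
# Inventory Turn Ratio = Cost of Goods Sold / Average Inventory Carrying Value
# All figures are hypothetical, for illustration only.

def inventory_turn_ratio(cogs, monthly_inventory_values):
    """cogs: total Cost of Goods Sold for the reporting period.
    monthly_inventory_values: Inventory Asset value from the balance
    sheet for each month in the same period."""
    avg_inventory = sum(monthly_inventory_values) / len(monthly_inventory_values)
    return cogs / avg_inventory

# Example: annual COGS of $600,000 and twelve month-end inventory values
monthly_values = [95_000, 105_000, 100_000, 98_000, 102_000, 100_000,
                  97_000, 103_000, 100_000, 99_000, 101_000, 100_000]
print(inventory_turn_ratio(600_000, monthly_values))  # 6.0 turns per year
```

A higher result means inventory is cycling through the business faster; what counts as "good" depends on the industry, as noted above.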
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9337435960769653, "language": "en", "url": "https://www.geminiesolutions.com/enlighten/the-hard-truth-about-renewable-energy", "token_count": 630, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.076171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:023f741f-1066-400d-85fc-f40472d96bb8>" }
We have the technology and the capability to move America to 100% renewable energy.
Professor Mark Jacobson from Stanford's Atmosphere and Energy Program, with the support of his graduate students, has provided the blueprint to achieve 100% renewable energy in America. Reaching 100% renewable energy simply by increasing the current capacity is too slow. Humanity, yes humanity - as this is a global problem - has roughly ten years to make extraordinary reductions in our greenhouse gas emissions to have the best chance of mitigating the most devastating effects of global warming. Those devastating effects include:
Even if we assume the political and institutional barriers that currently exist were removed…that still would not be enough. In America, the time needed to:
Renewable energy should be the last step in the movement to a low-carbon society
This statement may seem contradictory to you, but bear with me. First, let me provide a few clarifications. First, a low-carbon society is a society that functions at a high level while releasing only greenhouse gases that can be managed by our planet naturally, without increasing the planet's global average temperature. Note: a low-carbon society includes transportation emissions, meaning a move away from crude oil as the main source for powering our transportation. However, for this article we are focusing on carbon emissions from buildings. Second, when I state "renewable energy should be the last step," I am referring to a general rule for any particular situation. That rule is this: the renewable energy capacity added should be based on the energy consumption after conservation and efficiency measures. This means a homeowner can and should add solar panels to their roof immediately. However, the number of solar panels should be sized not for their current energy consumption but for the forecasted energy consumption AFTER conservation and efficiency measures are implemented.
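As a rough numeric sketch of this sizing rule (every figure here, including the per-panel output and the savings percentage, is a hypothetical assumption, not real data):

```python
# Size a rooftop array from FORECASTED consumption (after conservation
# and efficiency measures), not current consumption.
# All numbers are hypothetical, for illustration only.
import math

def panels_needed(annual_kwh, kwh_per_panel_per_year=500):
    # Round up: you can't install a fraction of a panel.
    return math.ceil(annual_kwh / kwh_per_panel_per_year)

current_use = 12_000          # kWh/year today
efficiency_savings = 0.25     # assume measures cut consumption by 25%
forecast_use = current_use * (1 - efficiency_savings)

print(panels_needed(current_use))   # 24 panels if sized for today's use
print(panels_needed(forecast_use))  # 18 panels if sized after measures
```

The gap between the two results (six panels here) is exactly the over-purchase the article warns about.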
This rule should also be applied to commercial building owners and municipalities alike. Shameless plug: Gemini can help commercial building owners and municipalities calculate the renewable energy capacities they should be trying to reach. There are three reasons renewable energy should be the last step:
Cost. Renewable energy is almost always more costly than conservation or efficiency measures. Typically, from a cost perspective it goes:
Time. Implementing renewable energy is almost always slower than implementing conservation or efficiency measures. In fact, you should start planning for your renewable energy immediately after receiving an energy audit report to get the process started.
Impact on the Environment. Remember, our goal is to reduce greenhouse gas emissions. Purchasing 10 solar panels when you only needed 5 (had you reduced your energy consumption through conservation and efficiency measures) hurts that goal. Constructing a solar photovoltaic (PV) panel, transporting it, maintaining it, and eventually disposing of it all release greenhouse gas emissions. Note: renewable energy sources like solar PV are still considerably less impactful than fossil fuels like coal. https://www.nrel.gov/docs/fy13osti/56487.pdf
So…save your money, save your time, and save the planet by choosing renewable energy last. Your major takeaways should be:
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9499304294586182, "language": "en", "url": "https://www.trc.ac.uk/business/", "token_count": 566, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0289306640625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2e6e2320-79d0-4b4d-a398-7aeb5849b4e3>" }
In this unit you will learn about the purposes of different businesses, their structure, the effect of the external environment, and how they need to be dynamic and innovative to survive. You will produce a piece of coursework (internally assessed) that compares two businesses in terms of their organisation, purpose and features, stakeholders and the markets in which they operate, and how this is influenced by the external environment.
Developing a marketing campaign
In this unit you will learn about the role of marketing, market research methods and the marketing mix. You will then learn how to use this information to produce a marketing campaign. This is externally assessed and will be a written task that is submitted by computer. You will be issued the marketing campaign brief two weeks before your assessment, giving you time to plan and research what you might include in the campaign.
Personal and Business Finance
In this unit you will study the purpose and importance of personal and business finance. You will learn about the functions and role of money, different ways to pay, current accounts and how to manage your personal finances. You will also learn about different financial institutions, the purpose of accounting, types of expenditure and the different sources of finance a business can access. You will also learn how to use accounting concepts such as break-even, cash flow, final accounts and ratio analysis. This is assessed by a written exam paper.
Recruitment and Selection
In this unit you will learn how effective recruitment and selection can contribute to business success and what is involved in this process. This is assessed internally: you will investigate the recruitment process in a large organisation, then take part in the process itself and evaluate how successful you were.
There will be various study visits which form a compulsory part of the course and must be attended as far as possible; these may include Cadbury World, Manchester Airport, manufacturing plants, Alton Towers, Meadowhall and Magna. All students studying BTEC programmes will be expected to complete a work placement as a compulsory part of the course. You will be expected to investigate and find your own placement in an area that interests you. Your teachers and careers staff will help you with this.
Externally assessed piece of coursework.
Standard TRC Level 3 entry requirements, including Grade 4 in English Language and Grade 3 in Maths.
Economics, IT, Law and Languages.
Business Administrator, Sustainability Consultant, Business Advisor, Trainee Accountant, Bookkeeping.
Check out our Culture Vulture link to see what takes your interest. Click the link and have a go at our 10-week learning plan to get you off to the best start. Follow the link to see an introduction to the course, identifying what you will study with us in the first few months and what you might already know.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.8958598375320435, "language": "en", "url": "https://completesuccess.in/index.php/2017/04/17/quiz-5/", "token_count": 3794, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06787109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:28ec5201-0561-4789-a8e7-d36fade16ffd>" }
Q1. An orange vendor makes a profit of 20% by selling oranges at a certain price. If he charges Rs. 1.2 more per orange he would gain 40%. Find the original price at which he sold an orange.
a) Rs. 5.6 b) Rs. 7.2 c) Rs. 4.8 d) Rs. 8 e) None of these
Q2. If the difference between CI and SI on a certain sum of money is Rs. 72 at 12% p.a. for 2 years, then find the sum.
a) Rs. 6000 b) Rs. 5000 c) Rs. 5200 d) Rs. 4900 e) None of these
Q3. Rs. 3000 is distributed among A, B and C such that A gets 2/3rd of what B and C together get, and C gets 1/2 of what A and B together get. Find C's share.
a) Rs. 750 b) Rs. 1000 c) Rs. 800 d) Rs. 1200 e) None of these
Q4. 8 men can do a piece of work in 12 days while 20 women can do it in 10 days. In how many days will 12 men and 15 women complete the same work?
a) 4 b) 5 c) 6 d) 7 e) 8
Q5. A fair coin is tossed repeatedly. If heads appears on the first 4 tosses, what is the probability of a tail appearing on the fifth toss?
a) 1/5 b) 2/5 c) 1/2 d) 4/5 e) None
Q6. Walking at 3/4 of his usual speed, a man is 16 min late for his office. The usual time taken by him to cover that distance is
a) 48 minutes b) 60 minutes c) 42 minutes d) 62 minutes e) None
Q7. A, B and C can do a work in 24, 32 and 60 days respectively. They start working together. A left after 6 days and B left after 8 days. How many more days are required to complete the whole work?
a) 30 b) 25 c) 22 d) 20 e) None of these
Q8. The price of 2 oranges, 3 bananas and 4 apples is Rs. 15. The price of 3 oranges, 2 bananas and 1 apple is Rs. 10. What will be the price of 3 oranges, 3 bananas and 3 apples?
a) Rs. 10 b) Rs. 8 c) Rs. 15 d) Can't be determined e) None of these
Q9. The difference between the times taken by two cars to travel a distance of 350 km is 2 hours 20 minutes. If the difference between their speeds is 5 km/hr, what is the speed of the faster car in km/hr?
a) 30 b) 35 c) 40 d) 45 e) 50
Q10. The sum of the circumference of a circle and the perimeter of a square is 210 cm.
The diameter of the circle is 21 cm. What is the sum of the areas of the circle and the square?
a) 1158.4 sq. cm b) 1058.2 sq. cm c) 1642.5 sq. cm d) Can't be determined e) None of these
Answers: 1. b 2. b 3. b 4. b 5. c 6. a 7. c 8. c 9. a 10. c
Directions (1-5): Read the following information carefully and answer the questions given below:
At a party there are eight friends, of whom four are boys and the remaining four are girls. The girls are Ela, Ishita, Chitra and Diya and the boys are Badal, Hitesh, Faruq and Ganesh. Among these eight friends, one girl and one boy are the hosts. They are sitting around a rectangular table, three on each longer side of the table and the hosts on the remaining sides. Some additional information is given below:
(a) All four girls are sitting adjacent to each other and Diya is 3rd to the right of Ganesh.
(b) Ganesh is a host sitting 2nd to the left of Chitra. Hitesh and Chitra are sitting opposite each other.
(c) Ishita is 3rd to the left of Ganesh. Faruq is 3rd to the left of Ela.
Q1. Who is the 2nd host in this arrangement?
(1) Hitesh (2) Badal (3) Ela (4) Chitra (5) None of these
Q2. What is Chitra's position?
(1) Immediate left of Ganesh (2) Immediate right of Ganesh (3) 2nd to the right of Faruq (4) Immediate right of Faruq (5) Immediate left of Ela
Q3. Who is to the immediate left of Diya?
(1) Ishita (2) Badal (3) Faruq (4) Ganesh (5) None of these
Q4. How many girls and boys are sitting opposite the same gender?
(1) One (2) Three (3) None (4) Two (5) None of these
Q5. Which of the following statements is true?
(1) Three boys are sitting on one longer side (2) Ishita is the other host (3) Faruq and Ela are opposite each other (4) One girl is sitting to the second right of Badal (5) None of these
Q6. Vinay moves towards the South-East a distance of 14 m, then he moves towards the West and travels a distance of 28 m. From here, he moves towards the North-West a distance of 14 m, and finally he moves a distance of 8 m towards the East and comes to a halt.
How far is he from the starting point?
(1) 20 m (2) 22 m (3) 6 m (4) 8 m (5) None of these
Q7. From the word 'INTENSIFICATION', how many independent words can be made without changing the order of the letters and using each letter only once?
(1) Four (2) Five (3) Six (4) More than six (5) None of these
Q8. In a class of 180, where girls are twice the number of boys, Rupesh, a boy, ranked thirty-fourth from the top. If there are eighteen girls ahead of Rupesh, how many boys are after him in rank?
(1) 45 (2) 44 (3) 60 (4) Can't be determined (5) None of these
Q9. Two priests A and B were talking to each other. A said to B, "My mobile set rings every 15 minutes." B retorted, "My mobile set rings every 18 minutes." The mobile sets of both A and B rang simultaneously at 8 am. Four of the following five timings of a certain day are alike with respect to the above situation and hence form a group. Which of the following timings is different from the group?
(1) 6.30 am (2) 5.30 pm (3) 3.30 pm (4) 9.30 am (5) 6.30 pm
Q10. Pointing to a girl, Mr Raju said, "This girl is the daughter of the husband of the mother of my wife's brother." Who is Raju to the girl?
(1) Husband (2) Brother-in-law (3) Father-in-law (4) Either husband or brother-in-law (5) None of these
Answers: 1. (3) 2. (4) 3. (5) 4. (4) 5. (5) 6. (1) 7. (4) 8. (2) 9. (2) 10. (4)
Directions (1-5): Choose the option that most appropriately completes the statement given in each of the following questions.
Q1. Research in the sciences and social sciences often complements each other…
(1) At the same time, both retain significant differences because of the basic questions each deals with. (2) Sciences explain the causes of systems whereas social sciences explain the implications of actions. (3) The implications of actions do not explain why cause-effect relations occur. (4) The differences between life and sciences are decreasing. (5) The questions posed by the two share certain assumptions about reality.
Q2.
Studies show that there is hardly any difference between human beings and apes in their mental and physical capacities…
(1) What a human can think, an ape can also think (2) But studies on mammals are often misinterpreted (3) This is particularly true for India (4) Soon we will see apes replacing human beings in factories (5) None of these
Q3. The cost of producing tillers in India is eight percent less than the cost of producing tillers in China,……
(1) India is planning to export tillers to China (2) India has a democratic form of government while China is not so democratic in its decisions (3) The tax rate in China is higher in comparison to India (4) China has to import raw material from India for manufacturing tillers (5) None of the above
Q4. A film, to be successful at the box office, must satisfy the audience by reflecting its values……
(1) This is a doubtful perception, as films with more violence and sex than moral values repeatedly do well at the box office (2) The central board of film censoring decides on the social values (3) An audience cannot be fooled for long with no-content or low-content films (4) Box office collections are the only criteria to judge the value of a film (5) None of these
Q5. People who take drug X for obesity to reduce weight could end up defeating their purpose…
(1) Since research shows that high levels of X may induce a craving for starch-based foods (2) Since this drug has many side effects like high blood pressure and high cholesterol levels (3) Drug X is prohibited for sale in India and its use is a punishable crime (4) Due to drug X, muscles lose tension and become susceptible to obesity with even the slightest intake of fat (5) None of these
Directions (6-10): Choose the option containing the correct pair of words to fill the blank spaces in each question.
Q6. The ……………………reforms that are taking place in the global economic scenario are …………….as they are full of optimism.
(1) Exorbitant, unnecessary (2) Colossal, unfavourable (3) Drastic, disappointing (4) Sweeping, unrealistic (5) Positive, heartening
Q7. Sita was so ……………… in her prayer that she did not pay any ……………… to our presence.
(1) Engrossed, remuneration (2) Absorbed, heed (3) Perfect, attention (4) Careless, significance (5) Indifferent, substance
Q8. He expressed …………… for his hasty ………….
(1) Regret, action (2) Pleasure, speech (3) Repentance, movement (4) Anguish, provocation (5) Displeasure, win
Q9. The residents of this island are so ……………… that they do not ……………… even their closest relatives.
(1) Callous, consider (2) Hospitable, greet (3) Uncivilized, recognize (4) Indifferent, hurt (5) Unreliable, welcome
Q10. The annual …………….. of industrial products has risen ………….. in recent years.
(1) Output, enormously (2) Outcome, hugely (3) Outlay, paramount (4) Outbreak, tremendously (5) Decline, scarcely
Directions (11-15): Rearrange the following seven sentences (A), (B), (C), (D), (E), (F) and (G) in the proper sequence to form a meaningful paragraph; then answer the questions given below them:
(A) It is assumed that these banks work in a hassle-free manner as compared to government banks.
(B) Government banks are more trustworthy if you are considering taking a long-term loan.
(C) They offer you lower interest rates, and this is the main reason why people seek education loans from government banks.
(D) The interest rate and terms are decided by the Reserve Bank, and these national banks cannot make any changes on their own.
(E) But private banks work more smoothly.
(F) The employees in these banks are customer-friendly and ready to help people.
(G) Government banks offer a lower interest rate; it is a well-known fact that the interest rate of government banks is much lower as compared to private banks.
Q11. Which should be the FIRST sentence?
(1) A (2) B (3) C (4) D (5) E
Q12. Which should be the SECOND sentence?
(1) A (2) B (3) G (4) D (5) E
Q13.
Which should be the THIRD sentence?
(1) A (2) B (3) C (4) G (5) E
Q14. Which should be the FOURTH sentence?
(1) D (2) B (3) C (4) G (5) E
Q15. Which should be the FIFTH sentence?
(1) F (2) B (3) C (4) D (5) E
Answers: 1. 1 2. 1 3. 3 4. 3 5. 4 6. 4 7. 1 8. 1 9. 3 10. 1 11. 2 12. 3 13. 3 14. 1 15. 5
Q1. The extension of a Flash file is ___.
1) .pdf 2) .swf 3) .pmd 4) .prd 5) None of these
Q2. Who invented the World Wide Web?
1) Mosaic Corporation 2) Opera Corporation 3) Tim Berners-Lee 4) Vint Cerf 5) None of these
Q3. URL stands for ____.
1) Uniform Resource Locator 2) Universal Resource Locator 3) Address bar 4) All of 1, 2 & 3 are correct 5) None of these
Q4. A thing present in the real world in physical form is called ____.
1) DBMS 2) Entity 3) Modulation 4) Keywords 5) None of these
Q5. A BUG is –
1) an error found in software testing 2) an error found in software code 3) a logical error in a program 4) both 1 & 2 5) None of these
Q6. Which of the following is not a type of key?
1) Alphabetic keys 2) Numeric keys 3) Function keys 4) Toggle keys 5) None of these
Q7. If a previously saved file is edited ___?
(1) it cannot be saved again (2) the changes will automatically be saved in the file (3) the file will only have to be saved again if it is more than one page in length (4) its name must be changed (5) the file must be saved again to store the changes
Q8. Which of the following converts all the statements in a program in a single batch, placing the resulting collection of instructions in a new file?
(1) Compiler (2) Interpreter (3) Converter (4) Instruction (5) None of these
Q9. A program that generally has a more user-friendly interface than a DBMS is called a ____?
(1) front end (2) repository (3) back end (4) form (5) None of these
Q10. When you install new programs on your computer, they are typically added to the ___?
(1) All Programs (2) Select Programs (3) Start Programs (4) Desktop Programs (5) None of these
Q11.
Which of the following statements is FALSE concerning file names?
(1) Files may share the same name or the same extension but not both (2) Every file in the same folder must have a unique name (3) File extension is another name for file type (4) The file extension comes before the dot (.) followed by the file name (5) None of these
Q12. Which of the following is the key function of a firewall?
(1) Monitoring (2) Deleting (3) Recording (4) Copying (5) Moving
Q13. Programming languages built into user programs such as Word and Excel are known as ____?
(1) 4GLs (2) macro languages (3) object-oriented languages (4) visual programming languages (5) None of these
Q14. In MS Word, the key F12 opens ____?
(1) the Save dialog box (2) the Print dialog box (3) the New dialog box (4) the Save As dialog (5) None of these
Q15. What is a gutter margin?
(1) Margin that is added to the left margin when printing (2) Margin that is added to the right margin when printing (3) Margin that is added to the binding side of the page when printing (4) Margin that is added to the outside of the page when printing (5) None of these
Answers: 1. 2 2. 3 3. 1 4. 2 5. 3 6. 5 7. 5 8. 1 9. 1 10. 1 11. 4 12. 1 13. 4 14. 4 15. 3
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9479930400848389, "language": "en", "url": "https://datafireball.com/2018/10/04/antidilutive-in-eps-calculation/", "token_count": 654, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.061767578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7bb20aa1-e720-438c-9d70-d9298e7ffd81>" }
EPS (earnings per share) is a very important ratio on the income statement. It is calculated as earnings (net income) attributable to common shareholders divided by common shares outstanding. In fact, it is so important that companies are required to present EPS on the face of the income statement. As you can see, they not only show the EPS; there is also another line right below it, the diluted EPS. The reason diluted EPS must be disclosed to the public is that other kinds of equity, like preferred stock and convertible securities, have the potential of "diluting" the EPS. How big can the difference be? Usually it is pretty small (for Walmart, the gap is only $0.01 per share), but in some cases the difference can be material enough that investors want to know the potential downside. For example, convertible preferred stock often pays a dividend and can also be converted into a certain number of common shares. If it is not converted, the simple calculation gives basic EPS; diluted EPS, however, evaluates what would happen if all the convertible stock were redeemed for common stock. On one hand, net income attributable to common shareholders would increase, because the earnings that used to go to dividends could now be retained; on the other hand, the number of outstanding common shares would also increase due to the conversion.
Basic EPS = (I – P * D) / C Diluted EPS = I / (C + P * X) The constrain is that Basic EPS >= Diluted EPS (I – P * D) / C >= I / (C + P * X) After a bit transform, we got: X*I – D*C – P * X * D >= 0 I like to rearrange it into the following format: D <= I / (P + C / X) This is easy to interpret, P+C/X can be interpreted as if all shared got converted into preferred shares. If the dividend is smaller than if all converted to preferred stocks, then it is dilutive. If not, then it is anti-dilutive which should be excluded. So in this case you can see, if the dividend for the preferred stock is too high, or the conversion X is too small, it is highly likely that the constrain will not hold and it will be anti-dilutive. Also, if the number of preferred stock is substantial, this will also become anti-dilutive.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9657676219940186, "language": "en", "url": "https://itsupportguys.com/it-blog/4-ways-logistical-problems-can-stifle-your-business/", "token_count": 566, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.009521484375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:dae9be94-3ddc-4f4b-9067-6b9f2fa9b9ec>" }
Today's businesses have to be concerned with more variables than ever. Companies that depend on their supply chain, and those that distribute goods, need to be able to rely on their management to coordinate efficient and effective business. In the past, business moved more slowly, and the management of a supply chain was done by a department of people. Today, the process is significantly more streamlined. Here, we'll take a look at contemporary supply chain management and how thorough logistics can be a real difference maker.
What Are Logistics?
Logistics is the coordination and management of resources from their point of origin to the point of purchase or consumption. As a result, it is a core variable for every manufacturer, especially ones with distribution arms. With the improved analytics systems available today, data can be visualized to make it easier to decipher. This, in turn, allows companies to optimize their procurement, production, packaging, and distribution systems to fit the needs of their clients more effectively.
Production Logistics
Managing a supply chain can be difficult. Procurement hardly ever goes as smoothly as you'd like, and production itself has its headaches for sure, but without a detailed and functional plan for how to get the resources you need to create the products your customers expect, at the price they expect, your business is likely going to have a tough go of it. The pieces it takes to create the products that you sell (the management, the labor force, the resources necessary for production, and the schedule) are all variables that have to be managed. Production logistics provide the glue that makes all of this possible. Production logistics can also deliver a clear platform to view all produced products and where they need to be on the supply chain.
Asset Control Logistics
Asset control is typically utilized by retail organizations.
These companies need to have some control over their products, so asset control logistics encompass strategies such as brand management and public relations.
For the manufacturer that has a distribution arm of its business, and for distributors, logistics can get pretty detailed. Transportation management, warehouse management, order fulfillment and more make up this process. The Internet of Things has been a major benefit for businesses looking to build efficient and effective distribution policies.
How do consumers acquire and utilize the goods and services they purchase? Consumer logistics deals not only with the supply chain, but also with the production, packaging, shipping, and support from the company from which they purchase the goods and services. Knowing how their company affects the people who use their products helps manufacturing companies make better decisions to improve their coordination and distribution.
Is your business having trouble managing its logistics? Call IT Support Guys today at 855-4IT-GUYS (855-448-4897) to talk to one of our knowledgeable IT consultants.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9479886889457703, "language": "en", "url": "https://startupsuccessstories.com/how-blockchain-technology-will-impact-the-digital-economy-in-2018/", "token_count": 943, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0673828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:346a9938-b624-497a-bca5-fb96e0968357>" }
Blockchain technology creates trust between two unknown parties without the intervention of any third party; it operates on a network of nodes. Blockchain is a decentralized technology: a shared database where entries can be made only after authentication and encryption. It offers a decentralized register of authority by recording every individual transaction in the system, from the creation of a block through any number of transfers made. Each computer participating in the system stores a copy of this blockchain, and before a transaction can be executed the system checks that its version of the blockchain is in sync with all other versions within the network.
Moving into 2018, it is important to recognize the full power of blockchain technology, which will not only change the world of the digital economy but will also have extremely diverse applications in other fields. There will also be numerous enterprises that could benefit from enhanced security, safety, transparency and the removal of redundant intermediaries. Here we are going to discuss how blockchain technology will change the digital economy in the year 2018.
Ethereum will be the largest blockchain developer ecosystem in 2018, by several multiples
Although blockchain technology is commonly associated with Bitcoin, it has many applications that go way beyond digital currencies. In fact, Bitcoin is only one of several hundred applications that use blockchain technology today. Ethereum now has a thriving developer community. Creating blockchain applications used to require a mixed background in coding, cryptography and computation, as well as significant resources. Previously unimagined applications, from electronic polling and digitally recorded capital assets to regulatory compliance and trading, are now actively being developed and deployed faster than ever before.
By providing developers with the tools to build decentralized applications, Ethereum is making all of this possible, and it is expected to do the same in 2018.
Unusual Features of Blockchain Technology
Blockchain technology is an anonymous tool that protects the identity of users. The technology is built on an algorithm that reduces the confirmation required for online transactions. It cannot be controlled by any single entity, so it has no single point of failure, and it re-checks itself every 10 minutes. These unusual features of blockchain technology have earned it a lot of applause worldwide, which is why it is very likely that this technology will undoubtedly make an impact in 2018.
Enhanced size and space
There is much expectation of and demand for blockchain from all sectors, where it needs to be faster and more spacious. In 2018, blockchain is expected to be equipped to store more data at almost every stage of the transaction; at present it is able to process only 7 transactions per second, while thousands of financial transactions occur every second throughout the world. So this year, it is expected that technologists will improve the speed and performance of blockchain.
The huge growth and rising publicity of the blockchain market have attracted thousands of new investors over the past six months. A large proportion of these investors are drawn to the potential gains without completely understanding the technology. Newbies make decisions using their emotions, and no emotion is more compelling than anxiety: the anxiety of losing their money and the anxiety of missing out are the most common. Hence newbies will often buy into an overpriced market, and blockchain technology is seen as one of the best options for newbies to invest their money; that is why it is also expected that, as awareness of blockchain technology spreads, the digital economy will be the main beneficiary.
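The tamper-resistance described above comes from hash chaining. As a minimal sketch of that idea (illustration only, not any real network's implementation; real blockchains add consensus, signatures, and much more), each block commits to the previous block's hash, so altering one block invalidates every block after it:

```python
# Minimal sketch of a hash-linked chain of blocks.
# Illustration only; real blockchains add consensus, signatures, etc.
import hashlib
import json

def block_hash(block):
    # Deterministic serialization so the same block always hashes the same.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash}

def chain_is_valid(chain):
    # Re-check every link: each block must reference the hash
    # of the block before it.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block("genesis", "0" * 64)
b1 = make_block("Alice pays Bob 5", block_hash(genesis))
b2 = make_block("Bob pays Carol 2", block_hash(b1))
chain = [genesis, b1, b2]

print(chain_is_valid(chain))      # True
b1["data"] = "Alice pays Bob 500" # tamper with history
print(chain_is_valid(chain))      # False: later links no longer match
```

Because every node holds its own copy and can rerun this validity check, a single party cannot quietly rewrite a past entry, which is the "no single point of failure" property the article points to.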
The Internet of Things and Blockchain

IoT is fundamentally about transactions, contracts and liability in a distributed environment. The combination of blockchain and IoT is being explored and leveraged for multiple purposes, ranging from smart contracts to IoT data monetization models across mixed chains of connectivity where trust is crucial. There are already many blockchain applications in the context of the Internet of Things, and some vendors offer distinct solutions that facilitate the use of blockchain for IoT to, among other things, enhance trust, save costs and speed up transactions. The connection between these two technologies is unique, and together they are set to boost the digital economy in 2018.
The dominant large-scale weather feature in the last several months has been the presence of a cool-water La Niña event in the equatorial Pacific Ocean. Commodity markets, particularly corn and soybeans, surged to multiyear highs during this winter season, with fears of lower production from South America because of La Niña contributing to this rally. The market has had good reason to be concerned. DTN contributing analyst Joel Karlin dug into the statistical relationship between La Niña and subsequent production in Argentina, and found some noteworthy details. Karlin used a three-month running mean measurement of central equatorial Pacific temperatures known as the Oceanic Niño Index (ONI) for his inquiry. In this benchmark, a value of minus 0.5 is the threshold for La Niña, with more negative numbers indicating a more intense La Niña event. He found that the three-month ONI for September through November 2020 was a minus 1.2. "The three-month average reading ... for the September through November 2020 period ... is the lowest ONI reading since January 2011 ..." Karlin notes. "One can see that often in La Niña years, Argentine corn and especially soybean yields come in below trend, sometimes significantly so." The statistical correlation between the Argentina production numbers and the yearly averages of the Oceanic Niño Index is strong. Karlin found a correlation value of 39.1% for Argentine corn and a very high 46% for Argentina soybeans related to negative ONI values. The story is much different for Brazil production: Brazilian corn output has only a 13.6% correlation, while Brazil soybean production actually has a negative 15.6% correlation to La Niña. "Some of the best Brazilian soybean yields have occurred in La Niña seasons," Karlin notes. Varying relationships to La Niña in Argentina and Brazil can be explained by the dramatic change and relocation of Brazil's crop production during the past 20 years. 
Brazil's largest crop-production state, Mato Grosso, lies in a subtropical climate in central Brazil, where production has migrated during the past two decades. The previous top production states, Rio Grande do Sul and Paraná, are in southern Brazil, in closer proximity to Argentina, and have shown more susceptibility to reduced yields in La Niña seasons. Projections for the life span of the current La Niña event indicate a weakening during the first half of 2021, with Pacific equatorial temperatures returning to normal status during the second quarter of the calendar year. However, by then, La Niña will have been around long enough to do its work in affecting Argentina's yields. The question that may linger all the way through the South American harvest is: How much of a reduction will we actually see?

> Read Bryce's weather blog at about.dtnpf.com/weather.

> You may email Bryce at [email protected], or call 402-399-6419.

(c) Copyright 2021 DTN, LLC. All rights reserved.
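The statistical relationship Karlin reports is a standard Pearson correlation between yearly ONI values and production deviations from trend. As a hedged illustration of the calculation only, the sketch below uses made-up numbers, not Karlin's actual data:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical yearly values: three-month ONI vs. yield deviation from trend (%)
oni = [-1.4, -0.9, -0.5, -0.3, 0.0, 0.2, 0.8, 1.2]
soy_dev = [-8.0, -5.5, -2.5, -1.0, 0.5, 1.5, 3.0, 4.0]

r = pearson(oni, soy_dev)
print(f"correlation: {r:.2f}")
```

A positive coefficient on these toy series captures the pattern in the article: more negative ONI values (stronger La Niña) line up with more negative yield deviations.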
Overview of Costs and Benefits of Adaptation at the National and Regional Scale

Watkiss, P.; Hunt, A.; Rouillard, J.; Tröltzsch, J.; Lago, M. 2015: Overview of costs and benefits of adaptation at the national and regional scale. In: OECD (2015), Climate Change Risks and Adaptation: Linking Policy and Economics, OECD Publishing, Paris. http://dx.doi.org/10.1787/9789264234611-en, pp. 37-75.

The published OECD report "Climate Change Risks and Adaptation: Linking Policy and Economics" sets out how the latest economic evidence and tools can enable better policy making for adaptation. Scientists from Ecologic Institute contributed to the chapter on the costs and benefits of adaptation at the national and regional scale. Climate change is giving rise to diverse risks, ranging from changing incidences of tropical diseases to increased risks of drought, varying widely in their potential severity, frequency and predictability. Economic analysis has a vital role to play in supporting governments' efforts to integrate climate risk into policy making, by identifying costs and benefits and supporting decision-making for an uncertain future. The chapter "Overview of costs and benefits of adaptation at the national and regional scale" in the OECD report was prepared with input from the European project Economics of Climate Change Adaptation in Europe (ECONADAPT), funded by the European Commission. It reviews the latest evidence on the costs and benefits of adaptation, and draws out some of the key findings and emerging insights. It explores the use of information on the costs and benefits of adaptation to justify the case for action and prioritise resources to deliver the greatest benefits. Results of national and global studies are provided.
The latest estimates are provided for the following sectors and risks: sea-level rise, coastal flooding and storms; river, surface water and urban flooding; water supply and management; infrastructure; agriculture; health; biodiversity and ecosystem services; business, services and industry. The main findings of the chapter are that:
- The information base on the costs and benefits of adaptation has significantly evolved in recent years. It has moved beyond the previous focus on coastal areas to include water management, floods, agriculture and the built environment. However, gaps remain for ecosystems and for business, services and industry.
- The methods for identifying options and assessing costs and benefits are also changing. There is increasing use of new approaches that aim to support decision making under uncertainty, and a focus on early low-regret options. This leads to a different suite of options, including a focus on capacity building and non-technical options, and to differences in the timing and phasing of options.
- Improved information is also available on the aggregate costs of adaptation. Recent implementation- and policy-oriented studies indicate higher costs than the previous review, because of existing policy objectives and standards, the need to consider multiple risks and uncertainty, and additional opportunity and transaction costs associated with policy implementation.
- While important gaps exist in the empirical evidence, and there are issues of transferability and limits to adaptation, the new evidence base provides an increased opportunity for sharing information and good practice.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.936843991279602, "language": "en", "url": "https://www.ideasforindia.in/topics/macroeconomics/a-comparison-of-automobile-industries-in-india-and-china.html", "token_count": 1592, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1298828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:a20052d1-576c-4be4-8703-ffb28fd0fb79>" }
The automobile sector in both India and China developed due to waves of investment in these countries since the late 1980s. This column discusses how India's automobile sector has grown differently from that of other developing countries, especially China. In contrast to China, India has relied much more heavily on domestically-grown lead firms and has hence benefitted at a slower pace from global best practices.

India is the second fastest growing market for automobiles and auto-components in the world after China. The development of the automobile industry in India has been different from that of other countries like Mexico and China (Ray and Miglani 2016).

Foreign multinationals can build domestic firm capacity

In the case of India and China, the transfer of good working practices was driven by the arrival of international car makers, often operating as joint ventures with local partners. Based on a survey of six auto-component suppliers in India and nine in China, Sutton (2004) examines the degree of development of the local supply chain in each country. From the early 1990s onwards, multinational companies (MNCs) entered both markets and in each case were required to achieve a high level of domestic content. Domestic content rules typically require foreign investors to source a minimum amount of goods and labour from the local market. This led to a switch from imported components to sourcing from local vendors, which in turn led to the establishment of Tier 1 suppliers of international standards. The role of 'lead firms' was critical to this process and determined the extent to which local capability developed. Lead firms are typically medium or large firms with forward and/or backward commercial linkages, endowed with a specific set of technical and/or infrastructure competencies – they manage or govern high-value global supply chains.
What is the nature of exports in automobiles from these two countries?

According to Amighini et al. (2012), China, a net importer of cars, acts as a supplier of parts to leading world producers, whereas India, a net exporter of cars, is dependent on foreign (imported) parts for its final production. Based on our calculations from United Nations (UN) Comtrade, India's exports of intermediates have come down from 58% to 45% (as a share of automobile exports) over the period 2006-2015. In the same period, Chinese exports of intermediates have come down marginally from 93% to 91%. Indian imports are dominated by intermediates, whose share of total automobile imports rose from 88% in 2006 to 95% in 2015. The share of intermediates in China's imports was 57% in 2006 and declined to 36% in 2015.

Auto components can be divided into three major categories according to Sutton (2004). Group 1 comprises the cylinder head and cylinder block, which are usually made in-house. Group 2 consists of parts that are often outsourced. Group 3 consists of parts that are normally outsourced. China's automobile intermediate exports in 2015 were dominated by the category 'other motor vehicle parts', which includes new pneumatic tyres and brakes and servo brakes. India's exports in 2015 were in the categories 'other motor vehicle parts', chassis fitted with engine, and gear boxes and parts thereof. In terms of Sutton's classification, India is exporting more items from Group 2 while China seems to be exporting more from Group 3.

How India and China differ

According to a Deloitte report, Chinese enterprises currently export mainly material-intensive and labour-intensive products with low added value, such as glasses, tyres, wire harnesses and sound equipment.
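The intermediate-share figures quoted above are simple ratios computed over Comtrade trade flows. The sketch below illustrates the calculation with toy values chosen only to mirror the cited Indian shares; they are not actual Comtrade data:

```python
# Toy trade values (USD millions); real Comtrade figures would replace these
exports_by_year = {
    2006: {"intermediates": 2_900, "final_vehicles": 2_100},
    2015: {"intermediates": 4_500, "final_vehicles": 5_500},
}

for year, flows in sorted(exports_by_year.items()):
    # Share of intermediates in total automobile exports for that year
    share = flows["intermediates"] / sum(flows.values())
    print(f"{year}: intermediates are {share:.0%} of automobile exports")
```

With these toy totals the shares come out as 58% in 2006 and 45% in 2015, reproducing the trajectory the column cites for India.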
In 2010, drive system parts (including wheel hubs and tyres), arresters and other arrester system parts, and body and accessory system parts (including glasses and lamps) registered robust exports. India is emerging as a major sourcing destination for engines and engine components (Edelweiss 2014). Suppliers in Rajkot are known to supply engine auto parts to German car makers like Mercedes, BMW and Audi. The region specialises in castings and forgings, apart from precision machined parts. Also, players like Solapur-based Precision Camshafts supply camshafts to Porsche, Ford and GM's manufacturing destinations in Europe, Korea and Brazil. Ring Plus Aqua supplies flywheel ring gears to Fiat and Mitsubishi. The two countries also supply different destinations. China's exports are concentrated in the Russian Federation and Ukraine. While India sells only a portion of its exports to developing countries, the majority of its exports go to Western Europe and the US (Amighini 2012). In terms of intermediate products, China supplies Japanese and Korean manufacturers and is quite integrated in the Asian regional value chain. India, on the other hand, does not supply to any of the Asian countries, barring South Korea. Imports by China are in the categories gear boxes and parts thereof, 'other motor vehicle parts', and safety belts of motor vehicles. India's imports are in 'other motor vehicle parts' and gear boxes and parts thereof. Why do these countries seem to be exporting and importing parts belonging to the same category of the HS1 classification? Amighini (2012) points out that China and India import parts from leading producers worldwide and then have them assembled, both for the domestic market and for export. Imports to China and India also come from different countries: the US, Japan and Germany traditionally supplied more than 82% of total imports of parts to India in the mid-1980s.
However, now South Korea has become the largest parts supplier to India, as a direct consequence of the operations of Hyundai. After South Korea, China has also become a major supplier of parts to India. The Indian automobile sector has grown differently from that of other developing countries, especially China. In contrast to China, India has relied more on home-grown lead firms to propel its industry. A disadvantage of this approach is that the absorption of global best practices has been slow. Also, Indian suppliers have been lagging Chinese suppliers in both productivity and quality. India has the potential to become the export hub for automotive components, especially in terms of aluminium-, steel-, cast iron- and rubber-intensive parts. It is at a disadvantage in electronics- and plastic-intensive parts. Research for new product development is lagging behind in India but will become critical for India to maintain its low-cost advantage. This column first appeared on the IGC Blog.
- The Harmonised System (HS) of coding is an internationally standardised system of names and numbers to classify traded products.
- Amighini, A A (2012), 'China and India in the international fragmentation of automobile production', China Economic Review, 23(2): 325-341.
- Deloitte (2011), 'Gaining momentum: Recent trends in China's automobile parts market', Deloitte Touche Tohmatsu CPA Ltd.
- Edelweiss (2014), 'Auto Components: The Future - Mega Trends, Mega Factors', Edelweiss Financial Services Ltd.
- Ray, S and S Miglani (2016), 'Innovation (and upgrading) in the automobile industry: the case of India', ICRIER Working Paper 320, May 2016.
- Sutton, J (2004), 'The Auto-component Supply Chain in China and India – A Benchmarking Study', LSE STICERD Research Paper No. EI 34, February 2004.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9265995621681213, "language": "en", "url": "https://www.openintl.com/demand-response-working-with-customers-to-ensure-energy-reliability/", "token_count": 754, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.022216796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ca88d416-6075-4551-bad6-4e11d7ffaba9>" }
Utilities are facing many challenges when it comes to the provision of a reliable service including the forecasting of growing demand and the balancing of intermittent energy production. This is why utilities are now setting up Demand Response Programs that strengthen the role of the customer and improve network stability. With the active participation of customers, utilities are no longer the only ones working to ensure energy reliability. The Smart Electric Power Alliance (SEPA) found that 30% of their utility survey respondents are already using DR programs, and 70% are planning or considering such programs.1 Demand Response (DR) refers to any program which encourages a reduction or reshaping of customer consumption patterns to reduce network loading during peak hours and avoid the inefficiencies of running a network which is rarely used to its full capacity. These programs involve a range of measures intended to dynamically balance energy demand and network capacity; customers willingly participate in these programs in exchange for savings on their energy bill. For residential customers, utilities offer two types of DR programs. The first type is known as dispatchable or automatic DR and involves direct control over devices, such as air conditioning and water heating; typically, these devices are cycled (turned off for short periods of time) when the utility calls for a load reduction. The second type is called non-dispatchable DR and relies on customers taking voluntary actions in response to financial incentives. Approximately 5.7 million residential and small business customers are enrolled in DR programs. Traditional forms of DR (such as air conditioning or water heater switching) are still in use. 
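The dispatchable "cycling" mechanism described above amounts to a simple duty-cycle calculation; the device loads and cycle fraction in this sketch are hypothetical, not any utility's actual program parameters:

```python
# Hypothetical direct-load-control event: enrolled devices are switched
# off for part of each interval to shave peak demand.
def cycled_load(baseline_kw: float, duty_cycle: float) -> float:
    """Average load of a device that runs only duty_cycle of the time."""
    return baseline_kw * duty_cycle

fleet = [3.5, 2.8, 4.0]   # kW draw of enrolled air conditioners (toy values)
duty = 0.5                # run 15 minutes of every 30 during the event

shaved = sum(cycled_load(kw, duty) for kw in fleet)
reduction = sum(fleet) - shaved
print(f"load reduction during event: {reduction:.2f} kW")
```

Summing such per-device reductions over thousands of enrolled customers is what lets a utility treat dispatchable DR as a capacity resource.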
Utilities call demand response events an average of 36 times per year for water heaters, versus eight for air conditioning.2 One common way to encourage customer participation in these programs is the use of time-based rates, which incentivize customers to reduce the amount of energy they import from the grid at peak times. Some common rate programs are time-of-use pricing (TOU), critical peak pricing (CPP), and real-time pricing (RTP). To get the most out of demand response while strengthening their customer relationships, utilities also focus on empowering their customers with real-time usage information, sending notifications about DR events, and encouraging the purchase of DR-enabling smart appliances.

| Time-of-Use Pricing | Critical Peak Pricing | Real-Time Pricing |
| --- | --- | --- |
| Energy prices are defined in blocks of hours (such as on-peak and off-peak). Rates are fixed for each period, so the customer knows well in advance what the prices will be. | The price for electricity is drastically increased (usually three to ten times higher)3 when the utility decides to initiate a CPP event. These events are typically scheduled a day in advance and can last for 2–6 hours. | These rates vary with the wholesale market price as opposed to a fixed rate schedule. Customers are typically notified of prices on a day-ahead or hour-ahead basis. |

To support these rating schemes, utilities depend on their billing system's ability to manage granular rates, calculate billing determinants4 at an aggregated level, and bill usage based on the customer's DR tariff plan. Utilities also rely on Meter Data Management (MDM) systems to manage usage data and gain insights into consumption patterns, allowing them to make informed decisions to improve the reliability of energy supply. To make this all possible, utilities interested in implementing DR programs should start preparing their commercial offerings and the enterprise solutions required to support them.
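As an illustration of how a billing engine might apply a TOU tariff to interval usage data, here is a minimal sketch; the rates, peak window and function name are hypothetical, not any particular utility's tariff:

```python
# Hypothetical two-period TOU tariff
ON_PEAK_HOURS = range(14, 20)   # 2 pm - 8 pm
ON_PEAK_RATE = 0.30             # $/kWh
OFF_PEAK_RATE = 0.10            # $/kWh

def tou_bill(hourly_kwh):
    """Bill a day of 24 hourly interval reads under the TOU tariff."""
    total = 0.0
    for hour, kwh in enumerate(hourly_kwh):
        rate = ON_PEAK_RATE if hour in ON_PEAK_HOURS else OFF_PEAK_RATE
        total += kwh * rate
    return round(total, 2)

# Flat 1 kWh per hour: 6 on-peak hours + 18 off-peak hours
usage = [1.0] * 24
print(f"${tou_bill(usage):.2f}")   # 6*0.30 + 18*0.10 = $3.60
```

In a production billing system the same logic would run over meter-channel data from the MDM system, with the billing determinants (on-peak kWh, off-peak kWh) aggregated per billing period before rating.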
To find out which strategic tools are needed to get the most out of Demand Response, read the next article: 4 Billing determinants are the measures of consumption used to calculate a customer’s bill.