| meta (dict) | text (string, lengths 224 to 571k) |
|---|---|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9517745971679688,
"language": "en",
"url": "https://www.ecomena.org/energy-scenario-in-jordan/",
"token_count": 578,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1611328125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:00b7e9e8-2f9f-46df-a778-950f8f7a6545>"
}
|
The Hashemite Kingdom of Jordan is an emerging and stable economy in the Middle East. Jordan has almost no indigenous energy resources as domestic natural gas covers merely 3% of the Kingdom’s energy needs. The country is dependent on oil imports from neighbouring countries to meet its energy requirements. Energy import costs create a financial burden on the national economy and Jordan had to spend almost 20% of its GDP on the purchase of energy in 2008.
In Jordan, electricity is mainly generated by burning imported natural gas and oil. The price of electricity for Jordanians therefore tracks the price of oil on the world market, and volatile oil prices have driven a continuous increase in electricity costs in recent years. Due to fast economic growth, rapid industrial development and a growing population, energy demand is expected to increase by at least 50 percent over the next 20 years.
Therefore, a reliable and affordable energy supply will play a vital role in Jordan’s economic growth. Electricity demand is growing rapidly, and the Jordanian government has been seeking ways to attract foreign investment to fund additional capacity. In 2008, electricity demand in Jordan was 2260 MW, and it is expected to rise to 5770 MW by 2020.
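As a rough check on those figures, growth from 2260 MW in 2008 to 5770 MW in 2020 implies a compound annual growth rate of roughly 8 percent. A quick back-of-the-envelope sketch:

```python
# Implied compound annual growth rate (CAGR) of Jordan's electricity demand,
# using the two figures quoted in the text: 2260 MW (2008) to 5770 MW (2020).
start_mw, end_mw, years = 2260.0, 5770.0, 12

cagr = (end_mw / start_mw) ** (1 / years) - 1

# Roughly 8% per year, i.e. demand more than doubles over the 12-year span.
print(f"implied CAGR: {cagr * 100:.1f}% per year")
```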
In 2007, the Government unveiled an Energy Master Plan for the development of the energy sector, requiring an investment of more than $3 billion during 2007 – 2020. Some ambitious objectives were set: meeting half of hot-water demand with solar energy by the year 2020; increasing energy efficiency and savings by 20% by the year 2020; and sourcing 7% of the energy mix from renewables by 2015, rising to 10% by 2020.
Concerted efforts are underway to remove barriers to the exploitation of renewable energy, particularly wind, solar and biomass. There has been significant progress in the implementation of sustainable energy systems in the last few years, thanks to active support from the government and increasing awareness among the local population.
With a high population growth rate, increasing industrial and commercial activity, the high cost of imported fuels and rising GHG emissions, the supply of cheap and clean energy has become a challenge for the Government. Consequently, implementing energy efficiency measures and exploring renewable energy technologies has emerged as a national priority. In the recent past, Jordan has witnessed a surge in initiatives to generate power from renewable resources, with financial and technical backing from the government, international agencies and foreign donors.
The best prospects for electricity generation in Jordan are as Independent Power Producers (IPPs). This creates tremendous opportunities for foreign investors interested in electricity generation ventures. Given the renewed interest in renewable energy, there is huge potential for international technology companies to enter the Jordanian market. There is very good demand for wind energy equipment, solar power units and waste-to-energy systems, which technology providers and investment groups can capitalize on.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9666510224342346,
"language": "en",
"url": "https://www.financestrategists.com/finance-terms/clt/",
"token_count": 294,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0654296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ca3062a2-954a-41c7-bb20-64c98b22efa7>"
}
|
What is CLT (Central Limit Theorem)?
Central Limit Theorem (CLT) Definition
The Central Limit Theorem (CLT) is a statistical theorem stating that, as the size of a sample increases, the distribution of the sample mean approaches a normal distribution centered on the population mean, regardless of the shape of the population's distribution. As a practical consequence, the mean and standard deviation derived from a sufficiently large sample will closely approximate those of the population the sample was taken from.
A sample size of 30 is the commonly cited rule of thumb for the minimum needed for this normal approximation to hold, and hence for a sample to adequately represent the population it was drawn from.
Defining CLT in Simple Terms
To define CLT in another way, let’s imagine that a sample of 30 stock analysts was gathered and asked how much they thought a certain stock was going to rise in the next quarter.
If the average answer from the sampled analysts was 5%, then according to the CLT, this figure would reasonably approximate the average answer of all working stock analysts.
Purpose of the Central Limit Theorem
In finance, the central limit theorem can be used to expedite analysis.
Since indices often contain hundreds, sometimes thousands, of stocks, an analyst doesn’t have enough time in a month, much less a day, to go through them all.
But by putting the CLT to work, an analyst can take a random sample of just 30 stocks from an index, approximate the behaviour of the index as a whole, and thereby make a confident assessment.
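The rule of thumb above can be checked with a quick simulation: draw many samples of size 30 from a deliberately skewed (non-normal) population and watch their means cluster around the population mean. A minimal sketch; the exponential distribution and the counts are illustrative choices, not part of the theorem:

```python
import random
import statistics

random.seed(42)

# Population: exponential with mean 1.0 -- deliberately right-skewed,
# so any normality in the sample means comes from the CLT, not the population.
POP_MEAN = 1.0

def sample_mean(n):
    """Draw n observations from the population and return their mean."""
    return statistics.mean(random.expovariate(1.0) for _ in range(n))

# Collect many sample means at the rule-of-thumb sample size n = 30.
means = [sample_mean(30) for _ in range(2000)]

# The sample means cluster around the population mean, and their spread
# shrinks like sigma / sqrt(n) -- here about 1 / sqrt(30), roughly 0.18.
print(f"mean of sample means:  {statistics.mean(means):.3f}  (population mean {POP_MEAN})")
print(f"stdev of sample means: {statistics.stdev(means):.3f}  (theory ~{1 / 30 ** 0.5:.3f})")
```

Plotting a histogram of `means` would show the familiar bell shape, even though the underlying population is heavily skewed.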
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9550913572311401,
"language": "en",
"url": "https://www.industrialsage.com/global-effects-of-coronavirus-impacting-supply-chains-disrupting-manufacturing-operations/",
"token_count": 183,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.22265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:12d46003-d24f-4a1b-87f7-d9265234441b>"
}
|
The global effects of Coronavirus have continued to increase daily, impacting supply chains and disrupting manufacturing operations.
Companies that depend on products and parts overseas, particularly from China, are anticipating a hit to their bottom line.
With airport and customs closures due to the COVID-19 outbreak, many are already experiencing delays, exposing dependency on foreign products and weaknesses in the world’s supply chain.
According to the Harvard Business Review, Chinese manufacturing activity has fallen in the past month and is expected to remain low for months to come. As a result, the most vulnerable companies are those that rely on factories in China for their materials.
From electronics to pharmaceuticals to clothing, the impacts are far and wide, exposing a potential need for more community-based manufacturing.
Shortening the supply chain may mean increased costs, but it could allow for more innovation and security in the future.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9459006786346436,
"language": "en",
"url": "https://www.secf.org/BLOG/Engage-Blog-Tags/tagid/91/reftabid/4964",
"token_count": 2388,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.26953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:01f447f0-f98e-4e7c-b29a-8f0013b12c3b>"
}
|
Data Shows More Needs to be Done to Bring Widespread Prosperity to the South
Author: Stephen Sherman
While economic disparities in the U.S. are widespread, nowhere in the country is the gap in economic mobility more pronounced than the South. Just look at the map below and you’ll notice the broad swath of red indicating the lack of upward mobility in the region. Raj Chetty and a team of researchers from Stanford, Harvard, and Berkeley used data from the most-recent Census and tax returns to chart the chance a child born into the bottom fifth income bracket could reach the top fifth by adulthood.
From a list of 741 commuting zones, four Southern cities were ranked in the bottom ten in terms of upward mobility. These were Atlanta, Charlotte, Jacksonville, and Raleigh, all of which have shown indicators of strong economic growth. The chances of a child going from the bottom quintile to the top in these cities were some of the lowest in the country—nowhere higher than 5 percent. By contrast, the leading cities in upward mobility – New York, Boston, San Francisco, Seattle, to name a few – all measured 10 percent or higher.
But is it just geographic differences that are to blame for the lack of economic mobility in the South? In addition to location, Chetty and his fellow researchers found that another primary factor in upward mobility was an individual’s racial identity. The latest research from the Equality of Opportunity Project finds that in 99 percent of Census tracts in the United States, black boys earn less in adulthood than white boys who grew up in families with comparable income. This suggests that differences in resources at the neighborhood level, such as access to quality schools, cannot by themselves explain the intergenerational gaps between black and white children.
Staying in Touch With Philanthropy
“Let’s stay connected!”
I can’t count the number of times I’ve heard and overheard phrases like this exchanged among SECF colleagues. Our members crave connection with one another for a variety of reasons. Some appreciate the opportunity to learn and share information about best practices. Others enjoy the camaraderie of friends and colleagues who share a common passion and purpose. Some relish the tailor-made resources and network of potential collaborators. Still others rely on their SECF relationships to build networks beyond their local geography. For most, it’s a combination of the above.
For all these reasons and more, SECF serves as a source of deep and lasting regional connections. Through relationships, conversations, events, reports, newsletters and more, we’ve built a network like no other. And now, we’re pleased to introduce another way to communicate with peers, learn from experiences and opinions, and share stories: ENGAGE, the SECF blog.
Reinventing Food Banks
Last week I had the opportunity to attend From Feeding People to Ending Hunger: Reinventing Food Banks, a forum hosted by the Social Enterprise program at Emory University’s Goizueta Business School. The panelists represented organizations working to address hunger at the national, state, and local levels and provided a layered perspective on strategies for ending hunger in the U.S.
The event included remarks from Kim Hamilton, Chief Impact Officer at Feeding America, Jon West, Vice President of Programs at the Atlanta Community Food Bank, and Jeremy Lewis, Executive Director of Urban Recipe.
Each of these organizations is doing its part to fight hunger: Feeding America is a nationwide network of 200 food banks and 60,000 food pantries and meal programs that provides food and services to more than 46 million people each year. The Atlanta Community Food Bank is part of Feeding America’s network and partners with more than 600 nonprofit partners to distribute over 60 million meals to more than 755,000 people in 29 counties across metro Atlanta and north Georgia. Urban Recipe operates within a unique co-op model in which each family served becomes a member of a 50-family co-op that meets biweekly to apportion donated food.
Responding to Our Members' Needs With Relevant Programming
SECF has long demonstrated a high level of commitment to being responsive to the educational needs of its membership. As part of our recently conducted biennial member survey, we intentionally asked our members to provide feedback on what programming offerings they would most like to see delivered in the near future. The purpose was to ensure that our programs were both relevant and in alignment with our member interests.
We received a robust number of suggestions that have aided us in the planning and design of our in-person and virtual programming activities over the course of 2017. As our member survey key findings report indicates, there are a few top areas of interest that our members expressed. We have aimed to be responsive to these interests in our recent and upcoming programs.
Getting to Know SECF
If you’re new to the SECF family, or are considering applying for membership, we want to make sure you know everything that makes SECF a grantmaker network like no other.
Last week, we hosted “Getting to Know SECF,” a webinar highlighting the people, programs, events and benefits that have made us one of the strongest and most vibrant grantmaker networks in the country, one that has continued to attract new members all while hitting a 96 percent retention rate in the last year.
If you couldn’t make this webinar, or joined us but would like to review what makes SECF membership so valuable, you can view the entire presentation below. Our speakers included me, as well as SECF President & CEO Janine Lee, Senior Director of Programs & Partnerships Dwayne Marshall, Director of Marketing & Communications David Miller, and three members of the SECF Board of Trustees – Bob Fockler of the Community Foundation of Greater Memphis, Stephanie Cooper-Lewter of the Sisters of Charity Foundation of South Carolina, and Gilbert Miller of the Bradley-Turner Foundation.
Download Southern Trends Report Data With a Single Click
We’re excited to let you know about a recent upgrade to our Southern Trends Report. Users now have the ability to download the data behind each of the tables, charts and lists featured on the site. You might use the data to create your own charts and graphs, compare figures for different states, or format lists of top funders to share with your board.
This function is also context-sensitive, meaning that if you change one of the parameters on an interactive chart, the data included in the download for that page will reflect your modifications.
Try it out:
Help Put Southern Philanthropy – and Your Foundation – On the Map
In 2016, SECF teamed up with Foundation Center to release the Southern Trends Report – a comprehensive look at giving in our region. This year, we’re working to update the Southern Trends Report with new data on giving by Southeastern foundations.
Like any report, it’s only as good as the data that goes into it – and that’s why we’re encouraging all SECF members to Get on the Map by joining the eReporting program with Foundation Center. Grants data that is submitted through eReporting is fed into the Foundation Maps platform, which is the driving force behind such interactive sites as YouthGiving.org, BMAfunders.org, and our very own Southern Trends Report.
The more foundations we have participating in eReporting, the more reliable our sample becomes and the more confident we can be in drawing conclusions or predicting trends from the data.
2017 Salary Data for Southeast Grantmakers Now Available
Author: Stephen Sherman
In SECF’s 2016 market analysis, 43 percent of responding organizations stated that they anticipated adding new staff within the next 12 months and close to a third reported having replaced their executive director within the past three years.
With each new staff member, promotion, or position added, there are crucial decisions that have to be made regarding compensation. Not only do foundations want to remain competitive and attract the best talent, but they also have to show due diligence and demonstrate that staff and CEO compensation is, according to IRS guidelines, “reasonable and not excessive.” As a best practice, it is recommended that foundations and other charities review comparable salary and benefits data for other organizations with similar missions and of a similar budget or asset size.
Each year, SECF partners with the Council on Foundations (COF) to produce comparative analyses of salary data for foundation staff and CEOs in the Southeast. These reports are generated using COF’s Benchmarking Central tool that includes salary data for staff in multiple roles within foundations.
New Reports Highlight Growth of Donor-Advised Funds and Giving Circles
Author: Stephen Sherman
Even taking into account the Great Recession, we’ve generally seen the numbers, assets, and giving of private and community foundations in the United States continue to rise over the past decade. Over the long term, that growth has been steady and attests to the staying power of foundations in spite of changing social, economic and political circumstances. However, while foundations have the capacity to make transformative grants in their respective communities, collectively they account for a relatively small share of charitable giving when compared with contributions from individual donors.
Donor-advised funds (DAFs) and giving circles lie somewhere in between foundations and individuals on the giving spectrum and are two of the fastest growing philanthropic vehicles. Two recent studies offer insight into the growth of these giving instruments in the United States.
The 2017 Donor-Advised Fund Report, published by the National Philanthropic Trust, surveys the growth of DAFs in the United States from 2010-2016 and provides an analysis of funds by sponsor type. Data was gathered from over 1,000 organizations that sponsor DAFs, including national charities, community foundations and single-issue charities. In 2016, there were approximately 285,000 individual donor-advised funds across the country – more than three times the number of private foundations. Nearly 44,000 DAFs are housed in organizations within SECF’s 11-state footprint, representing around 15 percent of all donor-advised funds in the country.
The 2017 report offered a glimpse at the growth and concentration of DAFs by state. Massachusetts (82,643), California (38,590) and Pennsylvania (20,819) were home to more than half of all DAFs in the country in 2016 thanks to prominent charities such as Fidelity and Vanguard. Georgia was one of the fastest-growing states for DAFs and ranked fourth nationally with 19,736 funds, slightly ahead of New York (18,481). Georgia’s leading position can be largely attributed to the National Christian Foundation, located in Alpharetta, which houses more than 16,000 donor-advised funds. Within the Southeast region, Florida, North Carolina, Tennessee, and Virginia also host significant numbers of DAFs.
Exponent Philanthropy Resources for Small-Staff Foundations
Author: Jaci Bertrand
Through a partnership between SECF, the United Philanthropy Forum and Exponent Philanthropy, members can take advantage of discounts on Exponent publications and programs. Keep reading to learn more about this SECF member benefit!
Exponent Philanthropy Publications
SECF members are eligible for a 20 percent discount on the following Exponent publications:
The Foundation Guidebook
This signature publication is written especially for newcomers to foundations or philanthropy. Gain the baseline knowledge to operate your foundation, including board responsibilities, tax and legal issues, administrative details, investment matters, grantmaking basics and more.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9529144167900085,
"language": "en",
"url": "https://www.statkraft.com/newsroom/news-and-stories/archive/2014/Researching-for-the-future/",
"token_count": 658,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.024169921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:bbfa2f88-c565-4c38-b2a3-2d8f5084a8f9>"
}
|
Researching for the future
Where will the next big flood in Norway be? What if the Himalayan glaciers melt? Is building hydropower plants in Turkey profitable? These are some of the questions R&D programme manager Uta Gjertsen is trying to answer.
“Climate change will impact all of Statkraft’s business areas, not just hydropower, but also wind and biomass,” says Uta Gjertsen, Head of the Consequences of Climate Change R&D programme launched last year.
It is in the interest of the energy sector to map how to meet the consequences of climate change. Just knowing that it will be “wetter and wilder” is not enough. We need accurate information on what to expect in the different regions in Norway, as well as the countries where Statkraft has interests or is considering making investments.
“This research programme will collect and coordinate climate research across the entire company, not just by gaining new knowledge, but also by gathering and consolidating all the research done to date. There is plenty of research out there, not least within Nordic hydropower, and the idea is that we need to apply this expertise in countries where we are going to invest,” says Gjertsen.
The results from the programme will be relevant as a basis for investment decisions and energy optimisation, as well as operations and maintenance.
Change is coming
A recent study funded by the Nordic Council, in which Statkraft participated, concludes there is little doubt that hydropower in the Nordic and Baltic regions will be significantly impacted by climate change.
“Norway has had a stable climate for a long time, enabling us to use long time series of data in our planning, but we now see that the past no longer provides us with a good indication of what will happen in the future,” Gjertsen says.
Statkraft’s goal is to grow internationally, but in the parts of the world without the same accurate measurements or meteorological and hydrological data as we have, it is even more difficult to predict the future.
“For instance, new global climate forecasts indicate drier climate in the Mediterranean region, and that plays into our assessment of the profitability of developing hydropower in areas that might have to use water for other means. It is even more difficult to predict the impact on wind power,” says Gjertsen.
Knowledge of climate change is important for making well-informed decisions about profitability.
“Due to climate change, reservoirs may be the way to go, rather than investing in run-of-river plants. Climate change will also impact maintenance and renovation of existing plants. Many were developed at a time when global warming was not an issue, and it is important to find out what has to be done to protect them from future climate developments.”
Statkraft and SN Power participate in research projects in India, headed by the Bergen-based Bjerknes Centre for Climate Research. The challenges in this area are enormous, with many and severe floods, changing monsoon patterns, and melting Himalayan glaciers.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.7100873589515686,
"language": "en",
"url": "https://fdocuments.in/document/new-direction-in-economic-development.html",
"token_count": 1393,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.06640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b2f936a5-b197-472d-84de-581641df1966>"
}
|
New direction in Economic Development
Transcript of New direction in Economic Development
1. A New Direction in Economic Development. Jim Claybaugh, MBA, EDFP. October 24, 2012.
2. Overview: What is Economic Development? History; Theories; Common Strategies; Infrastructure, Capital, Resources; The Knowledge Economy.
3. Why Do Economic Development? Economic reasons: revenue from high incomes/business activity; savings from reduced government services. Social reasons: affluence lowers crime, drug use, etc.; collective confidence. Political reasons.
4. What is Economic Development? It depends who you ask as to the answer received. Economists: quantifiable economic growth. Businesses: barriers (taxes, regulations). Environmentalists: keep ecology and environment. Labor groups: wages, training, benefits. Community leaders: strengthen local communities. Government officials: tax base, revenues.
5. What is Economic Development? The enhancement of the factors of productive capacity; infrastructure for community well-being. Economic development vs. growth. Must be a LONG-TERM commitment.
6. What is Economic Development? Should be consensus-driven. Should be market-based: work with market forces ("economic development is not a salad bar"). Should be sustainable, as in sustainable development. Should be bottom-up.
7. What is Infrastructure? Five types: physical; organizational; social/political; financial/economic; human/intellectual.
8. What is Infrastructure? (Shown as a pyramid, top to bottom:) human/intellectual; financial/economic; social/political; organizational; physical.
9. History: three waves. First wave: industrial recruitment (started in the South, 1930s). Second wave: retention and expansion (low ROI from business attraction). Third wave: new economy economic development.
10. First Wave (1930s-1990s). Focus on attracting manufacturing and outside financial investment. To achieve this, cities used tax incentives, grants and subsidized loans, subsidized infrastructure investment, and expensive marketing techniques.
11. Second Wave ('80s-'00s). Focus moved towards the retention and expansion of business and an emphasis on inward investment targeted to specific sectors/clusters. To achieve this, cities provided technical assistance to businesses, financial assistance/loan programs, infrastructure investment, and permit streamlining.
12. Third Wave ('90s-now). Focus shifted from direct financial incentives to making regions competitive. Focus is placed on public/private partnerships, soft infrastructure investments, increased competitive advantages, and leveraging public/private investments.
13. Third Wave ('90s-now). To achieve this, cities/regions are developing holistic strategies, creating a competitive business climate, networking and collaborating, supporting cluster development, integrating horizontally and vertically, investing in education and workforce training, targeting investment to clusters, and improving quality of life.
14. Theories of Economic Development: staple; sector; growth pole; product cycle; economic base; entrepreneurship; inter-regional trade; neo-classical growth; flexible specialization.
15. Common Strategies: business creation; business retention; business expansion; business attraction; other strategies/capacity building; neighborhood/downtown revitalization; redevelopment.
16. Business Creation. AKA entrepreneurship development: micro-enterprise development, economic gardening, business incubation. Programs to encourage start-ups: technical assistance, financial assistance. Second-highest job-creation ROI.
17. Business Retention. Designed to retain existing businesses. Information gathering: surveys and interviews; track trends and identify barriers. "Red team" visits to troubled or recruited businesses. After the 2001 energy crisis, California was forced to focus on retention.
18. Business Expansion. Similar to business retention; assists local businesses to expand: location, financing, permitting, and job placement assistance. Highest job-creation ROI (~80% of new jobs created).
19. Business Attraction. AKA business recruitment/marketing: promotion of the region, sites, and amenities. The first thought and most common approach, yet the lowest ROI of all approaches ("playing the lottery"; the Alabama example). Involves the site selection process.
20. Site Selection Factors (Area Development annual survey): availability/cost of skilled labor; corporate tax rate; state and local incentives; tax exemptions; occupancy or construction costs; highway accessibility; environmental regulations; low union profile; energy availability and costs.
21. Site Selection Factors (Conway Data, Inc.): work force, wages, productivity; market and demographic data; specific sites and buildings; transportation; energy and utilities; materials, supplies, services; government programs; water and wastewater infrastructure; environmental impact, ecological factors; climate/quality-of-life factors.
22. Site Selection Factors (source: Fantus Co., Chicago): labor availability/quality; site availability; education/vocational facilities; public safety; labor-management relations; environmental restrictions; wage levels; local views toward development; education opportunities; financial incentives; utility costs/availability; housing cost/availability; real estate costs; shopping facilities; taxes/cost of doing business; hotel/motel availability; highway accessibility; medical and health services; market location/freight; community environment; transportation services; recreational/cultural climate.
23. Capacity Building: infrastructure, capital, resources; hard vs. soft infrastructure. Why focus on infrastructure? It creates an economic climate conducive to growth, we can't predict the next Big Thing, and it assists in all economic development strategies (the ecosystem analogy).
24. Knowledge Economy. New strategies and objectives. Infrastructure investment under a new definition of infrastructure. New partnerships, including colleges and universities. New challenges: widening wealth/income gaps; widening education/skills gaps.
25. Knowledge Economy. Regional and holistic; sector/cluster-based; human capital-based (linking workforce development); innovation-based (the R&D jobs food chain); information-based (the Littleton, CO example; use of the Internet in site selection).
26. Conclusions (mine). Economic development does NOT tackle poverty; traditional strategies haven't helped (the Southeast U.S. example). Human/intellectual capital is the MVP ("P" = poverty-slayer). Business attraction = negative ROI. Economic development is NOT growth, though it can lead to growth and should focus on it.
27. What Would I Do? Innovation and technology commercialization; focus attraction efforts on inquiry responses; review/update planning documents; tighten relationships with partners; inventory land and infrastructure; develop a CRA-type industrial REIT; develop a capital index scorecard.
28. What Would I Do? Technology commercialization; expansion of industrial sectors; a quality workforce; a balanced scorecard approach; an innovation culture; land zoned and pre-permitted; advance infrastructure.
29. Appendix: Capital Index. Physical capital index: industrial and commercial land ratios; ratio of region with high bandwidth; commercial-industrial vacancy; regional water/sewer capacity; regional traffic capacity.
30. Appendix: Capital Index. Human/intellectual capital index: high school graduation rates/test scores; ratio of pre-school participation; college degrees per capita; patents per capita; unemployment.
31. Appendix: Capital Index. Social/political capital index: local government permit fee structure; participation in school business teams (Virtual Enterprise, Junior Achievement); local efforts in economic development (EDOs, local governments, chambers, etc.).
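The capital index scorecard sketched in the appendix can be made concrete as a weighted average of normalized regional indicators. A minimal sketch; every indicator name, value, and weight below is hypothetical, chosen only to illustrate the mechanics:

```python
# Hypothetical "capital index" scorecard: each indicator is pre-normalized
# to a 0-1 scale, and the index is a weighted average of the indicators.
indicators = {
    "industrial_land_ratio": 0.40,       # share of zoned industrial land in use
    "broadband_coverage": 0.85,          # ratio of region with high bandwidth
    "hs_graduation_rate": 0.90,          # high school graduation rate
    "college_degrees_per_capita": 0.35,  # normalized against a benchmark region
}

weights = {
    "industrial_land_ratio": 0.20,
    "broadband_coverage": 0.30,
    "hs_graduation_rate": 0.25,
    "college_degrees_per_capita": 0.25,
}

def capital_index(values, weights):
    """Weighted average of normalized indicators; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(values[k] * weights[k] for k in values)

score = capital_index(indicators, weights)
print(f"capital index: {score:.3f}")  # a single 0-1 score for the region
```

Tracking such a score over time, or comparing it across regions, is one way to operationalize the balanced scorecard approach the slides recommend.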
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9612137079238892,
"language": "en",
"url": "https://odi.org/en/publications/country-responses-to-the-food-price-crisis-200708-case-studies-from-bangladesh-nicaragua-and-sierra-leone/",
"token_count": 148,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:caff1c94-6fd0-4712-82d1-5323499c6a3f>"
}
|
In 2007/08, cereal prices on world markets saw their sharpest spikes in over 30 years. When prices of staple foods started to rise, governments, international organisations, and NGOs took action. This report looks at responses that were taken to:
- Prevent high international food prices from being transmitted to domestic markets;
- Maintain food availability during the crisis through domestic production programmes; and
- Mitigate the impacts of high prices on vulnerable citizens.
To look at these responses in detail, three country case studies from different regions of the world were used. These studies, commissioned with funding from DFID and in partnership with the UK Hunger Alliance, were undertaken in Bangladesh, Nicaragua, and Sierra Leone.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9367351531982422,
"language": "en",
"url": "https://opsblog.org/2017/the-importance-of-business-database-analysis/",
"token_count": 588,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0830078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6e003c98-d026-4fd4-b2a7-78f2d1015439>"
}
|
A business generates data in large amounts, and this data can be simply meaningless if not used properly. In practice, this means you could miss out on valuable business opportunities if you let the data generated by your business go to waste. Business database analysis is all about formulating a data plan around the specific requirements of a business, transforming raw data into usable and beneficial information. This enables businesses to identify their sales trends and drive their performance. Business database analysis is the procedure of extracting hidden information from a collection of databases for a specialised purpose.
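The idea of turning raw business records into a usable trend can be sketched with a tiny example using an in-memory SQLite table; the table layout and the figures below are invented purely for illustration:

```python
import sqlite3

# Hypothetical sales records: raw rows that are "meaningless" on their own.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (month TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [
        ("2017-01", "widget", 1200.0),
        ("2017-01", "gadget", 800.0),
        ("2017-02", "widget", 1500.0),
        ("2017-02", "gadget", 700.0),
        ("2017-03", "widget", 1900.0),
        ("2017-03", "gadget", 650.0),
    ],
)

# Aggregating revenue per month turns the raw rows into a sales trend.
trend = conn.execute(
    "SELECT month, SUM(amount) FROM sales GROUP BY month ORDER BY month"
).fetchall()

for month, total in trend:
    print(month, total)
```

Even this toy query surfaces information hidden in the raw rows: total revenue is rising month over month, driven by widgets, while gadget sales are quietly declining.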
The Uses of Business Data Analysis
Business database analysis is a procedure used in many different applications: product analysis; customer and market research; demand and supply analysis; investment trends in real estate and stocks; telecommunications; and e-commerce. The entire procedure is based on analytical skills and mathematical algorithms for deriving the desired outcome from a large database.
How is it Significant?
Business database analysis is of prime importance in the highly competitive and advanced business environment of the present times. It is a process that helps leading business and corporate houses stay ahead of the competition. The process brings forth the latest information, which can further be used for market research, competition analysis, consumer behaviour, economic trends, analysis of geographical information and industry research. It also helps in effective decision making. Data analysis applications are used in the fields of direct marketing, CRM (customer relationship management), the health industry, the financial sector and the FMCG industry. This type of analysis is available in different forms, such as web analysis, text analysis, video data analysis, audio data analysis, relational databases, social network data analysis and pictorial data analysis.
Outsourcing Business Data Analysis
Analysing business databases is not a very simple procedure. It requires a lot of patience and time. Much of that time goes into collecting the desired databases because of their complexity and their massive structure. It is for this reason that many businesses look to the services of outsourcing firms. These firms possess strong capabilities in analysing business data, filtering it and using it for the benefit of a business. This type of analysis has been used in varied contexts, but it is most commonly applied to organisational and business requirements.
How to Carry Out the Procedure?
The process of business database analysis requires a tremendous amount of manual work, like collecting data, assessing information and using the internet for getting details. These days, however, software products are also being used to scan the internet and get hold of relevant information and details. It often serves best to make use of software products because they do not take up much labour or time and offer instant results.
Sometimes, it might not be possible to get the right software for the procedure. In this case, it would be advisable to look for a skilled programmer for this procedure. This might have you paying extra money but you can remain assured of getting the best services.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9607109427452087,
"language": "en",
"url": "https://plutusfoundation.org/2021/costs-to-raise-a-teenager/",
"token_count": 1922,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0091552734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:342b1cbf-1438-4a7e-af93-4b90c89111bb>"
}
|
New parents hear all about how expensive babies are. From the car seat to the stroller, the diapers and the child care — there’s no doubt it can add up fast.
What other parents don’t warn you about, however, is the fact that the cost to raise a child only increases as your child grows up. In fact, the teenage stage is when a family’s spending on a child peaks, at $13,900 per year according to the USDA Expenditures on Children by Families report.
As your child gets older, looking ahead to the next stage in their life can help you anticipate and prepare for new costs. Here’s a look at why costs can increase as your child starts to grow up, and a breakdown of the costs to raise a teen.
Housing Costs for a Teenager
As your kids get older and your family’s needs change, that can mean upsizing to a home with more space. Siblings who used to share a room might now insist on their own, or you might need a larger shared area to have their friends over.
The added housing costs of a bigger family account for 27% of what parents spend to raise a kid. For the average family spending $13,900 per year on their teen, $3,800 of that goes to housing.
Costs to Feed a Teenager
Your teen is going through puberty, growth spurts, and plenty of social and developmental milestones all at once. It’s no surprise teens are notorious for eating a lot — it takes a lot of food to fuel all that!
No longer can you get away with four chicken nuggets and 8oz of milk for lunch.
Higher nutritional needs can also mean a rise in grocery and food costs. On the USDA’s moderate-cost food plan, families spend about $310-320 per month feeding a teen son. The average monthly grocery costs for a teen daughter are $255.
Compare that to the average of $178 or less per month that families spend on a child 5 years old or younger.
Costs for Transportation of a Teenager
Your family’s transportation costs also might rise as your child grows older. As a teen, they might have more commitments to keep and places to be between school, extracurriculars, and their social life. You might find yourself needing a bigger car and spending more on gas to meet those demands.
And when your teen earns their driver’s license, that can mean many more costs.
Insuring a teen driver isn’t cheap — it can add about $2,000 to your annual premiums. And then there’s the cost of buying a car for your teen, if you choose to do so. In all, the USDA puts transportation costs at 16% of overall spending on teens ages 15 to 17, about $2,225 per year.
Costs for Education and Extracurriculars for a Teenager
After paying for child care in the preschool years, education costs tend to fall. But then they increase again at ages 15 to 17, as many students get more involved in college prep and extracurricular pursuits.
According to the USDA, 15% of what parents spend on teens goes toward education costs, equal to just under $2,075 per year.
This can include tuition for students attending private schools, school books, fees, and supplies. Other common education costs include private tutoring, college entry exam prep and test fees, school trips and activities.
Costs of Personal Care and Interests for a Teenager
You’ll also probably be plunking plenty of cash down for your teenager’s miscellaneous costs and interests. Miscellaneous costs are about 7% of what parents spend on teens, on average (about $970 per year). This covers everything from personal care, such as hair cuts or beauty products, to personal interests.
Entertainment purchases like books, movies, music, and video games can add up for an average teen. And many teens also have an expensive hobby or pastime, whether it’s pricey private dance classes, one-on-one music lessons, or sports equipment.
And don’t forget spending money for incidentals like grabbing food or coffee, getting together with friends, or buying a ticket for an event.
Costs to Clothe a Teenager
Overall, clothing costs for teens aren’t much higher than at younger ages. Parents spend just under $800 per year, an average of 6% of total costs, on clothing for their teens.
They may want more-expensive clothing, and how they dress is more closely tied to their self-expression and even social status. I’m sure you remember being a teenager and being overly self-conscious about your looks. I know I do.
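For readers who want to check the math, the per-category figures quoted throughout this article are roughly the USDA percentage shares applied to the $13,900 annual total. A minimal sketch (the shares and the total come from the article; the rounding and the code itself are ours):

```python
total = 13_900  # USDA annual spending on a 15- to 17-year-old, per the article

# Percentage shares of total spending quoted in the sections above
shares = {
    "housing": 0.27,
    "transportation": 0.16,
    "education": 0.15,
    "miscellaneous": 0.07,
    "clothing": 0.06,
}

for category, share in shares.items():
    print(f"{category:>14}: ${share * total:,.0f} per year")
```

These land close to the article's rounded figures — for example, housing works out to about $3,753 against the quoted $3,800, a gap that reflects the USDA's own rounding.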
Saving for College and Adulthood
The costs of raising a teen don’t stop when they reach 18, of course. Many parents continue to help their children financially, even after they’re officially adults.
Commonly, parents will help pay for college. Parents say they contribute an average of $17,314 to their child’s college education, according to an HSBC study. Saving up that amount (or more) requires some careful budgeting and planning.
Many parents might also choose to allow their teen or adult child to continue living at home, remain on their insurance, or pay for other portions of their living expenses. In fact, about six in 10 parents with young adult kids (ages 18 to 29) say they provided that child with financial help in the past year, according to a Pew Report.
Dealing with the Costs to Raise a Teenager
Involving teens in the process of managing and paying for their own costs can be an important learning opportunity for your kid. It can help them practice crucial money skills while they still have the safety net of living at home to fall back on.
As a parent, you are responsible for covering your child’s living costs and paying for their basic needs. So you should cover things like housing, food at home, basic clothing, and other necessities.
Of course, it’s important to keep these expenses in check. For example, it’s always wise to choose an affordable home, make meals rather than eating out, and save and find deals for big purchases.
What you pay for beyond the basics is up to you, however. Invite your teen to help in the process of budgeting, spending, and even earning the funds used to pay for their wants.
Give them an allowance. Figure out how much you can and are willing to spend on your child in a given month. Then, you can budget for this and give it to them in the form of an allowance.
Your child can then decide how they want to manage that money to buy clothing, go out with friends, buy a video game, or make other purchases. And you won’t be stuck arguing with them about spending or getting sucked into overspending.
Encourage them to earn their own money. Whatever age your teen is, they can find ways to earn some cash to buy the things they want. Teens who are 15 or older can get a summer or part-time job. And younger teens can earn cash by doing odd jobs like babysitting, dog walking, or yard work. Or they can make something to sell, like baked goods or DIY jewelry and art.
With them earning money, that’s cash they can use to pay for what they want and need — without it all having to come out of your wallet.
Share costs with your teen. It doesn’t have to be that either you or your teen buys the things they want or need. Especially if they have a big-ticket item come up, such as an overnight school trip or a car, it’s plenty fair to expect them to help cover a portion of the costs.
Task your teen with drawing up a budget for their desired purchase, along with how much you each contribute. Once you agree on an arrangement, you can both start planning and saving for the expense together.
Know when to say no. Not spending or putting purchases off is often part of smart money management, and it’s healthy to help your teen learn that!
It can help to reframe these decisions from “We can’t afford it,” to, “We already have enough,” or “We’re choosing to spend on other important things right now.” It’s also the perfect chance to model financial prioritizing and explain how you make decisions with your money.
Balancing the needs and costs of raising your teen with those of other family members isn’t easy. And how you handle your teen’s costs and family’s financial needs will look unique to you.
But understanding the general costs to raise a teen can give you a more realistic idea of what to expect. You could find places in your budget where you could cut back, as well as plan for expenses that are coming up in the next few years.
Best of all, managing your family budget and finances will provide your teen with an important example and role model of positive money habits.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9478155374526978,
"language": "en",
"url": "https://thecustomizewindows.com/2021/04/is-the-blockchain-hype-running-out-of-breath/",
"token_count": 548,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.11865234375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:17ab3a7e-02c0-4700-9758-498cc6270eb8>"
}
|
What do we need the blockchain for? Why is there such hype around the 40-year-old technology? What are the problems? Answers to these questions have been given on this website in a number of articles on blockchain. Yes, blockchain has already passed its peak hype, but that does not mean the technology is no longer attractive. We are now on the plateau of productivity, and we have understood what the blockchain can and cannot do. Despite all the enthusiasm, we should never forget to ask what we can actually do with the technology and always keep an eye on its benefits – what can the technology do for us?
Blockchain is a team sport. We should warn ourselves against using blockchain only for the sake of the technology: if we start a project that already presupposes blockchain as the technical solution, it will not scale. We should strongly question whether the problem to be addressed can only be solved with a blockchain or whether other approaches – such as confidential computing – are more productive. It should be considered that the blockchain in the enterprise area is a complementary technology that is added to existing IT system landscapes as a kind of trust layer. The blockchain does not replace databases or other technologies; it has to be seen as complementary.
Confidential computing, according to IBM “is a cloud computing technology that isolates sensitive data in a protected CPU enclave during processing. The contents of the enclave – the data being processed, and the techniques used to process it – are accessible only to authorized programming code, and invisible and unknowable to anything or anyone else, including the cloud provider.”
But what is the blockchain used for and is the technology ready? The considerations of the banks prove that the blockchain has already outgrown its infancy and is already mature: they want to use the blockchain to implement digital central bank money – a solution that has a high-risk assessment and must scale. In the foreseeable future, this will only be possible with a digital proof of identity linked to vaccination confirmations or negative test results – realized through a combination of blockchain and biometrics.
Where is the blockchain journey going? The technology fan can well imagine that the blockchain and other distributed ledger technologies will develop into a kind of protocol – a trust protocol or an Internet of trust. In the best-case scenario, as users and citizens, we would not even notice the use of these technologies, but at the same time, we would regain sovereignty over our data because we would know who is doing what with it. The blockchain debate will get a boost with the topic of digital central bank money, the introduction of programmable money. And last but not least, as a possible convergence of the three areas of financial services (digital money), digital identity and supply chain.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9604671597480774,
"language": "en",
"url": "https://www.cgocmall.com/news/industry_4.0-12_a2c45.html",
"token_count": 686,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.19921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:35514322-6734-497c-9200-cbd0576365c6>"
}
|
During April 2013's Hannover Fair, the idea of "Industry 4.0" was officially presented by Germany and soon became a tag that every industrialized society wants to attach itself to. People in private sectors welcome it as a revolution in manufacturing, and governments see it as a stimulant to the sluggish economy.
This year's BRICS Summit, to be held in South Africa's Johannesburg from July 25 to 27, also includes it as a new area of cooperation. So, what is the magic of Industry 4.0, and how should this notion from the developed world relate to developing countries?
The differences in Industry 4.0
Unlike the first three Industrial Revolutions which were driven by one specific discovery or breakthrough, the fourth one is based on "combinations of technologies," as Klaus Schwab, the founder of the World Economic Forum, once pointed out.
Observing the development timeline, we can see that the Industrial Revolution started with the advent of machines and climbed one stage after another as human beings tried to harness them to achieve larger and more complex output with less input. The final result, as far as we can see for now, is the "smart factory."
A factory is where mass production happens and being smart can be regarded as an advanced state of automation, meaning that we can teach machines to "think," which is enabling it to perform self-diagnosis, self-configuration and self-optimization, a lot more than doing repetitive work. As a result, the amount of time, energy and labor saved during that process can be put elsewhere to bring more productivity and generate far higher value.
Significance to developing countries
For developing countries – for example, the BRICS countries – Industry 4.0 poses challenges but also provides opportunities. The challenges fall mainly into three categories: a labor force that is not yet highly skilled, infrastructure that has yet to be improved, and the end of the "world factory" model.
The first two are easy to understand. Industry 4.0 represents an intelligence-intensive instead of labor-intensive production mode. That is clearly in favor of highly-skilled workers, especially those who have received higher education in relevant domains. However, a large proportion of developing countries’ workforce does not belong to that group.
Additionally, determined by its nature, Industry 4.0 can only be built on a solid base of physical and cyber infrastructure. For developing countries who may still stay in the second or early third phase of the industrial revolution, laying the foundation is necessary, either by their own efforts or through cooperation.
Last comes factory relocations. One characteristic of smart production is customization, which is very likely to reverse the once-popular strategy of setting factories on the other side of the globe. A closer-to-consumer manufacturer can respond to home markets quicker and better, not to mention the job opportunities some developed countries want back so desperately.
As for what Industry 4.0 can offer, developing countries' collective wisdom and efforts to address those challenges will bridge development gaps, advance existing cooperation and introduce great changes to their technology, industry and society.
The vision of the BRICS countries has already been shown by adding this trendy topic to the agenda. To learn more about their science and technology development at the current stage and the cooperation ongoing, please follow CGTN's "Industry 4.0 in BRICS" series in the coming days.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9401755928993225,
"language": "en",
"url": "https://www.erp-information.com/gross-requirements.html",
"token_count": 713,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.01513671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:dc810df6-f1be-4a38-811e-8c6b566d1472>"
}
|
This article discusses the term gross requirements and all the questions that arise about it. Read on to find out more about the topic and develop a clear concept about it.
What is the gross requirement?
Gross requirements are the total of independent and dependent demand for a component before the netting of on-hand inventory and scheduled receipts.
The total requirement for raw materials, other components, and subassemblies needed to produce a certain item is termed the gross requirement.
It is the sum of both dependent and independent demand.
Dependent demand is the demand for processed or unprocessed items moving through the production line, while independent demand is the demand for the finished product, driven by external market factors.
How is it evaluated?
Even before netting the on-hand inventory or subtracting the demands based on the scheduled receipts, we calculate and fix the gross requirements.
It is the minimum amount of inventory required to keep the firm running smoothly. It does not consider the availability of raw materials in the inventory or any predetermined evaluation of scheduled receipts.
For example, the total input of a thermal power plant can be termed its gross requirement, measured in tons of coal required over each periodic production cycle.
The total amount of flour required to produce, say 40 pieces of bread at a roadside food joint would be its gross requirement. This is done without considering the leftover materials from the previous month, or any orders placed beforehand that are scheduled to arrive.
Why is it essential to figure out?
A firm needs to figure out its gross requirements capacity before it starts operating. It is the amount of raw material at which the firm can function at its full potential.
Thus, the amount of on-hand inventory and other inventories based on rising demands can be easily calculated based on the gross amount required.
Costs based on storage can be cut down if a company has a clear idea of the gross amount required to function. This gives an idea of the on-hand inventory required to avoid running a risk of huge loss due to loss of demand.
Similarly, due to loss of supply, a firm can lose out customers, if it is not stocked sufficiently for the future.
Characteristics of Gross Requirements
- The gross requirement does not vary from day to day, or even between two periodic production cycles.
- For a firm, the gross requirement reflects the position it holds in the market it serves: the higher the demand among customers, the higher the company's gross requirement.
- The gross requirement can be increased by improving the facility, the performance of production units, and the demand for the finished product. This is where materials requirements planning (MRP) can be implemented.
What is net requirement?
Net requirements are requirements for a product based on its gross requirements minus on-hand stock and scheduled receipt.
The difference between a gross requirement and a net requirement is as follows.
A net requirements plan adjusts for on-hand inventory and scheduled receipts at each level, whereas a gross requirements plan shows the total demand for a product and when production should start to meet that demand.
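The netting logic described above can be sketched in a few lines. This is a simplified single-period illustration; the function name and the numbers are ours, not from any particular ERP system:

```python
def net_requirement(gross, on_hand, scheduled_receipts):
    """Net requirement = gross requirement minus on-hand inventory
    and scheduled receipts, floored at zero (you can't need less
    than nothing)."""
    return max(gross - on_hand - scheduled_receipts, 0)

# Echoing the bakery example: suppose 40 loaves need 40 units of
# flour (the gross requirement), with 12 units on hand and 8 more
# scheduled to arrive.
print(net_requirement(40, 12, 8))   # -> 20

# If stock and receipts together exceed gross demand, nothing new
# needs to be ordered or produced.
print(net_requirement(40, 30, 15))  # -> 0
```

A full MRP run would repeat this netting period by period and level by level down the bill of materials, which is exactly the balance between gross and net requirements the article describes.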
Most manufacturing industries are all about maintaining the balance between the gross and net requirements, also keeping in mind the on-hand inventories and scheduled receipts.
Striking the balance is very important, and is the key to a thriving industry.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9788405895233154,
"language": "en",
"url": "https://www.legalsecretaryjournal.com/Justice_May_Now_Be_Out_of_Reach_for_Some",
"token_count": 689,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.41015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:06409a55-f88d-4f4b-808e-79035a5dd094>"
}
|
In April 2015, new charges came into effect which have dramatically increased the cost of court proceedings in England and Wales. Since these charges were put in place, there has been much protest from civil liberty groups and legal professionals. Over 50 magistrates across England and Wales have stepped down as a direct result of the charges. They believe that the increase in the price of justice violates the core principles of the Magna Carta (which incidentally celebrated its 800th birthday in 2015).
The idea behind increasing the fees was that the courts are necessary only if there are people who break the law, so the people who break the law should be the ones to front at least their fair share of the costs of running them. But there is much doubt as to whether the increase in fees will actually reap much extra income for the justice system, since the majority of defendants cannot afford to pay the fees anyway. In fact, some studies suggest that higher fees will actually cost the government more money. Defendants who can’t pay the charges could be sent to jail or have their sentences extended, which is estimated to cost an extra £5 million a year on top of the costs of running the courts.
But the rise in fees has started to affect the course of justice. Because the fees rise as the court cases proceed, defendants who plead their innocence but who eventually lose their cases will end up having to pay much more. So now defendants aren’t necessarily being given a fair trial, as increasing numbers of them are choosing to plead guilty in order to keep the costs down. Those who plead guilty to their charges at the magistrates’ court are required to pay only £150. This fee jumps to £1,200 for those who lose their case at Crown Court and are ultimately convicted. The fee is not calculated based on how much the defendant can afford to pay; it is a blanket fee which anyone in the same position will have to pay. This means that the punishment is disproportionate and it is the country’s poorest who will suffer.
The increase in the fees has been said to put unnecessary pressure on people who are struggling financially to plead guilty against their will. The fees also are taking away some of the options that magistrates have available to them in the course of their work; magistrates now have a much more limited ability to use fines as a deterrent for lesser crimes, because there is little or no chance that some defendants will be able to pay these on top of the increased fees.
There have been cases in which those convicted have been ordered by the court to repay upwards of £1,500 in weekly instalments of £5. At this rate, it will take these people more than five years to pay back the total amount owed, which in many cases may be a punishment far more severe than the offence committed.
It is thought that as more and more cases arise where the outcomes seem completely unfair and unjust to those being prosecuted, there will only be more resignations to come from those working in the legal profession. There has been no indication from the government that these changes are going to be reversed, so for the foreseeable future, the increased fees will continue to have a huge impact on the prosecution of the poorest members of English and Welsh society. Only time will tell whether the charges have any influence at all as a deterrent or simply make access to justice virtually impossible for those who cannot afford to pay.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9407145977020264,
"language": "en",
"url": "https://www.moneycowboy.net/2018/08/18/iota-and-vw-to-build-autonomous-system-cars/",
"token_count": 1759,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.000335693359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8adcf9b0-2e2d-4d35-b00a-27688361b087>"
}
|
What’s next in the evolution of autonomous cars? Based on Volkswagen’s recent demonstration just this past June at the CEBIT ‘18 Expo in Germany, there seems to be a lot in store for us.
In its presentation, the vehicle manufacturer joined forces with the IoT-focused IOTA distributed ledger project to show a proof of concept (PoC) for how the IOTA system can be used in autonomous cars. The PoC demonstrated how IOTA’s Tangle architecture can be used by car manufacturers such as Volkswagen to securely transfer software updates “over-the-air” as part of Volkswagen’s new “Connected Car” systems. The demonstration included a panel discussion entitled “Blockchain in Future Mobility.”
AUTONOMOUS VS. AUTOMATED
Not to be confused with “automated,” Jim Tung, a fellow at MathWorks, a leading developer of mathematical computing software, gives a clear distinction between automated and autonomous:
“Practically speaking, autonomy is the power of self-governance — the ability to act independently of direct human control and in unrehearsed conditions. This is the major distinction between automated and autonomous systems. An automated robot, working in a controlled environment, can place the body panel of a car in exactly the same place every time. An autonomous robot also performs tasks it has been “trained” to do, but it can do so independently and in places it has never before ventured.”
It is crucial for an autonomous car to be designed so effectively that it chooses the most favorable course of action in every unique situation. What this comes down to is having enough data, which is where IOTA comes into the picture.
WHAT IS IOTA?
IOTA is a revolutionary distributed and open source ledger. Rather than relying on the blockchain, it is based on its very own invention called a “Tangle.” The platform allows connected devices to transfer money numerically in the form of micropayments. Also, the platform has no transaction fees, which is optimal for a micro-payment infrastructure.
The IOTA team is driven by their vision to “enable all connected devices through verification of truth and transactional settlements which incentivize devices to make available its properties and data in real time. This gives birth to entirely new general purpose applications and value chains.” And now they are applying this vision to autonomous cars.
WHAT IS A “TANGLE?”
The Tangle is a new data structure based on a Directed Acyclic Graph. The system has a topological order that allows different types of transactions to run on different chains in the network simultaneously. For this reason it has no blocks, chain, or miners. This is a radically new architecture that greatly differentiates IOTA from other blockchains, allowing for zero transaction fees, secure data transfer, and infinite scalability.
When it comes to Volkswagen distributing data for its connected cars, the fact that different types of transactions can run on different chains simultaneously make this network highly useful.
Another significant difference in IOTA is how transactions are made and the fact that it requires consensus. Because there are no miners, transactions can only be made by participants actively engaging in the consensus of the network. They do this by approving two past transactions. In this way, the system ensures that the entire network achieves consensus on the current state of approved transactions.
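The approval mechanism described above can be made concrete with a toy model. This is illustrative only — the class and method names are ours, and real IOTA tip selection, proof of work, and signatures are all omitted — but it shows the core idea: each new transaction approves earlier unapproved transactions ("tips"), and a transaction becomes more trustworthy as more of the DAG builds on top of it.

```python
import random

class ToyTangle:
    """Toy model of a Tangle-style DAG. Each transaction records which
    earlier transactions it approves; 'tips' are transactions nobody
    has approved yet."""

    def __init__(self):
        self.approves = {"genesis": []}  # tx id -> ids it approves

    def tips(self):
        approved = {a for parents in self.approves.values() for a in parents}
        return sorted(set(self.approves) - approved)

    def attach_batch(self, tx_ids):
        """Attach several transactions 'concurrently': all of them see
        the same tip set, which is what makes the ledger branch into a
        DAG instead of collapsing into a simple chain."""
        tips = self.tips()
        for tx in tx_ids:
            # each new transaction approves (up to) two existing tips
            self.approves[tx] = random.sample(tips, min(2, len(tips)))

    def cumulative_approvers(self, tx_id):
        """How many transactions directly or indirectly approve tx_id —
        a rough proxy for how 'confirmed' it is."""
        count = 0
        for parents in self.approves.values():
            seen, stack = set(), list(parents)
            while stack:
                p = stack.pop()
                if p not in seen:
                    seen.add(p)
                    stack.extend(self.approves[p])
            count += tx_id in seen
        return count

tangle = ToyTangle()
tangle.attach_batch(["tx-a", "tx-b"])  # both approve genesis
tangle.attach_batch(["tx-c"])          # approves both tx-a and tx-b
print(tangle.cumulative_approvers("genesis"))  # -> 3
```

Because every participant who transacts also approves others, validation work scales with usage — the property that lets IOTA drop miners and transaction fees.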
The IOTA website lists a range of unique features that are enabled by its architecture:
- Scalability: IOTA can achieve high transaction throughput thanks to parallelized validation of transactions with no limit as to the number of transactions that can be confirmed in a certain interval
- Decentralization: IOTA has no miners. Every participant in the network that is making a transaction, actively participates in the consensus. As such, IOTA is more decentralized than any Blockchain
- Quantum-immunity: IOTA utilizes a newly designed trinary hash function called Curl, which is quantum immune (Winternitz signatures)
- No Transaction Fees: This is particularly optimal for a micro-payment infrastructure
Learning how to implement this technology effectively will be a process but, once refined, it shows great potential for security and transparency. For example, data integrity in the context of vehicles is pivotal to safety. Even in 2016 the FBI warned consumers of the threat that malicious parties present in exploiting vehicle software.
Recent data by the SANS Institute Infosec Reading Room further demonstrates that it is possible to inject malicious code into vehicle software updates. Threats can include disabling or interfering with power steering, overriding acceleration, applying brakes at any speed, and even tightening seat belts.
“Distributed Ledger Technologies (DLT) are crucial for the future of trusted transactions. IOTA has great potential to become a DLT leader with the Tangle approach,” commented Johann Jungwirth, Volkswagen’s Chief Digital Officer and a member of the Supervisory Board of the IOTA Foundation on the potential importance of blockchain technology in data-sensitive industries.
However, the two-year legal battle between Volkswagen and a team of European security researchers indicates that achieving optimal use of this technology will not come without trials and tribulations. When the researchers had uncovered the details of a security flaw present within the Volkswagen remote keyless vehicle entry system, they tried to publish this information but were hindered by Volkswagen, which used litigation to try to keep things quiet. Now, Volkswagen’s current work with the IOTA project steers in the direction of transparency in the hopes of gaining digital trust with customers, authorities, and third parties.
WHAT TO EXPECT IN THE FUTURE
IOTA’s big dream is a fully autonomous machine economy in which IoT devices can communicate and transact with each other through the Tangle. This applies directly to the self-driving car. Earlier this year, the IOTA team announced the successful operation of the world’s first vehicle charging station that uses IOTA for charging and payment. IOTA also plans to integrate its technology into a Mobility-as-a-Service (MaaS) system, using its distributed ledger for reservations, trip planning, and payment services within Volkswagen’s autonomous-vehicle ecosystem. Additionally, IOTA joined the substantial Mobility Open Blockchain Initiative (MOBI) for the transport industry earlier this month; in doing so it joins other manufacturing giants including Ford, GM, BMW, and Renault, along with IBM, Bosch, and Hyperledger.
According to Jungwirth, the IOTA platform will “allow connected devices to transfer money numerically in the form of micropayments,” which is very advantageous for the future of IoT.
As for Volkswagen, its PoC press release describes a vision of using the Tangle to distribute data wirelessly and securely within its developing smart-car economy. By 2020, over 250 million connected cars are expected to be on the road, underscoring the need for frequent remote software updates and transparent access to data.
IOTA’s partnership with Volkswagen holds implications far beyond connected cars. On its website, the IOTA team describes its technology as “the missing puzzle piece for the Machine Economy to fully emerge and reach its desired potential.” They envision IOTA “to be the public, permissionless backbone for the Internet of Things that enables true interoperability between all devices.” An autonomous system for cars is just one step toward their much larger vision, and how they implement this system could pave the way for future devices.
As MathWorks fellow Jim Tung explains, “there is no one-size-fits all approach to designing — or defining — autonomous systems. In some cases, the goal is to remove human engagement. In others, it’s to augment our physical and intellectual abilities. In all instances, however, the utility of autonomous systems is bound by how much data is collected and what value can be extracted from that data.”
Volkswagen’s progress with the IOTA system could set a precedent for future use of distributed ledger technology at large. It will be interesting to watch how these autonomous systems evolve and where the lines will be drawn.
This article was originally posted on Mintdice. Republished with permission.
In the previous section, we laid out an argument that democracy is the best form of government. But a second question remains: how large should the government be?
In determining the best size of government, we should first note that both governments and markets do the same thing: they exchange goods and services for money. For example, a customer may pay $10 for a restaurant dinner, whereas a citizen pays tax money for police protection.
But if they both do the same thing, then why not let the market do it all? Or why not let the government do it all? The answer is because it depends on the goods and services being offered. Governments and markets are better suited for providing different things.
Below is a comparison of how government and markets make transactions. First we'll describe the general model, and then show how both the market and government fit the model. To make the comparison easier, letters will mark the appropriate analogs:
The General Model: A group (A) delegates power to individual providers (B) within an institution (C) to provide goods and services in exchange for money (D). The group has their choice of many providers competing to provide them goods, and they give consumer satisfaction units (E) to their preferred choice. Those providers receiving a sufficient number of units will be delegated to power (F), and those that do not will be denied power (G). This competition keeps prices down, quality high, and incompetent providers out of the system.
The Market: Customers (A) delegate power to individual companies (B) within the market (C) to provide goods and services in exchange for money (D). Customers have their choice of many companies competing to provide them goods, and they give dollars (E) to their preferred choice. Those companies receiving a sufficient number of dollars will stay in business (F), and those that do not will go bankrupt (G). This competition keeps prices down, quality high, and incompetent companies out of the market.
Government: Citizens (A) delegate power to individual representatives (B) within government (C) to provide goods and services in exchange for taxes (D). Citizens have their choice of many candidates competing to provide them goods, and they give votes (E) to their preferred choice. Those candidates receiving a sufficient number of votes will be elected to office (F), and those that do not will be denied office (G). This competition keeps prices down, quality high, and incompetent representatives out of government.
The fact that customers vote with their dollars while citizens vote with their votes is an important difference with enormous implications. Consider how this difference affects the issue of natural monopolies:
In any marketplace, competition is essential to keep things efficient. Providers who have no competitors are called monopolies. Economists consider monopolies to be a market failure, because monopolies can raise prices, drop quality, and receive extra profits for nothing. People could better spend this wasted money elsewhere, on things that actually raise their standard of living.
Monopolies arise in several different ways, but a common one is the natural monopoly. This is a monopoly where competition is prevented by the very nature of the market or technology itself. Examples include telephone, electrical, gas and water utilities. The only way these services could see competition would be to install competing electrical lines and water pipes in the neighborhood, an absurd and wasteful idea. Because private competition is not desirable, public competition is the best solution. Governments restore competition to natural monopolies because the elected officials running them must compete for votes. Most nations allow their governments to run their natural monopolies directly, but the U.S. has a hybrid system, in which private utilities are publicly regulated to avoid monopolistic abuse.
Sometimes improved technology can turn a natural monopoly into a competitive marketplace, as in the case of cable TV eroding the monopoly power of network TV, or fiber optics introducing competition to long-distance phone service. But new natural monopolies are always arising, often created by new technology. For example, the invention of cars created the natural monopoly of roads. (You can't have several competing roads leading to your door). The result is that the number of natural monopolies in the economy remains fairly constant, even if their constituency changes.
Utilities are not the only example of natural monopolies. Most public goods are natural monopolies as well.
Public and Private Goods
To understand this part of the debate, it's important to distinguish between a public and private good. A public good is non-exclusive and non-rival. Non-exclusive means that it's difficult to keep non-payers from consuming the good. Non-rival means that one person's consumption doesn't subtract from another person's consumption of the same good.
The classic example of a public good is national defense. National defense, once established, protects payers and non-payers alike. And one person's enjoyment of national defense is not decreased by an immigrant who enters the country and enjoys it also. In other words, once the nation is defended, it doesn't cost more to protect 200 million citizens than 100 million.
By comparison, a merchant selling apples is selling a private good, because he can exclude non-paying customers from consuming his apples. And every bite of an apple that a paying customer eats is one less bite available to others.
As it turns out, private markets cannot provide most public goods. The reason is the free-rider problem. Suppose private companies, not government, supplied our national defense. Customers would pay these companies to defend the nation, and their decision to buy the protection would be voluntary, otherwise it would not be a free market. Unfortunately, many citizens could decide to take a free ride, enjoying national defense for free while others pay for it. But if everyone took advantage of this, no one would pay for national defense at all.
Public goods are best provided by public institutions like government. The government requires citizens to pay for the good by law; citizens then become forced riders, or compelled taxpayers. This "coercion" is justified because the majority of voters prefer it to the alternative, which is defeat and enslavement by the Hitlers and Stalins of the world.
Examples of public goods include environmental protection, public parks, law and order, standardizing weights and measures, a common education, a common language, public health, printing and controlling a national currency, and more. Examples of public goods provided by private merchants include fireworks displays and street musician performances although getting paid for these services by all who enjoy them is impossible.
The ultimate public good: law and order
Imagine a land with no law and order. Everyone would be free to commit violence and aggression without worrying about police retaliation. Greed would spur individuals to rob, cheat and steal at every opportunity. Jealous lovers could kill with impunity. Nothing could stop your neighbor from driving you off your land and taking your property, except your own use of defensive force.
In such anarchy, only the fittest and luckiest would survive. But even after these survivors won their first battles, they would only find themselves in a new round of conflict, this time against proven and battle-tested survivors. The price of continual war isn't worth it, even to the survivors. Society avoids this bleak scenario by agreeing to cooperate for survival, or at least limiting the competition to fairer and less harmful methods. This more stable and peaceful approach makes everyone richer in the long run.
But cooperation requires rules that everyone lives by. Unfortunately, private markets cannot provide such law and order. Take, for example, the law against murder. How could the market enforce such a law? With government, the answer is simple: the police enforce it. But how would the free market provide police protection? Some libertarians have proposed imaginative solutions, like having private police agencies compete on the free market. You might subscribe to Joe's Security Forces, and I might subscribe to Bill's Police Agency. But suppose one day I steal your car. You could call your police agency to come and arrest me. But I could claim the car is rightfully mine, thanks to a bad business deal between us, and call my own police agency to defend against your theft of my property. The result is tribal warfare. What's worse, the richest citizens would be able to afford the largest private armies, and use them to acquire yet more riches, which in turn would fund yet larger armies. Libertarian scholars have attempted to save their idea with even more imaginative arguments, but the exercise only proves the unworkability of the idea, and the vast majority of scholars reject the whole approach.
The folly of this exercise becomes even more apparent when you consider how the free market would provide the law itself. Again, some libertarians propose private legislative companies competing on the free market. By paying a legislative company a few hundred dollars a year, you could buy whatever slate of laws you would like to live by. Unfortunately, two people might claim sole ownership of the same property, and point to their different slate of laws awarding them ownership. In that case, the law is of no help in identifying the true owner, and the two parties are left to negotiate. These negotiations would occur under conditions of anarchy, and the side with the most power, influence or police force would win the negotiations. This would be a society of power politics, where might makes right.
True law and order can only be provided by a single entity covering the entire group in question. That is, law and order is a natural monopoly. A single private company can't run this natural monopoly for two reasons. First, it would have no competition, unlike government, which could restore competition through voting. In other words, governments are democracies, but private companies are dictatorships, and if only one company provides law and order, you might as well have a monarchy. Second, true law and order is also a public good, much like national defense, but one that offers protection against internal enemies instead of external ones. Free riders could enjoy the benefit of the private company's law and order without paying for it. Having democratic government provide law and order is the only way to solve these problems.
The true extent of law and order
When most people think of "law and order," they generally think of police officers fighting street crime. However, the most important laws in society are actually the laws that set up our social, property and business systems.
For example, business laws protect us against fraud, false advertising, breach of contract, copyright infringement, embezzlement, insider trading, monopolistic abuse, unfair market manipulations and hundreds of other ills that would occur under true anarchy. Without business laws, the market could not even operate. For example, if we did not have copyright laws discouraging people from pirating all their software, computer programmers could not even make a profit, and would have no incentive to produce.
Property laws protect us against theft, invasions of privacy, trespassing, pollution, vandalism, and disputes over property boundaries and ownership. Without these laws, we would have no stable system of private property.
Social laws guarantee our freedom of speech, religion, press, ballot box, due process, and equal rights. Without these laws, we would not live in a free society, but in tyranny.
Again, the free market could not provide these public goods without suffering from free riders and tribal warfare. This leads to an important conclusion: the public sector creates the rules that the private sector needs to operate.
Another irreplaceable role of government is providing national infrastructure, which includes roads, electricity, telecommunications, postal systems, and other large-scale underpinnings of the national economy. Historically, private enterprise has been unable to afford building national infrastructure. Only government has the pockets deep enough to fund such huge projects. Almost always, these projects lay dormant or underdeveloped until the government takes them up, and then progress is rapid.
Nor would we want private companies so large that they could provide national infrastructure; any company that large would surely be a monopoly, for competitors of equal size would be a waste of the nation's resources.
The classic example is road building. Private companies tried building toll roads and turnpikes in the early 1800s, but the projects were not viable. Most companies lost money in the long run, and only a few made slim profits. As a result, Americaís road system languished. But a dramatic boost in road building came with Eisenhower's Federal Aid Highway Act of 1956, which authorized the creation of over 40,000 miles of interstate highway. These highways expanded, interconnected and accelerated the U.S. economy, with profound results. They allowed the middle class to migrate from the cities to the suburbs, with an enormous increase in privacy and quality of life. They also breathed new life into commerce.
Another reason why governments are better at road building is eminent domain. This is the power to build roads where they are logically needed, by compelling land owners to sell their property at fair market values. Critics protest the coercive nature of eminent domain, but consider the alternative. If private road-building companies asked landowners to sell their property voluntarily, roads would either not be built at all, or they would zigzag crazily across the map.
Why? Because some property owners would not sell their land at any price, for reasons of sentimentality, convenience, stubbornness, or misjudgment. Others would jack up their price tenfold or a hundredfold, knowing how keenly, say, two cities would like to connect to each other. Some libertarians argue that such a high asking price would reflect the true value of the land between the two cities, if they were willing to pay it. But the problem with that argument is that if every individual landowner asked an astronomical sum, the total costs of the project would skyrocket. The costs might easily exceed the budget of the road-building company. And they would certainly make tolls skyrocket, reducing the potential economic benefit and activity between the two cities, and diverting it instead to the former landowners who do not produce anything more for their windfall. So eminent domain makes society richer in the long run.
Highways are but one example of how publicly funded infrastructure has increased commerce. Others include:
Settling the West: The U.S. government played a primary role in settling the West. It conducted massive land purchases like the Louisiana Purchase ($15 million), the Texas/California purchase ($25 million), and others. It then gave the land to American settlers for a song, thanks to the Homestead Act and other giveaways. Conquest, where it occurred, was done primarily by the U.S. Army, not gun-toting pioneers. The government also subsidized the Wells Fargo postal routes, agricultural colleges, rural electrification, telegraph wiring, road-building, irrigation, dam-building, farm subsidies, and farm foreclosure loans.
Funding Railroads: In the late 19th century, the government gave away 131 million acres in federal land grants, at enormous cost to itself, to railroad companies to build their railroads. Four of the five transcontinental railroads were built this way. To help them, Congress authorized loans of $16,000 to $48,000 per mile of railroad (depending on the terrain).
Rural Electrification: In 1935, only 13 percent of all farms had electricity, because utility companies found it unprofitable to wire the countryside for service. Roosevelt's Rural Electrification Administration began correcting this market failure; by 1970, more than 95 percent of all farms would have electricity.
U.S. Mail: Many people think that the privately owned UPS, which delivered 3 billion pieces of mail in 1997, is America's postal success story. But this figure pales in comparison to the U.S. Postal Service, which delivered 190 billion pieces of mail that same year. The U.S. Postal Service also achieves a 91 percent on-time delivery rate while charging among the lowest rates in the industrialized world. No private organization could hope to match these numbers. It is also interesting to note that the privately funded Pony Express was a financial failure that lasted only a few years. The government subsidized the Wells Fargo Company, which succeeded in delivering mail to California for the rest of the 19th century.
The Internet: In the 1960s, the government created ARPANET, which was used and developed by the Defense Department, public universities and other research organizations. In 1985, the National Science Foundation created various supercomputing centers around the country, linking the five largest together to start the modern Internet we know today.
NASA: Thanks to America's space program, today we have a fleet of satellites that conduct global telecommunications, weather observation and warning, ozone and global warming studies, intelligence missions, high-resolution and high-accuracy mapping, as well as detection of forest fires, oil spills, El Nino events, natural disasters and earth-threatening asteroids. Space exploration was so inherently difficult that it took decades and hundreds of billions of dollars before the practical benefits became possible. Private companies could not have possibly afforded such investment, or waited so long until it bore fruit.
The Treasury and Federal Reserve System: The Treasury prints the very money the economy runs on. And using Keynesian policies to expand or contract the money supply, the Fed has completely eliminated economic depressions in the last six decades.
Federal Emergency Management Agency: Today FEMA has won widespread praise for its response to natural disasters like earthquakes, hurricanes, floods and tornadoes. No private business could wait the long intervals between disasters like FEMA does, or bring relief to entire cities or states.
Human Genome Project: The government provides the money and the organization for this 20-year project, which will give medical science a road map of the human genetic code. Researchers have already found genes that contribute to 50 diseases.
Centers for Disease Control and Prevention: This legendary American organization, popularized by the movie Outbreak, isolates and wipes out entire plagues and diseases that strike anywhere in the world. "The CDC," says Dr. James Le Duc of the World Health Organization, "is the only ballgame in town."
Mass education: This is probably the most remarkable example where the government overcame a market failure. Prior to the 1840s, the vast majority of Americans were illiterate. What few schools existed were private schools that educated boys only from the richest families. However, during the 19th century, the government began funding mass education at both the elementary and high school level. Between 1900 and 1996, the percentage of teenagers who graduated from high school mushroomed from 6 to 85 percent. The government also began issuing grants and loans for college education, and college enrollees aged 18 to 24 mushroomed from 2 to 60 percent. In essence, the government is responsible for the educated workforce that causes today's economy to excel.
Finally, government is useful for correcting market failures. Economists define market failure as "an imperfection in the price system that prevents the efficient allocation of resources." There are many types of market failure; here are the definitions of the most important ones:
Asymmetric Information: This is any difference in information and expertise between two negotiating parties. For example, in the used-car market, the seller's information is based on sales that he conducts every day, but the buyer's information is based on a purchase he conducts only a few times in his lifetime. The resulting exchange is likely to be unfair or one-sided.
Adverse Selection: This is any unfair exchange based on asymmetric information.
Externality: Also called the spillover effect. This occurs when someone other than the buyer shares the costs or benefits of the product. The classic example is pollution. Factories can either treat pollution, which costs money, or dump it for free into the air or water. If they dump it, then not only are customers paying a price for the product, but local citizens too, in the form of higher mortality and disease rates, less fertile land, environmental catastrophes, etc. Sometimes the spillover effect is both positive and negative. An airport benefits its flying customers, but it also subjects the local neighborhood to various externalities. Positive ones include increased local business; negative ones include noise pollution.
Imperfect competition: This is any situation where a monopoly or oligopoly controls the market for a certain product. The lack of competition raises prices, lowers quality, slows down innovation and exploits customers.
Path dependency: This is the tendency to stick to a certain path, trend, technology, method or location, even after more promising alternatives appear. The most commonly cited and now disputed example is the QWERTY typewriter keyboard. This 19th century system placed the most commonly used letters far apart on the keyboard, purposely slowing down typing to avoid key jamming. Of course, today's electronic keyboards do not suffer from jamming, and a better system, DSK, cuts down on typing time by 10 percent. Unfortunately, society is committed to the old system, because it is too costly to retrain all typists and retool all keyboard production everywhere. Conservatives have raised objections to the QWERTY example, but path dependency has been found in thousands of other places in the economy as well. Examples include the English vs. the metric system, steam vs. gas engines, water-cooled vs. gas-cooled nuclear reactors, and the centralization of entire industries in a single city, like auto production in Detroit, or aircraft production in Seattle, or movie-making in Hollywood.
Failure to provide public goods: As outlined above, free markets cannot provide most public goods, or goods that are non-exclusive and non-rival. Attempts to do so result in a free-rider problem, where consumers may enjoy the good without paying.
Because markets are the cause of market failures, it follows that markets cannot correct them. But they are solvable by government. For example, governments can educate consumers, regulate polluters, break up monopolies, subsidize retraining, retooling or relocating programs, and provide public goods like national defense.
Once you consider all the goods and services that only government can provide (or provide well), it should become clear that government plays an extensive, beneficial and irreplaceable role in society. Conservatives and libertarians who wish to scale back government would only create more problems than they solve.
The advantages of markets
Markets do have their advantages over government, depending on the type of goods and services offered. Markets are better at handling most private goods. Why? It is a truism that democracy only works when the people are educated. Voters would be overwhelmed trying to educate themselves on the best prices for bicycle parts, the best safety features for surgery or what 32 flavors an ice cream store should sell. It is easy to see that a lot of ignorant votes would be cast in a system where voters attempted to run every aspect of the economy. In a free market, customers can become experts only on the things they want to buy, and then vote with their dollars.
Under the current (and imperfect) system, markets also have other advantages of specificity. First, elections take place only once every two or four years, so consumer choice mechanisms are much weaker in government. (This could be solved by holding more frequent elections, initiatives and referendums.) Also, markets allow people to vote for very specific things like Ben & Jerry's ice cream over Haagen Daz. In an election, people vote on generalities like a politician's overall record, which may include disagreeable as well as agreeable policies. (This, too, could be resolved by allowing voters to vote on more specific issues and offices.)
In the final analysis, the correct ratio of government to market has a logical answer, based on the above considerations. We'll explore more of these considerations in greater detail throughout this FAQ.
With an increasing number of blockchain users, there is a need for the blockchain to be scalable. Here is a little history of Bitcoin. In 2008, an anonymous individual or group of persons under the name Satoshi Nakamoto released a whitepaper on Bitcoin. The following year, Bitcoin was created, and over the years it has become one of the biggest currencies of our times. As of today, Bitcoin is worth over $11,000. However, Bitcoin is labeled as the biggest bubble of this age, built on the genesis technology of our times. Many people are gradually becoming interested in blockchain because it has many use cases and the potential to improve processes in different industries. Digital currencies are just a part of the blockchain story, not the full picture.
One of the features that distinguish the blockchain from traditional databases is transparency, which establishes the integrity of the blockchain: anyone can verify any transaction on it. Another blockchain advantage is decentralization: it is not controlled by a single person, so there is no single point of failure. The network runs between decentralized nodes rather than a central authority. Think of the blockchain as a linked list with hash pointers, starting from a genesis block. The genesis block connects to the next block of data, each block links to its successor, and the hash pointers let anyone confirm that the data in earlier blocks has not been corrupted.
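The hash-pointer structure described above can be sketched in a few lines of Python. This is a toy illustration only, not a real blockchain; the field names and the choice of SHA-256 are assumptions for the demo:

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(data_items):
    """Build a toy chain where each block stores the hash of its predecessor."""
    chain = []
    prev_hash = "0" * 64  # the genesis block has no predecessor
    for data in data_items:
        block = {"data": data, "prev_hash": prev_hash}
        chain.append(block)
        prev_hash = block_hash(block)
    return chain

def verify_chain(chain):
    """Tampering with any earlier block breaks every later hash pointer."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash:
            return False
        prev_hash = block_hash(block)
    return True

chain = make_chain(["tx-a", "tx-b", "tx-c"])
print(verify_chain(chain))          # True
chain[0]["data"] = "tx-tampered"    # corrupt the genesis block
print(verify_chain(chain))          # False
```

Because each block commits to the hash of the block before it, corrupting any block invalidates every pointer after it, which is exactly the integrity property the article describes.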
The blockchain validates transactions. Validating transactions on a decentralized platform is more complex and this brings in consensus mechanisms. Consensus simply means to reach an agreement about value within the nodes on the network.
Blockchain versus the mainstream
In the case of Bitcoin, consensus involves getting nodes to solve a computationally hard mathematical puzzle. This type of consensus is termed the proof-of-work mechanism (or mining). When a miner successfully solves the puzzle, he can write to the blockchain. Bitcoin is simply one example of how the proof-of-work consensus mechanism works. All systems need some kind of consensus mechanism to resolve conflicts.
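The puzzle miners solve can be illustrated with a minimal proof-of-work loop: find a nonce whose hash falls below a difficulty target. This is a simplified sketch; real Bitcoin mining hashes a binary block header with double SHA-256 against a numeric target, not leading-zero hex digits:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce whose hash has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("block with transactions", difficulty=4)
digest = hashlib.sha256(f"block with transactions:{nonce}".encode()).hexdigest()
print(nonce, digest[:8])
```

Finding the nonce takes many hash attempts on average, but any node can verify the answer with a single hash, which is what makes the scheme usable for consensus.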
Two factors determine how quickly transactions can be written to the blockchain: the block size and the block interval. A block contains several transactions, and its size is capped at 1 MB. The inter-block interval is set to about 10 minutes, which depends on how fast the miners can solve the puzzle. This interval gives each newly mined block time to propagate across the network before the next block arrives to be authenticated. Because of the block size and the inter-block interval, confirmation of transactions is capped at about 7 transactions per second. Also, a block becomes effectively permanent and irreversible once new blocks are added on top of it.
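The roughly 7 transactions-per-second figure follows directly from the block size cap and the block interval. A quick back-of-the-envelope check (the 250-byte average transaction size is an assumption; real averages vary):

```python
BLOCK_SIZE_BYTES = 1_000_000      # 1 MB block size cap
AVG_TX_SIZE_BYTES = 250           # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600      # ~10-minute target interval

txs_per_block = BLOCK_SIZE_BYTES // AVG_TX_SIZE_BYTES
tps = txs_per_block / BLOCK_INTERVAL_SECONDS
print(txs_per_block, round(tps, 1))  # 4000 6.7
```

Around 4,000 transactions fit in a block, and dividing by the 600-second interval yields roughly 7 transactions per second, consistent with the cap quoted above.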
In summary, this is very slow compared to mainstream payment systems like Mastercard and Visa, which can process over 20,000 transactions per second with a latency of a few seconds. The best that Bitcoin can do is 24 transactions per second and a latency of about 12 seconds.
Cartesi as a solution to this scalability issue
In a typical scalable system, more nodes should mean more verified transactions. With a blockchain, however, the reverse is the case because of the consensus needed to confirm each transaction. Traditional systems achieve scalability by sharding data. Sharding means splitting data into multiple groups within the network, and these smaller committees take charge of managing transactions. This reduces the pressure on each node to handle every transaction.
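The idea of splitting work across committees can be sketched with a simple hash-based shard assignment. This is illustrative only; production sharding schemes must also handle cross-shard transactions, committee rotation, and rebalancing:

```python
import hashlib

NUM_SHARDS = 4

def shard_for(tx_id: str) -> int:
    """Deterministically map a transaction to one of NUM_SHARDS committees."""
    digest = hashlib.sha256(tx_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# Distribute a batch of transactions: each shard only sees its own subset.
transactions = [f"tx-{i}" for i in range(12)]
shards = {s: [] for s in range(NUM_SHARDS)}
for tx in transactions:
    shards[shard_for(tx)].append(tx)

for shard_id, txs in shards.items():
    print(shard_id, txs)
```

Because the assignment is a pure function of the transaction ID, every node agrees on which committee owns which transaction without any extra coordination.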
Improving blockchain scalability is an open research area, and Cartesi offers a solution to this major challenge. With Cartesi it is possible to run intensive computation off-chain while retaining the security of the blockchain. The Cartesi core will provide components that specify and verify computations off-chain. One of the challenges Dapp developers encounter is the usability of these applications in real-life scenarios: applications are rarely scalable because of the rigid and inconvenient environment of the blockchain. Cartesi will also address the data storage challenge by keeping on-chain Merkle tree hashes of off-chain data.
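As a rough illustration of that last point (this is a generic textbook Merkle tree, not Cartesi's actual implementation), only a fixed-size root hash needs to live on-chain, while the data itself stays off-chain:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks: list) -> bytes:
    """Merkle root over a non-empty list of off-chain data chunks."""
    level = [_h(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:               # odd count: duplicate the last hash
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

chunks = [b"off-chain chunk 1", b"off-chain chunk 2", b"off-chain chunk 3"]
root = merkle_root(chunks)
print(len(root), root.hex())  # a single 32-byte digest commits to all chunks
```

Any later change to any chunk changes the root, so the on-chain hash lets anyone verify the integrity of the off-chain data.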
What do you think about "sharding" as a way of scaling the blockchain? The blockchain is here to stay, but it still has a long way to go from where we are right now to outshine traditional databases.
Cartesi’s Ecosystem Links:
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.942396879196167,
"language": "en",
"url": "https://esdnews.com.au/report-gas-best-way-to-unlock-wind-and-solar/",
"token_count": 727,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.025634765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2d32ba4a-4ae2-4fbc-8500-64305563f613>"
}
|
A new report from Frontier Economics has found that gas is a critical element in the electricity system’s net-zero future, as it is best placed to secure the energy system during renewable droughts – situations where renewable generation is unavailable to meet energy demand over a prolonged period.
The report, titled Potential for Gas Powered Generation to Support Renewables, uses South Australia's experience to understand the role gas-powered generation plays in a system with a large amount of renewable generation. The report found that gas's role in providing firming power is often undervalued in long-term investment modelling, and that emissions from gas-powered generation – brought online to support renewable generation – are likely to be very low.
Welcoming the report today, Jemena's managing director Frank Tudor said it demonstrates the crucial role gas will continue to play in delivering reliable, affordable, and sustainable energy.
“This report demonstrates that the most efficient way of achieving net-zero emissions in the electricity sector is to ensure gas-powered and renewable generation work in concert with one another, with gas providing crucial firming power when renewable generation is unavailable, particularly over prolonged periods when other firming solutions such as battery and pumped hydro will have depleted,” Mr Tudor said.
“Modelling in the report demonstrates that the gas/renewable generation partnership is also the cheapest way of achieving net-zero emissions, with total resource costs reducing by as much as 36 per cent when gas-powered generation is used to support renewables.”
Mr Tudor said businesses like Jemena are prepared to continue investing in their infrastructure, but called for greater importance to be placed on the insurance role provided by gas-powered generation.
Decarbonising Australia’s gas network
Mr Tudor said a number of trials around the country are testing the application of renewable gases – including hydrogen and biomethane – in residential and commercial, storage, and transport settings. These trials offer a pathway towards decarbonising Australia’s gas networks, while also developing a potential renewable gas export industry.
“Through partnerships with the Australian Renewable Energy Agency (ARENA), Sydney Water, and others Jemena is on track to produce green, zero-carbon, hydrogen and biomethane which can be injected into our gas distribution network in New South Wales,” Mr Tudor said.
“Our Australian-first biomethane project will generate around 95 Terajoules of renewable green gas per year, which is enough to meet the gas demand of approximately 6,300 homes. If proven, we believe around 30,000 Terajoules of biomethane can be produced which is enough gas to meet the needs of our 1.4 million customers in New South Wales.
“Similarly, our hydrogen project – the Western Sydney Green Gas Project – will consider the role of hydrogen in residential and importantly transport settings. We believe hydrogen has great potential in powering haulage, public transport, and other large vehicles which are required to travel long distances without refuelling or recharging. Additionally, hydrogen can be easily blended into the gas distribution grid to give customers the option of accessing renewable gas.”
Mr Tudor said Jemena and ARENA are investing $15 million in the Western Sydney project, which is expected to deliver later this year, while Jemena’s biomethane project will receive $14 million in funding, comprised of $5.9 million in grant funding from ARENA and $8.1 million in funding from Jemena.
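As a rough sanity check of the quoted gas volumes (assuming 1 terajoule = 1e12 joules; the per-home and per-customer splits below are simple averages computed from the article's figures, not claims from the report itself):

```python
TJ = 1e12  # joules per terajoule

# Biomethane project: 95 TJ/year said to cover ~6,300 homes.
gj_per_home = 95 * TJ / 6_300 / 1e9
print(round(gj_per_home, 1), "GJ per home per year")          # ≈ 15.1

# Claimed potential: 30,000 TJ for ~1.4 million customers.
gj_per_customer = 30_000 * TJ / 1_400_000 / 1e9
print(round(gj_per_customer, 1), "GJ per customer per year")  # ≈ 21.4
```

The two averages are the same order of magnitude, which is consistent with the article's claim that 30,000 TJ would be enough for the full customer base.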
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9721797704696655,
"language": "en",
"url": "https://togetherwomenrise.org/programfactsheets/educate-the-children-etc/?mode=grid",
"token_count": 3080,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.447265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:dbea743a-37b2-4409-8909-20ce7982e265>"
}
|
Educate the Children (ETC) works with women and children in Nepal to improve health, welfare, and self-sufficiency by building skills that families can pass down to later generations.
Life Challenges of the Women Served
The total population in the Ramdhuni Municipality in the Sunsari District, Nepal is 20,178 people living in 4,427 households. About 16 percent of households are female headed. More than half the total population is living below the poverty line, which is about $900 per year. (At an average household size of 4.56 people, that comes to approximately 54 cents per day, per person.) Annual incomes are as low as $540. More than 80 percent are involved in agricultural work for pay, although this is most often as tenants or wage laborers for relatively wealthy landowners, because while the land is fertile and productive, ownership is highly concentrated in the hands of a relatively few wealthy people.
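The parenthetical figure above can be verified directly from the numbers given:

```python
poverty_line_usd_per_year = 900
avg_household_size = 4.56

per_person_per_day = poverty_line_usd_per_year / avg_household_size / 365
print(round(per_person_per_day * 100), "cents per person per day")  # → 54
```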
Food security is a real problem for most families. For example, according to the Asian Development Bank, 9.5 percent of children under age five were considered undernourished during the period 2015-2017, and 36 percent were considered to suffer from stunted growth in 2016. Only 31 percent of households raise enough food to last their families for nine or more months; 55 percent raise enough to cover zero to four months’ worth of food needs. Kitchen gardens could be grown even at homes of landless or nearly landless families, but only 17 percent of households were cultivating kitchen gardens. Most people own only one or two livestock animals at most – sample averages being 1.87 goats per family and 1.5 chickens per family.
There are about five microcredit organizations and several women’s cooperatives active in the area. However, these institutions are not addressing the problem of cyclical poverty, and in fact are arguably contributing to it, because they do not emphasize either income generation/enhancement or savings. Many people take out one high-interest loan to pay off another, thus further strengthening the grip of cyclical debt and poverty. During ETC’s visit to the project area in February 2019, only 16 percent of households surveyed were found to have no debt at all.
The causes of these problems are very deeply rooted and related to traditional sex roles and the caste system, among other factors. They include:
- Very uneven resource (land) ownership, i.e., most land is owned by relatively few people.
- Cyclical poverty exacerbated by children dropping out from school at all grade levels.
- The ongoing practice of child marriage, despite it being illegal.
- Relatively large household size.
- Few or no opportunities to learn how to increase earnings and make better lives other than by taking higher paying but possibly dangerous jobs in the big cities or abroad, which in any case is not an option open to most people.
ETC works intensively for a period of several consecutive years in areas of Nepal where their services are needed and wanted. For this project they will add four rural wards of Ramdhuni Municipality, Sunsari District, more than ten hours’ drive to the southeast of Kathmandu.
They work in these predefined geographic areas providing training and resources as well as helping residents develop the leadership skills and confidence that will enable them to manage the activities on their own. During the final year, ETC begins to phase out their direct involvement, and by the end of the year village residents can manage and support the ongoing activities; ETC then begins working in a new set of villages.
Among the first things ETC does upon beginning a new program cycle is to establish the women’s groups, which are the foundational structures of the work. These groups consist of 15-25 women, are formed based on geographic proximity, and choose their own officers. Members attend monthly meetings and contribute to their groups’ microcredit funds – perhaps only a penny or so per month at first, because they are unable to afford more. The women support each other in their efforts to gain practical skills such as literacy and agricultural training. All have access to loans at reasonable rates, which many use to start and grow their own small businesses, usually of an agricultural nature. The typical loan size is about $50, which is enough to allow women to purchase chicks, seeds/tools, or other necessary supplies for income generation. Toward the end of the multi-year program cycle, women’s groups are combined by geography into legally recognized and indefinitely sustainable cooperatives, which have their own officers and management committees.
ETC’s integrated community development program (ICD) model includes three components (the exact activities for which depend on the needs of the target population and on other factors such as soil/weather conditions): women’s empowerment, children’s education, and sustainable agricultural development.
First, through this project and participation in ETC-sponsored women’s groups (and later, legally recognized cooperatives managed by the members themselves), women will learn literacy and numeracy, gain business skills such as handling money and keeping records, establish small businesses to help support their families, increase their self-confidence, and become more active and respected in their communities.
Second, ETC training and resources improve nutrition and food security and increase families’ incomes. All women’s group members start kitchen gardens, which can be grown year- round in very small spaces and include crops rich in vitamins and minerals. Some women expand to market gardening and livestock businesses that enhance the nutrition of entire communities and contribute significantly to families’ incomes for many years. These sustainable agricultural development activities provide women with the skills and initial resources necessary to increase their earning potential and improve their own and their families’ well-being, both immediately and indefinitely.
Third, the women receive support to encourage their children’s continued school attendance. In addition, the women serve as excellent role models of self-sufficiency and determination, for their daughters and other girls in their lives.
Women’s groups choose from among their members a Leader Farmer (LF), who will commit to attending as many of the horticultural trainings as possible. LFs transfer the knowledge thereby gained and offer ongoing technical support to their women’s group peers and assist with the distribution of kitchen garden supplies. LFs also meet, interact with, and learn from agricultural specialists from the government and/or other entities, via site visits and special training opportunities.
For this project, ETC estimates that 1,000 or more women ages approximately 18-45 (the members of the women’s groups) will be served directly through receiving agricultural training and resources. Their 3,500+ household members, including children and the elderly, will also benefit from increased household incomes and from improved food security and nutrition. Many neighbors and extended family/friends will also benefit indirectly through transfer of knowledge and/or access to more food of better nutritional value.
Year 1, Direct: 1,000+; Indirect: 4,000 (est.)
Year 2, Direct: 1,000+; Indirect: 4,000 (est.)
UN Sustainable Development Goals
Questions for Discussion
- How do you think this project addresses food insecurity?
- How do you think education affects this issue?
- How do you think ETC’s work impacts gender equality?
The Project Budget and How DFW's Donations will be used
DFW’s grant of $50,000 over two years will be used for the following:
Why We Love This Project
We love ETC’s focus on this sustainable agricultural project to enable Nepali women to improve food security and household incomes, while improving the nutritional intake of the entire family.
Evidence of Success
Below are just a few pertinent examples of ETC’s success, derived from a 2009 independent evaluation and a 2019 ethnographic study, both of which were conducted in project areas in which their program cycles had been completed, thus attesting to the lasting benefits and sustainability.
From the 2009 independent evaluation:
- One hundred percent of women’s group members surveyed were still raising food in their kitchen gardens. As a result, intake of Vitamin A (found in many leafy green vegetables commonly grown in kitchen gardens) had increased significantly during ETC’s presence and remained high afterward. Common health problems associated with Vitamin A deficiency (such as lack of resistance to infection, visual impairment/blindness especially among children, and problems during pregnancy) had decreased significantly.
- Women’s group members reported average annual household income increases of about $200 at the 2009 exchange rate. This represented an increase of 50 percent or more for many women and their families and was primarily due to women’s profitable agricultural activities.
From the 2019 ethnographic study:
- After receiving training, free resources and skills to maintain kitchen gardens, participants stated there is not only an increase in vegetable sufficiency throughout the year, but also there has been improvement in dietary diversity, resulting in positive change in household health.
- Many women stated that being a member of their women’s group has helped them in a very practical sense to develop their skills/knowledge related to income-generating. Masali, age 71, reported that “ETC’s project has made us knowledgeable and skillful in terms of running a kitchen garden with a variety of seasonal vegetables, being able to read and write, although it was quite challenging in the beginning. As a result of the training, I have also produced and sold vegetables in large scale.”
“Being a woman, it was difficult for us to get loans before the formation of the group because nobody trusted us. Now we are able to pay for our children’s school fee as well as invest in livestock and agriculture. Because we are able to do such things, we feel valued and respected in our household. My husband has started consulting me in every decision he makes. I feel worthy.”
- Debaka, age 52
“ETC’s project has made us knowledgeable and skillful in terms of running kitchen garden with a variety of seasonal vegetables, being able to read and write, although it was quite challenging in the beginning. As a result of the training, I have also produced and sold vegetables in large scale. The first cauliflower that I produced in my garden after receiving training from ETC was 13 kilograms. I was awarded by ETC for producing such a big organic cauliflower.
- Masali, age 71
“Previously we used to have very little vegetables and mostly potatoes, radishes and green leafy vegetables. No matter how much effort you put, it was only sufficient for a few months. After receiving training, we now have varieties of vegetables including cauliflowers, cabbages, tomatoes, broccoli, turnips etc. that can be used throughout the year. Sometimes we even have some excess vegetables, which we sell.”
- Indramaya, age 50
About the Organization
ETC was founded in 1989-1990 by Pamela Carson and several close friends and has had 501(c)(3) status since July 1991. ETC originally provided resources and support for impoverished children to attend school. They soon realized that their efforts would be yet more effective if they also assisted and engaged the children’s (usually illiterate) mothers and ensured that the children and their families have enough nutritious food to eat. Since the mid-1990s, ETC’s integrated community development program model has included three mutually supportive components: Children’s Education, Women’s Empowerment, and Sustainable Agricultural Development.
Where They Work
Nepal is in Southern Asia, between China and India in an area slightly larger than New York state. It is a landlocked country which contains eight of the world’s 10 highest peaks, including Mount Everest and Kanchenjunga. The total population of Nepal is 30,327,977 (July 2020 est.), with most of the population divided equally between the southern-most plains of the Tarai region and the central hilly region. About 1.424 million live in Kathmandu, the country’s capital. Overall density is quite low.
Nepal is among the least developed countries in the world, with about one-quarter of its population living below the poverty line. Agriculture is the mainstay of the economy, providing a livelihood for almost two-thirds of the population but accounting for less than a third of gross domestic product. Industrial activity mainly involves the processing of agricultural products, including pulses, jute, sugarcane, tobacco, and grain.
Massive earthquakes struck Nepal in early 2015, which damaged or destroyed infrastructure and homes and set back economic development. While political gridlock and lack of capacity have hindered post-earthquake recovery, government-led reconstruction efforts have progressively picked up speed, although many hard-hit areas still have seen little assistance. Additional challenges to Nepal’s growth include its landlocked geographic location, inconsistent electricity supply, and underdeveloped transportation infrastructure.
The median age in Nepal is 25.3 years. The median age of first birth among women is 20.8 years. Maternal mortality is 186 deaths/100,000 live births (2017 est.), and the infant mortality rate is 25.1 deaths/1,000 live births. The literacy rate for the total population is 67.9 percent, with 78.6 percent of males being literate and 59.7 percent of females being literate.
A Closer Look at Nutrition in Rural Communities
For decades, the number of hungry people had been declining – this isn’t true anymore. Globally, more than 820 million people do not have enough to eat, about one in every nine people. Food and nutrition insecurity can be defined as the inability to access adequate quantities of nutritious foods required for optimal growth and development. There is a direct relationship between food and nutrition insecurity and poverty. The issue affects all countries, including the United States. In 2018, 11.1 percent of U.S. households were food insecure at some time.
Maternal and child undernutrition contributes to 45 percent of deaths in children under five. But food insecurity isn’t just about hunger. Obesity and being overweight are on the rise in almost all countries, contributing to 4 million deaths globally. The various forms of malnutrition are intertwined throughout the life cycle, with maternal undernutrition, low birthweight and child stunting giving rise to increased risk of being overweight later in life.
Food insecurity is strongly associated with chronic disease and poor health, both of which disproportionately affect rural populations. Poverty and the lack of access to nutritious and balanced diets remains a major impediment to the health and well-being of people living in rural areas.
Long-term food insecurity can affect learning, development, productivity, physical and mental health, and family life. Addressing rural health disparities must be a priority.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9213181138038635,
"language": "en",
"url": "https://www.carnegiecouncil.org/publications/articles_papers_reports/787",
"token_count": 5012,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.43359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2e3d8cfe-e894-4c63-b210-a3598faac88e>"
}
|
Since 2009, Greece has become synonymous with crises: the sovereign debt crisis, the economic crisis, the Eurozone crisis, and finally the migration crisis. While the sufferings of migrants and refugees have understandably attracted agonized international attention, the problems, challenges, and hardships that the Greeks have been enduring over the same period have been reduced to an implicit "that's how it goes"—a new normalcy. In fact, watching people walking in the sunbathed streets of Athens, contemplating the azure skyline stretching over the Aegean, listening to the murmur of groups of friends sipping apparently bottomless frappes, tourists get truly confused and ask: "So where's the crisis?" Appearances can be misleading: the crisis is there, devouring the very flesh of Greek society. Greece is swamped in a group depression, trying to cope with what has become a new social, economic, and political normalcy. The attempted July coup in Turkey has added to Greeks' feelings of being beset by uncertainties and changes from all sides, all in the space of less than a decade. To understand how Greeks are faring these days, we must look at a set of interrelated issues: the impact of the economic crisis, the flip side of the migration crisis, and the foreign-policy related hazards stemming from the losses in political leverage that Greece has endured over the past seven years. This essay addresses the first two of them.
Economic Recession: Behind the Numbers
From 2008-2015, Greece's GDP (current prices) dropped 27.3 percent from EUR million 241,990.4 in 2008 to EUR million 176,022.7 in 2015.1 Average annual unemployment rates rose from 7.8 percent in 2008 to 24.9 in 2015 (it peaked at 27.5 in 2013).2 And unemployment figures do not record the full picture: self-employed are not included, as they are not entitled to unemployment benefits. The statistics also obscure the strikingly high unemployment levels among women and youth.
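The 27.3 percent figure follows directly from the two GDP values quoted:

```python
gdp_2008_eur_m = 241_990.4  # EUR million, current prices
gdp_2015_eur_m = 176_022.7

drop_pct = (gdp_2008_eur_m - gdp_2015_eur_m) / gdp_2008_eur_m * 100
print(round(drop_pct, 1))  # → 27.3
```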
Real adjusted gross disposable income of households per capita dropped from EUR 19,519 in 2008 to EUR 15,059 in 2014.3 Gross fixed capital formation, i.e. investments, fell from 23.8 percent of GDP in 2008 to 11.7 percent of GDP in 2015.4 As a result of a mix of economic policies hostile to business introduced in 2010, the private sector of the economy shrank and thousands of businesses either collapsed or relocated to more business-friendly neighboring countries, such as Bulgaria. During the first seven months of 2016 alone, 19,056 companies closed down and only 16,478 new companies were set up.5 Notably, the majority of businesses today are micro-enterprises operating mostly in fast food and other low-cost services sectors, including shoe repair and seamstress services. Interestingly, bakery franchises have mushroomed in Greece, suggesting that as consumers' disposable income dropped, and with it their propensity to spend, the structure of their spending adjusted accordingly, i.e. away from durable goods to nondurables and services.
Poverty has reached an extraordinary level. In 2015 the percentage of population at risk of poverty or social exclusion reached 35.7 percent, up from 28.1 in 2008.6 This impoverishment is the direct result of a policy-mix—frequently confused with austerity—implemented in Greece starting in 2010. It resulted in the collapse of the private sector of the economy, layoffs, recession, several consecutive increases in direct and indirect taxation, horizontal cuts in pensions, and reduced spending on health care and education.7 The continuous sharp drop in savings levels (as percentage of household disposable income) over the past years, e.g. ca. 15 percent in 2013 and 17 percent in 2014,8 suggests that Greeks have been surviving by using up their savings. It is in this context that one should view the cuts in pension levels. It turns out that as many elderly are struggling to meet their basic needs, their ability to support their unemployed family members financially has been undermined as well. Malnourished children are frequently the first victims.
According to a recent study, homelessness in Athens has increased dramatically by 69 percent over the last five years, with the majority of the homeless (62 percent) being Greek. Roughly 55 percent are between 30 and 50 years old.9 According to other sources, the number of homeless in Greece overall has increased by 30 percent since 2009 to total approximately 40,000 people across the country in 2016. The majority of the new homeless are neither drug addicts nor illegal immigrants, but individuals who lost their jobs and apartments virtually overnight.10 Stories abound of how many homeless, especially men, conceal that fact from their adult children, not wanting to become a burden and/or admit their helplessness. Reportedly, in Athens alone, 20,000 people survive thanks to soup kitchens and other organized voluntary community support networks. Among those who regularly eat at soup kitchens, 66 percent are Greeks, 51.5 percent of whom have obtained tertiary-level education.
The Reverse Brain Drain
The Bank of Greece estimates that ca. 427,000 people have left Greece over the period 2008-2015. The majority of them, as recent reports by Endevor11 and by the Bank of Greece12 suggest, were young and educated. Although, most probably due to different methodologies employed and different time-frames applied, the reports indicate different figures, i.e. 350,000 Greeks and 223,000 respectively, the scale of this reverse brain drain is daunting. Both reports make a case that it is driven by unemployment, adverse economic conditions, and lack of professional development opportunities. In fact, there are indications that when abroad young Greeks do very well and climb the promotion ladder quickly. Trained in coping with "user unfriendly" public administration and adverse economic circumstances in Greece, when abroad they are able to employ knowledge and skills gained at educational institutions at home and their typically Greek wit to advance their professional development.
The data collected from host countries suggest that the new Greek diaspora generates EUR 9.1 billion in taxes for the host countries annually.13 As ever, diasporas play an important role in maintaining the living standards of the families left at home. According to the World Bank,14 in 2014 the value of remittances sent by the Greek diaspora to Greece reached 735 million USD. The top three remittance-sending countries were Germany, the United States, and Australia.15 However, as the brain drain continues, Greece's future becomes even more uncertain.
A Shattered Society
The sudden growth in unemployment, followed by sudden loss in disposable income level, and accompanied by a disintegrating state administration means that no social provision exists for those in need, and the numbers are growing. The private sector, swamped by excessive taxation, operating in an inflexible labor market framework, under conditions of a liquidity squeeze, cannot absorb the unemployed. Therefore, as the crisis continues, amidst political instability at home and abroad, the resources at the disposal of families dwindle. In this view, the degree of social deprivation is bound to increase. No specialized training in psychology is required to understand that the implications for individuals' well-being will be grave. Indeed, some studies have hinted at a suicide epidemic in Greece16 and linked it to people's inability to deal with the pressures inflicted by these crises, and a lack of hope that anything will improve. There is much discussion of unemployed Greek youth, looking at their damaged futures and current frustrations. Much less attention has been paid to those already in the labor market, young enough to have professional ambitions and not old enough to think of retirement. Even if they did not lose their jobs, they have seen their salaries shrink and their workload expand. Their promotion prospects have been crushed by a very common argument used by employers across the board: "You should be happy that you are still employed." Clearly, these factors have a direct negative impact on these people's health, productivity, and family life; there is deep hopelessness and frustration. Entangled in family obligations such as raising children, paying the mortgage, and the costs and responsibilities of caring for elderly parents, it is largely impossible for those people to move abroad. This is particularly true since, unlike in the United States, very specific constraints to labor mobility exist in the European Union (EU) and these have to be factored in, i.e.
different languages, different labor-market regulations, and the lack of a pan-EU pension and specialized health insurance scheme.
At the company level, businesses are under immense pressure because of continuous increases in taxation (corporate, one-off, and presumptive taxation) and labor tax wedges, compounded by huge delays in VAT refunds. In these circumstances managers frequently forget, or feel forced to ignore, the fact that happy employees are the most productive and that professional development prospects tend to serve as the best incentive to boost a company's productivity. As the micro- and macroeconomic prospects for Greece remain bleak, the migration crisis constitutes another challenge that Greek society has to endure in these trying times.
The Migration Crisis
Over the past two years, over 1 million migrants have arrived by sea on the shores of Greece, a country of roughly 11 million people. International media have reported extensively on the migrants' sufferings and the many tragic deaths. Simultaneously, accounts of heroic, generous, and welcoming Greeks have won the hearts of the international audience. In spite of the EU emergency refugee relocation system enacted in fall 2015, as of August 2016 only 3,386 refugees have been relocated from Greece to other EU member-states.17 Overall, about 58,580 migrants remain in Greece, as data from late August 2016 indicate. Notably, there has been an increase in arrivals by sea following the attempted July coup in Turkey and the resulting uncertainty regarding the implementation of the March 2016 agreement between the EU and Turkey on stopping irregular migration.18 In fact, over the period January-July 2016, 176,743 arrivals to Greece were recorded. That traffickers adjust their routes to the political circumstances in the region became evident when, at the height of the tourist season in mid-August, 41 migrants landed on one of the beaches on Mykonos.
Although the majority of migrants originate from Syria (79,471), migrants from Afghanistan (41,222), Iraq (25,781) and Pakistan (9,310) constitute a sizeable part of the current migration wave.19 At present, again according to official statistics, 11,548 migrants are on the Greek islands. The migrants are stranded in reception centers the capacity of which was set at 7,540.20 The Greek government's plan aimed at decongesting these centers by relocating ca. 2,000 people to four new centers in Crete has been harshly criticized by the local authorities. The latter see the government's plan as a way of absorbing the bulk share of the 3,000 immigrants that the German government, in line with the provisions of the Dublin III convention,21 supposedly plans to deport from their territory in fall this year.22
The housing of migrants remains a challenge for the Greek authorities and the UNHCR, which is also involved in Greece. Apart from the reception centers, which host the lion's share of the immigrants, migrants also inhabit baseball and hockey fields. Very few of them live in apartments or hotels, well below the government target of 20,000.23 According to data released on August 27, 2016, Samos, Lesvos and Leros, the three islands where most migrants landed, are still coping with the influx. Several cruise lines no longer stop in Mytilene, the capital of Lesvos, depriving the local economy of a significant regular source of income. Lesvos records a 70 percent drop in bookings, and the figure for Samos may be even higher as tourists avoid these islands. The damage to the Greek tourism industry from the migration crisis is certainly not limited to the islands: many mainland destinations accessible by car and popular among visitors from Bulgaria and Romania have also been affected, and the number of tourist arrivals has dropped.
Experience with managing irregular migratory flows to Greece over the years 2008-2012 suggests that it is difficult to estimate the cost of managing such flows and to assess the effectiveness of the measures taken.24 With regard to the current wave of migration, in March 2016 the Bank of Greece published data suggesting that in 2016 alone the cost of managing the migration crisis will exceed EUR 600 million.25 Given that the number of migrants has increased, this cost will rise. Since the beginning of 2015, emergency assistance of EUR 181 million has been awarded by the European Commission to Greek authorities, international organizations and NGOs involved in managing the migration crisis in Greece. This emergency assistance comes on top of the EUR 509 million already allocated to Greece under the national programs for 2014-2020.26 It remains unclear how much of the allocated sums will actually be disbursed.
In the frequently dramatic and emotion-filled media narrative on migration, very little attention has been paid to the trauma that the tiny local communities in Greece endured and still have to cope with. Hundreds of unnamed migrant graves, bodies washed ashore, mountains of life jackets, human suffering: all of these have left a mark on the local communities. Another rarely discussed issue is the negative implications of the migration wave for safety and security on the islands. There has also been very little discussion of how the migrants will interact with the local communities once their children are given the opportunity to go to school this year. The point is that many of them still hope to leave Greece and are therefore uninterested in integrating. Finally, as Greek society is among the fastest-aging populations in the EU, no one dares to open Pandora's box and discuss how this wave of migration will influence the functioning of Greek society in the years to come.
The multiple crises that Greek society has endured since 2009 have resulted in a disturbed work-life balance, a wave of depression, and a suicide epidemic. They have affected work ethics and culture, children's performance at school, and health indicators. These observations are consistent with the findings of the March 2016 OECD Economic Survey of Greece,27 which demonstrates that the social costs of the prolonged economic depression, including the collapse in labor income and pensions, the increased risk of unemployment and uncertainty about the future, have significantly reduced life satisfaction.28 The subjective well-being score for Greece is the lowest in the OECD.29 All these factors have serious implications for Greece's future, especially since educated youth continue to go abroad and there has been little discussion of how to keep them in Greece or encourage them to return. The multifaceted implications of the migration crisis add to the equation, giving us a closer insight into the Greek reality today. From a broader perspective, what is particularly worrying is that people are adjusting to this bleak reality and incorporating it into their rational daily choices. This reality is thus not only turned into a new normalcy but is also, unfortunately, perpetuated. Given the already fragmented and unstable political scene in Greece, it would require tremendous courage to launch a discussion on the need for deep structural reforms of the country's political and economic systems; it would also require a receptive audience prepared to listen, one resistant to political manipulation. Such a discussion still seems a long way off for Greece.
1 Eurostat. http://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=nama_10_gdp&lang=en
2 Eurostat. http://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=une_rt_a&lang=en
3 Eurostat. http://ec.europa.eu/eurostat/tgm/table.do?tab=table&init=1&language=en&pcode=tec00113&plugin=1
4 Eurostat (2016) http://ec.europa.eu/eurostat/tgm/refreshTableAction.do?tab=table&plugin=1&pcode=tec00011&language=en
5 Eurostat (2016) 'Business demography by size class (from 2004 onwards, NACE Rev. 2)' [bd_9bd_sz_cl_r2] Last update: 21-07-2016. http://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=bd_9bd_sz_cl_r2&lang=en (accessed 2016-07-22).
6 Eurostat. http://ec.europa.eu/eurostat/tgm/refreshTableAction.do?tab=table&plugin=1&pcode=t2020_50&language=en (accessed: 07-22-2016)
7 More on this see: Visvizi, A. (2016) 'Greece and the Troika in the context of the Eurozone crisis', [in:] Magone, J., Laffan, B., Schweiger, Ch. (eds) CORE-PERIPHERY RELATIONS IN THE EUROPEAN UNION. The Politics of Differentiated Integration in the European Political Economy, Routledge, pp. 149-165.
8 OECD (2016) Household savings, OECD Data, https://data.oecd.org/hha/household-savings.htm (accessed on 07-26-2016)
9 Municipality of Athens (2016) https://tacklingpovertyblog.wordpress.com/street-work-eng/
11 Kathimerini (2016) "Human Capital is Greece's no 1 export", Kathimerini English edition, 07-19-2016, http://www.ekathimerini.com/210585/article/ekathimerini/news/human-capital-is-greeces-no-1-export, accessed 07-19-2016
12 Kathimerini (2016) Brain drain amounted to 223,000 people in 2008-2013, Business section, Kathimerini 07-20-2016, http://www.ekathimerini.com/210626/article/ekathimerini/business/brain-drain-amounted-to-223000-people-in-2008-2013 (accessed on 07-22-2016)
13 Kathimerini (2016) "Human Capital is Greece's no 1 export", Kathimerini English edition, 07-19-2016, http://www.ekathimerini.com/210585/article/ekathimerini/news/human-capital-is-greeces-no-1-export, accessed 07-19-2016
14 World Bank Bilateral Remittances Matrix (version October 2015), accessed: 07-03-2016
15 Kathimerini (2016) Brain drain amounted to 223,000 people in 2008-2013, Business section, Kathimerini 07-20-2016, http://www.ekathimerini.com/210626/article/ekathimerini/business/brain-drain-amounted-to-223000-people-in-2008-2013 (accessed on 07-22-2016)
16 Davis, E. (2015) '"We've toiled without end': Publicity, Crisis, and the Suicide 'Epidemic'in Greece", Comparative Studies in Society & History, 57(4): 1007-1036. doi:10.1017/S0010417515000420
17 European Commission (2016) 'Member States' Support to Emergency Relocation Mechanism', Press Material 2016-08-15, http://ec.europa.eu/dgs/home-affairs/what-we-do/policies/european-agenda-migration/press-material/docs/state_of_play_-_relocation_en.pdf (accessed 2016-08-28).
18 European Commission (2016) 'Implementing the EU-Turkey Statement - Questions and Answers', Fact Sheet, Brussels 2016-06-15, http://europa.eu/rapid/press-release_MEMO-16-1664_en.htm (accessed 2016-08-28).
19 Hellenic Police (2016) [In Greek only. Translation: Captured illegal immigrants, by site of illegal entry and by nationality, Jan-July 2016] [Statistics on illegal immigration] http://www.astynomia.gr/index.php?option=ozo_content&lang=%27..%27&perform=view&id=55858&Itemid=1240&lang= (accessed 2016-08-28)
20 ProNews (2016) [In Greek only. Translation, Review of refugee flows in the country, as of August 27], ProNews 2016-08-27, http://www.pronews.gr/portal/20160827/defencenet/esoteriki-asfaleia/65/i-synoptiki-katastasi-ton-prosfygikon-roon-simera-27 (accessed 2016-08-28).
21 EU (2013) REGULATION (EU) No 604/2013 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 26 June 2013 establishing the criteria and mechanisms for determining the Member State responsible for examining an application for international protection lodged in one of the Member States by a third-country national or a stateless person (recast), 29.6.2013 Official Journal of the European Union L 180/31.
22 Ekriti (2016) [In Greek only. Translation: The refugees will be send from Germany to Crete], News, 2016-08-13, http://www.ekriti.gr/kriti/apo-tin-germania-tha-steiloyn-toys-prosfyges-stin-kriti#sthash.6G3tYhj0.3MA6t9d6.dpbs (accessed 2016-08-28).
23 Kathimerini (2016) "Greece plans more apartments for migrants," News, Kathimerini 2016-08-09, http://www.ekathimerini.com/211121/article/ekathimerini/news/greece-plans-more-apartments-for-migrants (accessed 2016-08-28).
24 Danai Angeli, Anna Triandafyllidou, Angeliki Dimitriadi (2014) "Assessing the Cost-effectiveness of Irregular Migration Control Policies in Greece," MIDAS Policy Paper, October 2014, ELIAMEP.
25 Kathimerini (2016) 'Migrant costs to exceed 600 million euros, says Greek central bank', Kathimerini, 2016-03-13. http://www.ekathimerini.com/206939/article/ekathimerini/business/migrant-costs-to-exceed-600-million-euros-says-greek-central-bank (accessed 2016-07-22).
26 European Commission (2016) 'MANAGING THE REFUGEE CRISIS: EU Financial Support to Greece', Fact Sheet, Migration and Home Affairs 2016-04-12, http://ec.europa.eu/dgs/home-affairs/what-we-do/policies/european-agenda-migration/background-information/docs/20160412/factsheet_managing_refugee_crisis_eu_financial_support_greece_en.pdf (accessed 2016-07-22).
27 OECD (2016) OECD Economic Surveys 2016: Greece, Paris: OECD Publishing, March 10, 2016, http://www.oecd.org/eco/surveys/GRC%202016%20Overview%20EN.pdf (accessed 06-20-2016)
Big data touches nearly every aspect of commerce. However, protecting big data as a form of intellectual property is complex. For instance, patents, copyrights and trade secrets provide only limited protection for datasets. Moreover, the ownership of datasets can be uncertain. Additionally, datasets may be subject to numerous regulatory laws. In view of these complexities, contractual agreements play a pivotal role in protecting and commercializing big data.
Big data is a valuable asset
Big data comes in many forms, including market data, consumer data, business records, health records, and experimental results.
Additionally, big data finds applications in numerous fields, including the healthcare and life sciences industries. For instance, in the healthcare industry, data extracted from electronic health records can be fed into software with artificial intelligence (AI) or machine-learning algorithms for diagnostic applications, such as detecting early heart failure and predicting surgical complications. Similarly, in the life sciences industry, DNA sequences generated through next-generation sequencing techniques can be fed into various AI-based software for the identification of potential drug targets.
Patents, copyrights and trade secrets provide limited big data protection
Despite this broad range of applications, protecting big data as a form of intellectual property can be complex. For instance, unless a minimal amount of creativity exists in the selection, coordination, and arrangement of datasets, datasets may not be protectable by copyright, regardless of the laborious efforts involved in collecting and compiling the data. Furthermore, data compilation processes and compiled data may not qualify as patent-eligible subject matter.
Datasets are protectable as trade secrets. However, trade secret protection may not be practical in some circumstances because it requires that the datasets be maintained as confidential and that reasonable measures be proactively implemented to preserve their secrecy. Moreover, trade secret protection is narrow: it does not protect against reverse engineering or independent development of the datasets.
Big data ownership can be uncertain
Complexities also exist in ascertaining the ownership of datasets. For instance, in the absence of contractual agreements to the contrary, different individuals or entities may claim ownership to datasets, including the generators, compilers, users, purchasers, and guardians of the datasets. Such complexities may escalate further when different individuals or entities generate, compile, use, store or purchase datasets at different times, at different institutions, or at different locations.
Big data can be highly regulated
Additionally, the use, storage and distribution of datasets may be subject to numerous state and federal laws. For instance, if the datasets contain personally identifiable information, then the datasets could be subject to numerous state and federal data protection laws. As an example, the Health Insurance Portability and Accountability Act (HIPAA) mandates the protection of an individual’s health information that is held or transmitted by health plans, healthcare providers or healthcare clearinghouses.
Contractual agreements help protect and commercialize big data
In view of the aforementioned complexities in safeguarding big data intellectual property rights, ascertaining big data ownership, and complying with big data regulatory requirements, contractual agreements play a pivotal role in big data protection and commercialization. For instance, assignment agreements help establish the ownership of datasets while confidentiality agreements help maintain their confidentiality. Additionally, license agreements help establish the terms by which others can exploit and commercialize datasets.
Assignment agreements help establish the ownership of big data
Where applicable, assignment agreements can help establish ownership over datasets. For instance, in order to obtain clear title to a dataset, an entity that retains or employs individuals to generate or compile the dataset should execute comprehensive assignment agreements with those individuals. Preferably, such assignment agreements should require the individuals to assign all of their rights to the datasets (including intellectual property and commercialization rights) to the entity.
Confidentiality agreements help maintain the confidentiality of big data
Confidentiality agreements can help maintain the confidentiality of datasets and prevent their unauthorized disclosure. Confidentiality agreements can also help ensure compliance with numerous state and federal regulations by helping prevent the unauthorized disclosure of any protected information. Additionally, confidentiality agreements can help protect any trade secrets within a dataset.
In order to provide maximum protection of datasets, entities that own or control datasets should ensure that all individuals who have accessed or will access the datasets (including compilers, generators, users, and purchasers) have executed comprehensive confidentiality agreements. Such confidentiality agreements should clearly set forth the authorized and unauthorized uses of the datasets.
For instance, an entity that retains or employs individuals to generate or compile datasets should execute comprehensive confidentiality agreements with those individuals, where the individuals agree not to disclose the datasets to anyone other than the authorized representatives of the entity. Similarly, an entity that distributes a dataset to a programmer for the training of an AI software should execute a comprehensive confidentiality agreement with the programmer, where the programmer agrees not to disclose or reverse engineer the dataset.
License agreements help establish the terms of commercializing and exploiting big data
An entity that owns or controls datasets (i.e., a licensor) can provide a third party (i.e., a licensee) with certain rights to the datasets by executing a database license agreement with the third party. Database license agreements can be standalone agreements that focus on the grant of rights to certain datasets. Database license agreements can also be part of a broader license agreement that includes the grant of rights beyond datasets, such as software.
- Level of exclusivity
Database license agreements should also define the level of exclusivity that a licensee will obtain to the licensed datasets. For instance, an exclusive license can bar the owner or controller of the dataset from granting additional licenses to other parties for the licensed datasets. On the other hand, a non-exclusive license could allow the owner or controller of the datasets to grant additional licenses to other parties for the licensed datasets.
- Sublicensing rights
Database license agreements should also clearly define the ability of the licensee to sub-license the licensed datasets to third parties. For instance, sublicensing rights could provide the licensee with the right to partner with other parties in the commercialization or use of the licensed datasets. However, a lack of sublicensing rights could prevent the licensee from entering into such partnership agreements.
- Warranties and disclaimers
Database license agreements should also include numerous warranties and disclaimers in order to provide assurances between the parties and minimize liability. For instance, database licensors generally include disclaimers that they are providing the datasets “as is” without any warranties regarding their suitability for an intended purpose. However, licensees generally seek warranties and representations from a licensor that the licensor owns or controls the datasets at issue, and has sufficient rights to grant a license to the datasets.
Moreover, both the licensee and licensor usually represent and warrant that they are in compliance with applicable laws, such as applicable data security and privacy laws. For instance, if the licensed datasets contain protected health information, then both the licensor and licensee should provide assurances in the license agreement that they will comply with relevant regulatory laws, such as HIPAA. On the other hand, if the licensed datasets do not contain any regulated data, then the licensee may request the licensor to provide assurances that the licensed datasets are devoid of any regulated data, such as protected health information.
Additionally, the database license agreement should clearly define the confidentiality obligations of the parties towards the licensed datasets. For instance, if the licensed datasets contain trade secrets, then both the licensor and the licensee should provide assurances that they will take reasonable measures in order to maintain the secrecy of the trade secrets.
Devising a proper strategy for protecting and commercializing big data is a fact-specific inquiry that depends on numerous factors, including the type of data, the origins of the data, the storage location of the data, the destination of the data, and the intended use of the data. Regardless, carefully drafted contractual agreements play a pivotal role in protecting datasets, maximizing their commercialization value, avoiding disputes between parties, and limiting liability.
See, e.g., Kilic A. Artificial Intelligence and Machine Learning in Cardiovascular Health Care. Ann Thorac Surg. 2020 May;109(5):1323-1329. doi: 10.1016/j.athoracsur.2019.09.042. Epub 2019 Nov 7. PMID: 31706869.
See, e.g., Dlamini Z et al., Artificial intelligence (AI) and big data in cancer and precision oncology. Comput Struct Biotechnol J. 2020 Aug 28;18:2300-2311. doi:10.1016/j.csbj.2020.08.019. PMID: 32994889; PMCID: PMC7490765.
See Feist Publications, Inc. v. Rural Telephone Service Co., 499 U.S. 340, 111 S. Ct. 1282 (1991).
See 35 U.S.C. §101 (identifying patent eligible subject matter as “any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof”, which generally exclude standalone datasets). Also see In Re Board of Trustees of the Leland Stanford Junior University, No. 20-1012 (Fed. Cir. 2021) (holding that processes directed to generating datasets through the utilization of mathematical formulas were not eligible for patenting under 35 U.S.C. §101). Also see WinTech blog article entitled “Determining the Patent Eligibility of Inventions under the new USPTO Guidelines” (explaining that the patent eligibility of computer-implemented inventions, such as methods of generating datasets, remains an unsettled area of law).
See 18 U.S.C §1839(3) (identifying trade secrets under the Defend Trade Secrets Act as “all forms and types of financial, business, scientific, technical, economic, or engineering information, including patterns, plans, compilations, program devices, formulas, designs, prototypes, methods, techniques, processes, procedures, programs, or codes, whether tangible or intangible, and whether or how stored, compiled, or memorialized physically, electronically, graphically, photographically, or in writing.”)
See 18 U.S.C §1839(3)(A) (requiring the owner of a trade secret to take “reasonable measures to keep such information secret.”). Also see WinTech blog article entitled “Protecting Your Most Valuable Assets: How to Identify and Maintain Your Institution’s Trade Secrets” (outlining the reasonable measures that a trade secret owner must take in order to maintain the secrecy of the trade secrets).
See 18 U.S.C §1839(6)(B) (indicating that trade secret misappropriation “does not include reverse engineering, independent derivation, or any other lawful means of acquisition.”)
see WinTech blog article entitled “Protecting Your Most Valuable Assets: How to Identify and Maintain Your Institution’s Trade Secrets” (outlining examples of reasonable measures that a trade secret owner must take in order to maintain the secrecy of the trade secrets).
Over the past several years, non-governmental organizations (NGOs) have realized that the creation of formally protected areas may not be sufficient to protect ocean and coastal biodiversity, particularly in areas where rights have already been granted to specific owners and users. To address this, NGOs are increasingly integrating Marine Conservation Agreements (MCAs) into ocean and coastal protection efforts to provide greater surety of long-term success.
This overview introduces the MCA concept. For more overview information on MCAs, see the sub-sections on Basics, Myths, and Definitions.
What are Marine Conservation Agreements?
MCAs are defined as:
Any formal or informal contractual arrangement that aims to achieve ocean or coastal conservation goals in which one or more parties (usually right-holders) voluntarily commit to taking certain actions, refraining from certain actions, or transferring certain rights and responsibilities in exchange for one or more other parties (usually conservation-oriented entities) voluntarily committing to deliver explicit (direct or indirect) economic incentives.
The above MCA definition has seven distinct elements, which include:
- any formal or informal contractual arrangement that
- aims to achieve ocean or coastal conservation goals in which
- one or more parties (usually right-holders)
- voluntarily commit to
  - taking certain actions,
  - refraining from certain actions, or
  - transferring certain rights and responsibilities
- in exchange for
- one or more other parties (usually conservation-oriented entities)
- voluntarily committing to deliver explicit (direct or indirect) economic incentives.
These seven elements not only form the definition of MCAs but also establish the sub-steps within the Field Guide’s Phase 1 Feasibility Analysis, which together define the enabling conditions for MCAs. Within each of the seven elements there are several variables, which can be mixed and matched to meet the specific needs of a wide variety of implementing entities in a diverse range of ocean and coastal conservation efforts.
The summary table below identifies the major elements and variables of MCAs. MCAs can be entered into by governments, communities, private entities, and private individuals. They are based on agreed upon terms and conditions, are often bottom-up approaches, and include quid-pro-quo incentives wherein all parties receive benefits.
| 1. Agreement Mechanisms * | 2. Conservation Goals ** | 3. Right-holders *** | 4. Conservation Commitments **** | 5. Conservation Entities *** | 6. Economic Incentives **** |
|---|---|---|---|---|---|
| Purchase & Sale Agreements<br>Memorandums of Understanding<br>Memorandums of Agreement | Restore and protect reefs<br>Recover and manage fisheries sustainably<br>Preserve cultural sites<br>Promote sustainable tourism | Owners, Managers, or Users:<br>Private individuals and families<br>Organized community or user groups | Take actions:<br>Develop management plan<br>Refrain from actions:<br>Stop using destructive gear<br>Stop turtle harvesting<br>Transfer rights/responsibilities | | |
| **7. An Exchange** | | | | | |

\* Can be a defined term or undefined term; can be long-term (over 10 years) or short-term (less than 10 years).
\*\* Constitute the desired project outcomes.
\*\*\* Make up the parties to the agreement.
\*\*\*\* Make up the assured project benefits for both parties.
Common examples of MCAs include leases, licenses, easements, management agreements, purchase and sale agreements, concessions, and contracts. NGOs have used MCAs to help manage specific areas, harvesting methods, and access to resources. These efforts have protected important marine biodiversity while positioning NGOs as vested and solution-oriented stakeholders with governments and communities responsible for decision-making.
One confusing issue regarding MCAs is that existing programmatic and project-specific efforts that likely fall under the MCA definition are often called different things by different organizations. For example, the following programmatic efforts are all very similar and more or less meet the definition of MCAs:
- Conservation Agreements (or CAs) led by Conservation International (CI)
- Payments for Marine Ecosystem Services (or MPES) led by Forest Trends
- Translinks led by Wildlife Conservation Society (WCS); and simply
- Agreements led by Seacology
The existence and use of these different terms that essentially mean the same thing is often not helpful and can be confusing to conservation practitioners. As such, practitioners would do well to understand the similarities and differences between the terms, appropriately identify and appreciate the perspectives of their own target audiences for different outreach efforts, and then determine which term may or may not resonate best with those audiences. In some cases, none of the terms may resonate with target audiences; in such cases a new or different term should be used.
Compounding the confusing effect of the many programmatic efforts identified above using varying terms to describe similar things, many practitioners who are responsible for implementing field projects that meet the definition of MCAs do not identify or otherwise describe their projects as such. Many practitioners successfully work at field sites in isolation from global programmatic efforts related to MCAs. In the grand scheme of things, there is nothing wrong with this; practitioners do not need to identify themselves as being affiliated with one or more of the MCA-related programmatic efforts.
The hope is, however, that these field projects and others have opportunities to learn from or otherwise benefit from the collective programmatic efforts. Also, recognizing that these field projects are part of a larger, thematically consistent set of efforts enhances the potential for effecting policy change, attracting finance, catalyzing replication, and scaling up.
International Treaties and Conventions
International treaties and conventions that address ocean management and conservation issues have also been referred to as Marine Conservation Agreements,1 or as International Ocean Agreements. This toolkit does not discuss or label these types of broader, nation-to-nation agreements since they are typically high-level, top-down, and do not include NGOs and local users as potential signatories.
Similarity with Terrestrial and Freshwater Strategies
MCAs are in many ways quite similar to the widely used and understood conservation practices that employ traditional upland acquisitions, conservation easements and freshwater rights acquisitions. Upland acquisitions, conservation easements and MCAs all give grantees (i.e., conservation organizations) the right to protect or direct management of sites and habitat features that may otherwise be degraded from destructive human activities and development. Similar to freshwater rights acquisitions in which water is left in streams for conservation purposes, MCAs establish conservation (i.e., no use in the minds of some) as a legitimate use that can be acquired or directed.
The obvious difference between MCAs and upland acquisitions and conservation easements is that MCAs are applied to areas lying under ocean and coastal waters—areas normally viewed as open to free and unfettered access by the public. In many (but not all) cases, MCAs are applied to areas that are publicly-owned or managed. When applied to public areas, MCAs differ from upland acquisitions and conservation easements in that the public may continue to have rights to the areas, the agreements typically have a specific term (or time period) associated with them and they usually require some form of active, participatory management, as opposed to simply preventing property from being developed.
Relationship between MCAs and MPAs
Marine Conservation Agreements and Marine Protected Areas (MPAs) are different but can often lead to similar outcomes. Formal MPAs are often established by government entities through law or policy. Conversely, MCAs are established between different entities, usually a resource owner or user and an NGO. Both MPAs and MCAs, however, can be used to protect specific sites and resources. MCAs can also be used to complement and augment the number and effectiveness of formal MPAs when the establishment of additional MPAs is not possible. Under some circumstances, MCAs can be used as catalysts for the formal establishment of MPAs or can provide a mechanism for local stakeholder involvement in collaborative management of MPAs.
MCA Field Projects
There are numerous existing MCA projects throughout the globe. One of the best known is the Chumbe Island Coral Park in Tanzania. Other examples include The Nature Conservancy’s (TNC’s) 13,000-acre Great South Bay Project on Long Island, New York, and the 180,000 sq km Phoenix Islands Protected Area, which was established in part based on a “reverse fishing license,” an agreement being developed by the Government of Kiribati, Conservation International (CI), and the New England Aquarium. For additional projects, see the Field Projects section of the toolkit.
Current Limitations and Strategic Next Steps
Although the potential application of MCAs is broad and significant, the strategy is currently underutilized. This is due in part to the fact that MCA practitioners do not generally communicate with one another to exchange information or collaborate. As a result, MCAs remain insufficiently understood and applied by the marine conservation community. To reverse this, TNC and partners are working to:
- Promote a willingness to pay for conservation: We must develop, advertise and make available economic markets for biodiversity conservation, which will require NGOs to be intermediaries between the global demand and potential suppliers of biodiversity conservation services.
- Undertake demonstration projects: The most effective way to spread awareness of MCAs and persuade various stakeholders of its promise is to continue implementing pilot projects. Additional successes will broaden financial support for MCAs among donors, encourage governments to embrace the tool in policies and legislation, cultivate implementation capacity within the conservation NGO community, and build confidence among local communities that participating in conservation can yield tangible benefits.
- Educate donors: In the short term, the greatest constraint to the application of MCAs may be funding, especially in terms of the types of funding available for conservation. We must promote awareness of the long-term recurrent costs of conservation management among the donor community, placing a strong emphasis on the essential role of dedicated endowments to support individual projects into the distant future.
- Exploit development synergies: Given that many sites of conservation interest are in poor rural areas that are also of interest to development institutions, there is significant potential for collaboration with other NGOs, government bodies, and bilateral or multilateral agencies receiving development funding. Scaling up the use of MCAs will benefit greatly from concerted efforts to seek out opportunities for collaboration with mainstream development organizations.
- Foster communication and consistency: The concerns raised about MCAs often reflect misunderstandings about the approach. Therefore it is important that organizations working on MCAs continuously engage each other, other conservation organizations and governments to ensure clear and consistent articulation of the rationale and application of the tool.
Crowdsourcing Definition
“the act of an institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call.” – Jeff Howe

Crowdfunding Definition
“the practice of funding a project or venture by raising many small amounts of money from a large number of people, typically via the Internet.” – Forbes

Crowdfunding Legislation
U.S.A.
- In April 2012, Obama passed the JOBS Act
- Eased federal laws to encourage small business and startup funding
- Businesses can accept small contributions from citizens without making an IPO
CANADA
- Investment (equity) model not legal
- No national securities regulator (implemented at the provincial level, then coordinated through the CSA)
- Government wants to take a “wait-and-see” approach

Start-Up Equity Crowdfunding
This new model allows large numbers of “regular” people to invest small amounts online to fund early start-ups.

Debt-Based Crowdfunding
Sometimes called micro-financing or peer-to-peer (P2P) lending: start-ups borrow money from a number of people online and pay them back after the project is finished. This model of funding is not available in Canada.

Good-Cause Crowdfunding
People invest (donate) money in a project that has good moral/ethical value. Most companies using this model are not-for-profits.

Pre-Order Crowdfunding
Investors make online pledges to pre-buy the product for later delivery (if it is ever built). No financial return should be expected other than the product.

Rewards-Based Crowdfunding
Investors get the satisfaction of helping, and immediately get a pre-determined reward or perk of value, such as a t-shirt, or other recognition. This is a variation on the two previous models, but there is no equity or finished product.

Live It!
“When you’re an entrepreneur, you don’t have a job--at least not in the conventional sense of that word. You have a calling. And unlike a job, a calling defines you as a person. It’s who you are.” – Arlene Dickinson

- Convey your idea clearly and quickly
- Quality is much more important than length
- Match style to type of project

Campaign Length: 30-60 days
Target Amount: covers expenses; in line with expectations
Live Fundraising/Pitching: Meet & Greet / Demos
The trucking industry is one of the most important industries in the United States and has a significant impact on the nation’s economy. People who work in the trucking industry are hard-working and contribute a great deal to society. The daily activities of ordinary people go undisturbed because of the tireless effort of truck drivers. Truckers are responsible for moving goods and delivering them to their destinations. Every good that we buy is handled by the trucking industry. In the United States, about 8.7 million people hold trucking-related jobs. Apart from driving, the trucking industry includes fleet owners, operators, dispatchers, and many other roles.
How is the Trucking Industry connected with the Economy?
The trucking industry is connected both directly and indirectly with the nation’s economy. Ordinary consumers buy products from stores on a daily or weekly basis, and those goods are delivered by truckers. Truck drivers also move large volumes of goods and materials to ships, trains, and planes. Statistics say that more than $7 billion worth of goods is moved by the trucking industry.
Even with tough road conditions and climatic disasters, truck drivers manage to deliver goods on time. Because of their tireless and continuous work, everyday life goes undisturbed. Hence, whenever someone buys a product from a store, there is a hidden contribution from the truckers, who are in this way closely connected with the economy.
Truckers move things like food, waste, clothes, raw materials, gas, manufactured items, medicines, and more. They haul raw materials, such as the wood destined to become a chair, to the factory; once the chairs are made, they move them to the stores. And when you order a chair, a trucker again delivers it to your destination. So truckers are responsible for hauling everything from raw materials to finished goods.
Apart from moving goods on the road, truckers also contribute to road maintenance. They file the Heavy Vehicle Use Tax, or Form 2290, for qualifying heavy vehicles, and the taxes paid are used for the repair and maintenance of highway roads.
Truckers make our day
If the trucking industry were to shut down for even a day, it would have an impact on the nation’s economy. Truckers contribute directly and indirectly to everyone’s daily activities. If you are a trucker, be proud of yourself. And when you need more trucking tips and Form 2290 filing assistance, visit us at Tax2efile.com.
In this article
What is GST (Goods and Services Tax)?
GST is an indirect tax, introduced in India in July 2017. It is governed by the GST Council, which is chaired by India’s Finance Minister, Arun Jaitley. GST has slab rates of 0%, 5%, 12%, 18% and 28%. Along with these, there is a special rate of 0.25% on rough precious and semi-precious stones and 3% on gold. In addition, a cess on top of the 28% GST applies to specific products like aerated drinks, luxury cars and tobacco products.
GST is a multi-stage tax since it will be levied on each stage that the item goes through from the manufacturer to reach the final customer. For instance, the item goes through the following stages:
- Purchase of raw material
- Sale to retailer
- Sale to final customer
Now, GST will be levied on each of these stages and hence is a multi-stage tax. Similarly, GST will be levied on value additions, that is, monetary worth added at each stage. GST is also a destination-based tax which means that the tax revenue from a product produced in location A and sold in location B will go to location B.
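The multi-stage mechanics described above can be sketched numerically. The 18% rate and all prices below are hypothetical, chosen only to illustrate how input tax credit prevents tax-on-tax at each stage:

```python
# Illustrative multi-stage GST with input tax credit (ITC).
# The 18% rate and all prices are hypothetical, chosen only to show the mechanics.

GST_RATE = 0.18

def tax_remitted(sale_price, input_tax_credit):
    """Tax a stage actually remits: GST charged on its sale, minus credit
    for GST already paid on its inputs. Returns (net tax, output tax)."""
    output_tax = sale_price * GST_RATE
    return output_tax - input_tax_credit, output_tax

# Raw material bought for 100, manufacturer sells to retailer for 150,
# retailer sells to the final customer for 200.
input_tax = 100 * GST_RATE                       # 18 paid on raw material
mfr_net, mfr_out = tax_remitted(150, input_tax)  # remits 27 - 18 = 9
ret_net, ret_out = tax_remitted(200, mfr_out)    # remits 36 - 27 = 9

total_remitted = input_tax + mfr_net + ret_net
print(round(total_remitted, 2))  # 36.0
print(round(200 * GST_RATE, 2))  # 36.0 -> the consumer effectively bears GST
                                 # only on the final price, not tax on tax
```

Under the pre-GST cascading regime, each stage would instead have been taxed on a price that already included earlier taxes, which is exactly what the credit line removes here.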
The four-tier tax structure of GST, 5%, 12%, 18%, 28%, has lower rates for essential items and the highest for luxury goods. Service tax has gone up from 15% to 18% while essential items including food will be taxed at zero rates. The lowest rate, of 5%, is for common use items while ultra luxuries, demerit and sin goods attract a tax rate of 28%.
Impact of GST on Mutual Funds
The implementation of GST has caused temporary problems for various industries. In the long run, however, the impact of GST is expected to be positive. The following are the sectors and industries likely to be most affected by GST. These also happen to be the sectors in which mutual funds invest heavily.
Automobile and Transportation
The automobile industry in India is a vast business. Under the previous taxing regime, several taxes were applicable, such as excise, VAT, sales tax, road tax, and registration duty. Under GST, the tax burden on the end consumer has reduced. Importers and dealers are now eligible to claim GST paid on goods imported or sold, which was not possible previously. GST also helps manufacturers procure auto parts at a lower cost thanks to improved supply chain operations.
The stock impact has been expected to be positive for companies like Maruti Suzuki, Hero MotoCorp, Exide, and Mahindra & Mahindra. UTI Transportation and Logistics Fund is a sector fund that invests heavily in this sector. Many large cap funds also invest heavily in companies belonging to this sector.
The logistics sector comprises road transport, storage and warehousing, and third-party logistics. It has traditionally faced many problems, including complicated networks, high coordination costs, inefficient supply chains, deficient infrastructure, and entry taxes. The large number of taxes made the logistics process cumbersome and costly. GST, however, has replaced the multiple state VATs and removed the need to have a hub in every state. This has helped firms redesign supply chains, centralize hub operations, and take advantage of economies of scale. GST has also smoothed the inter-state trade process.
Stock impact has positive expectations for companies including Container Corporation of India and Adani SEZ, and a long-term positive impact is expected for Gujarat Pipavav Port.
FMCG is the fourth largest sector in the Indian economy. In some cases the tax rates under GST are higher than the previous rates, while in others they are lower. GST impacts the FMCG sector by readjusting tax brackets and reducing distribution costs. Some companies have gained while others have lost under the new tax regime.
The stock impact is expected to be positive for Hindustan Unilever, Emami, Godrej Consumer and negative for Titan, Bata and ITC. ICICI Prudential FMCG is a sector fund that invests in this sector.
Consumer durables are now being taxed at 28% which is slightly higher than the previous tax regime. Market analysts do not see any significant impact on the margins of consumer durable companies after the change in taxation regime.
The stock impact has positive expectations from Voltas, Havells, Crompton Greaves.
The GST rate on under-construction projects remains at 12%. The impact of GST on the real estate sector is limited to the cost structure and the input credit available.
Stocks of companies like Sobha, Brigade Enterprises, Oberoi Reality and Sunteck have positive expectations from GST implementation. HDFC mutual fund has launched an NFO for a close-ended fund HDFC Housing Opportunities Fund. This fund aims to invest in the real estate sector.
Travelling in business class is now more expensive since the tax rate has been increased to 12% from 9%. However, GST on economy class has been reduced by 1%, to 5%. Aviation fuel is not under the purview of GST, and therefore indirect tax still needs to be paid on it. The airline industry now has to pay both types of taxes, GST and indirect tax. Input tax credit is available only for economy class.
Lower tax rate on economy travel seems to be a positive for companies like InterGlobe Aviation, Jet Airways and SpiceJet.
What is Dual GST?
Most countries have a single unified tax system which means that a single tax will be applicable all around the country. However, in many countries, like Canada, Brazil, and now India, exists the concept of dual GST where tax is charged by both, Central and State government.
What is CGST, SGST, and IGST?
- Central Goods and Services Tax- CGST is a tax levied on intrastate supplies of goods and services by the central government and is governed by the CGST Act.
- State Goods and Services Tax- SGST is a tax levied on intrastate supplies of goods and services and is governed by SGST Act.
- Integrated Goods and Services Tax- IGST is a tax levied on all inter-state supplies of goods and services and is governed by the IGST Act. It is applicable to the supply of goods and services in both imports and exports.
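As a rough sketch of how these three components interact, the hypothetical function below splits a tax charge depending on whether a supply is intra-state or inter-state. The state names, value, and 18% rate are illustrative assumptions, not figures from the article:

```python
# Sketch: how a GST charge splits between CGST/SGST and IGST.
# States, value, and rate below are illustrative.

def split_gst(taxable_value, rate, supplier_state, place_of_supply):
    """Intra-state supply -> CGST + SGST (half each); inter-state -> IGST."""
    tax = round(taxable_value * rate, 2)
    if supplier_state == place_of_supply:
        return {"CGST": tax / 2, "SGST": tax / 2, "IGST": 0.0}
    return {"CGST": 0.0, "SGST": 0.0, "IGST": tax}

print(split_gst(1000, 0.18, "Karnataka", "Karnataka"))
# {'CGST': 90.0, 'SGST': 90.0, 'IGST': 0.0}
print(split_gst(1000, 0.18, "Karnataka", "Maharashtra"))
# {'CGST': 0.0, 'SGST': 0.0, 'IGST': 180.0}
```

Either way the buyer pays the same total tax; what changes is which government collects it.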
Advantages of GST
- Eliminating cascading tax effect– Before GST, several taxes were levied on the same product that led to an increase in the price of products. GST has eliminated the tax on tax effect.
- Product identification– The earlier classification of products into different categories caused a lot of confusion and led to litigation issues. GST aims to solve this issue by clearly defining product classifications as per international standards.
- One tax– Instead of several taxes being levied by state and central government, GST is now the only tax. It has replaced several hidden taxes and improved ease of doing business.
- Investment boost– As per GST, one can avail input tax credit on capital goods. This might lead to a surge in investments.
- Easy compliance– All compliances such as registration, returns, etc, have now to be done online which will make the process easier, hassle-free and transparent.
- Transparency and less corruption– There will be a significant reduction in corruption in the long run as all money spent needs to be reported for taxation purposes. Also, retailers are no longer able to make sales without generating a bill.
- Regulating the unorganized sector– Industries like construction and textiles are highly unregulated in India. GST has provisions for online compliance and for availing input credit only when the supplier has accepted the amount, thereby bringing accountability and regulation.
- Increased efficiency in logistics– Restrictions on inter-state movement of goods have been lessened and now the logistics sector can start consolidating several warehouses across the country. Reduction in unnecessary logistics costs has increased profits for businesses involved in the supply of goods.
Disadvantages of GST
- Change in business software- Most businesses used accounting software or ERPs for filing tax returns which already had excise, VAT, and service tax incorporated. Now the businesses need to change their ERPs or upgrade their software to make them GST compatible.
- GST compliances- SMEs are still not fully aware of the several compliances that need to be taken care of under the new tax regime.
- Increase in operating costs- Most SMEs have usually preferred to file and pay the taxes themselves instead of hiring a tax professional, in order to save costs. However, they now require hiring a professional to do their taxes as it is a completely new system.
- Online procedure- For an economy like India, which has relied on pen and paper for a long time, this shift to digital is massive. Many small businesses are not tech-savvy and do not have the resources for fully online compliance.
- Increased burden on manufacturing SMEs- Under the previous tax regime, only manufacturing businesses with a turnover exceeding Rs. 1.5 crore had to pay excise duty. Under GST, the turnover limit has been reduced to Rs. 20 lakh, increasing the tax burden for many manufacturing SMEs. However, SMEs with a turnover of up to Rs. 75 lakh can opt for the composition scheme and pay only 1% tax on turnover in lieu of GST, though these businesses then cannot claim any input tax credit.
- Mid-year policy change- The mid-year launch of GST will lead to problems in taxation and reporting during the end of the financial year. A lot of confusion could have been avoided had the policy change been done at the beginning of a new financial year.
Recent Changes in GST
After the 23rd GST council meeting, there have been some changes in the GST regime. These include the following-
- The list of items in the top 28% GST slab has been cut from 228 to 50. Only luxury and sin goods now remain in the highest tax bracket, and items of daily use have been shifted to 18%.
- All restaurants will now be levied the GST at 5%, without input tax credit benefits.
- GST on 13 items has been reduced from 18% to 12%.
- Tax on six items has been reduced to zero from 5 percent.
In short, GST has had a moderately negative impact on mutual funds. The impact is not large, but it does change things to some extent for mutual fund investments. However, the overall impact on India’s economy will be positive in the long run.
With GST and other reformative measures being implemented in India, the economy of the nation is poised to grow. One of the best ways to benefit from this growth is to invest in mutual funds. You can start investing in mutual funds using Groww. It is completely online and free of hassles.
Disclaimer: The views expressed here are those of the author. Mutual funds are subject to market risks. Please read the offer document before investing.
Net interest income
Taking deposits and lending money are the most basic functions of a bank. Banks usually charge higher interest on the money they lend than the interest they pay on deposits. The difference between interest earned and interest paid is called a bank’s net interest income.
Banks play a crucial role in mobilizing savings for productive investments. This forms the basis of economic growth.
Banks provide a number of other services in addition to lending and depositing money. For example, they provide credit and debit cards to their customers. Banks also charge fees for deposit services, processing loans, card services, and other services.
Banks also perform capital market activities such as underwriting, mergers and acquisitions advisory, market-making, research, and a host of other services. For all these services, banks charge certain fees. Income earned through fees and other charges is called non-interest income.
Four big banks – JP Morgan (JPM), Bank of America (BAC), Citigroup (C), and Wells Fargo (WFC) – are the key players in the US banking sector. They control 45% of the industry’s total assets. Together, these four banks form ~27% of the Financial Select Sector SPDR ETF (XLF).
Importance of interest income
The sum of net interest income and non-interest revenue is a bank’s net operating revenue. Expenses other than interest are deducted from net operating revenue to arrive at a bank’s net income.
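The roll-up just described can be written out with hypothetical figures; none of the numbers below come from an actual bank, they only make the arithmetic concrete:

```python
# Toy income-statement roll-up for a bank; all figures are hypothetical.

interest_earned  = 120.0  # interest on loans and securities
interest_paid    = 45.0   # interest on deposits and borrowings
non_interest_rev = 50.0   # fees, cards, advisory, trading, etc.
non_interest_exp = 80.0   # salaries, premises, technology, ...

net_interest_income   = interest_earned - interest_paid           # 75.0
net_operating_revenue = net_interest_income + non_interest_rev    # 125.0
net_income            = net_operating_revenue - non_interest_exp  # 45.0

# Share of operating revenue coming from net interest income
nii_share = net_interest_income / net_operating_revenue
print(net_income, round(nii_share, 2))  # 45.0 0.6
```

In this sketch net interest income supplies 60% of operating revenue, in line with the article’s observation that interest income typically contributes more than 60% of a bank’s total operating income.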
The chart above shows the breakdown of net operating revenues into net-interest and non-interest income for FDIC- (Federal Deposit Insurance Corporation) insured institutions in the United States. Interest income typically contributes more than 60% to a bank’s total operating income.
Thus, interest income plays a key role in a banks’ performance.
In this series, we’ll be looking at some key indicators that are poised to impact the US banking industry’s performance in the near future.
Best-selling author Michael Lewis’ latest book is called “The Undoing Project – A Friendship that Changed our Minds” which focuses on how we make decisions. Two transplanted Israeli psychologists named Daniel Kahneman and Amos Tversky partnered together for years and were acclaimed for their work in showing we are less rational decision-makers than we think we are, especially where risk is involved.
In short, we bring our biases into how we interpret data and probabilities, so we do not all see an issue the same way. Even more telling, we can be influenced by how a question is posed to us. Their analysis eventually led to a Nobel Prize in Economics, which was awarded to Kahneman after Tversky had passed away, because their work created a new field called “Behavioral Economics.” Their work also found converts in the practice of medicine, in public policy, and even in making NBA draft picks. They ask that people step back and question things: your biases may lead you to the most improbable cause or choice, so by questioning yourself and others you may find the most probable path forward.
The other key takeaway is the tremendous partnership these two had over the years. They were very different personalities, yet it was difficult for them to know who had contributed more to their work, and they often flipped a coin to decide whose name should go first on a paper. Their partnership was so seamless that it baffled people in the US, who tend to assume one partner must be the greater contributor. Tversky, being more outgoing and confident, was more easily, and incorrectly, thought of as the lead. Kahneman questioned everything even when he was far more right than wrong, so he came across as less confident. Ironically, it was his questioning that challenged Tversky to reconsider strong positions. They yin and yanged like an old married couple.
It would be difficult for me to define their work in such a short piece, so let me share some of their examples which may be illustrative. Their most famous piece is called “Prospect Theory: An Analysis of Decision under Risk.” If you were given two options where (1) gave you a 50% chance to win $1,000 and (2) provided a gift of $500, most people would pick (2) as a sure thing. Yet, if the question is reframed and the two options were (3) which gave you 50% chance to lose $1,000 and (4) which provided a sure loss of $500, most everyone would pick (3) the gamble.
As they dived further into questions like this, they discovered that people would regret giving up the sure gain, yet would take on risk to avoid a sure loss. When they altered the probability of winning or losing, the same pattern held, even when the odds of winning (or not losing) were much more favorable. They also learned that how the questions were framed made a huge difference.
Suppose an Asian disease is expected to kill 600 people and you must take one of the following actions. Which would you choose: Option (1), which would save 200 people, or Option (2), which has a 1/3 chance of saving all 600 and a 2/3 chance of saving none? Most people chose Option (1), saving 200 people. Yet when the question is framed as Option (3), under which 400 people would die, versus Option (4), under which there is a 1/3 chance none would die and a 2/3 chance all would die, most people chose Option (4). Yet it is the same question.
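One way to see why the framing result is striking is to check that the options are numerically equivalent in expectation. The probabilities and amounts below are taken from the examples above:

```python
# Expected values of the choices described above; the preferences people
# actually express diverge even though the expectations match.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# Gain frame: a 50% chance of $1,000 vs. a sure $500
print(expected_value([(0.5, 1000), (0.5, 0)]))  # 500.0
print(expected_value([(1.0, 500)]))             # 500.0

# Disease problem, 600 lives at stake: expected lives saved per option
print(expected_value([(1.0, 200)]))             # 200.0 (sure option)
print(expected_value([(1/3, 600), (2/3, 0)]))   # ~200.0 (gamble)
```

Since the expected values are identical within each pair, any systematic preference for one option over the other is driven by the framing, not by the arithmetic.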
Another key concept they introduced through study is “representativeness.” If you added information to a question, people would believe the greater accuracy meant they should choose that option. This would even be true if the information added was irrelevant or unimportant. In other words, if something is described in more detail than other options, it creates an information bias. They illustrated this to be true with experts in a field, as well as with laypersons.
Lewis uses the example of a medical doctor named Don Redelmeier, who embraced Kahneman and Tversky’s work. Redelmeier would question quick conclusions made by doctors under stress, where information bias took hold. A good example: a car accident left a woman with an irregular heartbeat even after treatment. The doctors hung their hat on the fact that she had a medical history of excess thyroid hormones and assumed that was causing the irregularity.
Yet, this was a remote probability. They were led down this path because of the extra piece of information. Redelmeier had them question this remote idea and look further. It turned out the more likely cause was indeed the reason for the irregular heartbeat – a collapsed lung from the accident. Because they had more information on a condition, they stopped looking for other causes that did not obviously surface.
I encourage you to read the book for the two reasons Lewis wrote it. It is more than just the work of Kahneman and Tversky on making decisions. It is also about how two different people can collaborate so successfully and be far more together than they were separately. They valued this partnership and made it work well for them and us.
Note: Lewis also wrote “The Blind Side,” “Moneyball,” “Liar’s Poker” and “The Big Short,” to name a few.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9431753754615784,
"language": "en",
"url": "https://wandilesihlobo.com/2020/05/12/breaking-new-ground-in-global-agriculture-post-covid/",
"token_count": 970,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.37109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:7d9eb53b-00a4-4cfb-b9f1-b19a11d79ed2>"
}
|
This essay first appeared on the Financial Mail, May 7, 2020
As the coronavirus continues to spread around the world, governments have intensified efforts to contain the pandemic by limiting the movement of people and temporarily shutting down parts of the economy. Though the full extent of the economic fallout remains unknown, the effects of the pandemic will probably be felt for years.
Within this malaise, the agricultural sector and food manufacturing value chain seem likely to be among those least affected, due to supportive consumer demand. But this doesn’t mean there won’t be long-lasting structural changes in the sector.
The potential changes will not emerge from SA but from Europe and the US. However, over time they will filter into the local agricultural sector.
In particular, the pandemic has exposed the dependence of countries including Germany, Italy, France and the Netherlands on foreign agricultural labour. As borders have been closed to contain the spread of the virus, these countries have been faced with a shortage of farmworkers.
It’s a challenge that extends to the US, where parts of the agricultural sector are reliant on seasonal labourers, mostly from Mexico. In fact, US farmers had already raised concerns about labour shortages prior to the Covid-19 shock. At that point, the US was running the risk of losing about 10% of its crop due to challenges in processing the so-called H-2A visa for temporary farmworkers from neighbouring countries. The pandemic will only aggravate the problem.
Farmers across the US and Europe worry that their crops may rot in their fields, a situation that would weigh on their finances and on food security.
The impact is not limited to primary agriculture. The US, Brazil and Canada — which accounted for nearly a third of global meat and edible offal exports in 2019 — have closed some of their meat processing plants in response to the spread of Covid-19. The closures have led to speculation about potential global meat shortages — resulting in US President Donald Trump ordering US meat processors to reopen, despite the health risks.
Though such challenges have financial consequences for the farming sectors in Europe and the US, they raise broader questions about the need for automation in the agricultural and agro-processing sectors.
Admittedly, automation will not necessarily be an easy step across all agricultural subsectors (horticulture, for example, is likely to remain labour-intensive). But where possible, and where there is capital available, technological diffusion is likely to accelerate.
Such a transition would start in the developed world. But it is set to pose a challenge for policymakers across Sub-Saharan Africa and other emerging markets, where agriculture is a large part of the economy or a potential driver of large-scale job creation.
Consider SA, for instance. The country’s overarching developmental policy framework, the National Development Plan, outlines a broad policy objective to increase employment in the agriculture and agro-processing sectors by roughly 1-million by 2030.
This is underpinned by the prospect of increasing the level of investment in the sector (including in irrigation), boosting agricultural productivity, expanding export markets, promoting labour-intensive agriculture subsectors, and, where feasible, increasing the area of land being farmed.
Fortunately, SA hasn’t faced a scarcity of farmworkers since the Covid-19 pandemic started to intensify. On the contrary, it is in the unique (and, in another sense, difficult) position of having a labour market with a large pool of unskilled and often unemployed workers. So agriculture is well placed to provide livelihoods to those struggling to enter the workforce.
This doesn’t mean that a permanent shift towards automation in parts of Europe and the US after the pandemic won’t spill over into SA. The domestic agricultural sector is well integrated into the global market, which means any changes in the world’s leading agricultural countries will, over time, be transferred to our market.
When the time comes for the post-Covid-19 recovery phase, SA policymakers and industry — and those in other developing countries — will have to pay close attention to the gravitation toward automation. They will have to assemble policies that ensure each country’s agriculture sector remains competitive by keeping up with technological changes. At the same time, they’ll need to ensure that the sector continues contributing to rural economic growth, which is vital for some of society’s most vulnerable.
Policymakers will need to ensure farmworkers are upskilled, and better prepared to complement any structural labour market changes that may arise in the agricultural sector.
Although the lightning-fast speeds of quantum computing will provide many benefits to insurance carriers, there’s also a big downside: increased cyber risk, Fitch Ratings noted in a recent briefing.
Late last month, Gerald Glombicki, director of Insurance for Fitch, noted that quantum computers are estimated to run 100 million times faster than current technology—speeds that will mean “seemingly unlimited long-term benefits” across industries.
But the processing power and speed that has the potential to boost research efforts, new product development and operating efficiencies also creates “the theoretical ability to undermine current encryption standards,” Glombicki added.
Encryption provides protection via highly complex mathematical formulas that cannot be solved at current computer speeds for many, many years, he explained, also noting encryption now serves “as the lynchpin protecting online commerce.”
That spells trouble for property/casualty insurers and reinsurers that sell cyber insurance coverage. In addition, all types of insurers—including life and health insurers—may be prime targets of cyber attacks, he suggested.
Insurers are viewed as rich targets for cyber attacks given the access to large volumes of personal healthcare and financial data.
Quantum computers could decrypt the complex formula behind encryption standards “in less than a day,” Glombicki wrote. “Thus, if quantum computers are first fully developed and made operational by ‘bad actor(s),’ the risk of compromises to current encryption is real,” he said, also outlining the more positive flip side—quantum computers being employed by friendly governments, major cloud providers or other friendly technology firms initially, so that they are used to enhance encryption instead.
Which will happen?
Notes Glombicki: “Industry pundits are mixed as to when the disruption to encryption will take place, with estimates ranging from imminent to 30 years out.”
The full discussion is available on Fitch’s website: “Fitch Rtgs: Quantum Computing a Potential Cyber Risk for US Insurance Cos.”
Source: Fitch Ratings
What Are Ordinary and Necessary Expenses (O & NE)?
Ordinary and necessary expenses are expenses incurred by individuals as the cost of owning a business or carrying on a trade. "Ordinary and necessary" expenses are categorized as such for income tax purposes, and these expenses are generally considered tax deductible in the year they are incurred.
These expenses are outlined in Section 162(a) of the Internal Revenue Code and must pass basic tests of relevance to business, as well as necessity. However, the IRS does not publish a compendium of what expenses can be considered ordinary and necessary to the pursuit of running a business or carrying on a trade, so it is the responsibility of the taxpayer to make this determination.
- O&NE are generally the expenses you incur as a cost of owning a business.
- Common ordinary and necessary expenses include business-related software for a computer or rental expenses.
- Portions of the home used for business are sometimes tax-deductible.
Understanding Ordinary and Necessary Expenses (O & NE)
This section of the tax code is the source of a large number of deductions by individuals, especially in years of transition between jobs or careers. Typical expenses that can be included in the "ordinary and necessary" group include a uniform for work or business-related software purchased for a home computer.
Startup costs associated with setting up a new business may also be tax deductible, but typically must be spread out over several years; these costs do not qualify as ordinary and necessary for IRS purposes but are instead usually deductible as capital expenses.
The IRS defines an "ordinary" expense as anything that is "common and accepted” to a specific trade or business. The IRS defines a "necessary" expense as anything that is "helpful and appropriate,” but not indispensable. Key examples of “ordinary and necessary” business expenses include:
- Employees Compensation: wages or salaries paid to employees for services rendered.
- Retirement Plans: money allocated to employee-sponsored retirement plans such as 401(k), 403(b), SIMPLE (Savings Incentive Match Plan for Employees), and SEP (Simplified Employee Pension) plans.
- Rental Expenses: money for a property a business owner leases but does not own. The rental expenditures are not deductible if the business owner receives equity in, or holds title to the property.
- Taxes: any local, state, federal or foreign taxes paid that are directly attributable to a trade or business.
- Interest: any interest expenses on money borrowed, to cover the costs of business activities.
- Insurance: any type of insurance acquired for a professional business.
In general, “ordinary” expenses refers to those that are commonly and typically used by people in your trade or industry. “Necessary” expenses refers to those expenses that are helpful and appropriate; necessary expenses must also be ordinary expenses in order to be tax deductible.
Business Use of Your Home
Business owners may be able to deduct expenses related to the portions of their homes that are allocated toward business use. These expenses may include utilities, mortgage interest, and repairs. But for business owners’ homes to qualify as deductions, they must prove their dwelling is their principal place of business—even if an individual conducts ancillary business at locations outside of the home. Furthermore, deductions for a home office are based on the percentage of a home that a business owner dedicates to business use. Consequently, individuals who operate out of the home are responsible for making this calculation.
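The percentage-of-home calculation described above can be sketched in a few lines. This is an illustrative example only: the expense figures and the square-footage apportionment method are assumptions for demonstration, not IRS guidance, and actual home-office deductions involve additional rules and limits.

```python
# Illustrative sketch of apportioning indirect home expenses by the share
# of the home dedicated to business use. Figures are made up; this is not
# tax advice or official IRS methodology.

def home_office_deduction(office_sqft, home_sqft, indirect_expenses):
    """Apportion indirect home expenses (utilities, mortgage interest,
    repairs) by the fraction of the home used for business."""
    business_pct = office_sqft / home_sqft
    return business_pct * sum(indirect_expenses.values())

expenses = {"utilities": 3600.0, "mortgage_interest": 9000.0, "repairs": 1400.0}
deduction = home_office_deduction(office_sqft=200, home_sqft=2000,
                                  indirect_expenses=expenses)
print(f"Business-use share: {200 / 2000:.0%}, deductible: ${deduction:,.2f}")
```

A 200 sq ft office in a 2,000 sq ft home yields a 10 percent business-use share, so 10 percent of each qualifying indirect expense would be apportioned to business use under this simple method.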
The Harmful and Hazardous Use of Alcohol Is a Serious Problem in the EU
The harmful and hazardous use of alcohol results in serious health, social and economic harms, and is the third-leading risk factor for death and disability in the European Union (EU) after tobacco and high blood pressure. Alcohol generates high costs to society; the cost of alcohol-related harms in the EU was estimated at around €125 billion in 2003, equivalent to 1.3 percent of GDP. Against this background, there is intense pan-European interest in developing and implementing measures to combat alcohol harms.
Evidence suggests that consumers respond to changes in alcohol prices, and increases in alcohol prices have been linked to reductions in consumption and positive health and social outcomes. We also know that price changes impact on what people drink or where they purchase their alcoholic beverages.
There are many types of pricing policies that governments have at their disposal to address alcohol harms. Taxes are one such policy, but others include restrictions on promotions and discounts, bans on below-cost sales and the introduction of minimum prices on a unit of alcohol.
However, there remain important gaps in our understanding of the various factors that affect how different pricing policy initiatives translate into actual price changes across the EU. At the same time, there is considerable opportunity to learn from the experiences of countries that implement various (non-tax) pricing policies.
This research aims to further our understanding of these issues by addressing the following specific questions:
- To what extent have alcohol tax changes been passed through to consumer prices?
- What are the trends in the ratio of on-premise to off-premise sales of alcoholic beverages? What factors may be driving these trends?
- What are the trends in the use of on- and off-trade alcohol price promotions and discounts?
- What is the regulatory landscape in the EU with reference to non-tax alcohol pricing policy, and what lessons can we learn from the diversity of regulatory experiences?
There Is Heterogeneity in Pass-Through in Different Countries, for Different Beverages and in Different Types of Premise
Extensive research has been conducted on the effect of changes in alcohol excise duties on alcohol consumption and harms. The mechanism by which taxation influences consumption is through its pass-through to prices. Pass-through refers to the extent to which taxes are passed through to the price the consumer pays. We estimated pass-through for four Member States that were able to provide relevant data: Finland, Ireland, Latvia and Slovenia. We performed regression analysis for beer and spirits taxes and prices for off-trade alcohol for each country, focusing on tax changes experienced in recent years. As we also obtained on-premise data from Ireland and Finland, we analysed pass-through in the on-trade in those two countries. We provide estimates of the change in real retail prices following a €1 increase in real excise duties. Full pass-through means that consumer prices change by the currency amount of the change in excise duty.
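The regression approach described above can be illustrated with a minimal sketch: regress changes in real retail prices on changes in real excise duty, so the slope estimates pass-through (1.0 means full pass-through). The monthly series below are invented for illustration, not the Finnish, Irish, Latvian or Slovenian data used in the study, and the study's actual specification may differ.

```python
# Minimal sketch of pass-through estimation: OLS of price changes on
# excise-duty changes. Data are made up; a slope above 1 indicates
# over-shifting (more than full pass-through).
import numpy as np

duty  = np.array([2.00, 2.00, 2.50, 2.50, 2.50, 3.00, 3.00])  # EUR/unit, real
price = np.array([8.10, 8.12, 8.71, 8.73, 8.70, 9.35, 9.33])  # EUR/unit, real

d_duty, d_price = np.diff(duty), np.diff(price)

# OLS with an intercept: beta[1] is the pass-through estimate
X = np.column_stack([np.ones_like(d_duty), d_duty])
beta, *_ = np.linalg.lstsq(X, d_price, rcond=None)
print(f"Estimated pass-through: {beta[1]:.2f}")
```

With these invented series the estimated slope comes out somewhat above 1, i.e. duty increases are more than fully passed on to consumers, as the report finds for some off-trade markets.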
We found there is less than full pass-through in Ireland and Finland for beer excise duties both in the on- and the off-trade, whereas they are more than fully passed through in the off-trade in Latvia and Slovenia (Figure 1).
Pass-Through for Beer in Ireland, Finland, Latvia and Slovenia
For spirits, the picture is more diverse. We find less than full pass-through in the on-trade in Finland and Ireland, but more than full pass-through in the off-trade in Finland and Latvia. Ireland's and Slovenia's off-trade sectors did not pass on the full amount of excise duty change to prices of spirits (Figure 2).
Pass-Through for Spirits in Ireland, Finland, Latvia and Slovenia
It is possible that factors such as market structure, consumer preferences, other pricing policies (e.g. price floors such as Ireland's Grocery Order) and alcohol-related policies (e.g. changes in drink-driving legislation) affect the extent to which excise duty changes are passed on to consumers. Therefore, it is difficult to predict with precision the effect of changes in excise duty. In view of this, it is useful for policymakers to assess carefully prior responses to excise duty changes in their countries and the other key changes occurring in that environment before implementing new changes.
There Is a Trend Towards More Off-Trade Alcohol Consumption in Many EU Member States
Research suggests that in Belgium, the Netherlands, Portugal, Scotland and other EU countries the share of on-trade alcohol consumption is decreasing relative to the off-trade. We obtained data from six EU countries (Finland, Germany, Ireland, Latvia, Slovenia and Spain) to examine this trend in more detail. In all six countries the ratio of off- to on-trade consumption went up for at least one type of alcoholic beverage during the observed period. The ratio of off- to on-trade consumption indicates the litres of alcohol that are consumed in the off-trade for every one litre of alcohol consumed in the on-trade. In four countries out of six, ratios went up for all beverages, as Table 1 indicates.
Ratio of Off- to On-Trade Consumption of Alcohol, by Beverage, in Six EU Countries, 1997–2010
This is the case even in Ireland and Spain, which had traditionally higher consumption of on-premise alcohol. In those countries in our sample with traditionally higher off-trade alcohol consumption (Finland and Germany) the proportion of alcohol sold through the off-trade has also been increasing relative to on-trade alcohol sales. Latvia and Slovenia, where off-trade consumption has been higher than on-trade consumption since at least the mid-1990s, exhibit stability in the ratio of on- and off-trade sales for selected beverages, an exception in our sample of six countries. The only instance of a decrease in the ratio of off- to on-trade consumption is for wine consumption in Slovenia.
Both Policy and Social and Economic Changes May Influence the Movement of Alcohol Consumption Between the On- and the Off-Trade Sectors
Lower off-trade alcohol prices, driven in part by growing competition in the supermarket sector (and, at least in some countries, possibly by cross-border purchasing), may be causing at least part of the shift. Preventive alcohol policies, as well as social, cultural, economic and demographic determinants, can also play a large role in the shift between on- and off-premise consumption of alcohol. In this study we conduct an exploratory analysis of the effect of a number of social, cultural, economic and demographic factors on alcohol consumption by premise. This is the first study we are aware of that attempts to analyse statistically the potential relationship between such a variety of determinants. Results suggest that population density, broadband penetration and GDP per capita are statistically significant factors. The relationship is positive for population density and broadband penetration, where increases in those factors are associated with relatively more consumption in the off-trade, whereas the relationship with GDP per capita is negative, so increases in wealth are associated with shifts towards on-trade consumption. The economic downturn experienced in Europe in the last few years may have influenced the observed trend towards increased off-trade consumption.
Alcohol Price Promotions and Discounts Are Prevalent in Many EU Member States
There is some informative research on the impact of off- and on-trade price promotions and discounts, although the evidence base is not well developed. Existing data on the extent of alcohol price promotions and discounts across the EU are limited. A few studies suggest that in France, Ireland, Latvia, the Netherlands, Poland and the UK, price promotions and discounts are common in both the off- and on-trade, and that they account for an increasing share of sales value in the off-trade.
Many Different Types of Non-Tax Pricing Regulations Are Used Across the EU, but We Know Little About Their Effectiveness in Reducing Alcohol Harms
The regulatory landscape in Europe is diverse, with most countries implementing at least one type of non-tax alcohol pricing regulation. Examples include off-trade retail monopolies (such as in Finland and Sweden), restrictions in off- and/or on-trade discounts and promotions (such as in parts of Germany and Spain), and bans on below-cost sales (such as the one recently abolished in Ireland). In theory, these policies should limit the availability of cheap alcohol; in fact, research shows that retail monopolies have been effective in curbing alcohol harms. However, in practice we know little about whether, and to what extent, the other policies actually achieve their aims. More research is needed in this area (focusing in part on implementation, enforcement and compliance) to assess which ones of these policies are promising and which ones should be improved.
In spite of extensive evidence that raising alcohol prices reduces alcohol consumption and harms, the real price of alcoholic beverages is decreasing across the EU. This trend has fuelled debate among policymakers, public health practitioners and other stakeholders across the EU about the opportunities, and challenges, of alcohol pricing policies. This study aims to contribute a robust evidence base to inform pricing policy in the region.
As alcohol-related harms continue to present a public health challenge across the EU, this study makes an important contribution to the evidence base on alcohol pricing policy. In addition to the findings from its own analysis, this report also makes a strong case for improved data collection in a number of key areas (such as alcohol prices by beverage and premise type, on- versus off-trade consumption, and the use of price promotions and discounts) that would enhance research and policymaking in the region.
We reviewed influences on alcohol prices and locations of alcohol purchases using a mixed-methods approach. Each research question required a particular approach.
Excise duty pass-through
In order to analyse pass-through, we obtained data on prices and excise duties from Finland, Ireland, Latvia and Slovenia. These were analysed by means of regression analysis to identify the relationship between excise duties and prices.
On- and Off-Premise Sales Trends
We obtained data from six EU countries (Finland, Germany, Ireland, Latvia, Slovenia and Spain) to examine the trend in off- and on-premise sales in more detail. We constructed a ratio of off- to on-premise sales volumes from 1997 to 2010. In order to explore potential factors influencing the off- and on-premise sales trends, we performed regression analysis of selected social and economic determinants of alcohol consumption that have been identified in the literature.
Promotions and Discounts Sales Trends
Existing data and research about the extent of alcohol price promotions and discounts across the EU are limited. Nevertheless, we obtained data on the volume of alcohol sales through discounters (supermarkets selling mostly own-brand products or major brands at discounted prices) as an indication of trends in the retail of discounted alcohol in a small sample of EU countries. We also collected further data and information on alcohol retail practices and pricing regulations across the EU by means of an online survey of experts and policymakers, and interviews with key informants representing 23 national authorities and economic operators across ten Member States.
Alcohol Pricing Regulations
In collaboration with the European Commission Directorate General for Health and Consumers, we identified five regulations seen as of particular interest for more in-depth analysis. Research towards these case studies of non-tax pricing regulations included a review of relevant documents and materials, and key informant interviews.
As with any research endeavour, there are limitations to the findings. The main constraints in this research are related to data. Analysis of pass-through required mean prices by beverage for at least one month and monthly price indices. Despite searches and requests for this data from Member States with potentially enough changes in excise duty to identify the pass-through relationship, we obtained data for only four countries. For the overall assessment across countries, improved accuracy and a fuller picture for the range of pass-through could be achieved with data from more countries.
In order to construct the ratio of on- to off-premise sales, data need to be purchased as publicly available information is not available. Resources for this study only allowed for purchase of data on six countries and, again, a more comprehensive picture of the situation across Member States could be made with more data.
Responses to our online survey of EU alcohol experts and government representatives were limited. In order to improve our understanding of the nature and extent of alcohol price promotions and discounts, more systematic (and comparable) efforts to collect information are needed across the Member States. Finally, while there are numerous examples of non-tax price regulations across the EU, research on their effectiveness is scarce. Further research on this is desirable for countries to be able to learn from each other's good practice and use robust evidence as they develop approaches to tackling alcohol harm.
As we think about tax policy proposals designed to target specific or narrow groups of individuals, it’s important to understand who shoulders the burden of income taxes under the current system. The federal income tax system is already progressive, with high-income taxpayers paying a larger share of the tax burden under higher average tax rates than lower- and middle-income taxpayers.
Data from the Internal Revenue Service (IRS) shows us who pays federal income taxes. As illustrated below, higher-income taxpayers are responsible for paying a significantly higher share of the tax, and this trend has increased over the past three decades. For instance, in 2016, the top 1 percent of taxpayers paid about 37 percent of federal income taxes, more than twelve times the tax burden of the bottom half of taxpayers.
We see a similar trend when looking at other income percentiles. The bottom 90 percent of taxpayers accounted for about 45 percent of the overall tax burden in 1986, compared to approximately 31 percent in 2016. Conversely, the top 10 percent of taxpayers have seen an increase in their tax burden over the same period, from 55 percent of total income taxes in 1986 to almost 70 percent in 2016.
A new report from Clean Energy Canada says Ottawa should develop a plan to help heavy industry reduce its emissions while boosting its ability to supply the world’s growing demand for valuable materials used in clean technologies.
The paper, released on Wednesday by the think tank at B.C.’s Simon Fraser University, says turning heavy industries into clean-tech ones is the “next frontier” in the fight against climate change.
The metal, steel, mining, and chemical sectors are well-positioned to take advantage of the “green economy super-cycle,” a predicted spike in demand and prices for materials used in batteries, electric vehicles, and solar panels, the report says.
At the same time, these energy-intensive sectors must cut their emissions if Canada is to meet its 2050 net-zero emissions targets. Excluding oil and gas, heavy industry represents 11 per cent of Canada’s greenhouse-gas emissions.
Ottawa doesn’t currently have a clean-growth strategy for heavy industry, but the think tank says one could work in tandem with carbon pricing to approach economic growth in a more targeted, deliberate way.
Canada often debates the future of its oil and gas sector, but argues less about other industries that will be around for good, said Sarah Petrevan, Clean Energy Canada’s policy director.
“We need to spend more time focusing on the industries we will need in a net-zero future,” she said. “How do we help them through the transition, and make sure that Canada remains economically competitive?”
Reducing industrial emissions, such as through carbon-capture techniques and using cleaner fuel, could woo more international players into buying much-sought-after minerals and materials from environmentally sound Canadian companies.
Such players include the U.S. government, where Canada is seeking specific exemptions to “Buy American” policies, including for green energy.
The Biden administration is already eyeing Canada’s minerals and metals for American clean tech, the report notes.
“If we want there to be an asterisk besides Canada, we have to adapt to what this administration … wants more of, which is clean energy and low-carbon goods,” Petrevan said.
The global production of metals and minerals could increase by up to 500 per cent over the next 30 years, in order to meet the growing demand for clean-tech products, according to the World Bank. Worldwide demand for steel is also projected to rise by up to 55 per cent.
Canada is one of the world’s top producers of cobalt, aluminum, graphite, nickel, and copper — minerals used in electric vehicles. Canadian heavy-industry firms have, on average, a smaller carbon footprint than overseas competitors, making them more appealing for foreign entities to do business with.
In addition to the U.S., the European Union, China, and the U.K. could be big buyers of Canadian materials used in clean tech, Petrevan said.
Industrial facilities like cement plants are currently subject to Canada’s output-based carbon-pricing system. Companies also benefit from a patchwork of government programs that promote decarbonization.
Heavy industry must further decarbonize, and a national strategy can map out what governments should do next, Petrevan said. The EU and the U.K. already have climate strategies for such sectors.
An action plan should include a “buy clean” approach to federal procurement, ensuring that infrastructure is built with low-carbon materials, and encouraging other levels of government to do the same.
Canada should also adjust tax incentives to encourage investment in clean tech, and encourage more private-sector investment. Ottawa can also promote its clean-economy exports through trade missions and technology demonstrations, Petrevan said.
A blockchain is a data structure in which blocks containing transaction information are linked sequentially from back to front. It can be stored as a flat file (a file of records with no relational structure) or in a simple database. The Bitcoin Core client uses Google’s LevelDB database to store blockchain metadata. Blocks are linked in this chain in order from back to front, with each block pointing to its predecessor.
The blockchain is often visualized as a vertical stack, with the first block at the bottom of the stack and each subsequent block placed on top of the previous one. Once we picture blocks stacked in sequence this way, we can use terms such as “height” to indicate a block’s distance from the first block, and “top” or “tip” to refer to the most recently added block.
Each block is identified by a hash value, generated by applying the SHA256 cryptographic hash algorithm to its block header. At the same time, each block references its predecessor (its parent block) through the “parent block hash value” field in its own header. In other words, each block header contains the hash value of its parent block. This sequence of hashes, linking each block to its parent, creates a chain reaching all the way back to the first block ever created (the genesis block).
Since the block header contains the “parent block hash value” field, the hash value of the current block is also affected by this field. If the identity of the parent block changes, the identity of the child block will also change. When there is any change in the parent block, the hash value of the parent block also changes. A change in the hash value of the parent block will force the “parent block hash value” field of the child block to change, which in turn will cause the hash value of the child block to change. The change of the hash value of the child block will force the “parent block hash value” field of the grandchild block to change, which in turn will change the hash value of the grandchild block, and so on.
Once a block has many generations of blocks following it, this cascade effect ensures that the block cannot be changed without forcing a recalculation of all subsequent blocks. Because such a recalculation would require an enormous amount of computation, the existence of a long chain of blocks makes the blockchain’s deep history effectively immutable, which is a key feature of Bitcoin’s security.
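The chaining and tamper-evidence described above can be demonstrated with a toy hash chain. This is a simplified sketch, not Bitcoin’s actual 80-byte block-header format: each “block” here is just a parent hash plus a transaction string.

```python
# Toy hash chain: each block stores the SHA256 hash of its parent, so
# editing any block changes its hash and breaks the link to every
# descendant. Simplified sketch, not Bitcoin's real header format.
import hashlib

def block_hash(block):
    data = f"{block['parent_hash']}|{block['transactions']}"
    return hashlib.sha256(data.encode()).hexdigest()

# Build a three-block chain starting from a genesis block.
chain = [{"parent_hash": "0" * 64, "transactions": "genesis"}]
for txs in ("alice->bob:1", "bob->carol:2"):
    chain.append({"parent_hash": block_hash(chain[-1]), "transactions": txs})

def validate(chain):
    """Check that every block's parent_hash matches its parent's hash."""
    return all(chain[i]["parent_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(validate(chain))                      # True
chain[1]["transactions"] = "alice->bob:99"  # tamper with block 1
print(validate(chain))                      # False: block 2's link breaks
```

Changing one transaction in block 1 changes block 1’s hash, so block 2’s stored parent hash no longer matches; repairing the chain would require recomputing every later block, which is exactly the cascade effect described above.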
You can think of the blockchain as the layers in a geological formation. The surface layers may change with the seasons, or even be blown away by the wind before they settle. But the deeper you go, the more stable the layers become. At a depth of a few hundred feet, you find rock strata that have lain undisturbed for millions of years. In the blockchain, the most recent blocks may still be modified if a fork forces a recalculation. The latest six blocks are like a few inches of topsoil. Beyond those six blocks, however, the deeper a block sits in the chain, the less likely it is to ever change. After 100 blocks, the chain is stable enough that coinbase transactions (the transactions containing newly mined bitcoin) can be spent. After a few thousand blocks (about a month), the blockchain becomes settled history that will never change.
In the Bitcoin system, the Bitcoin full node saves a local copy of the blockchain from the genesis block. The blocks are connected in the form of chains by referring to the hash value of the block header of the parent block.
Given the concepts of transactions, blocks and block linking, we could construct a blockchain system simply by treating the blockchain as a database: use the transaction data structure to store data (checks, blog posts, etc.), package transactions into blocks, and connect blocks through their hashes. This is what we usually call a blockchain.
But a blockchain designed this way is just a database; nothing guarantees that its data cannot be tampered with. A blockchain without a consensus mechanism is merely a database: a true blockchain must be supported by a consensus mechanism. In Bitcoin, that consensus is secured through mining.
In the first chapter, we saw that digital currency had more than ten years of development history before Bitcoin was born, including e-Cash, HashCash, B-money and other related schemes. Despite this long development, no one had produced a workable decentralized digital currency system. Since digital currency is just a string of characters, the cost of copying it is very low, so the central difficulty in implementing a decentralized electronic currency is preventing the same unit of currency from being spent twice or more, a problem commonly known as double-spending.
In the Bitcoin system, mining is the process that increases the supply of bitcoin while also protecting the security of the system, preventing fraudulent transactions and avoiding double-spending (spending the same bitcoin more than once).
Miners provide computing power to the Bitcoin network in exchange for the chance to earn bitcoin rewards. They verify each new transaction and record it in the shared ledger. Roughly every 10 minutes, a new block is "mined"; it contains all the transactions that occurred since the previous block, and those transactions are appended to the blockchain in turn. Transactions included in a block and added to the blockchain are said to be "confirmed," and once a transaction is confirmed, its new owner can spend the bitcoins received in it.
Miners receive two types of rewards during the mining process: new currency rewards for creating new blocks and transaction fees for transactions contained in the blocks. In order to get these rewards, miners are vying to complete a mathematical puzzle based on cryptographic hashing algorithms. The answers to these puzzles are included in the new block as a proof of the miner’s computational workload, which is called “Proof of Work.” The competition mechanism of the algorithm and the mechanism by which the winner has the right to trade on the blockchain are the cornerstones of Bitcoin security.
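The "mathematical puzzle" is, in essence, a brute-force search for a nonce that makes the block's hash meet a target. A minimal Python sketch — using a leading-hex-zeros prefix as a stand-in for Bitcoin's full 256-bit target comparison:

```python
import hashlib

def mine(header: str, zero_hex_digits: int = 4) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 hash starts with the required
    number of hex zeros — a toy version of proof of work."""
    target_prefix = "0" * zero_hex_digits
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block header data")
print(digest[:8])  # begins with "0000"
```

Finding such a nonce takes many hash attempts on average, but anyone can verify the answer with a single hash — which is what makes the nonce a compact proof of the miner's expended work.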
The term “mining” is somewhat misleading. It is easy to cause the association of precious metal mining, so that our attention is focused on the reward generated by each new block. Although the reward brought by mining is an incentive, its main purpose is not the reward itself or the generation of new coins.
If you think of mining only as the production of new coins, you are mistaking the means (the incentive) for the end. Mining is really a decentralized clearinghouse process, in which transactions are verified and settled without any central party.
Mining protects the security of the Bitcoin system and enables the entire Bitcoin network to reach a consensus without a central organization.
P2P network and nodes
Bitcoin uses an Internet-based P2P network architecture in which all nodes are equal: they provide services collectively, and no node is special. There are no servers, no centralized services, and no hierarchy. P2P networks are inherently scalable, decentralized, and open. Bitcoin was designed as a peer-to-peer digital cash system, and its network architecture is both a reflection of that core feature and its cornerstone: decentralized control, the central principle of the design, can only be achieved by maintaining a flat, decentralized P2P consensus network.
Although the nodes in a P2P network are peers, they take on different divisions of labor depending on the functions they run. In the Bitcoin system, a node is a collection of four functions — routing, blockchain database, mining, and wallet services — and a full node includes all of them: a wallet, a miner, a complete copy of the blockchain, and network routing.
Bitcoin is the first decentralized cryptocurrency, or “the original cryptocurrency”. Satoshi Nakamoto conceptualized Bitcoin in 2008. A vibrant community of enthusiasts from diverse backgrounds grew organically around this concept. And in 2009, they implemented and released Bitcoin into the real world!
Whether you support it or not, Bitcoin has certainly ushered in a new era in the progression of human civilization in the early 21st century. Thus, it is comparable to the invention of the World Wide Web in the late 20th century and to the development of Electricity in the 19th century. We can even compare it to the Steam Engine that sparked the First Industrial Revolution back in the 18th century!
It has become trendy to downplay the powers of Bitcoin. People find it easy to dismiss Bitcoin because of some tricky problems associated with it. As a result, only its underlying Blockchain technology is upheld as the real innovation.
Undoubtedly, Blockchain is a powerful combination of preexisting ideas from computer science and cryptography. It continues to find endless applications every day. But Bitcoin can stand tall on its own. It is a game-changer to our economic, banking, business, and financial sectors!
Global society will likely take its time deciding whether Bitcoin is just a currency or an entire monetary system. But it is already clear that Bitcoin, the thousands of altcoins that followed it, and the plethora of decentralized apps are growing together into a vibrant ecosystem.
Communities and societies will gradually find practical and optimal ways of adopting this Greater Bitcoin ecosystem into their everyday lives and businesses. Here, we shall look both at its solid powers and the tricky problems around it.
The 5 Solid Powers of Bitcoin:
1. Rules, not Rulers, determine the issuance of Bitcoin
Human greed is like water: everywhere it can go, it will go! But the designers of Bitcoin have made it “water-tight” and “water-proof”, or rather “greed-proof”. They achieved this by designing Bitcoin’s money supply purely using mathematical and algorithmic rules. The Bitcoin Blockchain implementation has ensured to bake into its source code the rate of creation and the maximum size of the Bitcoin money supply.
History doesn't have to repeat itself!
In the past, rulers have debased their currencies again and again. At times by adding cheaper metals to gold and silver coins. And sometimes by printing a disproportionate amount of paper money to fund enormous government spending during wars. But no ruler can over-print and debase Bitcoin.
2. Decentralization: Let the Markets optimize the prices organically!
In the mainstream economy, the Government, via its Central Bank, seeks to artificially control and fix prices, rates and taxes. This government intervention in markets distorts the natural prices that would emerge if we allow real supply and demand to function untethered.
With Bitcoin, a natural market emerges from the bottom-up organically, as an emergent property of the distributed global community of miners, exchanges, merchants, and consumers. No central authority can meddle with the functioning of this complex, self-organizing system.
3. Truly International and Fast Settlement Time
Traditional fiat currencies — the national currencies of various nations — must be converted between one another at prevailing foreign-exchange rates. The result is a complicated system held together, as if by duct tape, by progressively linking assorted database systems with ad hoc or outdated techniques.
Duct-tape Complicated Systems
It involves lots of middle-men in the form of banks, settlement agencies and payment channels. This increases the number of hops required to complete a transaction. At each step, a middleman charges some fee, and also delays the transfer. Thus, it is an unduly complicated system comprising self-serving players.
Well-designed, self-organizing complex systems
Instead, Bitcoin has been designed to be truly international. It is native to the digital, networked topology of the Internet. It does not belong to any country. 1 Bitcoin = 1 Bitcoin anywhere in the world.
It usually takes several days or even weeks for International Banks working with one another using their traditional systems to settle transactions. But it only takes 10 minutes on average to fully complete a Bitcoin transaction.
To top that, layer-2 implementations like the Lightning Network are achieving near-instant settlement times for small and medium-sized transactions on a global scale. Layer-2 implementation means additional features and services coded on top of the preexisting Layer-1 Bitcoin Blockchain architecture.
4. Bitcoin’s Difficulty Adjustment mechanism makes it better at being Gold than Gold itself!
Gold's power as a store of value lies in its permanence over time, its scarcity, its essentially fixed supply, and the difficulty of mining it.
The Bitcoin Blockchain system issues a fixed, predetermined amount of new Bitcoin whenever the miners make a new block available to the network for recording transactions. We refer to this activity as Bitcoin "mining". This happens every 10 minutes on average, determined by the logic in the source code.
Three factors maintain foolproof control over the mining of new bitcoins:
1) Halving Event
Factor 1 is the halving event. Every 4 years (every 210,000 blocks), the source code halves the number of bitcoins issued per block. The reward went from 50 to 25 in 2012, from 25 to 12.5 in 2016, and from 12.5 to 6.25 in 2020.
2) Max Limit
Thanks to the halving events, the Bitcoin Blockchain system issues fewer and fewer bitcoins per block every 4 years. We will reach the max limit of 21 million bitcoins sometime in the 22nd century. This Max Limit is the Factor 2.
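Both the halving schedule and the 21-million cap fall out of a single right shift in the issuance logic. Here is a Python sketch mirroring the well-known published schedule (Bitcoin Core implements this in a function called `GetBlockSubsidy`; the sketch below is a simplified restatement, not the reference code):

```python
COIN = 100_000_000           # satoshis per bitcoin
HALVING_INTERVAL = 210_000   # blocks between halvings (~4 years)

def block_subsidy(height: int) -> int:
    """New satoshis issued to the miner of a block at this height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:       # shifting by 64+ bits would zero it out anyway
        return 0
    return (50 * COIN) >> halvings  # one right shift per halving era

print(block_subsidy(0) / COIN)        # 50.0
print(block_subsidy(210_000) / COIN)  # 25.0
print(block_subsidy(630_000) / COIN)  # 6.25 (the 2020 era)

# Summing every era's issuance stays just under the 21 million cap.
total = sum(HALVING_INTERVAL * block_subsidy(h * HALVING_INTERVAL)
            for h in range(64))
print(total / COIN)  # just under 21,000,000
```

Because each era issues half as many satoshis as the last, the total forms a geometric series that converges below 21 million — the max limit is a consequence of the halving rule, not a separate counter.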
3) Difficulty Adjustment
But it is Factor 3 that is the most powerful: difficulty adjustment. Suppose the price of gold rose to a million dollars per ounce. The greater the reward, the greater the competition: such a price would attract far more miners than usual, ready to invest in more mining rigs and more labor hours.
Thanks to the added collective ability and will power, the goldminers will collectively manage to dig out more gold than ever before. They will also go deeper and to unexplored locations. This increased mining activity, in turn, will drive up the supply of gold into the market. This increased supply will naturally push down the earlier astronomical price of Gold towards its normal range. And as usual, the market forces of supply and demand would have modulated human labor and commodity prices towards "equilibrium".
No Bitcoin "Gold Rush" Possible
But these forces of supply-demand cannot work on the supply of Bitcoins. First, Factor 2 already ensures a fixed upper limit on the total number of bitcoins. Besides, if the number of bitcoin miners and the processing capacity employed skyrockets, the difficulty of the cryptographic problem being cracked also goes up in proportion. This difficulty adjustment mechanism ensures that it will always take 10 minutes on average to crack the code and issue the next block.
What if the Miners go on strike or something?
Even if the number of miners and the available computing power were to fall drastically for some unexpected reason, the difficulty level would automatically drop as well. At such an "easy" difficulty level, a few laptops and roughly 10 minutes would suffice to mine the next block. Together with the previous two factors, this ingenious mechanism makes Bitcoin better at being gold than gold itself!
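Both directions of this self-correction come from one retargeting rule: every 2016 blocks, difficulty is rescaled by the ratio of the expected two-week timespan to the time those blocks actually took, with the adjustment clamped (as in Bitcoin) to at most a factor of 4 per retarget. A sketch of that rule, operating on difficulty directly rather than on the raw 256-bit target Bitcoin actually stores:

```python
RETARGET_INTERVAL = 2016            # blocks between difficulty adjustments
TARGET_TIMESPAN = 2016 * 10 * 60    # expected time for 2016 blocks: two weeks, in seconds

def retarget(old_difficulty: float, actual_timespan: int) -> float:
    """Rescale difficulty so the next interval takes ~2 weeks.
    The measured timespan is clamped to [T/4, 4T], limiting any
    single adjustment to a factor of 4."""
    actual = min(max(actual_timespan, TARGET_TIMESPAN // 4),
                 TARGET_TIMESPAN * 4)
    return old_difficulty * TARGET_TIMESPAN / actual

# Hashpower doubled -> blocks arrived in half the time -> difficulty doubles.
print(retarget(1000.0, TARGET_TIMESPAN // 2))   # 2000.0
# Miners vanish -> blocks took 10x too long -> cut is capped at 4x.
print(retarget(1000.0, TARGET_TIMESPAN * 10))   # 250.0
```

Whether hashpower floods in or drains away, the feedback loop steers block production back toward one block per 10 minutes on average.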
What happens after we finish mining all 21 million bitcoins?
We know that miners earn new bitcoins every time they mine a new block successfully. However, after hitting the Max Limit of 21 million bitcoins in 2140, no new bitcoins will be available to mine. Does the system shut down at that point?
Don't worry! Even after that, we can rest assured that the miners will continue to run the system. This is because they will still receive transaction fees for processing and validating the blockchain transactions.
5. Bitcoin is an attractor for cheap and clean energy generation
Bitcoin is often criticized for requiring massive amounts of computing power driven by fossil fuel energy. Firstly, as discussed in the previous point, there is no real necessity for expending huge computing power to run the Bitcoin Blockchain network. The currently massive computational expenditure is not a function of its source code or design.
Even a few hundred laptops could run the system, as the difficulty level will self-adjust to match the available computation power. The rise in computing power currently used in the Bitcoin network is attributable to human greed. But even such greed has been shown to invariably drive up competition, which in turn sparks innovation.
How does the Bitcoin Mining “gold rush” contribute to human society?
For one thing, it has driven up demand for more efficient and powerful processors and led to the construction of powerful data centers around the world. With deep learning and AI being applied to problem-solving in every walk of life, these mining facilities could find plenty of alternative, lucrative, and powerful uses. Human creativity, determination, and foresight will play a crucial role in putting this capacity to its best use. Perhaps the next big innovative idea will come from a reader of this blog? What do you think? 😉
Marching towards The End of Fossil Fuel Profitability
We can reasonably anticipate that fossil fuels will become scarcer and increasingly costly to extract over time. Then, the only way to continue running Bitcoin Miners at a profit will be to utilize clean and cheap sources of energy, such as solar farms, wind farms and hydro. It requires very little human presence on a day-to-day basis to run the bitcoin mining centers. We can construct the bitcoin mining "rigs" in far-flung places that have clean and cheap energy sources.
This is going to be really game-changing because until now, human settlements first arose near seashores and river beds for the convenience of geography, trading, and agriculture. Then, electricity had to be made available to these settlements, by generating it at the available source and then carrying it using wires to the preexisting settlements and communities.
With Bitcoin and its preference for cheap clean energy however, electricity and computational infrastructures will be the first to arise in new untouched locations. And only later, these areas of clean, cheap energy will attract new human settlements thanks to the market forces of supply, demand and pricing. This trend will create an organic, emergent mass migration of human beings to new areas with clean and cheap energy. Imagine this as a global, real-world algorithm in which entire human communities are getting attracted to cheap and clean energy pools! As we increase our use of clean energy at scale, we will drive down our impact on Climate Change. Is this Elon Musk's game plan? Is this why he is so supportive of Bitcoin?
The 5 Tricky Problems with Bitcoin (quick version):
Thanks to its libertarian and decentralized ethos, Bitcoin meets resistance from most of the central authorities that gatekeep mainstream economics and finance. And if a government discourages or outright bans it, much of the citizenry is effectively cut off from it — out of fear of legal action, and also because of limited technical savvy among the general public.
It always takes more time to politically and legally establish a new system, even if it is technically much superior to the extant systems. Most of the Governments are trying to regularize this system by implementing their own centralized rules and control measures over it. But doing so distorts and corrupts the original value and purpose of this powerful decentralized technology!
Growth of the dark web
The decentralized and anonymized nature of Bitcoin attracted people engaged in illegal or criminal activities. This includes unauthorized selling of illegal drugs, weapons, or money laundering.
Even though intelligence agencies can trace and curtail many such activities using cutting-edge data analysis and surveillance methods, the early history of criminal activities using Bitcoin has already given it a bad name among the law-abiding and ethical citizenry.
Market Speculation & Volatile Nature
While Bitcoin's design is intrinsically greed-proof, human greed finds other ways to manipulate any available asset. Bitcoin has become a speculative instrument for many, and frequent short-term buying and selling makes its price volatile relative to mainstream fiat currencies.
This can change only when a critical mass of users and investors realize that Bitcoin is less an alternative currency than, in many ways, a superior monetary, banking, and financial system. The main reason it is not yet used as such is government resistance. This kind of cultural and collective shift in imagination takes time to grow and find a foothold in different subcultures — until it finally hits critical mass, crosses the tipping point, and arrives in a big way overnight, visible to everyone!
Crypto hasn't become the default platform for day to day transactions
Because this "infrastructure rather than currency" aspect of Bitcoin, discussed in the previous point, is poorly understood by the general public, several confusions arise. Are we supposed to use Bitcoin as a speculative instrument for quick profit; as a medium of exchange in everyday transactions (i.e., as "money"); or as something far more valuable, to be held for the long term until the new system makes the old one obsolete by virtue of its superior design?
0% chance of recovery & refunds
Lastly, since bitcoins are encrypted virtual assets with no physical existence, if you forget or lose the private keys to a wallet, you — and everyone else — have lost access to those bitcoins forever. Lost coins become a stagnant, unused part of the total money supply, reducing the network's liquidity and producing tragic stories of fortunes lost to technical glitches or human error. A related problem is that blockchain transactions are irreversible: refunds are impossible, because the blockchain is a non-editable database.
We have been running our economy and society in a suboptimal manner for several decades. The culprit? The slow, costly, inefficient, corruptible, centralized architectures running our banking, finance and governance sectors at present. This has continued to cause frequent occurrences of systemic problems in our markets and lives.
While looking for answers to the 5 tricky problems of Bitcoin, we must remember the 5 solid powers of Bitcoin. We must learn to see Bitcoin not as a currency, but rather as the Bitcoin Blockchain Network. This ecosystem presents a superior, decentralized, citizen-driven, bottom-up, truly international, and natively digital infrastructure. It holds the promise of completely replacing the suboptimal, inhumane and outdated architectures and systems of the past. It surely must be looking forward to leading the entire human civilization towards progress and prosperity!
Updated on Wednesday, January 6, 2021
Routing numbers identify the bank location in which you opened your account, and are required to settle transactions. Typically, you can find your routing number at the bottom left-hand corner of your checks, though there are other ways to find it as well.
This article covers everything you need to know about your bank routing number, including how they differ from account numbers.
What is a routing number?
A routing number identifies the location of the bank’s branch where you opened your account. This number allows financial institutions, such as banks and credit unions, to trace where the money is coming from and where it’s going, so as to not confuse one bank with another. Along with your bank account number, a routing number is part of the information required for financial institutions to process direct deposits, checks, auto payments and wire transfers.
A routing number consists of nine digits and three components. The first four digits represent the Federal Reserve routing symbol. The next four digits identify an ABA institution. The last component (the ninth number) is the “check digit.” This single number is important because it is used to verify the authenticity of the routing number.
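The check digit can be verified with the standard ABA checksum: the nine digits are weighted 3, 7, 1, 3, 7, 1, 3, 7, 1 and the weighted sum must be divisible by 10. A sketch (the sample numbers below are chosen only to exercise the checksum, not to identify any particular bank):

```python
def is_valid_routing_number(rn: str) -> bool:
    """Validate a 9-digit ABA routing number with the 3-7-1 checksum."""
    if len(rn) != 9 or not rn.isdigit():
        return False
    d = [int(c) for c in rn]
    total = (3 * (d[0] + d[3] + d[6])
             + 7 * (d[1] + d[4] + d[7])
             + 1 * (d[2] + d[5] + d[8]))
    return total % 10 == 0

print(is_valid_routing_number("021000021"))  # True  (weighted sum 30)
print(is_valid_routing_number("021000022"))  # False (weighted sum 31)
```

This is why a single mistyped digit is almost always caught before a transfer is attempted: changing any one digit changes the weighted sum by a non-multiple of 10.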
Different types of routing numbers
Some banks may have different routing numbers for different types of transactions. For example, the routing number for direct deposits and automated clearing house (ACH) transfers may be different from the one used for wire transfers.
It's critical to find the right routing number for the type of transaction you intend to make. If you're not sure which number to use, contact your bank for help.
Routing number vs. account number
It is important to differentiate a routing number from an account number. An account number identifies your specific account. A routing number, on the other hand, identifies the bank that’s responsible for money going in and out of your account.
One bank may have multiple routing numbers, determined by factors like the region where the account is opened.
Routing Number vs. Account Number

| | What it identifies | Number of digits |
|---|---|---|
| Routing number | The location in which your account was opened | Nine digits |
| Account number | Your actual, personal account | Usually 10–12 digits |
Where to find the routing number
While you can easily figure out your routing number by calling your bank, there are a number of ways you can find the number on your own as well.
On a check
The routing number generally appears in the bottom left-hand corner of a check. It is the first set of numbers. The next set of numbers — just to the right of the routing number — is your account number, which is generally followed by the number of that specific check. These three components are usually separated by symbols, spaces or a combination of both.
Online
Banks often list their routing numbers on their websites. Typically, you can easily find your routing number after securely logging into your bank account online, either through your bank’s website or through its mobile app.
Once you’re logged on, you can typically find information like your account number, routing number and more through your online account dashboard.
On the ABA website
You may look up a routing number on the ABA website by inputting a bank’s name and its location. On the same website, you also may look up this number for another type of financial institution.
As we look to the future, we have identified long-term, large environmental and socio-economic shifts that could have a big impact on the water sector in England and Wales.
We know that climate change is going to have a significant impact on the water sector, though due to its inherently unpredictable nature, we do not know to what extent. There are trends towards warmer, drier weather, which could impact water security, particularly in the south-east. Yet, there is also a high likelihood of more extreme weather, meaning a greater risk of floods and droughts.
We also know that the population is growing. There are projections that suggest the UK population could grow by upwards of 10 million people over the next 20 to 30 years. Compounding this issue from our sector’s perspective is the fact that most are expected to live in the most water stressed areas, particularly the south-east of England.
As water becomes an increasingly scarce resource, and becomes more expensive to supply, there is a risk that affordability will also suffer. This could exacerbate the long-term squeeze on living standards associated with a rising cost of living, and falling real incomes.
The main way water companies have met demand in the past has been through supply-side measures – taking more water from the environment, and building infrastructure to store it. But this isn’t without its problems. Abstracting water from the environment can damage habitats, and building infrastructure is expensive.
Yet, the demand side could play an important role in securing water supplies, protecting the environment, and saving customers money. Over the past decade, we have seen a slight reduction in the amount of water that people use — from a historic high of 155 litres per person per day to 140 litres — but it needs to drop even more. Through this study, we are asking how big this role could be, and what change is needed to make it happen.
With a fifty-year time horizon, we can afford to look beyond the current constraints, to think about the deep reductions that consumers could make, if we all work together. As well as being an important resource for our future price reviews – and to an extent the one that’s gearing up now, we hope that this study provokes discussion in the sector.
The Importance of trade to US agriculture
WASHINGTON, D.C. — U.S. agriculture creates jobs and supports economic growth in rural America, and American agriculture depends on maintaining and increasing access to markets outside the U.S. Trade is vital to the success of our nation’s farmers and ranchers. More than 25 percent of all U.S. ag production ultimately goes to markets outside our borders.
While President Trump signed an executive order withdrawing our nation from the Trans-Pacific Partnership, we viewed TPP as a positive agreement for agriculture — one that would have added $4.4 billion annually to our struggling agriculture economy. With this decision, it is critical that the new administration begin work immediately to do all it can to develop new markets for U.S. agricultural goods and to protect and advance U.S. agricultural interests in the critical Asia-Pacific region.
American agriculture is virtually always a winner when trade agreements remove barriers to U.S. crop and livestock exports because we impose very few compared to other nations. We have much to gain through strong trade agreements. AFBF pledges to work with the administration to help ensure that American agriculture can compete on a level playing field in markets around the world. But we need the administration’s commitment to ensuring we do not lose the ground gained — whether in the Asia-Pacific, North America, Europe or other parts of the world.
This is why we believe it is also important to re-emphasize the provisions of the North American Free Trade Agreement with Canada and Mexico that have been beneficial for American agriculture. U.S. agricultural exports to Canada and Mexico have quadrupled from $8.9 billion in 1993 to over $38 billion today, due in large part to NAFTA. Any renegotiation of NAFTA must recognize the gainsachieved by American agriculture and assure that U.S. ag trade with Canada and Mexico remains strong. AFBF will work with the administration to remove remaining barriers that hamstring the ability of America’s farmers and ranchers to benefit from trading relationships with our important North American trading partners.❖
It is safe to say that in the development world, human capital development (HCD) is generally considered important for achieving sustained growth and sustainable results. What does it really mean to invest in people, and how can such an "investment" make a measurable impact? Human capital refers to the skills, ability, and efficiency of individuals (and thus also of companies and organizations) that contribute to their productivity and improve their competitiveness.
Why Invest in Human Capital Development?
“Achieving the [Millennium Development Goals] is about making core investments in infrastructure and human capital that enable poor people to join the global economy and … to make full use of infrastructure and human capital.” – Millennium Project, Investing in Development
HCD is considered both an integral part of the Millennium Development Goals (particularly those addressing education and community well-being) and an effective approach to achieving those goals. That’s why investment in human capital is seen as just as important as investing in physical infrastructure.
*See various reports on the Millennium Development Goals here.
Measuring the Impact of HCD Investment
While the importance of human capital development, particularly in the context of international development (but also for ensuring social well-being of countries and communities in general) is well recognized and forms a critical piece of global development approaches and strategies, measuring the ROI of investing in HCD is far from simple.
“Although the conceptual definition of human capital is clear, its measurement is difficult because it is practically impossible to observe individual skill, and even harder to design a metric that is comparable across individuals and countries.” – Hyun H. Son, ADB Economics Working Paper Series No. 225
With the understanding that finding an effective method for measuring human capital development is an important challenge within the field of international development today, there is a possible role that e-learning technologies and online training tools can play in assisting with this important challenge.
Benefits of Online Training for Human Capital Development
In addition to increased flexibility, one of the most exciting benefits of e-learning and online training is the enhanced ability for both trainers and trainees to gather, monitor, and analyze data, follow progress, and diagnose problems. By saving the time and resources that would otherwise be required to gather data on trainees’ knowledge retention and skills development, online training tools empower trainers and training providers to evaluate the effectiveness of their programs in a more concrete and consistent manner. A possible long-term and global outcome of this is a better way for the international aid and development community to track, measure, and improve on the ROI of their investment in human capital.
Of course, online tools will never replace the benefits of in-person learning for some of the critical areas of HCD, such as primary education for children, but when effectively utilized to support and complement existing capacity building efforts in international development, online training can unlock the immense potential of HCD, as it provides an important mechanism to justify – and therefore further encourage – investment in HCD.
Are there existing examples of online training programs in the field of international development that support this idea? If you have any best practices to share, or if you have any thoughts related to HCD, online training, and capacity building in the development field, please post your comments!
Five rural cities in Arizona are being powered by solar thanks to loans from the state’s Water Infrastructure Finance Authority.
Another three cities will be considered for low-interest authority loans to install solar in parts of their cities as well, all in an effort to support a more sustainable society while reducing electric bills.
Susan Craig, the authority’s communication director, said that over the past year, five of the nine loans they provided went to rural areas in Arizona. Two of the most recent projects were started in Bisbee and Douglas.
“[Arizona] is an ideal environment because we have so much sun and so it makes sense to take advantage of that opportunity to get your energy from solar,” said Melanie Ford, technical program supervisor. “It works very well here in our sunny environment and it’s also very cost effective.”
The authority is an independent state agency that works with municipalities to improve their drinking water, wastewater, wastewater reclamation and other water quality facilities and projects. The agency offers below-market interest rates on its loans.
The City of Douglas received a $1.3 million loan to install a 300-kilowatt solar system, according to an authority press release.
In July 2013 a $1.6 million loan went to the city of Bisbee to pay for the installation of a 400-kilowatt solar system to power its San Jose Wastewater Treatment Plant.
Thomas Klimek, Bisbee’s public works director, said the system will save the city $50,000 a year and reduce its electric bill by about 60 percent.
Klimek said that the only way this project could happen was with the Water Infrastructure Finance Authority, which also agreed to forgive $400,000 of the loan. This allowed Bisbee to increase the savings from the treatment plant and continue making energy efficient improvements.
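Klimek's figures allow a rough back-of-the-envelope check. The sketch below uses only the numbers cited in this story; the simple-payback calculation is illustrative, not the city's own accounting, and it ignores interest, inflation, and maintenance costs:

```python
# Figures cited in this story; the simple-payback math is illustrative only.
loan = 1_600_000          # WIFA loan to Bisbee ($)
forgiven = 400_000        # portion of the loan the authority agreed to forgive
annual_savings = 50_000   # yearly savings Klimek expects from the solar system

net_cost = loan - forgiven
simple_payback_years = net_cost / annual_savings  # ignores interest/inflation
print(f"simple payback: {simple_payback_years:.0f} years")  # 24 years
```

On those numbers the system pays for itself in roughly 24 years, which helps explain why the $400,000 in loan forgiveness mattered so much to making the project viable.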
Klimek said going solar reduces emissions, which puts less of an impact on the environment.
“It definitely reduces the carbon footprint,” Ford said, “because you’re not using as much electricity generated from power plants from fossil fuels.”
Klimek said the solar system should be completed by late December.
Whitney Burgoyne is a reporter with Arizona Sonora News, a service from the School of Journalism at the University of Arizona. You can contact her at [email protected]
One of the biggest drivers of mineral demand growth right now is the booming Electric Vehicle (EV) market and its insatiable appetite for an ocean of batteries. But there are ongoing concerns that a shortage of these key energy storage components will increasingly act as a limiting factor on potential EV production growth in the short term. And while a dearth of Gigafactories and other production facilities is at the root of the most immediate problems facing the EV industry, building new factories is an easily solved problem.
The real problems, however, arise out of strains located deeper in the supply chain. With a reliance on raw materials like Cobalt and Lithium, the possibility of an impending shortage of these key minerals is a major area of concern. Prices for battery metals have already been on the rise as EV production consumes an increasingly large proportion of global supply and, by as early as 2022, Cobalt supply is likely to go into deficit.
Seabed mining in the deep sea may uncover veritable ocean of Cobalt
One idea being floated around right now is the possibility of exploring the ocean floor as a source of cobalt. This would be achieved through recovering what are called polymetallic nodules. These polymetallic nodules are rich, concentrated sources of battery metals that can be recovered from the seabed without drilling or extensive excavation works.
Proponents of this method of Cobalt mining include Canadian company DeepGreen, which has already completed preliminary explorations in the Clarion Clipperton Zone (CCZ), located in international waters between Hawaii and Mexico. Here, working at depths of up to 5,500m below sea level, a production operation would rank as one of the deepest mining operations in the world; the current record is held by the Mponeng Gold Mine in South Africa at 4,000m.
Seabed mining is said to have numerous advantages over land-based mining
The companies and scientists promoting seabed mining as a viable alternative to land-based mining do not just point to the ocean as another potential location for mineral resources. More than just another source, the seabed is cited by proponents as being a more efficient, lower-impact way to extract mineral resources from the earth.
Amongst the most promising of its advantages over traditional land-based mining is the fact that polymetallic nodules are said to contain 100% usable mineral resources. This leads to more efficient processing and up to 99% less solid waste from the mineral extraction process. Other advantages cited include zero deforestation, dramatically reduced CO2 emissions, and zero pollution of surrounding rivers and/or the water table. However, not everyone is convinced by the idea of mining the seabeds.
Mineral resources on the seabed are for the “benefit of mankind as a whole”
While seabed mining certainly has its proponents, it is not without its fair share of resistance either. Currently Greenpeace—the most vocal of its opposers—is particularly concerned by the granting of licenses to private corporations for the development of resources held in trust for the common heritage of mankind.
This notion of common heritage resources, which was first mentioned in the 1954 Hague Convention and later reiterated in the 1958 UN conference on the Law of the Sea forms the basis for current regulations governing the exploitation of seabed resources in international waters. The essential thrust of this notion is that there should be some resources (like outer space) held in trust for the common benefit of all humans, and not for the advantage of any particular nation or corporation.
Thus, Greenpeace and other concerned parties are taking issue with the current granting of licenses to a handful of corporations, who must only seek the sponsorship of a small handful of sponsoring nations.
Further, there are concerns about the potentially thin capitalization of some miners and the interplay between a miner's home jurisdiction and its (potentially much smaller) sponsoring nations—nations that may be small and already reliant on foreign aid for the financing of their own necessities. This raises concerns that, in the event of any significant fallout from seabed mining activities, eventual liability holders would be unable to meet obligations or, worse yet, that no one would be left holding liability.
Regulatory and environmental risks for seabed miners
With regulations for the extraction of mineral resources from the seabed only finalized by the UNCLOS-mandated International Seabed Authority last year, it's still early days. It should also be noted that the Common Heritage of Mankind principle is still highly philosophical, despite its enactment in international conventions. This opens up a great unknown in the event of future legal challenges to seabed mining operations.
Purely academic work exploring the principle still raises more questions than it answers, and notes that common heritage considerations extend further than just the distribution of financial proceeds from resource extraction. Indeed, the potential scope of considerations encompassed is exceedingly broad, and includes such things as considerations for wider area environmental and economic Common Heritage impacts. Such a broadening of the scope is only further complicated by other developments, such as the current interest in the water column as a Common Heritage asset.
And while there are no clear acknowledgments from DeepGreen that such considerations are seen as existential threats, its emphasis on the “unproductive” nature of the CCZ water column throughout its communications is perhaps indicative of a prophylactic move. With sharp reductions in the Pacific Ocean biomass directly attributable to sustained fishing activities above sustainable yields, an expansion of the Common Heritage of Mankind to the water column could be problematic. If the results of environmental impact assessments indicate some impact on the broader water column, the column’s current “unproductive” state may not hold much water as a defense.
Is there a future for seabed mining?
Assuming proponents of seabed mining are at least somewhat accurate with their claims that it’s a cleaner alternative to land-based mining, it’s well within the realm of possibility that seabed mining will become an important source of battery metals. Indeed, momentum is already headed in this direction. And, given that, globally, political will is largely leaning towards adopting more environmentally sound policies, that may be sufficient to quell any serious threats to full-scale seabed operations.
Of course, much still remains to be seen, and threats from activism and legal action may not be dismissed as easily as in land-based mining where a single nation-state is involved. In the single-state scenario, the economic windfalls from mining act as persuasive motivators in getting past any problems; when those windfalls are concentrated among a mere handful of stakeholders, things may not pass as smoothly. DeepGreen is also still yet to finalize its environmental impact assessment, and mining news developments are moving quickly in the emerging deep-sea mining sector. Whatever the case, it will be an interesting project which, if all goes well, may just be the panacea needed to cure our impending battery metal woes.
DISCLAIMER: This article was written by a third party contributor and does not reflect the opinion of CAStocks, its management, staff or its associates. Please review our disclaimer for more information.
This article may include forward-looking statements. These forward-looking statements generally are identified by the words “believe,” “project,” “estimate,” “become,” “plan,” “will,” and similar expressions. These forward-looking statements involve known and unknown risks as well as uncertainties, including those discussed in the following cautionary statements and elsewhere in this article and on this site. Although the Company may believe that its expectations are based on reasonable assumptions, the actual results that the Company may achieve may differ materially from any forward-looking statements, which reflect the opinions of the management of the Company only as of the date hereof. Additionally, please make sure to read these important disclosures.
3.4 million used electric vehicle (EV) batteries are expected to hit the market by 2025, representing a cumulative capacity of 95 GWh, and potentially meaning just as much hazardous waste. Environmental regulations are growing more and more stringent, especially in Europe, and disposing of such amounts of used batteries is hardly conceivable, while recycling processes are not yet convincing from an ecological standpoint. However, developing business models supporting the extension of batteries' lifetime in second-life applications is a promising market: the European industry (car original equipment manufacturers/OEMs, battery producers and utilities) must seize the opportunity to maximize the residual value of these assets, while at the same time leveraging its environmental stance into a competitive advantage.
Old EV batteries: should they be re-used or recycled?
Recycling processes for industrial Li-ion batteries remain immature and expensive, and are not expected to take off for a while. While the cost of fully recycling a battery is falling towards €1 per kg (approx. €10 per kilowatt-hour), this is still approximately 3 times higher than what can be expected from selling the reclaimed materials on the market.
For instance, lithium cannot be recovered from smelting processes, instead ending up as a byproduct unfit for reuse in batteries. Additional processes can help reclaim lithium, but these are so costly that currently less than 3% of battery lithium is recycled. As a comparison, some sources claim that about 90% of a lead acid battery can be recycled for use in new batteries, almost in a closed loop. The design and production processes of Li-ion batteries make them harder and more expensive to recycle. Only the recovery of cobalt makes recycling Li-ion batteries just about economically interesting, but this raw material, used in some cathodes, is subject to strong upward and downward price fluctuations. Paradoxically, innovation in battery chemistry tends to make Li-ion battery recycling even less profitable, by aiming to reduce the share of high-value materials, such as cobalt, in their composition (precisely the materials that are economically interesting to retrieve).
The solution to absorb the mass of used batteries will probably not be found in recycling, at least in the short to mid-term. Developing a second life for these assets would instead maximize their value while giving the recycling sector more time to structure itself and find a profitable model.
A wide range of applications possible to turn used batteries into valuable assets
Although used EV batteries are no longer adapted to supplying energy to demanding engines like cars, most of them retain 50 to 90% of their capacity after their first life in a vehicle. The upfront cost of second-life batteries is attractive, even after factoring in upcoming cost reductions: the cost of a repurposed second-life battery is around $50/kWh, versus $200-300 for new build today, and should remain competitive at least until 2025, when the price of a new battery should reach $90/kWh.
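Those per-kilowatt-hour figures can be put on a comparable footing by normalizing for the capacity a used pack has lost. The sketch below uses the article's prices; the 70% remaining-capacity figure is an assumption picked from the 50-90% range quoted above:

```python
def cost_per_usable_kwh(price_per_nameplate_kwh, usable_fraction):
    """Upfront cost per kWh of capacity the pack can actually deliver."""
    return price_per_nameplate_kwh / usable_fraction

new_pack = cost_per_usable_kwh(250, 1.0)    # mid-range new-build price today
second_life = cost_per_usable_kwh(50, 0.7)  # assumed 70% capacity remaining
new_2025 = cost_per_usable_kwh(90, 1.0)     # projected new-build price in 2025

print(f"new today: ${new_pack:.0f}/usable kWh, "
      f"second life: ${second_life:.0f}/usable kWh, "
      f"new in 2025: ${new_2025:.0f}/usable kWh")
```

Even after adjusting for degradation, the used pack (around $71 per usable kWh under these assumptions) stays well below the projected 2025 new-build price.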
Less demanding applications than mobility, such as stationary uses, may constitute promising options to harvest the residual value of used EV batteries. In such applications, old batteries are expected to be able to provide services for about ten more years. The first generations of used EV batteries are already being tested for various purposes around the globe, such as managing peak demand or regulating grid frequency.
Several profiles of the players involved in this landscape can be identified:
- Car OEMs are particularly active in this field: as sellers of EVs, they either remain the owners of the batteries leased with their vehicles, or in any case are, according to EU regulations, responsible for collecting and recycling them. In this context, BMW set up a battery storage farm in Leipzig, relying on 700 new and second-life i3 battery packs. The facility has a capacity of 15MW, and provides storage capacity to local wind energy generation and grid balancing capability.
- Major utilities, often in partnership with car OEMs, are also more and more involved in second life battery projects. Trying to find the best combination between new and used batteries, they expect to benefit from grid services that could be provided at lower costs. With this objective in mind, Nissan partnered with EDF Energy to explore how second-life batteries can be used to support demand-side management.
- Smaller players, such as providers of residential and commercial storage, or portable storage, are also starting to develop projects at a smaller scale, often repacking old modules themselves into new products. For example, Powervault partnered up with Renault to turn old Renault Zoe battery packs into home storage systems, helping households cut electricity bills by more than a third.
Other projects are mushrooming and being tested around the world, but low volumes are preventing large-scale applications of second-life batteries from really taking off yet.
Some obstacles must be overcome to push the transformation of the batteries’ lifecycle
While working with key players active in the second life battery market, the following risks have often been mentioned:
- Cumbersome transportation regulation: due to stringent European regulations on dangerous goods and the lack of harmonization between countries, moving used or damaged battery packs across borders is logistically and administratively complicated. At current volumes, such logistical issues can make the cost of second life battery projects unsustainable.
- Unclear safety and environmental standards: the lack of perspective on aging properties, together with current safety regulations mostly adapted to new batteries, limits the number of possible applications, such as in-house residential storage. Ongoing work on the Ecodesign and Battery directives at the EU level should help clarify part of the first two points by 2020.
- Lack of data on the performance of different battery chemistries and designs: assessing the remaining capacity at the cell level remains difficult, as first-generation BMS (Battery Management Systems) have not been developed for close monitoring, nor do they provide a full technical data history of batteries at different levels.
- Nonexistent roadmap on recycling: uncertainty remains regarding the potential take-off of recycling as a main competitor to second-life battery projects.
The European battery industry can strongly benefit from collectively developing business models for second life batteries: maximizing the remaining value of their assets while setting foot on the EU battery market
Several conditions can be secured to tackle the challenge:
- Secure partnerships with local stakeholders dealing with the batteries' end of life (collectors, recycling companies, etc.) at the end of car lifetimes;
- Crack the logistics conundrum to centralize the collection and testing of used batteries, reach economies of scale, and reduce transportation & storage costs to a minimum
- Push the development of energy management systems that can combine first- and second-life batteries across different brands, chemistries and designs; in this regard, the use of artificial intelligence will play an important role
- Stay in close connection with battery manufacturers to understand their technological roadmap and anticipate disruptive innovation
As Asian players gain ground in the production of battery cells, with 60% of the world market in 2018, second-life applications could constitute a way of competing in the battery business by capturing the residual value of already amortized assets and limiting the need for cell imports to supply stationary storage projects. It is time to find the right balance between tackling the environmental imperative and developing sustainable business models for second-life battery projects, to make sure European battery players stand up in the world battery value chain.
Capgemini Invent activities on second life batteries
Connected to a strong ecosystem of industrial partners, Capgemini Invent Energy and Utilities team conducts studies on second life battery applications and supports major players in developing their projects. Our team can support you in designing the right project to make the most of used batteries, using an end-to-end approach.
To know more about Capgemini Invent projects on second life batteries, or to be assisted in your project, please contact [email protected]
About Capgemini INVENT
As the digital innovation, consulting and transformation brand of the Capgemini Group, Capgemini Invent helps CxOs envision and build what’s next for their organizations. Located in more than 30 offices and 22 creative studios around the world, its 6,000+ strong team combines strategy, technology, data science and creative design with deep industry expertise and insights, to develop new digital solutions and business models of the future.
Capgemini Invent is an integral part of Capgemini, a global leader in consulting, technology services and digital transformation. The Group is at the forefront of innovation to address the entire breadth of clients’ opportunities in the evolving world of cloud, digital and platforms. Building on its strong 50-year heritage and deep industry-specific expertise, Capgemini enables organizations to realize their business ambitions through an array of services from strategy to operations. Capgemini is driven by the conviction that the business value of technology comes from and through people. It is a multicultural company of over 200,000 team members in more than 40 countries. The Group reported 2018 global revenues of EUR 13.2 billion. People matter, results count.
Visit us at www.capgemini.com/invent
This document contains information that may be privileged or confidential and is the property of the Capgemini Group.
Copyright © 2019 Capgemini. All rights reserved.
Price display for goods
- Shops must display the price of goods they sell.
- You should be given clear and accurate information on the price of goods so you can compare your options.
- The price must be displayed in euro on or near the goods. A common way to display the price is on a shelf edge label (SEL). However, a shop can put a price sticker or label on the goods, or just have a price list near where the goods are displayed.
- The price displayed must include VAT.
- These rules also apply to goods sold on websites. The price must be displayed near the information about the goods on the website. Where there are additional charges, such as a delivery charge, information on these charges must also be made available to you on the website. Get more information about buying online
- The unit price is the final selling price in euro, including tax, for goods in the following measurements:
- one kilogramme
- one litre
- one metre
- When you are shopping for groceries, you will notice that most products will have a total selling price. However you may notice that some items have a unit price, such as fruit, vegetables and meat, and are sold by weight. Under consumer law, if an item is being sold by weight (either loosely or in a packet) the unit price must be displayed. It is designed to help you compare the cost of groceries that are sold by weight or volume.
- Many items are priced by weight and sold in a pack. In these cases the shop should give you both prices – the unit and pack selling price.
- Unit pricing allows you to compare the cost of similar products that are sold in different sized packs. Comparing prices this way can help you save money as you can see which one is the best value – regardless of the brand or the size of the pack. You might think that items sold in bigger packs will be better value but this is not always the case when you look at the unit price.
- Some goods are not sold by weight but by the number of items, for example, five bananas for €1. These do not need to be unit priced. However, they still need to have a selling price displayed.
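The comparison that unit pricing enables is simple division: pack price divided by quantity. A minimal sketch, using made-up shelf prices rather than real ones:

```python
# Hypothetical shelf prices -- for illustration only.
items = [
    ("Brand A rice, 500 g pack", 1.20, 0.5),  # (label, pack price €, size in kg)
    ("Brand B rice, 2 kg pack", 5.00, 2.0),
]

for label, pack_price, size_kg in items:
    unit_price = pack_price / size_kg  # € per kilogramme, VAT included
    print(f"{label}: €{pack_price:.2f} (€{unit_price:.2f}/kg)")
```

Here the bigger pack works out at €2.50/kg against €2.40/kg for the small one, illustrating the point above: the larger pack is not always the better value.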
Did you know? If a shop doesn’t have the equipment to print shelf edge labels or for point of sale scanning, then it does not have to display the unit price, only the selling price.
Prices must be displayed in euro but it is not against the law for shops to also display prices in other currencies such as sterling.
If a price is displayed in another currency, it doesn’t have to be a direct conversion of the euro price. Other currency prices displayed are usually the price you would pay if you bought the item in another country.
Did you know? A shop doesn’t have to accept payment in another currency, such as sterling, where both sterling and euro prices are displayed.
In general, there are no price controls in Ireland. This means that, in most cases, there is no minimum or maximum price for goods or services. This is to allow competition among businesses, and each sets their own prices for goods or services.
A shop is not breaking the law by charging more than their competitors. If you feel that you are not getting good value, then you should shop around for a better price.
Shops must display the full and final price of goods for sale in euro. The final price must include any taxes, such as Value Added Tax (VAT).
Under consumer law, a shop should give the price including any applicable VAT charges. However, for services like your phone and electricity bills, the VAT can legally be shown separately, as long as the total amount is clear.
Shops and businesses that sell goods to commercial customers, for example marked as ‘trade only’, are allowed to show prices that exclude VAT.
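In code terms, the consumer-facing shelf price is the VAT-inclusive figure. A minimal sketch follows; the 23% rate used here is Ireland's standard VAT rate and is an assumption for illustration, since the applicable rate varies by category of goods:

```python
STANDARD_VAT = 0.23  # assumed Irish standard rate; many goods use other rates

def shelf_price(net_price, vat_rate=STANDARD_VAT):
    """The full and final price a shop must display: net price plus VAT."""
    return round(net_price * (1 + vat_rate), 2)

print(shelf_price(100.00))  # a €100 net price is displayed as €123.00
```

A trade-only supplier could display `net_price` directly, while a consumer-facing shop must display the `shelf_price` result.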
Last updated on 19 August 2019
How does President Donald Trump’s proposed budget reach a balance in 10 years, as the administration says it will?
With 3 percent economic growth, says Mick Mulvaney, director of the White House Office of Management and Budget.
How does Trump plan to deeply cut taxes without reducing federal revenues?
Economic growth, says Steve Mnuchin, secretary of the treasury.
Wait a minute, say tax and budget experts, that’s double-counting the same money.
“The same money cannot be used twice,” said Maya MacGuineas, president of the Committee for a Responsible Federal Budget.
Mulvaney, May 23: It’s new in that it balances for the first time in at least 10 years. The last time we looked, we couldn’t find a President Obama budget that balanced ever. I think he tried a couple times to convince us that primary balance, which was balance without regard for interest payments on the debt, was balanced. We reject that. We get to an actual balance on this budget within the 10-year window.
In order to reach that goal, the White House assumes “sustained, 3 percent economic growth,” Mulvaney said. The nonpartisan Congressional Budget Office projects a more modest expansion of gross domestic product at an average annual rate of 1.9 percent during the second half of the next 10-year period. Mulvaney said that kind of growth would never allow the country to balance its budget.
Mulvaney, May 23: If you assume 1.9 percent growth, my guess is you’ll never see a balanced budget again. So we refuse to accept that that’s the new normal in this country. Three percent is the old normal. Three percent will be the new normal again under the Trump administration. And that is, part and parcel, one of the foundations of this budget.
In addition to assuming annual economic growth of 3 percent, the president’s budget “assumes deficit neutral tax reform, which the Administration will work closely with the Congress to enact.” We don’t know what kind of tax reform Congress might enact, if any, but when the administration last month outlined Trump’s tax plan, administration officials assured that despite large corporate and individual tax cuts, it would not add to the deficit.
At a press briefing on the tax plan last month, Mnuchin said the tax plan would be revenue-neutral, in part, because it would stimulate growth to bring in enough extra revenue to offset the cuts.
Mnuchin, April 26: This will pay for itself with growth and with reduced — reduction of different deductions and closing loopholes.
But budget experts say Trump can’t have it both ways. Either the growth pays for the tax cuts, or it pays for bringing the budget to balance. It can’t do both.
In a statement on Trump’s budget, Taxpayers for Common Sense President Ryan Alexander said, “These same growth projections are what the administration was counting on to pay for tax reform, but they’re not accounted for in here as such.”
MacGuineas, president of the Committee for a Responsible Federal Budget, noted in a press release the same “inconsistency.”
MacGuineas, May 22: The budget also uses the entirety of the dynamic revenue from growth to pay down the debt – a move that we support but that is inconsistent with their past statements that economic growth would help pay for tax reform. The same money cannot be used twice.
The Trump budget makes deep cuts to discretionary spending (offset some by increases to the military), but those cuts aren’t enough to balance the budget, Roberton Williams of the Tax Policy Center told us. Balancing the budget also would require growth to create additional revenue.
But you can’t assume growth will balance the budget and offset tax cuts, Williams said.
“Both of those are not plausible,” Williams said. “They are counting it twice.”
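The double count is easiest to see with stylized, hypothetical numbers (these are for illustration and are not figures from the budget):

```python
# Hypothetical figures, $ trillions over ten years -- illustration only.
growth_revenue = 2.0  # extra revenue the administration attributes to 3% growth
tax_cut_cost = 2.0    # static revenue lost to the tax cuts
deficit_gap = 2.0     # gap remaining after spending cuts, to be closed by growth

# Claim 1: growth revenue offsets the tax cuts ("revenue-neutral reform").
left_after_tax_reform = growth_revenue - tax_cut_cost  # 0.0 left over

# Claim 2: the *same* growth revenue also balances the budget.
left_after_balancing = left_after_tax_reform - deficit_gap
print(f"shortfall when the same revenue is counted twice: {left_after_balancing:+.1f}T")
```

One pool of growth revenue can offset the tax cuts or close the deficit, but not both.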
In a blog post for the Washington Post, Lawrence Summers, who served as treasury secretary under President Clinton and director of President Obama’s National Economic Council, called it “an elementary double count” and “the most egregious accounting error in a presidential budget in the nearly 40 years I have been tracking them.”
Summers, May 23: Apparently, the budget forecasts that U.S. economic growth will rise to 3.0 percent because of the administration’s policies — largely its tax cuts and perhaps also its regulatory policies. Fair enough if you believe in tooth fairies and ludicrous supply-side economics.
Then the administration asserts that it will propose revenue neutral tax cuts with the revenue neutrality coming in part because the tax cuts stimulate growth! This is an elementary double count. You can’t use the growth benefits of tax cuts once to justify an optimistic baseline and then again to claim that the tax cuts do not cost revenue. At least you cannot do so in a world of logic.
Asked about Summers’ claim of double-counting, Mulvaney said it was necessary to make assumptions about “a document that will look 10 years into the future.”
The administration could have assumed tax reform would be revenue neutral, or that it would reduce or add to the deficit. “Given the fact that we’re this early in the process about dealing with tax reform,” Mulvaney said, “we thought that assuming that middle road was the best way to do it.”
Mulvaney did not directly address the inconsistency between those two plans, but he went on to say that “one of the assumptions we didn’t make was that we didn’t close any of the tax gap.”
The tax gap is the difference between total taxes owed and the actual taxes paid on time. In 2016, Mulvaney said, that gap was $486 billion, “almost enough to close the deficit that year. And we don’t assume an additional penny of that being closed as part of our tax reform.” With a simpler tax code such as Trump has proposed, he said, it is reasonable to assume a reduction in the tax gap.
Williams, of the Tax Policy Center, doesn’t see anything in the Trump tax plan that would close the tax gap nearly enough to be able to offset the deficit.
“You’d need a big change to incentivize people from not hiding money,” Williams said. And if Trump administration officials really thought their plan would cut into the tax gap, Williams said, then they would have made it part of the budget plan.
Taxpayers for Common Sense also points out that while the Trump tax plan calls for abolishing the estate tax, the budget includes the revenue from that tax over the next 10 years anyway. In fact, the proposed budget says tax reform should “abolish the death tax, which penalizes farmers and small business owners who want to pass their family enterprises on to their children.” And yet, TCS notes, the proposed budget includes $328 billion in revenue from the estate tax between 2018 and 2027, the very same amount under “current law.” In other words, the budget counts the estate tax revenue while arguing for its demise.
Form 16 is a document all of us wait for once we are in service, to file our returns. But have you ever wondered what Form 16 is and what it tells about the taxpayer?
What is the reason to issue Form 16, and what is its importance?
According to Section 203 of the Income Tax Act, 1961, Form 16 is a document or certificate issued to salaried professionals in India by their respective employers. Form 16 is also issued to pensioners retired from central government service.
Form 16 contains all the information you need to prepare and file your income tax return. It is also called a "salary certificate", and it contains all the details about the salary paid by the organisation in a particular financial year and the income tax deducted from the salary of the individual taxpayer.
Form 16 must be issued annually by an employer to employees; the last date to provide Form 16 is 15 June (for the year 2019 it is 15 July, as extended by the Government).
Form 16 & TDS
As per the Income-tax Act, the employer is required to deduct TDS (Tax Deducted at Source) on the basis of the applicable income tax slab rate. They calculate the tax on the basis of the salary paid to the employee and the deductions declared by the employee.
So TDS is deducted by the organisation and deposited with the income tax department, and Form 16, in turn, is proof of the same.
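To make the TDS mechanics concrete, here is a minimal Python sketch of how an employer might estimate the monthly deduction from an employee's salary and declared deductions. The slab limits and rates below are illustrative assumptions, not the official rates for any particular assessment year, and `annual_tax` and `monthly_tds` are hypothetical helper names.

```python
# Illustrative sketch of TDS on salary. The slab limits and rates below are
# assumptions for illustration only; real rates depend on the assessment
# year and the taxpayer's category (and exclude cess/surcharge).
SLABS = [          # (upper limit of slab in rupees, marginal rate)
    (250_000, 0.00),
    (500_000, 0.05),
    (1_000_000, 0.20),
    (float("inf"), 0.30),
]

def annual_tax(taxable_income):
    """Tax computed progressively across the slabs."""
    tax, lower = 0.0, 0
    for upper, rate in SLABS:
        if taxable_income > lower:
            tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

def monthly_tds(gross_salary, declared_deductions):
    """Estimated monthly deduction: annual tax spread over 12 months."""
    taxable = max(gross_salary - declared_deductions, 0)
    return annual_tax(taxable) / 12

# Example: Rs 8,00,000 gross salary, Rs 1,50,000 declared deductions
print(round(monthly_tds(800_000, 150_000)))  # -> 3542 (annual tax Rs 42,500)
```

In practice the employer recomputes this each month against year-to-date salary and declarations, and cess and surcharge, where applicable, are added on top.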
Components of Form 16
There are two components:
Part A of Form 16
Part A gives a summary of the tax collected by the organisation from the salary income on the employee's behalf and deposited in the government's account.
In case the employee has worked for more than one employer during the financial year, he/she gets two Part As. Part A is also known as Annexure I.
Components of Form 16 Part A (TDS certificate):
- Name, TAN, PAN, and address of the employer
- Name, PAN, and address of the employee
- Details of tax deducted and deposited quarterly with the government
- Assessment year for which the TDS has been deducted
- The period of employment with the employer
- The TDS payment acknowledgement number
- Summary of the salary paid
- Date of tax deduction from the salary
- Date of tax deposit in the government's account
This Part A of Form 16 is generated through the TRACES portal of the income tax department. All pages of Part A must be digitally or physically signed by the deductor.
Part B of Form 16
Part B covers details of the salary paid or any other income as disclosed by the employee to his/her organisation. It contains details about the computation of taxable income and the tax to be paid. Part B is also known as Annexure II. The data in Annexure II is the information that the income tax department considers for verifying the tax liability on an annual basis.
Components of Form 16 Part B (TDS certificate):
- Detailed salary particulars such as HRA, and deductions claimed such as PPF, NSC, pension, gratuity, leave encashment, LTA, etc.
- Education cess and surcharge
- Deductions allowed under the Income Tax Act
- Total salary received
- Gross income
- Net taxable salary
- Rebate under Section 87A, if applicable
- Relief under Section 89
- Total amount of tax payable on income
- Tax deducted and the balance tax due or refund applicable
If you had more than one job in a year, then you will get more than one Form 16. The employer prepares Part B of Form 16 himself and issues it along with Part A.
What is new in Form 16?
From this year (AY 2019-20), some changes are applicable.
Last year, the budget (Download Budget 2019 Guide) re-introduced a standard deduction of Rs 40,000 (AY 2019-20) in lieu of the annual medical reimbursement of Rs 15,000 and the annual transport allowance of Rs 19,200.
The reason for standardising Form 16 in the new format is to ensure that the correct exemptions are being claimed, so that they can be matched with the returns filed by employers.
The revised form will also include details of deductions in respect of interest on deposits in savings accounts, and rebate and surcharge where applicable. An exemption is available on savings bank interest up to Rs 10,000, and an exemption of Rs 50,000 is available to senior citizens. Previously, people would hide interest income by maintaining fixed deposits in different branches.
The new form also requires separate disclosure of income or loss from house property and income from other sources by the employee, where these are offered for TDS.
The newly announced formats will disclose all deductions and income which are tax exempt. Hence, income under Section 10 must be captured under "income from other sources".
It is important for the income tax department to determine NRI status for a taxpaying individual or company. NRIs must accurately record the number of days they stay in India and abroad.
Benefits of FORM 16
It is suggested that you check every detail mentioned in Form 16 before filing returns.
Do ask us if you have any doubts about what Form 16 is or any of its components.
Hope we have justified your effort in reading this long article. We will look forward to your queries.
(This article is researched & written by Janvi Soni, Intern at WealthWisher Financial Planners & Advisors.)
ACCOUNTING / BOOKKEEPING
To prepare students for entry-level positions in business accounting. The program prepares individuals to provide technical administrative support to professional accountants and other financial management personnel. Includes instruction in posting transactions to accounts, record-keeping systems, accounting software operation, and general accounting principles and practices.
This program does not prepare students for professions requiring licensure.
Bookkeeper, Accounting Assistant, Account Clerk, Accounts Payable/Receivable Clerk, Payroll Clerk
Learn the fundamentals of financial accounting and bookkeeping through a practical, hands-on methodology. Become familiar with the processes involved in day-to-day accounting and bookkeeping tasks. Understand the fundamental building blocks of the accounting process, including debits and credits, T-accounts, how to balance double-entry accounts, depreciation methods, and different kinds of business legal structures. How the income accounts connect to the balance sheet is also taught. Students learn how to prepare and analyze financial statements. Other topics include receivables, liabilities, stockholders' equity, and internal control.
Bookkeeping for Small Business
Learn to set up books from scratch, setting up all of the ledgers and journals needed to do full-service accounting. Practice industry-specific accounting systems that make you the expert in the specialized accounting and reporting procedures for many fields. Learn how to prepare payroll, create quarterly reports, and calculate the cost of goods and the relationship between markup and profit.
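The markup-versus-profit relationship referred to above can be sketched in a few lines. The figures and function names below are hypothetical, used only to illustrate the standard distinction: markup is computed on cost, while profit margin is the same profit expressed as a percentage of the selling price, so the margin is always the smaller number.

```python
# Sketch of markup (on cost) vs. profit margin (on selling price).
# Figures are hypothetical, for illustration only.

def selling_price(cost, markup_pct):
    """Markup expressed as a percentage of cost."""
    return cost * (1 + markup_pct / 100)

def margin_pct(cost, price):
    """Profit expressed as a percentage of the selling price."""
    return (price - cost) / price * 100

cost = 80.0
price = selling_price(cost, 25)   # 25% markup on an $80 cost
print(price)                      # -> 100.0
print(margin_pct(cost, price))    # -> 20.0 (a 25% markup is a 20% margin)
```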
J-O-B Search Workshop
This course covers communications skills, effective resume writing, and job hunting techniques. Students are also taught how to improve their interpersonal skills and how to promote and market their skills using effective interviewing techniques.
A student must obtain an overall average of at least 70% in order to graduate and receive a certificate. A student is allowed to retake a class in which the grade was below 70%.
Books and materials
Handouts and worksheets from the Instructor
MS Office 2016
College Accounting by Heintz and Parry
USB memory stick
Equipment used in classroom
Personal computers with Internet
Methods of Instruction
This program will be taught through a combination of classroom lectures, hands-on laboratory projects, small group, and individual projects.
Methods of Evaluation
Students will be evaluated using a variety of traditional methods including, but not limited to, performance evaluations, quizzes, exams, and attendance.
Mon-Fri: 9am - 5pm
Sat: 9am - 1pm
2268 Quimby Rd #E
San Jose, CA 95122
The U.S. Energy Information Administration (EIA) expects the amount of electricity generated from coal to decline by nearly 5% in 2012 as generation from natural gas increases by about 9%.
At the same time, EIA forecasts that electricity produced from coal will increase by 3.8% in 2013, as projected coal prices to the power sector fall slightly while gas prices increase, and coal regains some of its generation market share.
Those are a couple of observations included in EIA's Short-Term Energy Outlook posted on the agency's website March 6.
The EIA recently reported that coal's share of U.S. electric generation dipped to 39% in December 2011, the lowest level in three decades. Coal remains, however, the largest single power source for the U.S. electric grid.
Recent data show that the trend in displacing coal with natural gas as a generation fuel has accelerated in response to the current low price of natural gas, EIA said. "EIA expects this fuel displacement pattern to continue at least through the first half of 2012, causing the annual average share of total generation fueled by natural gas to rise from 24.8% in 2011 to 27.1% for 2012," the agency said.
Coal’s share of electric generation is predicted to be 40.4% in 2012 and 41.2% in 2013.
As delivered natural gas prices begin increasing later this year, in response to higher demand and flattening growth in production, EIA expects the trend in fuel displacement will reverse slightly in 2013, with natural gas’ share of U.S. generation falling back to an annual average of 26.1%.
The price of natural gas delivered to electric generators is estimated to have averaged about $3.30 per MMBtu in February 2012, which would be its lowest nominal value in 10 years, EIA said.
Delivered coal prices to the electric power sector have increased steadily over the last 10 years and this trend continued in 2011, with an average delivered coal price of $2.40 per MMBtu (a 5.8% increase from 2010). But the decline in demand from coal plants and the reduction of mine production in Appalachia could push delivered coal prices about 3% lower than they were in 2011, EIA predicted.
Other short-term observations on the EIA website include a prediction that the U.S. residential average power price will dip from 11.84 cents per kWh in 2012 to 11.73 cents per kWh in 2013.
Doha – Global food shortages will become three times more likely as a result of climate change according to a report by a joint US-British taskforce, which warned that the international community needs to be ready to respond to potentially dramatic future rises in prices.
Food shortages, market volatility and price spikes are likely to occur as often as once every 30 years by 2040, said the Taskforce on Extreme Weather and Global Food System Resilience.
With the world’s population set to rise to nine billion by 2050 from 7.3 billion today, food production will need to increase by more than 60% and climate-linked market disruptions could lead to civil unrest, the report, published on Friday, said.
“The climate is changing and weather records are being broken all the time,” said David King, the UK foreign minister’s Special Representative for Climate Change.
“The risks of an event are growing, and it could be unprecedented in scale and extent.”
Globalisation and new technologies have made the world’s food system more efficient but it has also become less resilient to risks, said King.
Some of the major risks include a rapid rise in oil prices fuelling food costs, reduced export capacity in Brazil, the US or the Black Sea region due to infrastructure weakness, and the possible depreciation of the US dollar causing prices for dollar-listed commodities to spike.
Global food production is likely to be most impacted by extreme weather events in North and South America and Asia which produce most of the world’s four major crops – maize, soybean, wheat and rice, the report found.
Such shocks in production or price hikes are likely to hit some of the world’s poorest nations hardest such as import dependent countries in sub-Saharan Africa, the report found.
‘Violence or conflict’
“In fragile political contexts where household food insecurity is high, civil unrest might spill over into violence or conflict,” the report said.
“The Middle East and North Africa region is of particular systemic concern, given its exposure to international price volatility and risk of instability, its vulnerability to import disruption and the potential for interruption of energy exports.”
To ease the pain of increasingly likely shocks, the report urged countries not to impose export restrictions in the event of extreme weather, as Russia did following a poor harvest in 2010.
The researchers said agriculture itself needs to change to respond to global warming as international demand is already growing faster than agricultural yields and climate change will put further pressure on production.
“Increases in productivity, sustainability and resilience to climate change are required,” the report said.
“This will require significant investment from the public and private sectors, as well as new cross-sector collaborations.”
2015-08-15
Q&A: Bee crisis stinging world food production
Al Jazeera speaks to food campaigner Tiffany Finck-Haynes about how alarming bee deaths are putting ecosystem at risk.
Ryan Rifai
As an essential pollinator, honeybees are responsible for helping produce about one third of the world’s crops, according to the United Nations.
The UN’s Food and Agriculture Organisation (FAO) estimates that out of about 100 crop species, which provide 90 percent of food worldwide, 71 of these are bee-pollinated.
But a global phenomenon over the last decade, known as Colony Collapse Disorder (CCD), has seen an alarming number of bee colonies die off, fueling serious fears over the future of the world's sustenance.
As the scientific community debates over the key causes of the collapse, a growing number of movements have pointed the finger at toxic pesticides used in conventional farming.
Al Jazeera spoke to Tiffany Finck-Haynes, a food futures campaigner at Friends of the Earth, a US-based global network of environmental organisations, about the effects and causes of the CCD.
Al Jazeera: How important are bees in the production of food and other basic needs?
Tiffany Finck-Haynes: Bees are essential for our food system and agricultural economy. One out of every three bites of food we eat is pollinated by honeybees. Bees and other pollinators are essential for two-thirds of the food crops humans eat everyday such as almonds, squash, cucumbers, apples, oranges, blueberries, and peaches. Bees contribute over $20bn to the US economy and $217bn to the global economy.
AJ: How severe is the fall in bee numbers across the world?
Tiffany Finck-Haynes: Bees are dying at alarming rates worldwide. In the US, beekeepers have lost an average of 30 percent of their hives in recent years, with some beekeepers losing all of their hives and many leaving the industry. This past year, beekeepers lost nearly half of their hives – the second highest loss recorded to date. This is too high to be sustainable.
AJ: How is this decline affecting the ecosystem and food production?
Tiffany Finck-Haynes: Recent losses are staggering making it difficult for beekeepers to stay in business and for farmers to meet their pollination needs for important crops like almonds and berries. Without bees to pollinate our crops and flowering plants, our entire food system – and our fragile ecosystem itself – is at risk.
AJ: What are the main causes for the decline?
Tiffany Finck-Haynes: Pests, diseases, loss of forage and habitat and changing climate have all been identified as possible contributing factors to unsustainable bee losses. A growing body of science implicates neonicotinoid pesticides – one of the most widely used class of insecticides in the world, manufactured by Bayer and Syngenta – as a key factor in recent bee die-offs.
Neonicotinoids can kill bees outright and make them more vulnerable to pests, pathogens and other stressors while impairing their foraging and feeding abilities, reproduction and memory. Neonicotinoids are widely used in the US on 140 crops and for cosmetic use in gardens.
Neonics can persist in soil, water and the environment for months to years.
AJ: Are Genetically Modified Organisms (GMOs) a major factor in the crisis?
Tiffany Finck-Haynes: The majority of conventional corn, soy, wheat and canola seeds – many GMO – are pretreated with neonicotinoids. Just one neonicotinoid-coated seed is enough to kill a songbird.
AJ: In which regions in the world are there growing movements to protect bees?
Tiffany Finck-Haynes: There has been a movement to protect bees in a number of regions of the world including North America, South America, Europe, Asia, Africa and Australia.
AJ: How are governments responding to these movements?
Tiffany Finck-Haynes: In the face of mounting evidence and growing consumer demand, a growing number of responsible businesses and government agencies have decided to be part of the solution to the bee crisis and are taking steps to eliminate bee-harming pesticides.
For example, in the UK, the largest garden retailers, including Homebase, B&Q and Wickes, have already voluntarily stopped selling neonicotinoids.
Based on recommendations by the European Food Safety Administration (EFSA), the European Union (EU) voted for a continent-wide suspension of several widely used neonicotinoids in order to protect bees, which went into place on December 1, 2013.
In the US, in the past year more than 20 wholesale nurseries, landscaping companies and retailers, including the two largest home improvement retailers in the world, Home Depot and Lowe’s as well as Whole Foods and BJ’s Wholesale Club have taken steps to eliminate bee-harming pesticides from their garden plants and their stores.
The US Fish and Wildlife Service announced in 2014 that it will ban the use of neonicotinoids on all national wildlife refuge lands by 2016.
In June 2014, US President Obama established a Pollinator Health Task Force to develop a National Pollinator Health Strategy, calling on EPA to assess the effect of pesticides, including neonicotinoids, on bees and other pollinators.
In May 2015, the Task Force released its report, which aims at taking a number of steps to reverse pollinator declines. In April, the EPA announced that it would be unlikely to approve new or expanded uses of neonicotinoids while it evaluates the risks posed to pollinators.
In addition, more than 10 states, cities, counties, universities and federal agencies in the US have passed measures that minimise or eliminate the use of neonicotinoids.
In Canada, on July 1, Ontario became the first jurisdiction in North America to officially adopt requirement to reduce the number of acres of planted with neonicotinoid treated corn and soy seeds by 80 percent by 2017.
AJ: Can organic farming help save the bees?
Tiffany Finck-Haynes: We need to re-imagine the way we farm and incentivise local, sustainable, and just agriculture practices. Oxford University found organic farming supports 50 percent more pollinator species than conventional, chemical-intensive agriculture.
The Bleak Great Lockdown Economy
According to the IMF, the COVID-19 related economic downturn of 2020 is the world’s worst since the Great Depression.
The IMF predicts that the global GDP will contract to -3 percent, a reduction of 6.3 percentage points since they published their last projections in January. At that time, they had predicted global economic growth at 3.3 percent for 2020. In comparison, when the global economy contracted during the Great Recession of 2008, the global GDP shrank to -0.1 percent. According to the organization, this qualifies the Great Lockdown of 2020 as the worst economic downturn since the Great Depression, during which the global GDP dropped 15 percent and U.S. GDP dropped 30 percent.
According to the IMF, this is the first time since the Great Depression that advanced economies and emerging markets have simultaneously experienced recession. They predict negative growth in advanced economies at -6.1 percent and emerging markets at -1 percent. The projected negative growth outlook for emerging markets becomes -2.2 percent when excluding China from the figure. During the Great Recession emerging markets did not experience negative GDP growth.
Future IMF Predictions
Provided that the pandemic recedes during the second half of 2020 and countries around the world take appropriate actions to protect their own economies, the organization cautiously predicts 5.8 percent global GDP growth in 2021. They note this will require the prevention of business bankruptcies and “system-wide financial strains,” while also preventing “excessive” unemployment.
Even so, their optimistic 5.8 percent projection for 2021 remains below prior estimates, and they anticipate a cumulative loss to global GDP in the range of 9 trillion dollars. Should the pandemic and its associated isolation period continue into the second half of 2020, worsening financial conditions and additional supply chain breakdown would likely result in a further 3 percentage point reduction. If it lasts until 2021, the global economy could contract by an additional 8 percent beyond current projections.
According to the IMF, a recession is a “sustained period when economic output falls and unemployment rises.” While the National Bureau of Economic Research (NBER), widely considered the expert in dating the onset and conclusion of recessions in the U.S., defines a recession as “a significant decline in economic activity spread across the economy, lasting more than a few months, normally visible in real GDP, real income, employment, industrial production, and wholesale-retail sales.”
The IMF notes that in general, due to the U.S.’s status as the largest economy in the world, with strong financial and trade ties to most other economies, globally synchronized recessions generally link with U.S. recessions.
During the Great Recession, the rate of unemployment in the U.S. peaked at 10 percent in late 2009, resulting in 9 million jobs lost. The unemployment rate did not return to pre-recession levels (4.7 percent) until 2016. The global GDP declined by less than 1 percent, while the U.S. GDP dropped 4.3 percent by the second quarter of 2009.
In general, depressions are characterized by their duration, large increases in unemployment, a decline in available credit due to a financial or banking crisis, decreased industrial output and increases in bankruptcies. A severe recession resulting in GDP loss of 10 percent or more, or prolonged recession lasting three or more years constitutes a depression. Deflation, financial crisis, stock market crashes and bank failures are common features of depressions that are not typically seen during recessions.
Black Tuesday, the U.S. stock market crash in October of 1929, ushered in the beginning of the Great Depression in the U.S. A decade of high unemployment, poverty, low profits, deflation and declining farm income followed. Within one year of Black Tuesday, farmers began defaulting on loans and depositors withdrew their savings, forcing banks to liquidate their assets — in particular, by calling in loans.
During the Great Depression, U.S. production and the GDP decreased by 47 percent and 30 percent respectively, with unemployment estimated at 25 percent. Estimates put the drop in global GDP around 15 percent.
Unemployment in the U.S.
In March, unemployment in the U.S. rose to 4.4 percent after hitting a 50-year low of 3.5 percent in February. Over 22 million Americans have filed for unemployment over the past month, with unemployment in the U.S. reaching 12.4 percent. In contrast, during the Great Recession, employment in the U.S. reached 10 percent — a decline of about 9 million jobs — between November 2007 and December 2009.
The Department of Labor released their weekly unemployment numbers on Thursday. Seasonally adjusted initial claims for the week ending April 25 totaled nearly 3.4 million — a slight decrease from the prior week’s 4.4 million initial claims. The unemployment rate for the week ending April 18 stood at 12.4 percent, up 1.5 percent from the prior week — the highest level in the history of seasonally adjusted unemployment rates.
States with the highest unemployment rates for the week ending April 11 included Michigan (21.8), Vermont (21.2), Connecticut (18.5), Pennsylvania (18.5), Nevada (16.8), Rhode Island (16.7), Washington (16.0), Alaska (15.6), New York (14.4) and West Virginia (14.4).
In an interview with CNBC on April 6, former Chair of the Federal Reserve under President Obama, Janet Yellen, had predicted that if we had a “timely” unemployment statistic, it would have likely reflected an unemployment rate up around 12 or 13 percent and rising. She further predicts a second quarter GDP annual decline rate of at least 30 percent.
Weathering the storm
As part of the stimulus package, the CARES Act included a loan fund to assist smaller businesses weather the COVID-19 storm. However, the first round of loans from the Paycheck Protection Program disproportionately favored larger businesses over small businesses, which exhausted the funds in record time. By exploiting loopholes in the language for the PPP, larger publicly held companies with less than 500 staff members at individual locations raked in millions of dollars earmarked for assisting small businesses with weathering the Great Lockdown. Some further exploited the process, increasing the funds they received by having two subsidiaries file.
A number of publicly held companies such as restaurant chain Ruth’s Chris, hospitality conglomerate Ashford Inc., Fiesta Restaurant Group, Shake Shack, and the LA Lakers have agreed to return the funds they’ve received. Several other companies including Digimarc and Polarity TE have justified the necessity of the funds for keeping their businesses afloat and refused to refund them.
Even if this is just a recession, recoveries that follow recessions can widen economic and income disparity, as happened following the Great Recession. This is due, in part, to the fact that when the unemployed rejoin the workforce, they often find themselves in lower-paying jobs, but also in large part to the way the U.S. financial sector works.
Over the course of several weeks, the COVID-19 shutdown has surpassed the Great Recession as first runner up to the Great Depression, having become the second worst economic downturn in modern history. The U.S. unemployment rate has skyrocketed from a 50-year low to levels unseen since the 1930s.
While optimists anticipate a quicker and easier recovery from this slump than the Great Recession, they base that claim on the assumption that the majority of jobs lost are recoverable once businesses are permitted to resume operations. A survey from Main Street America, however, indicates that nearly 7.5 million of the current 30 million small businesses in the U.S. are at risk of closing their doors over the next several months, with 3.5 million of them at risk of closing within the next two months. Over 50 percent of the U.S. workforce relies on small businesses for employment. The failure of several million of these businesses puts approximately 35.7 million Americans at risk of unemployment.
The legislation establishes a balanced suite of measures to measure and report on child poverty. The measures will track progress towards the targets, allow some international comparison, and provide a good picture of the impact of policy decisions on the lives of children.
There are four primary measures of poverty and hardship for which the Government must set targets:
- Low income before housing costs (below 50% of median income, moving line)
- Low income after housing costs (below 50% of median income, fixed line)
- A measure of material hardship (reflecting the proportion of children living in households with material living standards below a standard threshold)
- A measure of poverty persistence (currently being developed, reflecting the proportion of children living in households experiencing poverty over several years, based on at least one of the measures above). (The Act requires reporting on persistent poverty from 2025/26 on.)
There are also six supplementary measures set out in the Act. These allow further international comparison, and ensure that trends at different levels of severity can be monitored and reported on.
The Government Statistician is responsible for defining a number of concepts and terms under the Child Poverty Reduction Act, such as material hardship.
The Act requires the Government to set and review targets for child poverty reduction based on the primary measures. The Act requires 10-year targets to be set, as well as 3-year intermediate targets that support the 10-year long-term targets.
Following the release of the baseline rates as reported by Statistics New Zealand in April 2019, the Government has officially set its intermediate and long-term targets for the three primary measures for which data is available. The next set of three-year targets (for 2021/22 to 2023/24) need to be set by June 2021.
The longer-term targets seek to at least halve child poverty within ten years
Ten-year longer-term targets:*
By 2027/28, the Government aims to reduce the proportion of children in:
- low income households on the before housing costs primary measure from 16 percent of children to 5 percent – a reduction of around 120,000 children.
- low income households on the after housing costs primary measure from 23 percent of children to 10 percent – a reduction of around 130,000 children.
- material hardship from 13 percent of children to 6 percent – a reduction of around 80,000 children.
Three-year intermediate targets:*
By 2020/21, the Government aims to reduce the proportion of children in:
- low income households on the before housing costs primary measure from 16 percent of children to 10 percent – a reduction of around 70,000 children.
- low income households on the after housing costs primary measure from 23 percent of children to 19 percent – a reduction of around 40,000 children.
- material hardship from 13 percent of children to 10 percent – a reduction of around 30,000 children.
*Some of the figures have been rounded. The official targets are set out in detail in the New Zealand Gazette notice.
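The "around N children" figures follow from applying each percentage-point reduction to the child population. The sketch below assumes a child population of roughly 1.1 million; this is an illustrative round number, since the official targets are calculated from Stats NZ baseline populations, which differ slightly by survey measure:

```python
# Hedged sketch: converting the ten-year target percentage reductions into
# approximate child counts. CHILD_POPULATION is an assumed round figure.
CHILD_POPULATION = 1_100_000

targets = {  # measure: (baseline %, 2027/28 target %)
    "before housing costs": (16, 5),
    "after housing costs":  (23, 10),
    "material hardship":    (13, 6),
}

reductions = {
    measure: (baseline - target) / 100 * CHILD_POPULATION
    for measure, (baseline, target) in targets.items()
}

for measure, reduction in reductions.items():
    print(f"{measure}: ~{reduction:,.0f} fewer children in poverty")
```

This reproduces the before-housing-costs figure of around 120,000 children; the small differences on the other measures reflect the different survey populations used for each official target.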
Regular reporting requirements provide a high level of transparency and accountability. The reports include:
- an annual report on nine child poverty measures by Stats NZ
- a report each Budget day on progress toward the targets, and how the Budget will reduce child poverty
- an annual Government report on child poverty related indicators – measures related to the broader causes and consequences of child poverty.
A challenge with measurement and reporting is that there are time-lags between data collection and reporting timeframes, meaning the impacts of policies are often not visible in the reporting for some time.
The child poverty data used in the child poverty report produced by Stats NZ is drawn from the Household Economic Survey (HES), which surveys adults (aged 15+) in more than 20,000 households. The survey is conducted over a 12-month period, from July to June, and collects annual income information for the 12 months prior to the interview. These collection timelines mean a significant lag in data reporting, of up to two and a half years at the time of the report’s release.
For example, the numbers reflected in the child poverty report for the 2018/19 year, released in February 2020, cover annual incomes from mid-2017 to mid-2019. As a result, the impact of the Families Package was only partially captured.
Child Poverty Related Indicators
The Act requires the Government to report annually on one or more ‘child poverty related indicators’ or ‘CPRIs’. These are measures related to the broader causes and consequences of child poverty, and/or outcomes with a clear link to child poverty.
The Government has identified its CPRIs, which are:
- housing affordability – as measured by the percentage of children and young people (ages 0-17) living in households spending more than 30 percent of their disposable income on housing.
- housing quality – as measured by the percentage of children and young people (ages 0-17) living in households with a major problem with dampness or mould.
- food insecurity – as measured by the percentage of children (ages 0-15) living in households reporting food runs out often or sometimes.
- regular school attendance – as measured by the percentage of children and young people (ages 6-16) who are regularly attending school.
- avoidable hospitalisations – as measured by the rate of children (ages 0-15) hospitalised for potentially avoidable illnesses.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9708184003829956,
"language": "en",
"url": "https://emerchantbroker.com/blog/are-young-adults-really-as-cash-free-as-you-think/",
"token_count": 589,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.212890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:66608366-d5a4-44fc-a093-f8c6400a6a9a>"
}
|
With all of the features that smartphones now offer coupled with the increasing number of apps like Facebook Messenger and Snapcash, you would think that the majority of young Americans would have forgotten all about traditional forms of payment. The general belief is that young Americans, used to having everything at their fingertips due to the digital age, would be completely switched over to the digital cash-free payment systems. Cash no more.
According to Federal Reserve Bank of San Francisco analyst, Doug Conover, this is simply not true. A study conducted by the Federal Reserve Bank that asked people to keep a spending diary revealed that young adults (18-24) use cash for nearly half of their purchases. Cash still plays a very big part in young adults’ spending.
It would seem that the biggest influence on young adults’ spending habits is what they have seen or haven’t seen at home. Young adults are very cautious about the mistakes they saw their parents make growing up, especially when it comes to money. This is great for those who use that experience to be more conscious of their spending habits – that includes the method used to make their purchases.
Everyone is different when it comes to how they look at spending. Some young adults feel that it is easier to control what they spend by using cash. It is “painful” to see those bills leave their wallet. On the other hand, some individuals find that it is too easy to spend a bill here and spend a bill there until they have no cash left.
Some young adults manage their money better by using digital means. To them, every time they swipe their debit card, they know that money is leaving their account. They prefer to pay with plastic, but not necessarily with a credit card. Some students have revealed that they use their debit cards in order to keep a close eye on their spending.
Cash shouldn’t become a thing of the past. The envelope method, for example, remains one of the most effective ways to control and manage spending. When you receive your paycheck, slip the money you need for each bill – mortgage, electricity, utilities, internet, etc. – into an envelope created specifically for each expense. This makes it easier to see where your money is going and prevent excessive spending.
Young adults are not the only ones that find themselves in difficult situations with spending. Sometimes situations that are completely out of one’s control can put an individual or a business in a tough situation. Thankfully, merchants can obtain a bad credit merchant account from an alternative lending source like eMerchantBroker. Waiting for approval from a traditional lending source not only slows down your business, but it can also make things seemingly impossible when you find out that you’ve been denied. A bad credit merchant account can help you get back on your feet, help you manage your situation and get you started back on the right path.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9473413228988647,
"language": "en",
"url": "https://gridintegration.lbl.gov/publications/optimal-planning-and-operation-smart",
"token_count": 296,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.08544921875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ea760462-4de7-40da-b62e-21f8adfacec8>"
}
|
Connection of electric storage technologies to smartgrids will have substantial implications for building energy systems. Local storage will enable demand response. When connected to buildings, mobile storage devices such as electric vehicles (EVs) are in competition with conventional stationary sources at the building. EVs can change the financial as well as environmental attractiveness of on-site generation (e.g. PV or fuel cells). In order to examine the impact of EVs on building energy costs and CO2 emissions, a distributed-energy-resources adoption problem is formulated as a mixed-integer linear program with minimization of annual building energy costs or CO2 emissions and solved for 2020 technology assumptions. The mixed integer linear program is applied to a set of 139 different commercial buildings in California and example results as well as the aggregated economic and environmental benefits are reported. Special constraints for the available PV, solar thermal, and EV parking lots at the commercial buildings are considered. The research shows that EV batteries can be used to reduce utility related energy costs at the smart grid or commercial building due to arbitrage of energy between buildings with different tariffs. However, putting more emphasis on CO2 emissions makes stationary storage more attractive and stationary storage capacities increase while the attractiveness of EVs decreases. The limited availability of EVs at the commercial building decreases the attractiveness of EVs and if PV is chosen by the optimization, then it is mostly used to charge the stationary storage at the commercial building and not the EVs connected to the building.
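The full optimization is beyond a short sketch, but the arbitrage mechanism the abstract describes can be illustrated with a toy calculation. All prices and capacities below are hypothetical, and a real model, like the mixed-integer linear program in the report, would also include efficiency losses, power limits, EV availability windows, and emissions terms:

```python
# Toy illustration of energy arbitrage with a battery (NOT the report's MILP).
# Strategy: charge during the cheapest hours, discharge during the most
# expensive ones, within the battery's capacity and hourly rate limits.

prices = [0.08, 0.07, 0.06, 0.10, 0.18, 0.22, 0.20, 0.12]  # $/kWh per hour
capacity_kwh = 20   # usable battery capacity
rate_kw = 10        # max charge/discharge per hour

hours_needed = capacity_kwh // rate_kw  # hours to fully cycle the battery
cheap = sorted(range(len(prices)), key=lambda h: prices[h])[:hours_needed]
dear = sorted(range(len(prices)), key=lambda h: -prices[h])[:hours_needed]

cost = sum(prices[h] * rate_kw for h in cheap)    # buy energy low
revenue = sum(prices[h] * rate_kw for h in dear)  # sell / avoid buying high
margin = revenue - cost
print(f"Arbitrage margin: ${margin:.2f} per cycle")
```

Even this greedy heuristic shows why tariff differences between buildings make EV batteries financially attractive; the MILP generalizes the same trade-off across many buildings, technologies, and objectives.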
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9620736837387085,
"language": "en",
"url": "https://norwaytoday.info/finance/those-who-earn-the-most-pollute-the-most/",
"token_count": 304,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.031494140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ef16ea1b-1e43-4abf-ace2-a96c86feb96e>"
}
|
Those with the highest incomes in Norway emit approximately three times as much CO2 as those with the lowest incomes, according to new Norwegian research.
Two researchers at the University of Oslo have demonstrated a correlation between income and the size of a CO2 footprint, reported the newspaper Klassekampen.
‘We have tried to calculate how the carbon footprint varies between different income groups, and have concluded that in Norway, there is an approximately one-to-one ratio between income and emissions’, said PhD student Elisabeth T. Isaksen.
Together with Patrick A. Narbel, she has published the results in an article in the journal ‘Ecological Economics’.
In 2014, they found that the richest 10 percent of households in Norway had 2.9 times the income of the bottom 10 percent.
The results of the research indicated that these higher income groups pollute at a rate three times that of the poor.
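The "approximately one-to-one ratio" can be read as an income elasticity of emissions close to 1, meaning emissions scale roughly proportionally with income. A sketch computing the implied elasticity from the two ratios quoted above (2.9x income, ~3x emissions between the top and bottom deciles):

```python
import math

# Implied income elasticity of emissions from the article's figures.
# Elasticity ~1 means emissions rise roughly in proportion to income.
income_ratio = 2.9
emissions_ratio = 3.0

elasticity = math.log(emissions_ratio) / math.log(income_ratio)
print(f"Implied elasticity: {elasticity:.2f}")  # ≈ 1.03
```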
‘This is an argument for more progressive taxation. In addition to reducing economic inequality, such a tax would cut emissions’, said Anders Skonhoft, professor of economics at the university.
‘Everyone does not share equal blame for greenhouse gas emissions. The rich emit much more than the poor. This is an aspect of the subject that is rarely, or never, a part of the climate debate’, said Professor Skonhoft.
Source: NTB scanpix / Norway Today
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9328480362892151,
"language": "en",
"url": "https://preyproject.com/blog/en/what-is-data-security-everything-you-need-to-know/",
"token_count": 2081,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.08349609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:4a20cce7-9ed2-4c32-835b-2d52d8bbc432>"
}
|
What is Data Security?
Data security is the practice of protecting corporate and customer data against unauthorized use and exposure. It includes everything from discovering the data that a company owns to implementing security controls designed to restrict access to and protect this data.
Data security is one of the biggest cybersecurity challenges faced by the modern business. In 2020, 3,932 data breaches occurred, exposing over 37 billion individual records, which is more than in the previous six years combined. Recent data breaches range from minor incidents that most people have never heard about to massive ones like the Equifax breach, which exposed financial data for 147 million people.
Why is Data Security Necessary?
Strong data security is important for a number of different reasons. One of the biggest drivers for investing in data security is minimizing the potential cost and damage caused by a data breach.
According to IBM and the Ponemon Institute, the average cost of a data breach is $3.6 million, and includes the following types of expenses:
- Detection and escalation (28.8%)
- Remediation (6.2%)
- Ex-post response (25.4%)
- Lost business cost (39.4%)
Of these four categories, the biggest cost of a data breach is not the cleanup after the incident occurs. The loss of customer trust and future business – while difficult to measure – is the greater expense for the company.
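Translating the percentage breakdown into dollar figures at the quoted $3.6 million average makes the point concrete:

```python
# Rough dollar breakdown of the average breach cost by category, using the
# shares and average quoted above.
AVERAGE_BREACH_COST = 3_600_000

shares = {
    "Detection and escalation": 0.288,
    "Remediation": 0.062,
    "Ex-post response": 0.254,
    "Lost business cost": 0.394,
}

for category, share in shares.items():
    print(f"{category}: ${share * AVERAGE_BREACH_COST:,.0f}")
# Lost business alone comes to roughly $1.42M, the single largest component.
```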
Organizations with poor data security are also likely to face regulatory penalties. As data protection regulations like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) become more stringent, regulators can levy fines for failing to comply with requirements even if that non-compliance did not result in a breach.
Non-compliance with other regulations, such as the Payment Card Industry Data Security Standard (PCI-DSS), can result in the loss of the right to process credit and debit cards, which has a significant impact on an organization’s ability to do business.
Types of Data Security
The objective of data security is to protect sensitive data by minimizing the probability that it will be leaked or exposed to unauthorized users.
A number of different tools exist for achieving this goal, including:
Encryption
Encryption algorithms make it impossible to read data without access to the proper decryption key. Under many data protection laws, if encrypted data is leaked but the attacker does not have access to the decryption key, then the breach does not need to be reported. To learn more about how to use data encryption for data security, check out our data encryption guide.
|Prey Project can manage BitLocker for Windows 10 devices that have a physical Trusted Platform Module (TPM) installed and active. With it, you can select which disks to encrypt and decrypt, check on their progress and use the security standard that best suits your needs.|
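As a toy illustration of the principle that encrypted data is unreadable without the key, here is a one-time-pad sketch in Python. This is for illustration only: production systems (including BitLocker) rely on vetted implementations of standard ciphers such as AES, never hand-rolled schemes.

```python
import secrets

# Toy one-time-pad demo of symmetric encryption (illustration ONLY).
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"card=4111111111111111"
key = secrets.token_bytes(len(plaintext))  # random key, same length as data

ciphertext = xor_bytes(plaintext, key)     # unreadable without the key
recovered = xor_bytes(ciphertext, key)     # the key holder can reverse it

assert recovered == plaintext
assert ciphertext != plaintext
```

The same property underlies the breach-notification exemption above: a leaked ciphertext without its key reveals nothing useful.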
Data Erasure
Erasing unneeded data is the most effective method of protecting it against unauthorized access. Many data protection regulations have strict rules on how long an organization can retain certain types of data.
Identity and Access Management (IAM)
Access control systems enable an organization to limit users’ access and permissions to the minimum required for their job role (the principle of least privilege). Implementing IAM decreases the probability and impact of data breaches and is required for compliance with certain data protection regulations (such as PCI-DSS).
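The principle of least privilege can be sketched as a minimal role-based permission check. This is a simplified illustration (real IAM systems add users, groups, resource scopes, and auditing); the role and permission names are made up:

```python
# Minimal least-privilege sketch: each role gets only the permissions its
# job role requires, and unknown roles or permissions are denied by default.
ROLE_PERMISSIONS = {
    "analyst":  {"read:reports"},
    "engineer": {"read:reports", "write:code"},
    "admin":    {"read:reports", "write:code", "manage:users"},
}

def is_allowed(role: str, permission: str) -> bool:
    return permission in ROLE_PERMISSIONS.get(role, set())  # deny by default

assert is_allowed("engineer", "write:code")
assert not is_allowed("analyst", "manage:users")  # least privilege
assert not is_allowed("guest", "read:reports")    # unknown role denied
```

Denying by default is the key design choice: a breach of an "analyst" account cannot reach user management, limiting the blast radius.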
Data Loss Prevention (DLP)
DLP solutions are designed to identify and alert on or block attempted exfiltration of data from an organization’s network. These systems can be a good last line of defense against data breaches but are most effective when paired with other solutions as they might miss an attempted exfiltration and only come into play once an attacker has already gained access to an organization’s data.
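One building block of such a system is pattern-based detection of sensitive data in outbound content. The sketch below flags text that appears to contain a credit card number, pairing a regex candidate match with a Luhn checksum to cut false positives; commercial DLP products combine many detectors like this with network and endpoint enforcement:

```python
import re

# Sketch of one DLP detector: credit-card-like numbers validated with the
# Luhn checksum so that random digit runs are not flagged.
def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2])                               # odd positions
    total += sum(sum(divmod(2 * d, 10)) for d in digits[1::2])  # doubled evens
    return total % 10 == 0

def contains_card_number(text: str) -> bool:
    for candidate in re.findall(r"\b\d{13,16}\b", text):
        if luhn_valid(candidate):
            return True
    return False

assert contains_card_number("invoice for card 4111111111111111 attached")
assert not contains_card_number("order #1234567890123 shipped")
```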
Governance, Risk, and Compliance
Policies and procedures are essential for robust data security. By defining and training employees on policies regarding data classification and how to properly manage different types of data, an organization can reduce its risk of a data breach.
Anti-Malware, Antivirus, and Endpoint Protection
Many data breaches are enabled by malware, including ransomware that steals data to force a victim to pay a ransom or infostealer malware that steals users’ credentials and other data. Installing anti-malware, antivirus, and endpoint protection solutions on devices can help to detect and block attempted data theft by malware.
While a variety of solutions exist for implementing data security, different approaches are better at managing different risks. For example, lost or stolen devices have been the source of numerous data breaches. While IAM and DLP solutions have little impact on these types of data leaks, deploying full-disk encryption on devices carrying sensitive company or customer data can help to mitigate these threats.
Data Security Threats
Data is everywhere within an organization’s network, and it can be put at risk in a number of different ways. Some of the top threats to data security include:
- Data Loss in the Cloud: Many organizations are moving to the cloud, but cloud security has consistently lagged behind. 60% of cloud storage contains unencrypted data, and security misconfigurations – present in 93% of cloud storage services – have caused over 200 data breaches in the last two years. Since these cloud-based resources are directly accessible from the public Internet, the data that they contain is at risk.
- Phishing and Other Social Engineering Attacks: Phishing and social engineering attacks are a common method for stealing sensitive data. A malicious email, SMS, social media message or phone call may attempt to steal sensitive information directly or steal user credentials. These credentials can then be used to access online accounts containing sensitive information, such as cloud-based email or data storage.
- Accidental Exposure: Not all data breaches are intentional. According to IBM and the Ponemon Institute, 48% of data breaches are caused by system glitches or human error. This can include everything from an accidental CC on an email to misconfiguring cloud security permissions to leaving a USB drive or printout on the subway.
- Insider Threats: The popular conception of data breaches is that they are mainly carried out by outside attackers. However, insider threats are behind an estimated 60-75% of data breaches. This includes both malicious insiders – like that employee that was fired this morning but still has access to the network – and negligent employees that cause accidental data exposures.
- Ransomware: Ransomware is a threat to an organization’s data in a couple of different ways. All ransomware variants perform data encryption, which makes the data impossible to access without paying the ransom for the decryption key. Some ransomware groups have gone further and added a data stealer to their malware, which provides additional leverage when demanding a ransom payment.
- Physical Hardware Compromise: All data is stored on physical hardware, and this hardware may be the target of an attack. Malicious hardware inserted via a supply chain attack can compromise sensitive data, or an attacker can attempt to read data directly off a disk while the machine is powered off.
How You Can Influence Data Security Where You Work
Many data security decisions are made at the executive level, such as corporate policies and the security solutions to deploy to protect the business. However, there are simple steps that you can take to improve your own data security and that of the business, including:
- Use Strong Access Control: Weak passwords are one of the biggest cybersecurity threats to an organization and its data. Use strong, unique passwords for all accounts and turn on multi-factor authentication (MFA) wherever it is available.
- Install Full-Disk Encryption: Full-disk encryption stores data in an encrypted state, making it impossible to read without the proper password. This protection against physical attacks grows more important as working from mobile devices becomes more common.
- Share Data Securely: Using sharing links for cloud-based documents and data makes them accessible to anyone with the link, and tools exist specifically to search for these links. Send an individual invite to access the resource rather than turning on link sharing.
- Create Backups Regularly: Ransomware is a serious threat, and a successful attack can cause significant loss of data. Set up an automatic backup solution to make a copy of data to read-only storage to protect against these attacks.
- Cybersec Training: User awareness is essential to the success of enterprise data security. For tips on developing cybersecurity awareness training for employees, check out this blog.
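For the first tip above, a strong, unique password is easy to generate with the standard library's cryptographically secure `secrets` module (this is what a password manager does for you, one password per account):

```python
import secrets
import string

# Generate a strong random password from a cryptographically secure source.
def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = generate_password()
print(password)
assert len(password) == 16
```

Note that `secrets` is preferred over the `random` module here because `random` is not suitable for security-sensitive use.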
Data Security Regulations
Data protection regulations have been around for many years. However, in the last few years, the regulatory landscape has grown very complex very quickly.
The exact data privacy laws that an organization must comply with depends on its location and industry. Some of the major data security regulations to be aware of include:
General Data Protection Regulation (GDPR)
The GDPR was passed in 2016 and went into effect in 2018. It protects the personal data of EU citizens and applies to any organization with EU customers, regardless of location. The GDPR kicked off the recent surge in data privacy laws and is the inspiration for many of them.
Health Insurance Portability and Accountability Act (HIPAA)
HIPAA is a US regulation that protects the personal health data of US citizens. Its data security requirements apply to both healthcare providers and their service providers that may have access to data protected under the law.
Federal Information Security Management Act (FISMA)
FISMA is a law governing information security for the US government. It codifies the cybersecurity and data security protections and policies that federal agencies must put in place.
Sarbanes-Oxley Act (SOX)
SOX is a law designed to protect investors in a company against fraud. Data security is an important component of this, as a data breach can hurt the value of a company’s stock. After the SolarWinds hack, a class-action lawsuit was filed against the company asserting that its claims regarding cybersecurity in its SOX filings were untrue and misleading.
Interested in learning more about keeping your company safe with Prey?
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9581455588340759,
"language": "en",
"url": "https://sindhgovt.cooperativecomputing.com/economy",
"token_count": 310,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1376953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5bdfcd5d-3d5c-4740-9948-9d81fbae447e>"
}
|
The economy of Sindh is the 2nd largest of all the provinces in Pakistan. Much of Sindh's economy is influenced by the economy of Karachi, the capital of the province and also the largest city and economic capital of the country.
Sindh has the second largest economy in Pakistan. Its GDP per capita was $1,400 in 2010, which is 50 per cent more than that of the rest of the nation and 35 per cent more than the national average. Historically, Sindh's contribution to Pakistan's GDP has been between 30% and 32.7%. Its share in the service sector has ranged from 21% to 27.8% and in the agriculture sector from 21.4% to 27.7%. Performance-wise, its best sector is manufacturing, where its share has ranged from 36.7% to 46.5%. Since 1972, Sindh's GDP has expanded by 3.6 times.
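The two per-capita comparisons are mutually consistent only for a particular population share, which can be solved for as a sanity check (illustrative arithmetic using the quoted figures only):

```python
# Consistency check of the per-capita income claims.
# "50% more than the rest"         => rest    = 1400 / 1.5
# "35% more than national average" => average = 1400 / 1.35
sindh = 1400
rest = sindh / 1.5        # ≈ $933
average = sindh / 1.35    # ≈ $1,037

# The national average is a population-weighted mix of the two groups:
#   average = s * sindh + (1 - s) * rest  ->  solve for Sindh's share s
s = (average - rest) / (sindh - rest)
print(f"Implied Sindh population share: {s:.0%}")  # ≈ 22%
```

An implied share of about 22% is close to Sindh's actual share of Pakistan's population, so the two claims hang together.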
Endowed with coastal access, Sindh is a major centre of economic activity in Pakistan and has a highly diversified economy ranging from heavy industry and finance centred in and around Karachi to a substantial agricultural base along the Indus. Manufacturing includes machine products, cement, plastics, and other goods.
Sindh is Pakistan's largest natural gas-producing province.
Agriculture is very important in Sindh, with cotton, rice, wheat, sugar cane, dates, bananas, and mangoes as the most important crops. Sindh is the richest province of Pakistan in natural resources, including gas, petroleum, and coal.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9209847450256348,
"language": "en",
"url": "https://www.anl.gov/article/recell-center-could-save-costly-nickel-and-cobalt-transform-battery-recycling-worldwide",
"token_count": 1358,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.006134033203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b427dcf4-f358-4933-a88b-c2102f47d0c1>"
}
|
The growing number of electric vehicles on U.S. roads poses a question: What will happen when those cars go out of service? Without recycling, their batteries may become 8 million tons of global scrap by 2040.
Yet those end-of-life lithium-ion batteries are an important resource. They contain viable and valuable materials that can — and should — be recovered.
To address this challenge, the U.S. Department of Energy (DOE) launched the ReCell Center in February 2019. The $5 million per year center, which is funded by DOE’s Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office, is leading the way to make pivotal discoveries in cost-effective lithium-ion battery recycling so valuable battery components such as cobalt and nickel compounds don’t go to waste.
ReCell is a collaboration between DOE’s Argonne National Laboratory, which leads the initiative, National Renewable Energy Laboratory (NREL), Oak Ridge National Laboratory (ORNL), as well as Worcester Polytechnic Institute, University of California at San Diego and Michigan Technological University.
The ReCell mission is to help grow a globally competitive U.S. recycling industry. Accelerating lithium-ion battery recycling will ultimately bring down the cost of electric vehicle batteries for consumers and reduce our reliance on foreign sources of these materials.
Since ReCell’s inception, we have made pivotal discoveries in each of ReCell’s four focus areas: direct cathode recycling, recovering other materials, design for recycling, and modeling and analysis.
Taking the direct approach to cathode recycling
The manufacturing costs for a vehicle battery can be 5% to 30% lower when using recycled cathode material, which stores lithium ions and releases them when a battery charges. But current recycling methods only produce metal salts that then need to be reprocessed back into battery materials. The value of these materials is too low for recycling to be commercially feasible in the U.S.
Direct cathode recycling, on the other hand, maintains the material as a cathode, retaining its original value. But the variety of ingredients and evolving chemistry of today’s batteries introduce technical hurdles. In ReCell’s first year, we focused our efforts on this important piece of the puzzle, testing at least nine basic direct recycling concepts. These include successful removal of the battery’s binder, which holds active materials together; and new methods for restoring the lithium content of degraded cathode materials.
Recovering other battery materials
We are also pursuing projects to recover other battery materials, such as lithium salts and electrolyte solvents, to maximize the number of reusable components and provide more revenue for recyclers. An important advance from researchers at Oak Ridge National Laboratory enables recovery of clean “black mass,” the mix of cathode and anode powders left after battery cells, or single units, have been shredded. They identified solvents that can quickly separate active material from the collector foil — i.e., copper or aluminum that acts as an electrical conductor and mechanical support — making it available for further processing.
Designing with recycling in mind
One way to help reduce the demand for battery materials is to make sure we design cells to be recycled in the first place. We are looking into feasible options and working to identify cell designs that would extend lifetimes far beyond the current 10-year average. Hitting this goal would reduce demand for replacements and, in turn, the materials needed to make them. For example, we are starting to configure new cells that could enable an electrode “rinsing” process to rejuvenate a cell and allow it to run longer.
Evaluating supply and demand
All of these technical areas are critical to building a new battery recycling economy, but we also need to ensure the processes are economically viable and environmentally sound. How much will they cost? How much energy and water will they use? What kind of revenue can they generate?
We have developed computer-based models that can help answer such questions and evaluate different technologies on the road from lab to commercial facility. Argonne’s EverBatt model, for example, allows us to compare the costs and impact of individual processes over the entire lifetime of a battery in order to identify the most promising options. Meanwhile, the National Renewable Energy Laboratory has begun to put battery material supply and demand in a global context with a model, named Lithium-ion Battery Recycling Analysis (LIBRA).
Leading global collaborations
Along the way, ReCell is fostering collaboration with industry to advance these technologies. Last fall, we met with more than 130 attendees from industry, government and academia. We also recently began our first industry-sponsored project with the Responsible Battery Coalition to develop battery recycling best practices.
Expanded facilities, greater impact
In ReCell’s first year, we focused on pinpointing the most promising battery recycling technologies. Now we’re expanding the ReCell facilities at Argonne. A laboratory that will house scale-up equipment, along with a bench-scale pilot laboratory for ReCell work, is opening this summer within the Materials Engineering Research Facility. This laboratory will give us access to the experts, equipment, and characterization tools needed to achieve our mission.
By 2022 at ReCell, we hope to demonstrate direct recycling from old cell to new, positioning us for pilot-scale demonstrations that can translate to commercial adoption. What we’re learning at ReCell will help lower battery costs for consumers and increase national energy security.
The Office of Energy Efficiency and Renewable Energy supports early-stage research and development of energy efficiency and renewable energy technologies to strengthen U.S. economic growth, energy security, and environmental quality.
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.
The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.
The material presented here is for educational purposes only, and is not intended to be used as financial, investment, or legal advice.
Understanding Credit Basics: What You Need to Know
Maintaining a good credit score and history is imperative to your financial well-being — and it’s not as mysterious or scary as it may seem. Find out why credit is so important, who’s looking at your credit, how to read your credit report and what to do if you stumble along the way.
If you’re new to all things financial, credit can seem like a mystical concept. You’ve heard that it’s crucial to your success as an adult, and you know that there are such things as “good” and “bad” credit. You might have even been approved (or declined) credit based on your credit score. But how does it all work?
Here are a few of the basics:
Why should you use credit?
You might be wondering, why would I purchase something on credit if I can just pay cash? The short answer is that lenders base their decisions, in part, on your previous credit history. Becoming debt-free is a great aspirational goal! However, when you do end up needing credit for an emergency, or a larger purchase such as a home or car, having a solid, positive credit history built up is critical.
Who wants to see your credit report … and why?
When you apply for new credit, a lender will check your credit report and history. But potential creditors aren’t the only ones peeking at your history. Employers, utility companies, landlords and insurance companies are just some of the other sources allowed to view your credit history under The Fair Credit Reporting Act. Most are reviewing your borrowing history to determine whether to lend you money or provide a service.
Understanding your credit report
You can check your credit report through one of the three major reporting agencies (TransUnion, Equifax and Experian) or by requesting a free credit report online. Your credit report includes a history of each account listed, including open and closed dates, your credit limit or loan amount, and info that indicates whether you made on-time, late or missed payments each month. Your credit report also shows any bankruptcies and lists companies that have checked your credit recently.
Some credit reports also include an overall credit score that ranges from 300 to 850, which gives lenders a big-picture idea of how adept you are at using your credit wisely. A score of 700 or above is considered good, while those with a score above 800 have excellent credit.
How to dispute inaccurate information
One of the most critical reasons to check your own credit is to see if there are any mistakes. After all, you don’t want to lose out on good loan rates and higher credit limits — or worse, get declined — because of an error on your report. If you do spot something fishy, you’ll likely need to contact the lender directly and request that they update their reporting with the credit bureaus. You may still need to follow up or contact the credit bureaus in writing afterward to make sure the changes are made.
Repairing damaged credit
If you’ve made a few credit mistakes along the way, don’t despair! Negative items such as a missed payment won’t stay on your credit report forever. There are also immediate steps you can take to repair the damage and improve your credit score, some of which can be found in our Financial Check-Up List or in one of these killer money workouts.
There’s plenty more to learn; we’re just scratching the surface here! Sign up for our free Understanding Credit Basics webinar to get expert answers to these and other questions about building and using credit. This one-hour online seminar is part of our Financial Wellness series, which covers topics important to your financial health.
In order to increase revenue, state governments levy sales tax on the sale of goods within their states. For the inter-state movement of goods, the Union (Central) Government also levies sales tax, known as Central Sales Tax. In short, sales tax is a tax imposed on the purchase of goods.
The rates of Sales Tax depend upon the nature of goods purchased and are different for different goods. Even different states have different rates of Sales Tax on the same goods. Some items of necessity or of daily use for common persons are completely exempted from Sales Tax.
Important terms needed to understand this chapter fully are explained in our article titled Profit and Loss; readers should go through that article first to become familiar with the terminology.
Sale Price: It is the price after subtracting discount from the list price. i.e. Sale Price=List price-Discount.
- Sales Tax is calculated on the Sale Price.
- Sales Tax = (Rate of Sales Tax × Sale Price)/100
- Rate of Sales Tax = (Sales Tax/Sale Price) × 100%
The amount of money paid by a customer for an article = The Sale Price of the article + Sales Tax on it.
If P is the printed price of a commodity, r% is the rate of sales tax and S is the selling price, then
S = P(1 + r/100) and Sales Tax = Pr/100.
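These formulas translate directly into code. A small sketch in Python (the function names are my own, for illustration only):

```python
def selling_price(printed_price: float, tax_rate: float) -> float:
    """Price paid by the customer: S = P * (1 + r/100)."""
    return printed_price * (100 + tax_rate) / 100

def sales_tax(printed_price: float, tax_rate: float) -> float:
    """Sales tax amount: P * r / 100."""
    return printed_price * tax_rate / 100

# The figures from Example 1: P = Rs 4200, r = 10%
print(selling_price(4200, 10))  # 4620.0
print(sales_tax(4200, 10))      # 420.0
```

Note that the selling price is computed as P(100 + r)/100, which is algebraically the same as P(1 + r/100) but avoids a rounding surprise in floating-point arithmetic.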
Example 1: The printed price of a cycle is Rs 4200. The rate of sales tax on it is 10%. Find the price at which the cycle can be purchased.
Solution: Given, Printed Price (P) = Rs 4200
Rate of sales tax = 10%, i.e. r = 10
Selling price, S = P(1 + r/100) = 4200 × (1 + 10/100) = Rs 4620
Therefore, the cycle can be purchased for Rs 4620.
Example 2: Ms Sinha bought a TV set for Rs 5136 which includes sales tax. If the M.P. of the TV set is Rs 4800, what was the rate of sales tax?
Solution: Given, S= Rs 5136
M.P. = Rs 4800
Sales Tax = S − M.P. = Rs (5136 − 4800) = Rs 336
We know that, Rate of Sales Tax = (Sales Tax/M.P.) × 100% = (336/4800) × 100% = 7%
So, the rate of sales tax was 7%.
Example 3: If the rate of sale tax increases by 5%, the selling price of an article goes up by Rs 40. Find the marked price of the article.
Let the M.P. be P and the rate of sales tax be r%.
Then the selling price = P(1 + r/100).
When the rate of sales tax increases by 5%, the selling price becomes P(1 + (r + 5)/100).
According to the question,
P(1 + (r + 5)/100) − P(1 + r/100) = 40
⇒ 5P/100 = 40 ⇒ P = 800
Therefore, the marked price of the article is Rs 800.
Example 4: Rohit buys a computer for Rs 38400 which includes 10% discount and then 6% sales tax on the listed price. Find the listed price of the computer.
Solution: Let the listed price be P.
Then, the discount = 10% of P = P/10
And, Sales Tax = 6% of P = 6P/100
Therefore, the price paid = P − P/10 + 6P/100 = 96P/100
From the question, 96P/100 = 38400 ⇒ P = 40000
The listed price of the computer = Rs 40000.
Example 5: Smith buys a radio-set for Rs 1696. The rate of sales tax is 6%. He asks the shopkeeper to reduce the price of the radio-set to such an extent that he does not have to pay anything more than Rs 1696 including sales tax. Calculate the reduction needed in the cost price of the radio-set.
Solution: Let the cost of the radio-set be reduced to Rs x.
According to the given statement, x + 6% of x = 1696
⇒ 106x/100 = 1696 ⇒ x = 1600
Therefore, reduction needed = Rs (1696 − 1600) = Rs 96.
Example 6: The catalogue price of a computer set is Rs 45000. The shopkeeper gives a discount of 7% on the listed price. He gives a further off-season discount of 4% on the balance. However, sales tax at 8% is charged on the remaining amount. Find (i) the amount of sales tax a customer has to pay, (ii) the final price he has to pay for the computer set.
Solution: Since, P=Rs 45000
Discount= 7% of Rs 45000=Rs 3150
S.P. =List price- discount=Rs (45000-3150) = Rs 41850
Off-season discount=4% of Rs 41850=Rs 1674
Net S.P. =Rs (41850-1674) =Rs 40176
(i) The amount of sales tax a customer has to pay
=8% of Rs 40176 = Rs 3214.08
(ii) The final price the customer has to pay for the computer
=Net S.P. + sales tax
=Rs (40176+3214.08) = Rs 43390.08
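The successive-discounts-then-tax calculation of Example 6 can be sketched in Python (the helper name is mine; only the figures from the example are used):

```python
def final_price(list_price: float, discounts: list[float], tax_rate: float):
    """Apply successive percentage discounts, then sales tax on the balance.

    Returns (sales_tax_amount, final_price_including_tax).
    """
    price = list_price
    for d in discounts:
        price -= price * d / 100  # each discount applies to the running balance
    tax = price * tax_rate / 100
    return tax, price + tax

# Example 6: Rs 45000 list price, 7% then 4% discounts, 8% sales tax
tax, total = final_price(45000, [7, 4], 8)
print(round(tax, 2), round(total, 2))  # 3214.08 43390.08
```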
Example 7: Ram purchases an article for Rs 7820 which includes 15% rebate on the list price and 15% sales tax on the remaining price. Find the marked price of the article.
Solution: Let the list price be Rs x.
Therefore, Sale Price = x − 15% of x = 85x/100
Since Sales Tax = 15% of the sale price,
Price paid = (85x/100) × (1 + 15/100) = (85x/100) × (115/100)
From the question, (85x/100) × (115/100) = 7820
On solving, we get x = 8000.
The list price = Rs 8000.
Example 8: A shopkeeper buys an article at a rebate of 20 per cent on the list price. He spends Rs 20 on transportation of the article. After charging a sales tax of 6% on the list price, he sells the article at Rs 530. Find his profit percentage.
Solution: Let the list price be Rs x.
Then, the cost price = x − 20% of x = 80x/100 = 4x/5
Actual cost price including transportation cost = 4x/5 + 20
Therefore, S.P. including sales tax = x + 6% of x = 106x/100 = 530, so x = Rs 500
Actual cost price = (4 × 500)/5 + 20 = Rs 420
Profit = List price − Actual cost price = Rs (500 − 420) = Rs 80
Therefore, profit percentage = (80/420) × 100% = 19 1/21% ≈ 19.05%
- Rajat purchases a wrist-watch costing Rs 540. The rate of sales tax is 8%. Find the total amount paid by Rajat for the watch.
- Ramesh paid Rs 345.60 as sales tax on a purchase of Rs 3840. Find the rate of sales tax.
- Manoj purchases a bicycle for Rs 1337.50 including sales tax. If the rate of sales tax is 7%, what is the sale price of the bicycle?
- A colour TV set is marked for sale at Rs 17600 inclusive of sales tax at the rate of 10%. Calculate the sales tax on the TV set.
- A shopkeeper gives three successive discounts of 20%, 10% and 10% on his goods. If, the list price of an article in his shop is Rs 1600 and sales tax on it is 13% of its sale price, find how much a customer will have to pay for it.
- The price of a washing machine, inclusive of sales tax, is Rs 13530. If the sales tax is 10 per cent, find its basic price.
- A bicycle is available for Rs 1664 including sales tax. If the list price of the bicycle is Rs 1600, find:
- The rate of sales tax.
- The price, a customer will pay for the bicycle if the sales tax is increased by 6%.
- The catalogue price of an article is Rs 20000. The dealer allows two successive discounts 15% and 10%. He further allows an off-season discount of 10% on the balance. But sales tax at the rate of 10% is charged on the remaining amount. Find:
- The sales tax amount a customer has to pay.
- The final total price that customer has to pay for the article.
- A wholesaler sells an article for Rs 2700 at a discount of 10% on the list price to a retailer. The retailer, in turn, raises the list price of the article by 15% and sells it for Rs 3657 which includes sales tax on the new marked price. Find: (i) the rate of sales tax. (ii) the profit per cent, made by the retailer.
- Reena goes to a shop to buy a tape recorder whose listed price is Rs 2568. The sales tax on it is 7% of the price asked for. Reena requests the shopkeeper to give a discount on the listed price so that the price of the tape recorder becomes Rs 2568 after adding sales tax on the price. Find the total discount the shopkeeper has to give on the listed price to sell it at Rs 2568.
- A seller buys a suitcase for Rs 2500 and marks up its price. A customer buys the suitcase for Rs 3300 which includes a sales tax of 10% on the marked up price. Find:
- The mark-up percentage on the price of the suitcase.
- His profit percentage.
The need to confront uncertainty in risk assessment has changed little since the 1983 NRC report Risk Assessment in the Federal Government. That report found that:
The dominant analytic difficulty [in decision-making based on risk assessments] is pervasive uncertainty. … there is often great uncertainty in estimates of the types, probability, and magnitude of health effects associated with a chemical agent, of the economic effects of a proposed regulatory action, and of the extent of current and possible future human exposures. These problems have no immediate solutions, given the many gaps in our understanding of the causal mechanisms of carcinogenesis and other health effects and in our ability to ascertain the nature or extent of the effects associated with specific exposures.
Those gaps in our knowledge remain, and yield only with difficulty to new scientific findings. But a powerful solution exists to some of the difficulties caused by the gaps: the systematic analysis of the sources, nature, and implications of the uncertainties they create.
Context Of Uncertainty Analysis
EPA decision-makers have long recognized the usefulness of uncertainty analysis. As indicated by former EPA Administrator William Ruckelshaus (1984):
First, we must insist on risk calculations being expressed as distributions of estimates and not as magic numbers that can be manipulated without regard to
what they really mean. We must try to display more realistic estimates of risk to show a range of probabilities. To help do this, we need new tools for quantifying and ordering sources of uncertainty and for putting them into perspective.
Ten years later, however, EPA has made little headway in replacing a risk-assessment "culture" based on "magic numbers" with one based on information about the range of risk values consistent with our current knowledge and lack thereof.
As we discuss in more depth in Chapter 5, EPA has been skeptical about the usefulness of uncertainty analysis. For example, in its guidance to those conducting risk assessments for Superfund sites (EPA, 1991f), the agency concludes that quantitative uncertainty assessment is usually not practical or necessary for site risk assessments. The same guidance questions the value and accuracy of assessments of the uncertainty, suggesting that such analyses are too data-intensive and "can lead one into a false sense of certainty."
In direct contrast, the committee believes that uncertainty analysis is the only way to combat the "false sense of certainty," which is caused by a refusal to acknowledge and (attempt to) quantify the uncertainty in risk predictions.
This chapter first discusses some of the tools that can be used to quantify uncertainty. The remaining sections discuss specific concerns about EPA's current practices, suggest alternatives, and present the committee's recommendations about how EPA should handle uncertainty analysis in the future.
Nature Of Uncertainty
Uncertainty can be defined as a lack of precise knowledge as to what the truth is, whether qualitative or quantitative. That lack of knowledge creates an intellectual problem (that we do not know what the "scientific truth" is) and a practical problem (that we need to determine how to assess and deal with risk in light of that uncertainty). This chapter focuses on the practical problem, which the 1983 report did not shed much light on and which EPA has only recently begun to address in any specific way. This chapter takes the view that uncertainty is always with us and that it is crucial to learn how to conduct risk assessment in the face of it. Scientific truth is always somewhat uncertain and is subject to revision as new understanding develops, but the uncertainty in quantitative health risk assessment might be uniquely large, relative to other science-policy areas, and it requires special attention by risk analysts. These analysts need to address questions such as: What should we do in the face of uncertainty? How should it be identified and managed in a risk assessment? How should an understanding of uncertainty be forwarded to risk managers, and to the public? EPA has recognized the need for more and better uncertainty assessment (see EPA memorandum in Appendix B), and other investigators have begun to make substantial progress with the difficult computations that are often required (Monte Carlo
methods, etc.). However, it appears that these changes have not yet affected the day-to-day work of EPA.
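As a sketch of what such Monte Carlo computations look like, the following propagates parameter uncertainty through a toy risk model; the distributions, medians, and spreads are invented purely for illustration:

```python
import random

random.seed(1)

# Toy model: risk = exposure x potency, with each input uncertain.
# Lognormal medians and spreads below are made up for illustration only.
N = 100_000
risks = sorted(
    random.lognormvariate(-9.2, 1.1) * random.lognormvariate(-2.3, 1.1)
    for _ in range(N)
)

# Summarize the resulting risk distribution by its percentiles
median = risks[N // 2]
p95 = risks[int(0.95 * N)]
print(f"median risk ~ {median:.1e}, 95th percentile ~ {p95:.1e}")
```

The output is a distribution of risk, not a single number; a risk manager can then see, for instance, how far the 95th percentile sits above the median before choosing a point estimate.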
Some scientists, mirroring the concerns expressed by EPA, are reluctant to quantify uncertainty. There is concern that uncertainty analysis could reduce confidence in a risk assessment. However, that attitude toward uncertainty may be misguided. The very heart of risk assessment is the responsibility to use whatever information is at hand or can be generated to produce a number, a range, a probability distributionwhatever expresses best the present state of knowledge about the effects of some hazard in some specified setting. Simply to ignore the uncertainty in any process is almost sure to leave critical parts of the process incompletely examined, and hence to increase the probability of generating a risk estimate that is incorrect, incomplete, or misleading.
For example, past analyses of the uncertainty about the carcinogenic potency of saccharin showed that potency estimates could vary by a factor as large as 10¹⁰. However, this example is not representative of the ranges in potency estimates when appropriate models are compared. Potency estimates can vary by a factor of 10¹⁰ only if one allows the choice of some models that are generally recognized as having no biological plausibility and only if one uses those models for a very large extrapolation from high to low doses. The judicious application of concepts of plausibility and parsimony can eliminate some clearly inappropriate models and leave a large but perhaps a less daunting range of uncertainties. What is important, in this context of enormous uncertainty, is not the best estimate or even the ends of this 10¹⁰-fold range, but the best-informed estimate of the likelihood that the true value is in a region where one rather than another remedial action (or none) is appropriate. Is there a small chance that the true risk is as large as 10⁻², and what would be the risk-management implications of this very small probability of very large harm? Questions such as these are what uncertainty analysis is largely about. Improvements in the understanding of methods for uncertainty analysis (as well as advances in toxicology, pharmacokinetics, and exposure assessment) now allow uncertainty analysis to provide a much more accurate, and perhaps less daunting, picture of what we know and do not know than in the past.
Before discussing the practical applications of uncertainty analysis, it may be best to step back and discuss it as an intellectual endeavor. The problem of uncertainty in risk assessment is large, complex, and nearly intractable, unless it is divided into smaller and more manageable topics. One way to do so, as seen in Table 9-1 (Bogen, 1990a), is to classify sources of uncertainty according to the step in the risk assessment process in which they occur. A more abstract and generalized approach preferred by some scientists is to partition all uncertainties into the three categories of bias, randomness, and true variability. This method
of classifying uncertainty is used by some research methodologists, because it provides a complete partition of types of uncertainty, and it might be more productive intellectually: bias is almost entirely a product of study design and performance; randomness a problem of sample size and measurement imprecision; and variability a matter for study by risk assessors but for resolution in risk management (see Chapter 10).
However, a third approach to categorizing uncertainty may be more practical than this scheme, and yet less peculiar to environmental risk assessment than the taxonomy in Table 9-1.
This third approach, a version of which can be found in EPA's new exposure guidelines (EPA, 1992a) and in the general literature on risk assessment uncertainty (Finkel, 1990; Morgan and Henrion, 1990), is adopted here to facilitate communication and understanding in light of present EPA practice. Although the committee makes no formal recommendation on which taxonomy to use, EPA staff might want to consider the alternative classification above (bias,
randomness, and variability) to supplement their current approach in future documents. Our preferred taxonomy consists of:
Problems With EPA's Current Approach To Uncertainty
EPA's current practice on uncertainty is described elsewhere in this report, especially in Chapter 5, as part of the risk-characterization process. Overall, EPA tends at best to take a qualitative approach to uncertainty analysis, and one that emphasizes model uncertainty rather than parameter uncertainties. The uncertainties in the models and the assumptions made are listed (or perhaps described in a narrative way) in each step of the process; these are then presented in a nonquantitative statement to the decision-maker.
Quantitative uncertainty analysis is not well explored at EPA. There is little internal guidance for EPA staff about how to evaluate and express uncertainty. One useful exception is the analysis conducted for the National Emission Standards for Hazardous Air Pollutants (NESHAPS) radionuclides document (described in Chapter 5), which provides a good initial example of how uncertainty analysis could be conducted for the exposure portion of risk assessment. Other EPA efforts, however, have been primarily qualitative, rather than quantitative. When uncertainty is analyzed at EPA, the analysis tends to be piecemeal and highly focused on the sensitivity of the assessment to the accuracy of a few specified assumptions, rather than a full exploration of the process from data collection to final risk assessment, and the results are not used in a systematic fashion to help decision-makers.
The major difficulty with EPA's current approach is that it does not supplant or supplement artificially precise single estimates of risk ("point estimates") with ranges of values or quantitative descriptions of uncertainty, and that it often lacks even qualitative statements of uncertainty. This obscures the uncertainties inherent in risk estimation (Paustenbach, 1989; Finkel, 1990), although the uncertainties themselves do not go away. Risk assessments that do not include sufficient attention to uncertainty are vulnerable to four common and potentially serious pitfalls (adapted from Finkel, 1990):
Perhaps most fundamentally, without uncertainty analysis it can be quite difficult to determine the conservatism of an estimate. In an ideal risk assessment, a complete uncertainty analysis would provide a risk manager with the ability to estimate risk for each person in a given population in both actual and projected scenarios of exposures; it would also estimate the uncertainty in each prediction in quantitative, probabilistic terms. But even a less exhaustive treatment of uncertainty will serve a very important purpose: it can reveal whether the point estimate used to summarize the uncertain risk is "conservative," and if so, to what extent. Although the choice of the "level of conservatism" is a risk-management prerogative, managers might be operating in the dark about how "conservative" these choices are if the uncertainty (and hence the degree to which the risk estimate used may fall above or below the true value) is ignored or assumed, rather than calculated.
Some Alternatives To EPA's Approach
A useful alternative to EPA's current approach is to set as a goal a quantitative assessment of uncertainty. Table 9-2, from Resources for the Future's Center for Risk Management, suggests a sequence of steps that the agency could follow to generate a quantitative uncertainty estimate. To determine the uncertainty in the estimate of risk associated with a source probably requires an understanding of the uncertainty in each of the elements shown in Table 9-3. The following pages describe more fully the development of probabilities and the method of using probabilities as inputs into uncertainty analysis models.
A probability density function (PDF) describes the uncertainty, encompassing objective or subjective probability, or both, over all possible values of risk. When the PDF is presented as a smooth curve, the area under the curve between any two points is the probability that the true value lies between the two points. A cumulative distribution function (CDF), which is the integral or sum of the PDF up to each point, shows the probability that a variable is equal to or less than each of the possible values it can take on. These distributions can some-
times be estimated empirically with statistical techniques that can analyze large sets of data adequately. Sometimes, especially when data are sparse, a normal or lognormal distribution is assumed and its mean and variance (or standard deviation) are estimated from available data. When data are in fact normally distributed over the whole range of possible values, the mean and variance completely characterize the distribution, including the PDF and CDF. Thus, with certain assumptions (such as normality), only a few points might be needed to estimate the whole distribution for a given variable, although more points will both im-
prove the representation of the uncertainty and allow examination of the normality assumption. However, the problem remains that apparently minor deviations in the extreme tails may have major implications for risk assessment (Finkel, 1990). Furthermore, it is important to note that the assumption of normality may be inappropriate.
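The fitting-and-tails point above can be made concrete with a short sketch: fit a lognormal to sample data by matching the mean and standard deviation of the log-values, then compare the fitted and empirical CDFs in the upper tail (the sample data here are simulated, not real measurements):

```python
import math
import random
import statistics

random.seed(7)

# Simulated "measurements" of an uncertain exposure-like quantity
data = [random.lognormvariate(0.0, 0.5) for _ in range(50)]

# Fit a lognormal by matching the mean and SD of the log-values
logs = [math.log(x) for x in data]
mu, sigma = statistics.mean(logs), statistics.stdev(logs)

def lognormal_cdf(x: float) -> float:
    """CDF of the fitted lognormal at x."""
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

def empirical_cdf(x: float) -> float:
    """Fraction of observations at or below x."""
    return sum(1 for v in data if v <= x) / len(data)

# Compare in the upper tail, where modest fitting errors matter most
x = sorted(data)[-2]
print(f"empirical: {empirical_cdf(x):.2f}, fitted: {lognormal_cdf(x):.2f}")
```

With real data, a large gap between the two curves in the tail is exactly the kind of evidence that the assumed distributional form is inappropriate.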
When data are flawed or not available or when the scientific base is not understood well enough to quantify the probability distributions of all input variables, a surrogate estimate of one or more distributions can be based on analysis of the uncertainty in similar variables in similar situations. For example, one can approximate the uncertainty in the carcinogenic potency of an untested chemical by using the existing frequency distribution of potencies for chemicals already tested (Fiering et al., 1984).
Subjective Probability Distributions
A different method of probability assessment is based on expert opinion. In this method, the beliefs of selected experts are elicited and combined to provide a subjective probability distribution. This procedure can be used to estimate the uncertainty in a parameter (cf., the subjective assessment of the slope of the dose-response relationship for lead in Whitfield and Wallsten, 1989). However, subjective assessments are more often used for a risk assessment component for which the available inference options are logically or reasonably limited to a finite set of identifiable, plausible, and often mutually exclusive alternatives (i.e., for model uncertainty). In such an analysis, alternative scenarios or models are assigned subjective probability weights according to the best available data and scientific judgment; equal weights might be used in the absence of reliable data or theoretical justifications supporting any option over any other. For example, this approach could be used to determine how much the risk assessor should rely on relative surface area vs. body weight in conducting a dose-response assessment. The application of particular sets of subjective probability weights in particular inference contexts could be standardized, codified, and updated as part of EPA's implementation of uncertainty analysis guidelines (see below).
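A minimal sketch of such subjective probability weighting across inference options follows; the two scaling models, their potency values, and the 50/50 weights are invented for illustration:

```python
# Two plausible interspecies scaling models for a hypothetical chemical,
# each yielding a different human potency estimate (per mg/kg-day).
# Potencies and subjective weights below are invented, not real data.
models = {
    "body-weight scaling":  {"potency": 2.0e-3, "weight": 0.5},
    "surface-area scaling": {"potency": 1.4e-2, "weight": 0.5},
}

# Probability-weighted (expected) potency across the model alternatives
expected = sum(m["potency"] * m["weight"] for m in models.values())
print(f"expected potency = {expected:.2e}")  # 8.00e-03
```

In practice the weights would come from elicited expert judgment (equal weights in the absence of data supporting one option over another), and could be standardized and updated as part of agency guidance.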
Objective probabilities might seem inherently more accurate than subjective probabilities, but this is not always true. Formal methods (Bayesian statistics) exist to incorporate objective information into a subjective probability distribution that reflects other matters that might be relevant but difficult to quantify, such as knowledge about chemical structure, expectations of the effects of concurrent exposure (synergy), or the scope of plausible variations in exposure. The chief advantage of an objective probability distribution is, of course, its objectivity; right or wrong, it is less likely to be susceptible to major and perhaps undetectable bias on the part of the analyst; this has palpable benefits in defending a risk assessment and the decisions that follow. A second advantage is that objective probability distributions are usually far easier to determine. However, there can be no rule that objective probability estimates are always preferred to subjective estimates, or vice versa.
Model Uncertainty: "Unconditional" Versus "Conditional" PDFs
Regardless of whether objective or subjective methods are used to assess them, the distinction between parameter uncertainty and model uncertainty remains pivotal and has implications for implementing improved risk assessments that acknowledge uncertainty. The most important difference between parameter uncertainty and model uncertainty, especially in the context of risk assessment, concerns how to interpret the output of an objective or subjective probability assessment for each.
One can readily construct a probability distribution for risk, exposure, potency, or some other quantity that reflects the probabilities that various values, corresponding to fundamentally different scientific models, represent the true state. Such a depiction, which we will call an "unconditional PDF" because it tries to represent all the uncertainty surrounding the quantity, can be useful for some decisions that agencies must make. In particular, EPA's research offices might be able to make more efficient decisions about where resources should be channeled to study particular risks, if the uncertainty in each risk were presented unconditionally. For example, an unconditional distribution might be reported in this way: "the potency of chemical X is 10⁻² per part per million of air (with an uncertainty of a factor of 5 due to parameter uncertainty surrounding this value), but only if the LMS model is correct; if instead the chemical has a threshold, the potency at any ambient concentration is effectively zero." It might even help to assign subjective weights to the current thinking about the probability that each model is correct, especially if research decisions have to be made for many risks.
In addition, some specified regulatory decisions (those involving the ranking of different risks for the purpose of allowing "tradeoffs" or "offsets") can also suffer if model uncertainty is not quantified. For example, two chemicals (Y and Z) with the same potency (assuming that the LMS model is correct) might involve different degrees of confidence in the veracity of that model assumption. If we judged that chemical Y had a 90%, or even a 20%, chance of acting in a threshold fashion, it might be a mistake to treat it as having the same potency as a chemical Z that is virtually certain to have no threshold and then to allow increased emissions of Z in exchange for greater reductions in Y.
However, unconditional statements of uncertainty can be misleading if managers use them for standard-setting, residual-risk decisions, or risk communication, and especially if others then misinterpret these statements. Consider two situations, involving the same hypothetical chemical, in which the same amount of uncertainty can have different implications, depending on whether it stems
from parameter uncertainty (Situation A) or ignorance about model choice (Situation B). In Situation A, suppose that the uncertainty is due entirely to parameter sampling error in a single available bioassay involving few test animals. If 3 of 30 mice tested in that bioassay developed tumors, then a reasonable central-tendency estimate of the risk to mice at the dose used would be 0.1 (3/30). However, because of sampling error, there is approximately a 5% probability that the true number of tumors might be as low as zero (leading to zero as the lower confidence limit, LCL, of risk) and about a 5% probability that the true number of tumors is 6 or higher (leading to 0.2 (6/30) as the upper confidence limit, UCL, of risk).
In Situation B, suppose instead that the uncertainty is due entirely to ambiguity over which model of biological effect is correct. In this hypothetical situation, there was one bioassay in which 200 of 1,000 mice developed tumors; the risk to mice at the dose would be 0.2 (with essentially no parameter uncertainty due to the very large sample size). But suppose scientists disagree about whether the effect in mice is at all relevant to humans, because of profound metabolic or other differences between the two species, but can agree to assign equal probabilities of 50% to each eventuality. In this case as well, the LCL of the risk to humans would be zero (if the "nonrelevance" theory were correct), and the UCL would be 0.2 (if the "relevance" theory were correct), and it would be tempting to report a "central estimate" of 0.1, corresponding to the expected value of the two possible outcomes, weighted by their assigned probabilities. In either Situation A or B, it would be mathematically correct to say the following: "The expected value of the estimate of the number of annual excess cancer deaths nationwide caused by exposure to this substance is 1,000; the LCL of this estimate is zero deaths, and the UCL is 2,000 deaths."3
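The arithmetic behind these two situations can be sketched numerically. The 10,000-fold scale factor converting animal risk to annual human deaths follows footnote 3; the weights and risks are the hypothetical values given in the text.

```python
# The report's hypothetical Situations A and B yield identical summary
# statistics (expected value, LCL, UCL) from very different uncertainty
# structures.
SCALE = 10_000  # converts animal risk to predicted annual human deaths

# Situation A: parameter (sampling) uncertainty around a single estimate.
central_A, lcl_A, ucl_A = 3 / 30, 0 / 30, 6 / 30

# Situation B: two incompatible models, each assigned 50% credence.
models_B = [(0.5, 0.0),   # "nonrelevance" theory: zero human risk
            (0.5, 0.2)]   # "relevance" theory: mouse risk applies directly

mean_B = sum(w * r for w, r in models_B)
lcl_B = min(r for _, r in models_B)
ucl_B = max(r for _, r in models_B)

for label, mean, lo, hi in [("A", central_A, lcl_A, ucl_A),
                            ("B", mean_B, lcl_B, ucl_B)]:
    print(f"Situation {label}: expected deaths = {mean * SCALE:.0f}, "
          f"LCL = {lo * SCALE:.0f}, UCL = {hi * SCALE:.0f}")
```

Both situations print the same bottom line (1,000 expected deaths; LCL 0; UCL 2,000), which is exactly why the summary statistics alone obscure the controversy.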
We contend that in such cases, which typify the two kinds of uncertainties that risk managers must deal with, it would be a mistake simply to report the confidence limits and expected value in Situation B as one might do more routinely in Situation A, especially if one then used these summary statistics to make a regulatory decision. The risk-communication problem in treating this dichotomous model uncertainty (Situation B) as though it were a continuous probability distribution is that it obscures important information about the scientific controversy that must be resolved. Risk managers and the public should be given the opportunity to understand the sources of the controversy, to appreciate why the subjective weights assigned to each model are at their given values, and to judge for themselves what action is appropriate when the two theories, at least one of which must be incorrect, predict such disparate outcomes.
More critically, the expected value in Situation B might have dramatically different properties as an estimate for decision-making from the one in Situation A. The estimate of 1,000 deaths in Situation B is a contrivance of multiplying subjective weights that corresponds to no possible true value of risk, although this is not itself a fatal flaw; indeed, it is possible that a strategy of deliberately
inviting errors of both overprotection and underprotection at each decision will turn out to be optimal over a long-run set of similar decisions. The more fundamental problem is that any estimate of central tendency does not necessarily lead to optimal decision-making. This would be true even if society had no desire to make conservative risk management decisions.
Simply put, although classical decision theory does encourage the use of expected values that take account of all sources of uncertainty, it is not in the decision-maker's or society's best interest to treat fundamentally different predictions as quantities that can be "averaged" without considering the effects of each prediction on the decision that it leads to. It is possible that a coin-toss gamble between zero deaths and 2,000 deaths would lead a regulator rationally to act as though 1,000 deaths were the certain outcome. But this is only a shorthand description of the actual process of expected-value decision-making, which asks how the decisions that correspond to estimates of zero deaths, 1,000 deaths, and 2,000 deaths perform relative to each other, in light of the possibility that each estimate (and hence each decision) is wrong. In other words, the choice to use an unconditional PDF when there is the kind of model uncertainty shown in Situation B is a choice between the possibility of overprotecting or underprotecting (if one model is accepted and the other rejected) and the certainty of erring in one direction or the other if the hybrid estimate of 1,000 is constructed. Because in this example the outcomes are numbers that can be manipulated mathematically, it is tempting to report the average, but this would surely be nonsensical if the outcomes were not numerical. If, for example, there were model uncertainty about where on the Gulf Coast a hurricane would hit, it would be sensible to elicit subjective judgment about the probability that a model predicting that the storm would hit in New Orleans was correct, versus the probability that an alternative model (say, one that predicted that the storm would hit in Tampa) was correct. It would also be sensible to assess the expected losses of lives and property if relief workers were irrevocably deployed in one location and the storm hit the other ("expected" losses in the sense of probability times magnitude).
It would be foolish, however, to deploy workers irrevocably in Alabama on the grounds that it was the "expected value" of halfway between New Orleans and Tampa under the model uncertaintyand yet this is just the kind of reasoning invited by indiscriminate use of averages and percentiles from distributions dominated by model uncertainty.
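Under hypothetical loss figures (invented for illustration; only the 50/50 model weights come from the text), the expected-loss comparison works out as follows: once losses are evaluated per decision, the "averaged" deployment is dominated by either committed choice.

```python
# Hypothetical expected-loss table for the hurricane illustration.
p_no, p_tampa = 0.5, 0.5   # subjective weights on the two landfall models

# loss[deployment][actual landfall]: relative losses when workers are
# irrevocably deployed at `deployment` and the storm hits `actual`.
loss = {
    "New Orleans": {"New Orleans": 10, "Tampa": 100},
    "Tampa":       {"New Orleans": 100, "Tampa": 10},
    "Alabama":     {"New Orleans": 80,  "Tampa": 80},   # useful nowhere
}

expected = {d: p_no * row["New Orleans"] + p_tampa * row["Tampa"]
            for d, row in loss.items()}
for d, e in expected.items():
    print(f"deploy in {d}: expected loss = {e:.0f}")
```

With these (invented) numbers, either committed deployment has an expected loss of 55, while the "averaged" Alabama deployment has an expected loss of 80: expected-value reasoning over decisions, not over locations, is what the text prescribes.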
Therefore, we recommend that analysts present separate assessments of the parameter uncertainty that remains for each independent choice of the underlying model(s) involved. This admonition is not inconsistent with our view that model uncertainty is important and that the ideal uncertainty analysis should consider and report all important uncertainties; we simply suspect that comprehension and decision-making might suffer if all uncertainties are lumped together indiscriminately. The subjective likelihood that each model (and hence each parameter uncertainty distribution) might be correct should still be elicited and
reported, but primarily to help the decision-maker gauge which depiction of risk and its associated parameter uncertainty is the correct one, and not to construct a single hybrid distribution (except for particular purposes involving priority-setting, resource allocation, etc.). In the hypothetical Situation B, this would mean presenting both models, their predictions, and their subjective weights, rather than simple summary statistics, such as the unconditional mean and UCL.
The existence of default options for model uncertainty (as discussed in the introduction to Part II and in Chapter 6) also places an important curb on the need for and use of unconditional depictions of uncertainty. If, as we recommend, EPA develops explicit principles for choosing and modifying its default models, it will further codify the practice that for every risk assessment, a sequence of "preferred" model choices will exist, with only one model being the prevailing choice at each inference point where scientific controversy exists. Therefore, the "default risk characterization," including uncertainty, will be the uncertainty distribution (embodying the various sources of parameter and scenario uncertainty) that is conditional on the approved choices for dose-response, exposure, uptake, and other models made under EPA's guidelines and principles. For each risk assessment, this PDF, rather than the single point estimate currently in force, should serve as the quantitative-risk input to standard-setting and residual-risk decisions that EPA will make under the act.
Thus, given the current state of the art and the realities of decision-making, model uncertainty should play only a subsidiary role in risk assessment and characterization, although it might be important when decision-makers integrate all the information necessary to make regulatory decisions. We recognize the intellectual and practical reasons for presenting alternative risk estimates and PDFs corresponding to alternative models that are scientifically plausible, but that have not supplanted a default model chosen by EPA. However, we suggest that to create a single risk estimate or PDF out of various different models not only could undermine the entire notion of having default models that can be set aside for sufficient reason, but could lead to misleading and perhaps meaningless hybrid risk estimates. We have presented this discussion of the pitfalls of combining the results of incompatible models to support our view urging caution in applying these techniques in EPA's risk assessment. Such techniques should not be used for calculating unit risk estimates, because of the potential for misinterpretation of the quantitative risk characterization.4 However, we encourage risk assessors and risk managers to work closely together to explore the implications of model uncertainty for risk management, and in this context explicit characterization of model uncertainty may be helpful. The characterization of model uncertainty may also be appropriate and useful for risk communication and for setting research priorities.
Finally, an uncertainty analysis that carefully keeps separate the influence of fundamental model uncertainties versus other types of uncertainty can reveal which controversies over model choice are actually important to risk management and which are "tempests in teapots." If, as might often be the case, the effect of all parameter uncertainties (and variabilities) is as large as or larger than that contributed by the controversy over model choice, then resolving the controversy over model choice would not be a high priority. In other words, if the "signal" to be discerned by a final answer as to which model or inference option is correct is not larger than the "noise" caused by parameter uncertainty in either (all) model(s), then effort should be focused on data collection to reduce the parameter uncertainties, rather than on basic research to resolve the modeling controversies.
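A rough numerical sketch of this signal-to-noise comparison follows; the potency values and spreads are invented for illustration.

```python
import math
import random
import statistics

random.seed(1)

# Two candidate models whose central potency estimates differ by a factor
# of 2 (the model-choice "signal"), each carrying lognormal parameter
# uncertainty with a geometric standard deviation of about 3.2 (the "noise").
def lognormal_draws(median, gsd, n=10_000):
    """Lognormal sample with the given median and geometric std. deviation."""
    return [median * math.exp(random.gauss(0, math.log(gsd)))
            for _ in range(n)]

model_1 = lognormal_draws(1e-3, gsd=3.2)

signal = abs(math.log(2e-3 / 1e-3))                       # model gap, log units
noise = statistics.stdev([math.log(x) for x in model_1])  # within-model spread

print(f"model-choice gap (log units): {signal:.2f}")
print(f"parameter sd (log units)    : {noise:.2f}")
# Here noise exceeds signal, so data collection to shrink the parameter
# uncertainty would take priority over resolving the model controversy.
```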
Specific Guidance On Uncertainty Analysis
Generating Probability Distributions
The following examples indicate how probability distributions might be developed in practice and illustrate many of the principles and recommended procedures discussed earlier in the chapter.
A second opportunity, which allows the analyst to draw out some of the model uncertainty in dose-response relationships, stems from the flexibility of the LMS model. Even though this model is often viewed as unduly restrictive (e.g., it does not allow for thresholds or for "superlinear" dose-response relations at low doses), it is inherently flexible enough to account for sublinear dose-response relations (e.g., a quadratic function) at low doses. EPA's point-estimation procedure forces the q1* value to be associated with a linear low-dose model, but there is no reason why EPA could not fit an unrestricted model through all the values on the binomial uncertainty distribution of tumor response, thereby generating a distribution for potency that might include some probability that the true dose-response function is of quadratic or higher order (see, for example, Guess et al., 1977; Finkel, 1988).
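One way such a potency distribution might be generated is a parametric bootstrap of the bioassay response. The sketch below uses a one-hit model for simplicity, whereas the procedure described in the text would refit an unrestricted multistage model to each draw; the bioassay numbers and dose are hypothetical.

```python
import math
import random

random.seed(0)

# Parametric-bootstrap sketch (not EPA's actual procedure): resample the
# tumor response from its binomial sampling distribution and convert each
# draw to a low-dose potency under a one-hit model, P(d) = 1 - exp(-q*d).
n_animals, tumors, dose = 30, 3, 50.0   # hypothetical bioassay
p_hat = tumors / n_animals

potencies = []
for _ in range(5_000):
    k = sum(random.random() < p_hat for _ in range(n_animals))  # binomial draw
    p = k / n_animals
    if p < 1.0:
        potencies.append(-math.log(1.0 - p) / dose)  # one-hit q per unit dose

potencies.sort()
print(f"median potency : {potencies[len(potencies) // 2]:.2e}")
print(f"95th percentile: {potencies[int(0.95 * len(potencies))]:.2e}")
```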
Finally, EPA could account for another source of parameter uncertainty if it made use of more than one data set for each carcinogen. Techniques of meta-analysis, more and more frequently used to generate composite point estimates by averaging together the results of different studies (e.g., a second mouse study that might have found 20 leukemic animals out of 50 at the same dose), can perhaps more profitably be used to generate a composite uncertainty distribution. This distribution could be broader than the binomial distribution that would arise from considering the sampling uncertainty in a single study, if the new study contradicted the first, or it could be narrower, if the results of each study were reinforcing (i.e., each result was well within the uncertainty range of the other).
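A minimal conjugate-Bayes sketch of such pooling is shown below, under strong simplifying assumptions of ours (not the report's): both studies used the same dose, responses are exchangeable, and the prior is uniform, Beta(1, 1).

```python
import random

random.seed(2)

# Pool two bioassays into one composite uncertainty distribution for
# tumor response via a conjugate beta-binomial update.
studies = [(3, 30), (20, 50)]   # (tumors, animals): the text's hypothetical pair

alpha, beta = 1.0, 1.0          # uniform Beta(1, 1) prior
for k, n in studies:
    alpha += k                  # accumulate tumor-bearing animals
    beta += n - k               # accumulate tumor-free animals

# Posterior Beta(alpha, beta) for the composite response probability.
draws = sorted(random.betavariate(alpha, beta) for _ in range(10_000))
print(f"composite response: median {draws[5_000]:.3f}, "
      f"90% interval ({draws[500]:.3f}, {draws[9_500]:.3f})")
```

Note that this simple pooling always narrows the distribution; capturing the broadening that genuinely conflicting studies should produce, as the text contemplates, would require something like a hierarchical model rather than naive conjugate pooling.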
Statistical Analysis of Generated Probabilities
Once the needed subjective and objective probability distributions are estimated for each variable in the risk assessment, the estimates can be combined to determine their impact on the ultimate risk characterization. Joint distributions of input variables are often mathematically intractable, so an analyst must use approximating methods, such as numerical integration or Monte Carlo simulation. Such approximating methods can be made arbitrarily precise by appropriate computational methods. Numerical integration replaces the familiar operations of integral calculus by summing the values of the dependent variable(s) on a very fine (multivariate) grid of the independent variables. Monte Carlo methods are similar, but sum the variables calculated at random points on the grid; this is especially advantageous when the number or complexity of the input variables is so large that the costs of evaluating all points on a sufficiently fine grid would be prohibitive. (For example, if each of three variables is examined at 100 points in all possible combinations, the grid would require evaluation at 100^3 = 1,000,000 points, whereas a Monte Carlo simulation might provide results that are almost as accurate with only 1,000-10,000 randomly selected points.)
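The grid-versus-Monte Carlo tradeoff can be illustrated with three uniform inputs; the multiplicative risk form and all values are invented purely for the sketch.

```python
import random

random.seed(3)

# Compare a full-grid evaluation with a Monte Carlo estimate of the mean of
# a product of three uncertain inputs, each uniform on [0, 1].
xs = [i / 99 for i in range(100)]          # 100 grid points per variable

# Full grid: 100^3 = 1,000,000 evaluations.
grid_mean = sum(a * b * c for a in xs for b in xs for c in xs) / 100**3

# Monte Carlo: 10,000 random evaluations of the same product.
mc_mean = sum(random.random() * random.random() * random.random()
              for _ in range(10_000)) / 10_000

print(f"grid mean       : {grid_mean:.4f}")   # (0.5)^3 = 0.125
print(f"Monte Carlo mean: {mc_mean:.4f}")     # close, at 1% of the cost
```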
Barriers to Quantitative Uncertainty Analysis
The primary barriers to determining objective probabilities are lack of adequate scientific understanding and lack of needed data. Subjective probabilities are also not always available. For example, if the fundamental molecular-biologic bases of some hazards are not well understood, the associated scientific
uncertainties cannot be reasonably characterized. In such a situation, it would be prudent public-health policy to adopt inference options from the conservative end of the spectrum of scientifically plausible available options. Quantitative dose-response assessment, with characterization of the uncertainty in the assessment, could then be conducted conditional on this set of inference options. Such a "conditional risk assessment" could then routinely be combined with an uncertainty analysis for exposure (which might not be subject to fundamental model uncertainty) to yield an estimate of risk and its associated uncertainty.
The committee recognizes the difficulties of using subjective probabilities in regulation. One is that someone would have to provide the probabilities to be used in a regulatory context. A "neutral" expert from within EPA or at a university or research center might not have the knowledge needed to provide a well-informed subjective probability distribution, whereas those who might have the most expertise might have or be perceived to have a conflict of interest, such as persons who work for the regulated source or for a public-interest group that has taken a stand on the matter. Allegations of conflict of interest or lack of knowledge regarding a chemical or issue might damage the credibility of the ultimate product of a subjective assessment. We note, however, that most of the same problems of real or perceived bias pervade EPA's current point-estimation approach.
At bottom, what matters is how risk managers and other end-users of risk assessments interpret the uncertainty in risk analysis. Correct interpretation is often difficult. For example, risks expressed on a logarithmic scale are commonly misinterpreted by assuming that an error of, say, a factor of 10 in one direction balances an error of a factor of 10 in the other. In fact, if a risk is expressed as 10^-5 within a factor of 100 uncertainty in either direction, the average risk is approximately 1/2,000, rather than 1/100,000. In some senses, this is a problem of risk communication within the risk-assessment profession, rather than with the public.
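The arithmetic can be reproduced under one reading of "a factor of 100 in either direction" (taken here, as an interpretive assumption of ours, to be a 90% interval, i.e., plus or minus 1.645 geometric standard deviations on a lognormal distribution).

```python
import math

# Why "a factor of 10 up balances a factor of 10 down" fails on a log scale:
# for a lognormal risk the mean sits well above the median.
median = 1e-5
sigma = math.log(100) / 1.645            # log-scale sd implied by the interval
mean = median * math.exp(sigma**2 / 2)   # lognormal mean = median * e^(s^2/2)

print(f"median: 1 in {1 / median:,.0f}")
print(f"mean  : 1 in {1 / mean:,.0f}")
```

Under this reading the mean works out to roughly 1 in 2,000, matching the text, even though the median remains 1 in 100,000.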
Contrary to EPA's statement that the quantitative techniques suggested in this chapter "require definition of the distribution of all input parameters and knowledge of the degree of dependence (e.g., covariance) among parameters" (EPA, 1991f), complete knowledge is not necessary for a Monte Carlo or similar approach to uncertainty analysis. In fact, such a statement is a tautology: it is the uncertainty analysis that tells scientists how their lack of "complete knowledge" affects the confidence they can have in their estimate. Although it is always better to be able to be precise about how uncertain one is, an imprecise statement of uncertainty reflects how uncertain the situation is; it is far better to acknowledge this than to respond to the "lack of complete knowledge" by holding fast to a "magic number" that one knows to be wildly overconfident. Uncertainty analysis simply estimates the logical implications of the assumed model and whatever assumed or empirical inputs the analyst chooses to use.
The difficulty in documenting uncertainty can be reduced by the use of uncertainty guidelines that will provide a structure for how to determine uncertainty for each parameter and for each plausible model. In some cases, objective probabilities are available for use. In others, a subjective consensus about the uncertainty may be based on whatever data are available. Once these decisions are documented, many of the difficulties in determining uncertainty can be alleviated. However, it is important to note that consensus might not be achieved. If a "first-cut" characterization of uncertainty in a specific case is deemed to be inappropriate or superseded by new information, it can be changed by means of such procedures as those outlined in Chapter 12.
The development of uncertainty guidelines is important, because a lack of clear statements as to how to address uncertainty in risk assessment might otherwise lead to continuing inconsistency in the extent to which uncertainty is explicitly considered in assessments done by EPA and other parties, as well as to inconsistencies in how uncertainty is quantified. Developing guidelines to promote consistency in efforts to understand the uncertainty in risk assessment should improve regulatory and public confidence in risk assessment, because guidelines would reduce inappropriate inconsistencies in approach, and where inconsistencies remain, they could help to explain why different federal or state agencies come to different conclusions when they analyze the same data.
Risk Management And Uncertainty Analysis
The most important goal of uncertainty analysis is to improve risk management. Although the process of characterizing the uncertainty in a risk analysis is also subject to debate, it can at a minimum make clear to decision-makers and the public the ramifications of the risk analysis in the context of other public decisions. Uncertainty analysis also allows society to evaluate judgments made by experts when they disagree, an especially important attribute in a democratic society. Furthermore, because problems are not always resolved and analyses often need to be repeated, identification and characterization of the uncertainties can make the repetition easier.
Single Estimates of Risk
Once EPA succeeds in supplanting single point estimates with quantitative descriptions of uncertainty, its risk assessors will still need to summarize these distributions for risk managers (who will continue to use numerical estimates of risk as inputs to decision-making and risk communication). It is therefore crucial to understand that uncertainty analysis is not about replacing "risk numbers" with risk distributions or any other less transparent method; it is about consciously selecting the appropriate numerical estimate(s) out of an understanding of the uncertainty.
Regardless of whether the applicable statute requires the manager to balance uncertain benefits and costs or to determine what level of risk is "acceptable," a bottom-line summary of the risk is a very important input, as it is critical to judging how confident the decision-maker can be that benefits exceed costs, that the residual risk is indeed "acceptable," or whatever other judgments must be made. Such summaries should include at least three types of information: (1) a fractile-based summary statistic, such as the median (the 50th percentile) or a 95th-percentile upper confidence limit, which denotes the probability that the uncertain quantity will fall an unspecified distance above or below some associated value; (2) an estimate of the mean and variance of the distribution, which along with the fractile-based statistic provides crucial information about how the probabilities and the absolute magnitudes of errors interrelate; and (3) a statement of the potential for errors and biases in these estimates of fractiles, mean, and variance, which can stem from ambiguity about the underlying models, approximations introduced to fit the distribution to a standard mathematical form, or both.
One important issue related to uncertainty is the extent to which a risk assessment that generates a point estimate, rather than a range of plausible values, is likely to be too "conservative" (that is, to excessively exaggerate the plausible magnitude of harm that might result from specified environmental exposures). As the two case studies that include uncertainty analysis (Appendixes F and G) illustrate, these investigations can show whether "conservatism" is in fact a problem, and if so, to what extent. Interestingly, the two studies reach opposite conclusions about "conservatism" in their specific risk-assessment situations; perhaps this suggests that facile conclusions about the "conservatism" of risk assessment in general might be off the mark. On the one hand, the study in Appendix G claims that EPA's estimate of MEI risk (approximately 10^-1) is in fact quite "conservative," given that the study calculates a "reasonable worst-case risk" to be only about 0.0015.6 However, we note that this study essentially compared different and incompatible models for the cancer potency of butadiene, so it is impossible to discern what percentile of this unconditional uncertainty distribution any estimate might be assigned (see the discussion of model uncertainty above). On the other hand, the Monte Carlo analysis of parameter uncertainty in exposure and potency in Appendix F claims that EPA's point estimate of risk from the coal-fired power plant was only at the 83rd percentile of the relevant uncertainty distribution. In other words, a standard "conservative" estimate of risk (the 95th percentile) exceeds EPA's value, in this case by a factor of 2.5. It also appears from Figure 5-7 in Appendix F that there is about a 1% chance that EPA's estimate is too low by more than a factor of 10.
Note that both case studies (Appendixes F and G) fail to distinguish sources of uncertainty from sources of interindividual variability, so the corresponding "uncertainty" distributions obtained cannot be used to properly characterize uncertainty either
in predicted incidence or in predicted risk to some particular (e.g., average, highly exposed, or high-risk) individual (see Chapter 11 and Appendix I-3).
As discussed above, access to the entire PDF allows the decision-maker to assess the amount of "conservatism" implicit in any estimate chosen from the distribution. In cases where the risk manager asks the analyst to summarize the PDF via one or more summary statistics, the committee suggests that EPA might consider a particular kind of point estimate to summarize uncertain risks, in light of the two distinct kinds of "conservatism" discussed in Appendix N-1 (the "level of conservatism," the relative percentile at which the point estimate of risk is located, and the "amount of conservatism," the absolute difference between the point estimate and the mean). Although the specific choice of this estimate should be left to EPA risk managers, and may also need to be flexible enough to accommodate case-specific circumstances, estimates do exist that can account for both the percentile and the relationship to the mean in one single number. For example, EPA could choose to summarize uncertain risks by reporting the mean of the upper five percent of the distribution. It is a mathematical truism that (for right-skewed distributions commonly encountered in risk assessment) the larger the uncertainty, the greater the chance that the mean may exceed any arbitrary percentile of the distribution (see Table 9-4). Thus, the mean of the upper five percent is by definition "conservative" both with respect to the overall mean of the distribution and to its 95th percentile, whereas the 95th percentile may not be a "conservative" estimate of the mean. In most situations, the amount of "conservatism" inherent in this new estimator will not be as extreme as it would be if a very high percentile (e.g., the 99.9th) was chosen without reference to the mean.
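A numerical sketch of this suggested estimator, using an arbitrary right-skewed lognormal risk distribution (median 10^-5, log-scale standard deviation 1.5; the parameters are invented):

```python
import math
import random

random.seed(4)

# Compare the overall mean, the 95th percentile, and the mean of the upper
# 5% of a right-skewed risk distribution.
draws = sorted(math.exp(random.gauss(math.log(1e-5), 1.5))
               for _ in range(100_000))

overall_mean = sum(draws) / len(draws)
cut = int(0.95 * len(draws))
p95 = draws[cut]
tail_mean = sum(draws[cut:]) / len(draws[cut:])

print(f"overall mean    : {overall_mean:.2e}")
print(f"95th percentile : {p95:.2e}")
print(f"mean of top 5%  : {tail_mean:.2e}")
```

The tail mean exceeds both the overall mean and the 95th percentile, so it is "conservative" in both of the senses distinguished in Appendix N-1, whereas for a sufficiently skewed distribution the 95th percentile alone can fall below the mean.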
Thus, the issue of uncertainty subsumes the issue of conservatism in point estimates. Point estimates chosen without regard to uncertainty provide only the barest beginnings of the story in risk assessment. Excessive or insufficient conservatism can arise out of inattention to uncertainty, rather than out of a particular way of responding to uncertainty. Actions taken solely to reduce or eliminate potential conservatism will not reduce and might increase the problem of excessive reliance on point estimates.
In summary, EPA's position on the issue of uncertainty analysis (as represented in the Superfund document) seems plausible at first glance, but it might be somewhat muddled. If we know that "all risk numbers are only good to within a factor of 10," why do any analyses? The reason is that both the variance and the conservatism (if any) are case-specific and can rarely be estimated with adequate precision until an honest attempt at uncertainty analysis is made.
Inadequate scientific and technical communication about risk is sometimes a source of error and uncertainty, and guidance to risk assessors about what to
include in a risk analysis should include guidance about how to present it. The risk assessor must strive to be understood (as well as to be accurate and complete), just as risk managers and other users must make themselves understood when they apply concepts that are sometimes difficult. This source of uncertainty in interprofessional communication seems to be almost untouched by EPA or any other official body (AIHC, 1992).
Comparison, Ranking, And Harmonization Of Risk Assessments
As discussed in Chapter 6, EPA makes no attempt to apply a single set of methods to assess and compare default and alternative risk estimates with respect to parameter uncertainty. The same deficiency occurs in the comparison of risk estimates. When EPA ranks risks, it usually compares point estimates without considering the different uncertainties in each estimate. Even for less important regulatory decisions (when the financial and public-health impacts are deemed to be small), EPA should at least make sure that the point estimates of risk being compared are of the same type (e.g., that a 95% upper confidence bound for one risk is not compared with a median value for some other risk) and that each assessment has an informative (although perhaps sometimes brief) analysis of the uncertainty. For more important regulatory decisions, EPA should estimate the uncertainty in the ratio of the two risks and explicitly consider the probabilities and consequences of setting incorrect priorities. For any decisions involving risk-trading or priority-setting (e.g., for resource allocation or "offsets"), EPA should take into account information on the uncertainty in the quantities being ranked so as to ensure that such trades do not increase expected risk and that such priorities are directed at minimizing expected risk. When one or both risks are highly uncertain, EPA should also consider the probability and consequences of greatly erring in trading one risk for another, because in such cases one can lower the risk on average and yet introduce a small chance of greatly increasing risk.
Finally, EPA sometimes attempts to "harmonize" risk-assessment procedures between itself and other agencies, or among its own programs, by agreeing on a single common model assumption, even though the assumption chosen might have little more scientific plausibility than alternatives (e.g., replacing FDA's body-weight assumption and EPA's surface-area assumption with body weight to the 0.75 power). Such actions do not clarify or reduce the uncertainties in risk assessment. Rather than "harmonizing" risk assessments by picking one assumption over others when several assumptions are plausible and none of the assumptions is clearly preferable, EPA should use the preferred models for risk calculation and characterization, but present the results of the alternative models (with their associated parameter uncertainties) to further inform decision-makers and the public. However, "harmonization" does serve an important
purpose in the context of uncertainty analysis: it will help, rather than hinder, risk assessment if agencies cooperate to choose and validate a common set of uncertainty distributions (e.g., a standard PDF for the uncertain exponent in the "body weight to the X power" equation or a standard method for developing a PDF from a set of bioassay data).
Findings And Recommendations
The committee strongly supports the inclusion of uncertainty analysis in risk assessments despite the potential difficulties and costs involved. Even for lower-tier risk assessments, the inherent problems of uncertainty need to be made explicit through an analysis (although perhaps brief) of whatever data are available, perhaps with a statement about whether further uncertainty analysis is justified. The committee believes that a more explicit treatment of uncertainty is critical to the credibility of risk assessments and to their utility in risk management.
The committee's findings and recommendations are summarized briefly below.
Single Point Estimates and Uncertainty
EPA often reports only a single point estimate of risk as a final output. In the past, EPA has only qualitatively acknowledged the uncertainty in its estimates, generally by referring to its risk estimates as "plausible upper bounds" with a plausible lower bound implied by the boilerplate statement that "the number could be as low as zero." In light of the inability to discern how "conservative" an estimate might be unless one does an uncertainty analysis, both statements might be misleading or untrue in particular cases.
EPA committed itself in a 1992 internal memorandum (see Appendix B) to doing some kind of uncertainty analysis in the future, but the memorandum does not define when or how such analysis might be done. In addition, it does not distinguish between the different types of uncertainty or provide specific examples. Thus, it provides only the first, critical step toward uncertainty analysis.
Comparison of Risk Estimates
EPA makes no attempt to apply a consistent method to assess and compare default and alternative risk estimates with respect to parameter uncertainty. Presentations of numerical values in an incomplete form lead to inappropriate and possibly misleading comparisons among risk estimates.
Harmonization of Risk Assessment Methods
EPA sometimes attempts to "harmonize" risk-assessment procedures between itself and other agencies or among its own programs by agreeing on a single common model assumption, even though the assumption chosen might have little more scientific plausibility than alternatives (e.g., replacing FDA's body-weight assumption and EPA's surface-area assumption with body weight to the 0.75 power). Such actions do not clarify or reduce the uncertainties in risk assessment.
Ranking of Risk
When EPA ranks risks, it usually compares point estimates without considering the different uncertainties in each estimate.
1. Although variability in a risk-assessment parameter across different individuals is itself a type of uncertainty and is the subject of the following chapter, it is possible that new parameters might be incorporated into a risk assessment to model that variability (e.g., a parameter for the standard deviation of the amount of air that a random person breathes each day) and that those parameters themselves might be uncertain (see "uncertainty and variability" section in Chapter 11).
2. It is important to note that the distributions resulting from Bayesian models include various subjective judgments about models, data sets, etc. These are expressed as probability distributions but the probabilities should not be interpreted as probabilities of adverse effect but, rather, as expressions of strengths of conviction as to what models, data sets, etc. might be relevant to assessing risks of adverse effect. This is an important distinction which should be kept in mind when interpreting and using such distributions in risk management as a quantitative way of expressing uncertainty.
3. Assume that to convert from risk to the test animals to the predicted number of deaths in the human population, one must multiply by 10,000. Perhaps the laboratory dose is 10,000 times larger than the dose to humans, but 100 million humans are exposed. Thus, for example,
4. Note that characterizing risks considering only the parameter uncertainty under the preferred set of models might not be as restrictive as it appears at first glance, in that some of the model choices can be safely recast as parameter uncertainties. For example, the choice of a scaling factor between rodents and humans need not be classified as a model choice between body weight and surface area that calls for two separate "conditional PDFs," but instead can be treated as an uncertain parameter in the equation R_human = R_rodent × BW^a, where a might plausibly vary between 0.5 and 1.0 (see our discussion in Chapter 11). The only constraint in this case is that the scaling model is some power function of BW, the ratio of body weights.
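The recast described in footnote 4 can be sketched numerically: sampling the exponent a as an uncertain parameter yields a distribution of human risk rather than two competing point estimates. The rodent risk value and body-weight ratio below are illustrative assumptions, not figures from the report.

```python
import random

def human_risk(rodent_risk, bw_ratio, a):
    """Scale a rodent risk estimate to humans via a power of the body-weight ratio
    (the footnote's equation R_human = R_rodent * BW^a)."""
    return rodent_risk * bw_ratio ** a

# Illustrative inputs only: a rodent risk estimate and a
# human-to-rodent body-weight ratio of 70 kg / 0.25 kg = 280.
rodent_risk = 1.0e-4
bw_ratio = 70 / 0.25

random.seed(0)
# Treat the exponent a as an uncertain parameter, uniform on [0.5, 1.0],
# rather than as a binary model choice between surface area and body weight.
samples = sorted(
    human_risk(rodent_risk, bw_ratio, random.uniform(0.5, 1.0))
    for _ in range(10_000)
)
print(f"median: {samples[5000]:.2e}")
print(f"90% interval: {samples[500]:.2e} .. {samples[9500]:.2e}")
```

The result is a probability distribution over human risk, which can then be summarized by its median and a confidence interval instead of a single "plausible upper bound."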
5. It is not always clear what percent of the distribution someone is referring to by "correct to within a factor of X." If instead of assuming that the person means with 100% confidence, we assumed that the person means 98% confidence, then the factor of X would cover two standard deviations on either side of the median, so one geometric standard deviation would be equal to X.
6. We arrive at this figure of 0.0015, or 1.5 × 10-3, by noting that the "base case" for fenceline risk (Table 3-1 in Appendix G) is 5 × 10-4 and that "worst case estimates were two to three times higher than base case estimates."
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9641478061676025,
"language": "en",
"url": "https://www.pig-world.co.uk/news/asf-expected-to-increase-the-demand-for-imported-protein.html",
"token_count": 388,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.3828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:758178a2-08c4-42b9-b4cb-4cdc1210a41d>"
}
|
The last time China needed to import large quantities of protein was in 2016, in response to its sow shortage. At that time, the protein shortage was met with imports of pig meat. In 2019, African Swine Fever (ASF) is expected to increase the demand for imported protein again.
This time, the scale of the expected production decline is much greater. Industry forecasts anticipate the protein deficiency in China will be anything from 10 to 20 million tonnes, depending on how much of the pig herd is lost to ASF and its management.
Even conservative estimates of the loss in production mean the volumes necessary to replace them are simply not available on the global market.
AHDB Pork’s lead analyst Duncan Wyatt said: “Demand for other imported proteins has already been growing in China, as Chinese consumers are increasingly familiar with these products. Chicken will play a vital role, although in 2016 China Customs reported imports of around 800,000 tonnes of beef and sheep meat; in 2018 this figure was 1.36 million tonnes.”
In the first quarter of 2019, imports of other proteins compared with a year earlier have increased by more than those of pig meat.
Mr Wyatt said: “Pork consumption is estimated to have declined between 10-15%, according to Rabobank. If pork prices start to rise in China, demand for other proteins could be further increased. Large volumes of frozen stocks of pork are being withdrawn and there is an increased level of pig slaughter, making more pork available domestically.”
Meanwhile, the trade war with the US is limiting overall increases in pork imports.
Mr Wyatt added: “These factors may be keeping the wolf from the door for now, and there is little question that global pork markets can be expected to receive a significant boost from increased Chinese buying. However, both imports and domestic production of other proteins will help fill the gap too.”
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9166052341461182,
"language": "en",
"url": "https://www.visualcapitalist.com/by-this-measure-the-u-s-has-the-2nd-highest-national-debt/",
"token_count": 3444,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.3203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c5304160-f0f1-4c10-b6d3-677577d9b9af>"
}
|
By This Measure, the U.S. has the 2nd Highest National Debt
The USA is #7 in debt-to-GDP, but #2 in debt-to-revenue
In absolute terms, the United States is the most indebted country in the world, accounting for 29% of the world’s $60 trillion of sovereign debt.
However, this is not really a fair comparison in some ways because it does not account for the relative wealth of the country in contrast to poorer economies. That’s why it is standard practice to measure sovereign debt in a ratio comparing it directly to the economic productivity, measured by gross domestic product (GDP).
Using this ratio in comparison with other OECD countries, the United States is a modest 7th place (out of 34) in the rankings in terms of its debt load. However, as Jeffrey Dorfman writes in Forbes, comparing debt and GDP has some considerable problems.
The major issue is that economic production cannot be converted directly to dollars that a government can spend. If this were true, a government could claim everyone's income as taxes and use it to pay down the debt. In reality, however, a 100% tax rate would make everyone quit their jobs or leave the country. That's why it makes more sense to compare a government's debt to the actual tax revenue collected, as this creates a clearer picture of the country's debt burden and its capacity to pay.
We pulled the latest data from the OECD to compare three ways of measuring the amount of debt that a country has accumulated. The first is the standard Debt to GDP ratio. In addition, we looked at Debt to Revenue (this includes all federal, state, and municipal tax revenues) as well as Debt to Central Government Revenue (this excludes state and municipal tax revenue). The data from the OECD database is from 2013.
When tabulated using all three measures, the world debt picture changes significantly. The United States is 7th in Debt to GDP with a ratio of 103%, but it jumps to 4th place (406%) in terms of Debt to Revenue, and then 2nd place (979%) in terms of Debt to Central Government Revenue. In other words, when it comes to the actual capacity to pay down this debt, the United States is the second most indebted country in the world. Even if the federal government theoretically used all tax revenue to pay down debt, it would take 10 years (not including any interest).
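The three measures differ only in the denominator. As a rough sketch, the revenue figures below are backed out of the article's stated U.S. ratios (with GDP normalized to 100) rather than taken from the OECD database.

```python
def debt_burden(debt, gdp, total_revenue, central_revenue):
    """Return the three debt measures discussed above, each as a percentage."""
    return {
        "debt_to_gdp": 100.0 * debt / gdp,
        "debt_to_revenue": 100.0 * debt / total_revenue,
        "debt_to_central_revenue": 100.0 * debt / central_revenue,
    }

# U.S. figures implied by the article (GDP normalized to 100):
gdp = 100.0
debt = 103.0                   # 103% of GDP
total_revenue = debt / 4.06    # implies the 406% debt-to-revenue ratio
central_revenue = debt / 9.79  # implies the 979% debt-to-central-revenue ratio

us = debt_burden(debt, gdp, total_revenue, central_revenue)
for name, pct in us.items():
    print(f"{name}: {pct:.0f}%")
# A 979% ratio means roughly 9.8 years of central government revenue to
# repay, ignoring interest -- consistent with the article's "10 years" remark.
```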
Of course, the United States also has the world’s reserve currency for now, which gives it more flexibility with its debt and monetary policy. This is less true for a country like Greece, where the currency cannot be devalued at all so long as the country is a part of the EU.
How do other major countries do when comparing the regular measure to the new one using revenue? Canada jumps five spots to 5th place with 695%, and Germany jumps nine spots to 6th place. The UK drops five spots down to 16th overall with 351%. Australia rises two spots from 30th to 28th.
Which Asian Economies Have the Most Sustainable Trade Policies?
The Sustainable Trade Index ranks 19 Asian economies and the U.S. across three categories of trade sustainability.
To say that Asia has benefited from international trade is an understatement. By opening its economies to the rest of the world, the region has become a leading exporter in many of today’s most important industries.
Trade has also improved Asia’s quality of life, lifting over one billion people out of poverty since 1990. Without the proper controls, however, such rapid growth could have harmful effects on Asia’s environment and society.
In this infographic from The Hinrich Foundation, we break down the results of their 2020 Sustainable Trade Index (STI). Since 2016, this index has ranked 19 Asian economies and the U.S. across three categories of trade sustainability: economic, social, and environmental.
What Exactly is Sustainable Trade?
International trade is an important source of economic growth, enabling domestic businesses to expand, reach new customers, and gain exposure to foreign markets.
At the same time, countries that focus too heavily on exports put themselves at greater long-term risk. For example, an aggressive expansion into manufacturing is likely to impair the quality of a country’s air, while overdependence on a single product or sector can create an economy that is susceptible to demand shocks.
“The primary principle which underpins sustainable trade is balance. Trade cannot be pursued solely for economic gains, without considering environmental and social outcomes.”
– Merle A. Hinrich
Thus, sustainable trade supports not only economic growth, but also environmental protection and strengthened social capital. It involves finding a balance between short-term incentives and long-term resilience.
Measuring Sustainable Trade
The Sustainable Trade Index (STI) is based on three underlying pillars of trade sustainability. Every economy in the STI receives a score between 0 and 100 for each pillar.
The economic pillar measures a country's ability to grow its economy through trade, while the social pillar measures a population's tolerance for trade expansion, given the costs and benefits of economic growth.
Last but not least, the environmental pillar measures a country’s proficiency at managing climate-related risks. Individual pillar scores are then aggregated to arrive at an overall ranking, which also has a maximum possible score of 100.
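The article does not spell out how the pillar scores are combined, so the aggregation below is a minimal sketch assuming an equal-weight average; the published index may weight pillars or indicators differently, and the input scores are a hypothetical combination.

```python
def sti_overall(economic, social, environmental, weights=(1/3, 1/3, 1/3)):
    """Aggregate three pillar scores (each on a 0-100 scale) into an
    overall 0-100 score, assuming a weighted average."""
    pillars = (economic, social, environmental)
    if not all(0.0 <= p <= 100.0 for p in pillars):
        raise ValueError("pillar scores must lie in [0, 100]")
    return sum(w * p for w, p in zip(weights, pillars))

# Hypothetical pillar scores for an illustrative economy:
print(round(sti_overall(58.6, 86.9, 75.2), 1))  # → 73.6
```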
The Sustainable Trade Index 2020: Overall Rankings
For the first time in the STI’s history, Japan and South Korea have tied for first place. Both countries have placed in the top five previously, but 2020 marks the first time for either to take the top spot.
| Rank | Economy | Score |
| --- | --- | --- |
| 1 (tied) | 🇯🇵 Japan | 75.1 |
| 1 (tied) | 🇰🇷 South Korea | 75.1 |
| 4 | 🇭🇰 Hong Kong | 68.3 |
| 10 | 🇱🇰 Sri Lanka | 50.4 |
| 15 (tied) | 🇮🇳 India | 46.9 |
| 15 (tied) | 🇻🇳 Vietnam | 46.9 |
Advanced economies like Singapore, Hong Kong, and Taiwan were also strong performers, each scoring in the high 60s. At the other end of the spectrum, developing countries such as India and Vietnam were tightly packed within the 40 to 50 range.
To learn more, here’s how each country performed in the three underlying pillars.
1. Economic Pillar Rankings
Hong Kong topped the economic pillar for the first time thanks to its low trade costs and well-developed financial sector. Financial services have increased their contribution to Hong Kong’s GDP from 13% in 2004 to 20% in 2018.
The region’s recently initiated national security law—which has resulted in greater political instability—may have a negative effect on future rankings.
| Rank | Economy | Score |
| --- | --- | --- |
| 1 | 🇭🇰 Hong Kong | 69.6 |
| 4 | 🇰🇷 South Korea | 63.3 |
| 5 (tied) | 🇲🇾 Malaysia | 61.2 |
| 5 (tied) | 🇺🇸 U.S. | 61.2 |
| 9 (tied) | 🇯🇵 Japan | 58.6 |
| 9 (tied) | 🇵🇭 Philippines | 58.6 |
| 13 | 🇱🇰 Sri Lanka | 54.7 |
China was also a strong performer, climbing to third for the first time. Asia’s largest economy benefits from a well-diversified group of trading partners, meaning it doesn’t rely too heavily on a single market.
The bottom five countries—India (16th), Myanmar (17th), Thailand (18th), Pakistan (19th) and Laos (20th)—suffered from issues such as payment risk, which is measured as the difficulty of getting money in and out of a country. This risk is especially damaging to trade because it discourages foreign direct investment.
2. Social Pillar Rankings
The social pillar features the highest average score, but also the largest gap from top to bottom. This gap has expanded over recent years, growing from 43.9 points in 2018 to 52.3 in 2020.
| Rank | Economy | Score |
| --- | --- | --- |
| 3 | 🇰🇷 South Korea | 86.9 |
| 8 | 🇭🇰 Hong Kong | 57.8 |
| 18 | 🇱🇰 Sri Lanka | 46.1 |
Taiwan claimed the top spot for the second time, solidifying its reputation as Asia’s leader in human capital development. It performed well in the educational attainment indicator, with 93.6% of its population receiving a tertiary education.
China, despite its success in other pillars, only managed 16th. This was partly due to the effects of its now defunct one-child policy, which has been responsible for creating gender imbalances and a shrinking population.
3. Environmental Pillar Rankings
The environmental pillar has the lowest average score of the three. Japan, Singapore, Hong Kong, and South Korea were the only countries to score above 75.
| Rank | Economy | Score |
| --- | --- | --- |
| 3 | 🇭🇰 Hong Kong | 77.4 |
| 4 | 🇰🇷 South Korea | 75.2 |
| 8 | 🇱🇰 Sri Lanka | 50.4 |
The top four performed well in areas such as air quality and water pollution, and with the exception of Hong Kong, have all introduced carbon pricing schemes in the past decade. This doesn’t mean these countries are without their flaws, however.
Land-constrained Singapore, for instance, ranked 16th in the deforestation indicator. The city-state is one of the densest population centers in the world, and has cut down forests to clear space for further settlement and urbanization.
Building Back Better From COVID-19
Despite the damage that COVID-19 has caused, there are some silver linings. This includes the environmental benefits experienced by China, where lockdowns reduced carbon emissions by 200 million tonnes in a single month. It’s been estimated that after two months, China’s reduced pollution levels saved the lives of 77,000 people.
These temporary improvements are an explicit reminder of the environmental and social costs associated with economic growth. In response, governments in Asia are taking steps to ensure the long-term sustainability of their nations. Japan and South Korea both announced their commitments to achieving carbon neutrality by 2050, while China set a similar goal for 2060.
Mapping the World’s Key Maritime Choke Points
Ocean shipping is the primary mode of international trade. This map identifies maritime choke points that pose a risk to this complex logistic network.
Maritime transport is an essential part of international trade—approximately 80% of global merchandise is shipped via sea.
Because of its importance, commercial shipping relies on strategic trade routes to move goods efficiently. These waterways are used by thousands of vessels a year—but it’s not always smooth sailing. In fact, there are certain points along these routes that pose a risk to the whole system.
Here’s a look at the world’s most vulnerable maritime bottlenecks—also known as choke points—as identified by GIS.
What’s a Choke Point?
Choke points are strategic, narrow passages that connect two larger areas to one another. When it comes to maritime trade, these are typically straits or canals that see high volumes of traffic because of their optimal location.
Despite their convenience, these vital points pose several risks:
- Structural risks: As demonstrated in the recent Suez Canal blockage, ships can crash along the shore of a canal if the passage is too narrow, causing traffic jams that can last for days.
- Geopolitical risks: Because of their high traffic, choke points are particularly vulnerable to blockades or deliberate disruptions during times of political unrest.
The type and degree of risk varies, depending on location. Here’s a look at some of the biggest threats, at eight of the world’s major choke points.
Because of their high risk, alternatives for some of these key routes have been proposed in the past—for instance, in 2013 the Nicaraguan Congress approved a $40 billion project proposal to build a canal that was meant to rival the Panama Canal.
As of today, it has yet to materialize.
A Closer Look: Key Maritime Choke Points
Despite their vulnerabilities, these choke points remain critical waterways that facilitate international trade. Below, we dive into a few of the key areas to provide some context on just how important they are to global trade.
The Panama Canal
The Panama Canal is a lock-type canal that provides a shortcut for ships traveling between the Pacific and Atlantic oceans. Ships sailing between the east and west coasts of the U.S. save over 8,000 nautical miles by using the canal—which roughly shortens their trip by 21 days.
In 2019, 252 million long tons of goods were transported through the Panama Canal, which generated over $2.6 billion in tolls.
The Suez Canal
The Suez Canal is an Egyptian waterway that connects Europe to Asia. Without this route, ships would need to sail around Africa, which would add approximately seven days to their trips. In 2019, nearly 19,000 vessels, and 1 billion tons of cargo, traveled through the Suez Canal.
In an effort to mitigate risk, the Egyptian government embarked on a major expansion project for the canal back in 2015. But, given the recent blockage caused by a Taiwanese container ship, it’s clear that the waterway is still vulnerable to obstruction.
The Strait of Malacca
At its smallest point, the Strait of Malacca is approximately 1.5 nautical miles, making it one of the world’s narrowest choke points. Despite its size, it’s one of Asia’s most critical waterways, since it provides a critical connection between China, India, and Southeast Asia. This choke point creates a risky situation for the 130,000 or so ships that visit the Port of Singapore each year.
The area is also known to have problems with piracy—in 2019, there were 30 piracy incidents, according to private information group ReCAAP ISC.
The Strait of Hormuz
Controlled by Iran, the Strait of Hormuz links the Persian Gulf to the Gulf of Oman, ultimately draining into the Arabian Sea. It’s a primary vein for the world’s oil supply, transporting approximately 21 million barrels per day.
Historically, it’s also been a site of regional conflict. For instance, tankers and commercial ships were attacked in that area during the Iran-Iraq war in the 1980s.
The Bab el-Mandeb Strait
The Bab el-Mandeb Strait is another primary waterway for the world’s oil and natural gas. Nestled between Africa and the Middle East, the critical route connects the Mediterranean Sea (via the Suez Canal) to the Indian Ocean.
Like the Strait of Malacca, it’s well known as a high-risk area for pirate attacks. In May 2020, a UK chemical tanker was attacked off the coast of Yemen–the ninth pirate attack in the area that year.
Due to the strategic nature of the region, there is a strong military presence in nearby Djibouti, including China’s first ever foreign military base.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9360971450805664,
"language": "en",
"url": "https://transitiontownpayson.net/2013/08/22/the-cost-of-climate-change-planning-for-the-future/",
"token_count": 558,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.166015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:331c93d9-6ff9-4f84-a74e-7d2e70d1cf81>"
}
|
After Katrina and after Sandy, climate watchers keep predicting more disastrous storms!
The costs to American taxpayers have been estimated at $150 to $200 billion for these two storms alone. What are the costs incurred by other nations in the world for weather-related disasters? What will be the annual cost of repairing the infrastructure in the major cities of the world by the year 2050, when carbon dioxide levels are estimated to be closer to 500 to 600 parts per million?
The journal Nature has recently published an article titled "Future flood losses in major coastal cities," forecasting the potential losses from flooding in 136 major coastal cities in the world. The projection is that $1 trillion will be needed per year to repair damages to these cities by the year 2050. The amount of taxes governments will need to protect and repair their cities will consume a considerable share of a taxpayer's paycheck!
To read the whole article you will either need to have a subscription to Nature or purchase the article on line.
Mother Jones gives you a little more information without the cost of a subscription to Nature. It shows a map of the 10 coastal cities with the highest estimated repair costs, from New York City at $2 billion per year to Mumbai, India at $6.4 billion, with the highest cost going to Guangzhou, China at $13.2 billion. How will this affect the economic stability of the world? I guess that question has yet to be answered!
NASA has recently come out with a "SEA LEVEL RISE TOOL" to help communities plan for the future!
Urban planners take note. Based on Superstorm Sandy's devastation, NASA has devised a mapping system that will allow communities along coastal areas to prepare for the new 100-year storm surge levels. This tool can identify vulnerable areas based on elevations and predicted surge levels driven by climate change, Arctic ice cap melting, and ocean warming. It is a resource tool that can identify where it is reasonable to rebuild and where it is not. It is not a predictor of floods, nor a tool used for flood insurance. It is a tool that a community can use to make smart decisions on whether or not to rebuild, or how to strengthen building codes to mitigate future flooding.
The Seattle Times quotes the latest UN Climate Panel Report that states there could be as much as a 3 ft. rise in sea levels by the year 2100!
“A 3-foot rise would endanger many of the world’s great cities — among them London; Shanghai; Venice; Sydney; Miami; New Orleans; and New York.”
Maybe NASA should send them their Sea Level Rise Tool to use!
How much are you willing to pay to offset the effects of Climate Change?
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9354938268661499,
"language": "en",
"url": "https://www.jackiebeck.com/credit/",
"token_count": 493,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.09228515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:967d1f10-0ccc-45be-b30e-65af46910b3f>"
}
|
Do you want to learn about credit & credit repair? You’re in the right place. Knowing what you’re dealing with can definitely help when it comes to your money, borrowing, and debt.
Let’s start with some helpful articles, and then go over what some of the terms you hear a lot mean.
Credit & Credit Repair Articles
- What Is a Credit Score?
- What Happens When You Have No Credit Score
- What You Need to Know About Closing Credit Cards
- Credit Karma: Get a Free Credit Score and So Much More
- Manual Underwriting Magic: Getting a Mortgage With No Credit Score
- Improve Your Credit Score by Paying Off Debt: How It Can Help
- The 800 Credit Score Club: 4 Tricks to Quickly Increase Your FICO Score
- How Collections Accounts Can Affect Your Credit
- Thinking of Credit Repair? Think Twice and Beware of Scams
- Credit Expert Explains How Regulation B Impacts Credit Reporting
What is Credit?
Credit means that you’re able to borrow money from a lender — usually in exchange for paying them interest over time.
Good credit means you have a history of borrowing money and paying it back on time.
Bad credit means that you either didn’t repay what you owe in full, didn’t repay it on time, or both.
Credit scores are a measure of those things, and they are based on your credit reports. Credit reports are created by companies called credit reporting agencies or credit bureaus.
What is Credit Repair?
Credit repair is a way to try to improve your credit score over time by fixing the things that caused it to be low. Sadly, many credit repair companies are scams. (We are not a credit repair company.)
You can do the work to improve your credit yourself, without having to pay someone else to do it. For example, you can ask for errors on your credit report to be fixed, and there are things you can do that will improve your credit score over time.
One Last Note
There are some things credit isn’t. It’s not a measure of your financial situation, wealth, or how good of a person you are. It’s just about whether or not you can easily borrow money, and a way of talking about how you’ve handled borrowing money in the past.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9634932279586792,
"language": "en",
"url": "https://www.nationwideselfstorage.ca/how-self-storage-affects-small-businesses/",
"token_count": 974,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0810546875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:dc1cd4c0-4e8e-4eec-be5b-2e1d466a1c64>"
}
|
July 23, 2020 | 4 minute read
How Self Storage Affects Small Businesses
The uncounted employment impact of Self Storage developments in Urban Centers.
One major issue always identified by municipalities and their planning departments when looking at new self-storage development projects is the lack of employment they provide to the community. Sometimes the argument is made that the same plot of land could be used to create far more jobs than a self-storage development would. In these arguments, the secondary employment that self-storage facilities help create is not usually taken into account. Below is an exploration of the different ways that a self-storage facility helps create employment in a community beyond just the staff and management present at the facility.
In urban centers, small businesses can struggle to find suitable space to expand their businesses as they grow. This can limit the addition of new employees to a business and in turn, slow the growth of these businesses. Self-Storage can provide flexible solutions for these businesses to expand and therefore add staff at an earlier point by not having as many space limitations to contend with. Additionally, self-storage can provide flexible inventory solutions for small businesses during busy times allowing seasonal businesses to be flexible and easily meet seasonal demands for products and thus being able to more readily add seasonal staffing as well.
In urban centers, small-scale production businesses can have a great deal of difficulty finding suitable space for their activities. This will typically push these types of businesses into more rural areas, moving employment from core areas to the suburbs and taking vital jobs away from the urban core. In some cases, certain types of production are not suitable for self-storage; however, there are many cases where production can operate easily within an existing storage facility, allowing these jobs to remain within the urban core and adding good jobs to the area. In this way, self-storage can help stop the exodus of production businesses and their employment from downtown areas to the suburbs.
In urban centers, Light Industrial space is typically in short supply. This forces most distribution jobs to be based in surrounding areas where the warehouse and light industrial space is more readily available. This move will also move many of these jobs from the Urban centres to the areas that the warehouse space is available. This not only reduces employment within a city but also may increase the volume of traffic as trucks are forced to commute from outside of the city and then back out again after making all of their stops in a day rather than starting and stopping in the city. Additionally, with the increased prevalence of e-commerce businesses, Self-storage would allow these companies to be based within the Urban centers while using the Self-Storage facility for distribution to local and other customers.
In urban centers, space for repair operations, tradespeople, and parts storage is always at a premium. Self-storage can help keep these businesses and jobs in the urban center. It is very difficult for many onsite repair businesses to store needed parts and tools in the downtown core of a city due to the lack of commercial space that is both available and affordable; self-storage can provide this space to help keep these businesses viable. Additionally, basing these trades and repair services in the city may help keep the jobs themselves in the city, as it is much easier for employees to get to where they start their day if they don't have to commute to a warehouse or repair shop outside the city and then back into the city to start work. In this way, self-storage can help repair services be more efficient when based within the urban center rather than outside the city, and also help to reduce traffic congestion in the urban core.
Employment impacts of Self-Storage Facilities in large cities can be very large and go far beyond the local facility employment. As density increases and light industrial space becomes even more scarce than it is today, self-storage facilities will be able to play a crucial role in both keeping existing employment and also driving new employment in urban centers in ways that no other developments can.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9553101062774658,
"language": "en",
"url": "https://www.rcbo.rw/2020/12/13/financial-managing-means-organizing-all-organization-activities-in-concert/",
"token_count": 755,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.038330078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a1fb6b2d-7b69-480e-ade2-a6988df3459a>"
}
|
In simple terms, financial management can be defined as the discipline or field within an organization that is primarily concerned with the management of cash, expenses, profits and credit. Financial management involves the assessment, planning and supervision of an organization's financial resources. It entails the use of financial tools and techniques as well as the preparation of accounts.
Financial management rests on a few core principles, namely cash flow, cost of capital, operations, and financial balance. It also entails the identification, measurement and reporting of financial transactions. The concepts and principles of this branch of accounting have become highly complex because of modern trends and the changes they bring. As a result of these complexities, financial management draws on several different disciplines, related to accounting, economics, information systems and banking.
Accounting for financial management refers to the process by which financial information is processed and used for making decisions. It includes the preparation of reports, analysis of the data, and advice on how to improve the performance of the organization. A good accountant is detail oriented and is expected to perform evaluation and analysis of the financial data. Accounting is an essential part of the management of funds. Proper accounting techniques allow managers to make informed decisions on the allocation of resources. The objective of accounting is to facilitate decision making and improve the management of money.
The first principle of financial management is that cash is the fundamental resource of the organization. Since capital funds represent growth in the organization, managers must always manage capital funds carefully. A good accountant will be able to maximize the return on capital funds by ensuring effective use of existing capital and of new resources available in the market.
Finance is the study of financial activities. In the field of finance, two broad categories are distinguished, namely management of financial activities and use of financial activities. Managerial activities refer to those activities that are carried out in order to increase or maintain the effectiveness of business activities. In this context, all actions that contribute to increasing the effectiveness of the business are known as finance activities. Use of financial activities, on the other hand, refers to everything that is done to apply financial activities for the benefit of the business.
The purpose of a manager is to increase the profits of the organization through sound financial management decisions. This is achieved by appropriate investment of the profits. Good financial managers are those who understand when to invest in resources and when to sell them. They always try to increase net profit by maximizing the output of the employed capital.
Another principle of finance is the rule that all changes in the financial affairs of an organization are accompanied by corresponding changes in other related areas of the business as well. Therefore there should be coordinated changes in investment, production, and marketing strategies too. In addition, all of these activities ought to be carried out so as not to disrupt the other areas of the venture. In this regard, it is also worth noting that financial management means seeing beyond the four walls of the firm. It is necessary to understand the inter-dependence of all the domains of the firm in terms of finance.
Thus, we see that the principle of financial management is seeing the inter-dependence and the cumulative effect of all financial activities. This inter-dependence is closely linked with the concept of productivity. For instance, if the procurement process is designed effectively and the cash allocated for procurement is used properly, then the firm is said to have performed financial management successfully. Likewise, if the production process is planned correctly and the resources are effectively utilized, then the firm is said to have efficiently handled production.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9285925626754761,
"language": "en",
"url": "https://theinvestorsbook.com/financial-leverage.html",
"token_count": 1684,
"fin_int_score": 5,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.003753662109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c9656959-3725-401e-af4d-de6de2ebae91>"
}
|
Definition: Financial leverage refers to the utilization of borrowed funds to acquire new assets which are assumed to generate a higher capital gain or income as compared to the cost of borrowing. It is a liability for the borrowing business organization, whereas it is a source of income for the lender.
The three ways in which the company can obtain funds are as follows:
- Debt: Debts are the funds borrowed in the form of bonds, commercial papers and debentures to be paid back to the lender with interest.
- Equity: Equity is the issuing of shares to the public for gathering funds by giving ownership.
- Leases: Lease refers to a legal agreement abiding to which the lessor provides a property to be used by the lessee for the defined period in exchange for money.
Content: Financial Leverage
- Debt-Equity Ratio
- Debt Ratio
- Interest Coverage Ratio
- Degree of Financial Leverage (DFL)
Factors Affecting Financial Leverage
Financial leverage is more about the borrowings from external sources and needs to be repaid sooner or later. To understand more about financial leverage, let us go through the following factors:
- Second Stage Leverage: The financial leverage is considered second stage leverage because it is dependent upon the degree of operating leverage. If the operating risk is high, the company will plan for low financial leverage and vice-versa.
- Financial Liability: The borrowings in the form of debts create financial liability on the company.
- Financing Decision: The financial leverage decision is a part of the company’s financing strategy planned by the directors.
- Interest Rates: These borrowings are usually payable with interest which is quite high.
- Stability of the Firm: The most important factor considered by the management while taking the financing decision is the firm’s position and balance, to bear the risk.
- Return on Assets: The returns on the additional capital need to be estimated to find out whether the company will be able to generate higher profits on the capital employed or not.
- Fixed Financial Cost: The debts create a fixed financial burden in the form of interest over the company.
Measures of Financial Leverage
After employing additional capital in the business, the management uses various financial ratios to measure the performance of the company. The four most crucial financial leverage ratios or measures are given below:
The debt-equity ratio is the proportion of the funds which the company has borrowed to the fund raised from shareholders. In short, it is the ratio of the borrowings to the owner’s fund.
Analysis: The higher the debt-equity ratio is, the weaker is the financial position of the company. Therefore, this ratio should always be less to avoid the risk of bankruptcy and insolvency.
The debt ratio determines the company’s asset position or strength to meet its liabilities.
Analysis: The lower the debt ratio of the company, the sounder is its financial position, indicating that the company has sufficient assets to pay off the liabilities in a downturn.
Interest Coverage Ratio
The interest coverage ratio emphasizes the company’s ability to pay off the interest with the profits earned.
Analysis: If the ratio is high, it signifies that the company can make enough profit to pay the interest due and vice-versa.
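The three ratios above are straightforward divisions. A minimal Python sketch (the function names and the illustrative figures are my own, not from any standard library):

```python
def debt_equity_ratio(total_debt, shareholders_equity):
    """Borrowed funds relative to the owners' funds; lower is safer."""
    return total_debt / shareholders_equity

def debt_ratio(total_liabilities, total_assets):
    """Share of assets financed by liabilities; lower signals a sounder position."""
    return total_liabilities / total_assets

def interest_coverage_ratio(ebit, interest_expense):
    """How many times over operating profit (EBIT) covers the interest due."""
    return ebit / interest_expense

# Illustrative figures: $50,000 debt, $150,000 equity,
# $75,000 liabilities, $145,000 assets, $20,000 EBIT, $5,000 interest.
print(round(debt_equity_ratio(50_000, 150_000), 2))  # 0.33
print(round(debt_ratio(75_000, 145_000), 2))         # 0.52
print(interest_coverage_ratio(20_000, 5_000))        # 4.0
```

Note that the debt ratio here divides liabilities by assets, matching the formula stated above.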
Degree of Financial Leverage (DFL)
The degree of financial leverage (DFL) signifies the level of volatility in the earning per share (EPS) with the change in operating income as a result of the capital restructuring, i.e., acquisition of debts, issuing of shares and debentures and leasing out assets.
Percentage Change in EPS = [(New EPS – Old EPS)/Old EPS]
EPS stands for earning per share
Percentage Change in EBIT = [(New EBIT – Old EBIT)/Old EBIT]
EBIT stands for earnings before interests and taxes
Analysis: A higher DFL indicates that the company is more sensitive to the change in operating income, ultimately showing its unstable earnings per share.
ABC Ltd. expanded its business unit by investing $200000 out of which $50000 was acquired through debts. The company issued 1500 equity shares of $100 each for the remaining amount. The company generates a profit before interests and taxes of $20000 annually. The total assets amounted to $145000, and the liabilities were $75000. The interest payment is $5000.
The previous year’s earning per share (EPS) was $3.5, and in the current financial year, the EPS is $4.8, if the last year’s EBIT is $8000. Find out the related financial leverage ratios.
Debt Equity Ratio = Total Debt/Shareholder’s Equity
Debt Equity Ratio = 50000/150000
Debt Equity Ratio = 0.33
Debt Ratio = Liabilities / Assets
Debt Ratio = 75000/145000
Debt Ratio = 0.52
Interest Coverage Ratio = EBIT / Interest Expenses
Interest Coverage Ratio = 20000/5000
Interest Coverage Ratio = 4
Degree of Financial Leverage (DFL) = Percentage Change in EPS/Percentage Change in EBIT
Percentage Change in EPS = [(New EPS – Old EPS)/Old EPS] ⨯ 100
Percentage Change in EPS = [(4.8 – 3.5)/3.5] ⨯ 100
Percentage Change in EPS = 37%
Percentage Change in EBIT = [(New EBIT – Old EBIT)/Old EBIT] ⨯ 100
Percentage Change in EBIT = [(20000 – 8000)/8000] ⨯ 100
Percentage Change in EBIT = 150%
Degree of Financial Leverage (DFL) = 37/150
Degree of Financial Leverage (DFL) = 0.25
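The DFL arithmetic above can be checked with a short sketch (variable names are my own; the rounding matches the article's figures):

```python
def pct_change(new, old):
    """Percentage change, as used above for both EPS and EBIT."""
    return (new - old) / old * 100

eps_change = pct_change(4.8, 3.5)        # ~37.14%, reported as 37%
ebit_change = pct_change(20_000, 8_000)  # 150.0%

dfl = eps_change / ebit_change
print(round(dfl, 2))  # 0.25
```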
Benefits of Financial Leverage
The financial leverage has various advantages to the company, management, investors and financial companies. The following are some such benefits:
- Economies of Scale: Financial leverage helps organizations expand their production units and manufacture goods on a large scale, reducing the fixed cost per unit drastically.
- Improves Credit Rating: If the company takes on debts and can pay them off on time by generating good profits from the funds availed, it secures a high credit rating and is considered reliable by lenders.
- Favourable Cash Flow Position: This additional capital provides an opportunity to increase the earning power of the company and hence to improve the cash flow position of the company.
- Increases Shareholders’ Profitability: As the company expands its business through financial leverage, the scope for profitability also increases.
- Tax Relaxation: When the debts and liabilities burden the company, the government allows tax exemptions and benefits to it.
- Expansion of Business Ventures: The need for financial leverage arises when the company plans for growth and development, which is a positive step.
Limitations of Financial Leverage
There are certain drawbacks of the financial leverage which are mainly related to borrowings through debts. These are as follows:
- High Risk: There is always a risk of loss or failure in generating the expected returns along with the burden of paying interest on debts.
- Adverse Results: The outcome of such borrowings may be harmful at times if the business plan goes wrong.
- Restrictions from Financial Institutions: The lending financial institution usually restricts and controls the business operations to some extent.
- High Rate of Interest: The interest rates on the borrowed sum are generally high, which creates a burden on the company.
- Benefits Limited to Stable Companies: The financial leverage is a suitable option for only those companies which are stable and possess a sound financial position.
- May Lead to Bankruptcy: In case of unexpected loss or poor returns and huge debts or liabilities, the company may face the situation of bankruptcy.
A company must be careful while analyzing its financial leverage position because high leverage means high debt. Likewise, giving up ownership through equity may prove hazardous for the organization, and missteps in either can result in huge losses and business failure.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9183018803596497,
"language": "en",
"url": "https://www.iasbhai.com/indias-neighbourhood-first-policy-2021-upsc/",
"token_count": 995,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.462890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:232e619a-485e-4d53-84be-b46285a92790>"
}
|
India’s Neighbourhood First Policy 2021 | UPSC
India and its neighbours
WHY IN NEWS:
SYLLABUS COVERED: GS 2 : IR
For PRELIMS note down recent mission like – Vande Bharat and SAGAR etc.
For MAINS concentrate on the timeline of events and challenges India is tackling in the sea as well as on land.
INDIA’S NEIGHBOURHOOD FIRST POLICY 2021
INDIA TACKLING THE COVID-19 CHALLENGE
- The COVID-19 pandemic that originated in China has led to one of the biggest health challenges, causing heavy economic damage in South Asia.
- India ranks second after the United States in terms of number of cases, and has the worst-hit economy among G20 nations.
- India is also one of the best poised nations to aid recovery efforts in the region.
- In March, Prime Minister held a special virtual summit of eight SAARC nations and proposed a COVID-19 package.
- India provided about half of the $20 million funding for relief.
- India’s military ran a series of missions to SAARC countries and the Indian Ocean Region (IOR) with supplies of food and medicines.
- India’s ‘Vande Bharat’ mission flew home nationals from neighbouring countries, along with lakhs of Indians who had been stranded during the lockdown.
- India was not the only country in the region providing help.
CHINESE INTERESTS IN SOUTH ASIAN REGION
- China, too, stepped up efforts to extend its influence in the South Asian region through COVID-19 relief.
- China also promised to provide the Chinese-made Sinovac vaccine to them when it is available.
- China also shipped relief to South Asia, sending out PPE suits and other medical equipment.
- With countries in the region owing different amounts of debt to Chinese banks, Beijing stepped in to provide partial debt waivers to the Maldives and Sri Lanka.
- It also extended a massive $1.4-billion Line of Credit to Pakistan.
DID THE MILITARY STANDOFF IMPACT REGIONAL TIES?
- China doubled down on territorial claims and its transgressions along its borders with South Asia: from Ladakh to Arunachal Pradesh.
- PLA soldiers amassed along various sectors of the LAC, leading to violent clashes.
- The deaths of 20 Indian soldiers at the Galwan valley was the first such casualty in 45 years.
- China also laid claim to Bhutan’s Sakteng natural reserves and pushed along the boundary lines with Nepal.
- Friction also arose when Nepal amended its constitution and map to claim Indian territory.
- Meanwhile, a new defence pact this year between China and Pakistan coincided with a sharp rise in ceasefire violations along the Line of Control (LoC) with Pakistan.
- These ceasefire violations, at their highest levels since 2003, have made it clear that India must factor in, among its military challenges at the LAC, the possibility of a two-front war.
[wc_highlight color=”yellow” class=””]ALSO READ : CHINESE AGGRESSION AT GALWAN [/wc_highlight]
INDIA AND THREE-PRONGED CHALLENGE
- The state’s response to the challenges has been to assert its Neighbourhood First and SAGAR (Security and Growth for All in the Region) strategies as foreign policy priorities.
- India has also upped its game on infrastructure delivery, particularly for regional connectivity in the past year.
This includes :
- Completing railway lines to Bangladesh and Nepal.
- Riverine projects, ferry service to the Maldives.
- Identifying other services to Sri Lanka and IOR islands.
- Also considering debt waiver requests from its neighbours.
- India has also become more flexible about the entry of other powers to help counter China’s influence in the region — it recently welcomed the U.S.’s new military dialogue with the Maldives.
- America’s Millennium Challenge Corporation’s (MCC) projects in Afghanistan, Bhutan, Sri Lanka, Nepal and Bangladesh are also finding more space.
- QUAD will collaborate on security and infrastructure initiatives in the neighbourhood, along with promoting forays by other partners like the U.K., France and Germany in the region.
- New Delhi has made it clear that despite the provocations, it intends to resolve the nearly ten-month-long military standoff diplomatically and bilaterally.
SOURCES: THE HINDU | India’s Neighbourhood First Policy 2021 | UPSC
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9428057670593262,
"language": "en",
"url": "https://www.jjkellerlibrary.com/news-article/a-virus-in-the-supply-chain",
"token_count": 1057,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.018798828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:7bf9d599-310e-4592-b63e-13bf6fc8bfc8>"
}
|
Minimize the ill effects of the coronavirus on your carrier
If truckers deliver the nation’s (and world’s) economy, what happens when the supply chain catches more than the common cold? It’s evident that the coronavirus, or COVID-19, has already disrupted the world’s markets, economies, and supply chain. Much of the effect on the supply chain has not hit “home” yet. Hold on to your hats, the stretch up ahead will likely be bumpy.
Decrease in imports
Much of the goods consumed in North America are manufactured overseas — primarily in East Asia where the coronavirus originated. While the countries that were originally affected hold their cards close to their chest, we have an idea of the impact from:
- Production levels reported by domestic companies with manufacturing facilities in Asia, and
- Tonnage projected to be coming into the North American ports.
February production in China declined at a record pace. With China's production at record lows, assembly operations that depend on those imports, and the related freight volumes, will soon decline as well.
The cold and flu season has always been a challenge for motor carriers. Now the coronavirus adds one more variable to the supply chain. Unhealthy workers mean less production, less volume shipped, and reduced capacity to carry.
What has been witnessed overseas can happen in North America. As a precaution, major enterprises are already proactively limiting travel. If major manufacturers slow production or idle lines or plants (either as a precaution or a necessity), there will not be ripples up and down the supply chain — there’ll be waves.
Nature of the disease
Much still needs to be learned about the virus, but according to the Centers for Disease Control and Prevention (CDC), the virus seems to be spreading with ease. There have been reports of the virus spreading before infected individuals even know that they are sick. While not the primary spread of the disease, an infected worker could infect coworkers before showing symptoms.
Like most contagious viruses, it's thought to spread mainly from person to person when within six feet of one another. When an infected person coughs or sneezes, respiratory droplets are produced that land on others nearby. It's possible to get the virus even by touching an infected surface.
Intelligent people may disagree whether the outbreak will reach pandemic levels. However, it is fairly certain that there is, and will continue to be, an outbreak.
Drivers and the spread of the virus
By the nature of their work, drivers carry people and product from location to location. As such, they may also carry:
- The virus,
- Others that have the virus, or
- Infected items from Point A to Point B.
And like everyone else, drivers do not want to be sick. If there are hotbed infected cities or regions, many drivers will be unwilling to go into them. Demand will initially be high in affected areas to meet real and perceived (hoarding) needs. Unwilling drivers will reduce the capacity to meet that need. In addition, keep in mind that ill drivers are prohibited by regulation from operating commercial motor vehicles.
An ounce of prevention
Understand that there will be disruptions to the supply chain. The disruption will affect you, your operation, and your associates. Your reaction can either be planned and rational or emotional and panicked. The choice is really yours to make.
Consider the following best practices:
- Know your vehicles. You don’t need a spare for every part, but plan for the high frequency replacement components by having a couple extra on hand. There should be no need to lose productivity by sidelining a critical asset over a hundred-dollar part.
- Be flexible with employee absenteeism. You should be willing to send associates home sick earlier and keep them out longer than normal. Review your policies. Now may be a good time to make an exception to hardline attendance policies. After all, you may have to enforce them for a greater percentage of your workforce.
- Communicate health tips and news. You could have regular newsletters and information blasts covering how the virus spreads, symptoms of infection, prevention, treatment, and what to do when infected.
- Provide driver-specific materials. You should provide educational materials and news aimed at a driver audience. It is a natural response for drivers to refuse to go into affected areas. But good information, support, and empathy can go a long way to combat a driver's fear, which can be both rational and irrational. Your call-ending script should now include a "stay healthy!" message, along with your standard "drive safe" communication.
There will be light
As an economy, people, community, nation, and organization we can — and will — get through this. SARS, Ebola, and H1N1 (swine flu), to one degree or another, have come and gone. As they say, this too shall pass. Stay healthy and safe.
You may also enjoy the following articles:
Additional articles by Rick Malchow:
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9580081701278687,
"language": "en",
"url": "https://foodtank.com/news/2020/04/covid-19-puts-the-global-food-system-at-risk/",
"token_count": 846,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.458984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b63db005-d3b2-450e-87a5-b42a9392f95b>"
}
|
Before the COVID-19 outbreak, more than 820 million people did not get enough food to eat. As countries deal with this unprecedented pandemic, a danger exists that the crisis will push many more people into hunger. Indeed, the entire global food system may face shocks that will exacerbate the economic and health challenges millions of people around the world face.
The majority of hungry people work in agriculture as producers or laborers, and millions more people living on the edge of poverty work in food value chains. Women are especially vulnerable to food insecurity. As producers, for example, women often face formal and informal barriers to accessing the tools, resources, inputs and financing needed to be successful, earn a decent living from agriculture and escape poverty. The pandemic exacerbates their situation and could be disastrous in countries suffering from pervasive poverty, poor healthcare infrastructure, and the absence of robust social safety nets.
Here in the United States, workers in grocery stores are on the front lines of the COVID-19 pandemic, risking exposure and possible illness in order to serve us. Many go to work every day because they need a paycheck and cannot afford to miss work. Seventy percent of retail cashiers are women – and women also do the bulk of unpaid care work at home.
Supermarket workers stock the shelves and help us check out, yet too many do not have access to paid sick leave or proper equipment and training to protect themselves. This isn’t right.
Oxfam is calling on US grocery stores to take crucial steps to support the workers in their stores and ensure they remain safe and healthy. Specifically, they must: provide paid sick leave for all their workers, ensure all workers have proper protective equipment and training in order to stay safe, and talk to their workers to develop the best solutions to meet these challenges.
Globally it seems increasingly likely that there will be disruptions to food production and increased food prices. COVID-19 could cause four potential shocks to the global food system leading to increased hunger for millions of people:
1. Supply shocks: As countries take drastic measures to stop transmission of COVID-19, extreme steps such as restricting the movement of people or goods could create significant supply shocks. If farmers fall sick or laborers are not able to move, fields may not be planted or harvested.
2. Price shocks: export restrictions, supply disruptions, production shortfalls and hoarding of food can all lead to increasing prices for basic staple foods like the rice, maize and wheat people rely on for daily sustenance.
3. Income shocks: as economies shut down and people are quarantined, the economic impact will be severe and dramatic, causing a sharp decrease in people’s ability to purchase food and other basic necessities including medicine and soap.
4. Nutrition shocks: As families are forced to shelter in place and access to healthy fresh foods is curtailed, they may resort to eating packaged and processed foods higher in fat, sodium and sugar. As women often eat last and eat less, they are at high risk of malnutrition when food insecurity within households is threatened.
As we take steps to slow the spread of COVID-19, it is critical to mitigate the potential negative impacts on food security. Governments must take proactive measures to protect the food supply chain, support the continued operation of markets, avoid harmful export restrictions, provide basic social and livelihood protection for households, and facilitate the ability of farmers to continue to operate. It is especially important that government action focuses on women and vulnerable populations, such as the elderly and people with compromised immune systems, who are most likely to experience the worst impact of the coronavirus crisis and are least able to cope.
An unprecedented pandemic calls for a response that engages with local experts and communities and is carried out with a sense of shared humanity. Oxfam is working with partners in more than 50 countries and here in the US to meet the needs of those who are suffering the worst impacts of COVID-19. And although we might be isolated, we are not alone. Each of us has a role to play and we hope you will join us to stand up for the most vulnerable among us around the world.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9630026817321777,
"language": "en",
"url": "https://greaterspokane.org/levy-resources-and-faqs/",
"token_count": 950,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1728515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b38898f5-4578-454b-a43d-fac85b8f8d9b>"
}
|
LEVY RESOURCES AND FAQS
Learn More About Levies
As an employer and/or a community member, it is important to be informed and inform your employees on regional voting opportunities, especially those that directly impact our public school systems. In this list of links below, you will find helpful resources on how to best encourage your employees and community to “Vote YES for Education.”
What is a school levy?
- Levy = learning (staff, programs & activities).
- Replacement Levy = the renewal of an existing enrichment levy that is about to expire. Not a new tax, simply the continuation of an existing tax.
- A short-term, local property tax passed by the voters of a school district that generates revenue for the district to fund programs and services that the state does not fund or fully fund as part of basic education.
- A way for local communities to enrich basic education programs and activities not funded by the state that is necessary for a well-rounded education. Local educational program and operation levies fill the gap between state funding and the actual cost of critical programs.
- Levy dollars represent programs and positions that are not currently considered part of basic education. This includes* nurses, counselors, art, music, technology, textbooks and materials, STEM, transportation, extracurricular activities and smaller class sizes. *varies by district
- Levies make up 12-18% of regional school districts’ operating budgets every year.
- Levies require a simple majority for passage (50%+).
Why should I vote yes when our schools are fully, or partially, closed?
- We all want to see our students in school to have a normal learning experience that includes small class sizes, intervention services, athletics, activities, art, music, and safety. Voting no on replacement levies means students have not only missed out on these important school experiences during the pandemic but will also continue to miss out on many of the same things after the pandemic, given the lack of levy funding. How tragic would it be for kids to be so eager to return to school for music, athletics, or library, only to find out those experiences don't exist anymore. They would also return to school with higher class sizes, fewer nurses, fewer counselors, fewer special education services, and less intervention support. Critical resources and support would be gone when our kids are going to need them the most. Simply put, voting no punishes kids, not adults. Kids deserve to return to school and have all the same experiences and services that students have always enjoyed. Voting no on the replacement levies would deny kids those opportunities.
Teachers and support staff are not working, why should I vote yes?
- Despite the perception that distance and hybrid learning is less work, it actually takes many more hours of work to make personal connections and provide instruction and support when kids are at home and not attending our schools. In many cases, teachers are working longer hours and oftentimes on weekends to keep up with the demand and time-intensive nature of connecting with kids. Building administrators have never seen staff more tired or fatigued. Teachers and support staff continue to work hard and be devoted to the students they serve.
Districts are saving millions of dollars while school buildings are closed and activities are suspended, why should I vote yes if they have all this extra money?
- It is true that districts are saving some money during the school closure. However, despite the closure of school buildings, districts are also facing new costs related to areas such as technology, intervention services, staffing to meet COVID-19 safety requirements, and health services. Any savings may help districts next year, however, that would in no way cover the 12-18% of funding that the local levy provides to each local district. In addition, the levy is for three years, any savings from this year would certainly not impact districts beyond the immediate future.
How does the levy impact the local economy?
- Collectively, all 14 school districts make up one of the largest employers in Spokane County. Without 12-18% of their operating budget, more than 700 employees would be laid off. This would harm individuals and would deepen the current economic crisis we are facing. It would mean people out of work, which we know impacts local businesses. Simply put, our public school districts are a key part of the local economy and failed levies would hurt us all.
Please visit your local school district's website for more specific information and resources.
This campaign and materials are paid for by the Alliance for a Competitive Economy of Greater Spokane Incorporated.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9456247687339783,
"language": "en",
"url": "https://innovationobserver.com/2016/01/25/when-it-comes-to-some-startups-ideas-do-matter/",
"token_count": 602,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1025390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8f1f08b5-8553-49f0-9e49-92db9be3698d>"
}
|
The rate of startup failure remains depressingly high: 55% of startups close before raising $1M in funding, and almost 70% of them die having raised less than $5M. So the question “Why do startups fail?”–or succeed, if you prefer a positive spin–is far from being purely academic, given the important role small businesses play in the global economy.
The lack of market demand, insufficient funding and an incompetent team are routinely mentioned to account for the death of yet another startup project. On the other hand, factors making startups more successful have begun to emerge too. For example, the U.S. Small Business Administration reports that small businesses receiving mentoring services survive longer than non-mentored entrepreneurs, a fact pointing to the potential value of startup accelerators and incubators. It was also noticed that startups that were funded by at least one corporate VC investor outperformed those funded exclusively by traditional VCs (here and here).
And then, there is the perennially debated question of the importance of the original idea behind any startup. One can often hear that ideas “are a dime a dozen” and that “startups are all about execution;” but a recent study paints a more nuanced picture. The authors of the study took a look at a unique entrepreneurial program, the Massachusetts Institute of Technology’s Venture Mentoring Service (VMS). A peculiar feature of this program is that when an entrepreneur joins the VMS, a select group of advisors reviews a summary of the proposed venture, a document that describes technology, business model, key customers, etc., but provides little information about the founding team. Based on this summary, which is essentially just the “naked” idea behind the venture, VMS advisors decide whether to work with it.
Having analyzed the eventual outcomes of 652 ventures that went through VMS between 2005 and 2012, the authors of the study showed a positive correlation between the number of advisors who wanted to mentor a given venture–a signal of the quality of the original idea–and the likelihood that the startup would eventually reach the commercialization phase.
But there was a twist. The correlation between advisor interest and startup success was especially strong for ventures with documented intellectual capital in R&D-intensive sectors, such as life sciences and medical devices. No such correlation was seen for non-R&D-intensive sectors, such as consumer web and enterprise software.
The significance of the study is in pointing out that in different industries, there are different factors defining the ultimate success of newly emerging companies. These factors need to be further identified, industry by industry (a nice example of an “industry-specific” mentorship can be found here), and used as a tool by everyone working with startups: government agencies, accelerators/incubators and individual mentors.
Image credit: http://kaboompics.com/one_foto/1315
Last Updated on 13 April, 2021 by Samuelsson
Whether you are a beginner trader just starting out on the stock market or an experienced veteran, you might have wondered why stocks exist. Why is it that owners of companies sell parts of their ownership to anonymous people? Why is it that investors who will (mostly) not have anything to do with running the operations of a company are willing to risk their money on it?
Among several reasons, stocks exist so that owners can raise capital to finance the operations of the company. In fact, the first company in history to issue stocks to the public was the Dutch East India Company (VOC). By doing so, it managed to fund its operations while making it possible for shareholders to invest and speculate in the company.
While this is one of the biggest reasons why stocks exist, there are several more that can explain why a company decides to issue shares.
Reasons Why Stocks Exist
Let’s have a look at some of the different reasons!
Companies Can Raise Capital
Without stocks, the only way that companies would be able to raise money would be debt. Debt could be a better way to raise capital than stocks due to a higher Return on Equity. On top of this, the interest payments on debt are tax-deductible as well which can be incredibly useful for huge corporations with gargantuan tax bills. However, debt also has significant disadvantages.
For one, issuing too much debt can lead to excessively high interest payments. If these payments get out of control, a company can go bankrupt. On top of this, many companies do not have sufficient collateral or a good enough credit rating to obtain debt at a reasonable interest rate.
Issuing stocks is a great alternative to debt. It is true that issuing stock dilutes ownership and the return on equity for current investors. Still, stocks are considered a relatively safer way to raise capital as opposed to debt financing. For companies who are already highly leveraged, there might be no other option available apart from issuing stock.
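The leverage effect on Return on Equity mentioned above can be sketched with a toy calculation. All figures below (1,000 of assets, a 10% operating return, 600 of debt at 5% interest) are purely illustrative, and taxes are ignored for simplicity:

```python
def return_on_equity(profit, equity):
    """Profit attributable to shareholders divided by their equity stake."""
    return profit / equity

# All-equity firm: 1,000 of equity earning 100 of profit.
print(return_on_equity(100, 1000))  # 0.1

# Leveraged firm: same 1,000 of assets, but 600 is debt at 5% interest.
interest = 600 * 5 // 100           # 30 of interest payments
print(return_on_equity(100 - interest, 400))  # 0.175
```

The leveraged firm earns less profit in absolute terms, yet its owners see a higher return per unit of equity, which is why debt can look attractive until the interest burden becomes unmanageable.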
While this solves the question of why companies utilize stocks when they are in need of capital, there are often companies that conduct an Initial Public Offering (IPO) when they do not have a need for a lot of capital. Let’s take a look at the reasons behind that.
Owners Cashing Out
In a private company, the majority of the ownership is usually held by the founders of the company. Also, the part that is not held by the founders is often either held by some of the employees or by the Venture Capitalist (VC) Funds that invested in the company at the early stages.
While it is possible for you to sell your stake in a private company, it is much more difficult than selling stocks. In order to sell private shares in a company, you need to find a private buyer who is willing to take on the risk of his ownership stake having extremely low liquidity. To add to that, private companies do not need to be as transparent when it comes to the financial state of the company, so any investor looking to purchase a small stake in a company will have less data on which to base their investment decision.
Stocks solve this problem too. They do this by having a market where you can sell as much or as little of your stake in a public company. While liquidity varies between stocks, most stocks are liquid enough for you to sell a lot of shares very quickly. On top of this, the price of your shares will also rise if the company does well (or if it seems like the company is doing well).
Also, being listed on an exchange means that the company needs to become more transparent and release more information about its operations and cash flow.
Once a private company has been around for some time, many investors (especially VCs) want to cash out. One of the most common ways to do this is through an Initial Public Offering where shares are offered to the public for the first time and the company is listed on a stock exchange.
Diversification for Potential Investors
Diversification is another reason for stocks’ existence. For the most part, owners of private companies own a fairly large percentage of the company and are looking to sell in bulk. Imagine trying to buy 5% of Microsoft: you would need over $50 million as of 9/1/2019. On top of that, we have already mentioned how difficult it is to purchase or sell shares in a private company due to there being no market for them.
A stock exchange solves this problem by offering a market where buyers and sellers can freely deal with one another in a safe and regulated manner. Any investor worth his salt knows how important it is to diversify your holdings. Even investors who take huge risks and place gigantic bets rely on diversification at a small scale.
Without stocks, the main ways to increase your wealth would be, for example, to start a business or to invest in bonds. Many people do not have the time to research and start a business, and bonds often offer an interest rate that is barely above inflation.
Stocks are one of the most inflation-proof investments, and enable individuals to reap the profits of a growing economy.
A Brief History of Stocks
The first-ever public company in modern times was the Dutch East India Company. Known as VOC due to its Dutch name, VOC was the first company to conduct an IPO in 1602 and sell its stock to the public. They were also the first company to be a part of a stock exchange.
The VOC had a complete monopoly on trading with the East Indies and any newly discovered territories. Eventually, they decided to form LLCs where the public could pay for their voyages and expect percentages of the profits.
The VOC was essentially a collection of various merchant ships. In order to keep supply low and demand high to keep prices high, the merchant ships banded together. This way, they could manage their schedules to make sure there was never an influx in the supply of spices.
Eventually, the company did a proper IPO. This was for a permanent stake in the company rather than individual voyages. The company began to pay dividends at this time. The company could use the money for everything from trading ships to warships, soldiers, and employees. After its initial IPO, the company issued bonds to further fund its operations. The bonds’ main purpose was to fund individual voyages and increase the ROE for shareholders.
The company was operational for almost 200 years until 1800. However, the flame of VOC’s trading monopoly was eventually extinguished and competition drove it to extinction.
Stocks exist because they make financial markets work better. Stocks are necessary to ensure strong businesses can grow at a deserving rate and to provide individuals with the opportunity to build wealth.
If you enjoyed this article you might also like our other articles answering common questions traders have!
Decision support systems may provide businesses with more accurate projections, better inventory management and data analysis.
- Decision support systems collect, organize and analyze massive amounts of data.
- DSS helps businesses manage inventory, project sales and much more.
- DSS is used in a variety of fields, including medicine, mapping and directions, education and real estate.
What is a decision support system?
A DSS is a computer-based information system that collects, organizes and analyzes business data for use in decision-making activities across an organization’s management, operations and planning. The typical types of information gathered by a decision support system include sales figures and projected revenue, inventory data that has been organized into relational databases for analysis, and comparative sales figures between selected time periods.
A good DSS helps decision-makers with compiling various types of data gathered from several different sources, including documents, raw data, management, business models and personal knowledge from employees.
DSS applications can be used in a vast array of diverse fields, such as credit loan verification, medical diagnosis, business management, evaluating bids on engineering projects, agricultural production and railroad evaluation.
Specific uses for DSS in business
DSS is getting a lot of attention from many businesses as a way to promote better projections, management and analysis within a company or business.
DSS comes in many forms, and the term basically refers to a computer-aided system that helps managers and planners make decisions. There are many ways managers can use DSS software to their advantage if they are open to exploring its applications and uses. Typically, business planners will build a DSS according to their needs and use it to evaluate specific operations, including:
- A large stock of inventory, where DSS applications can provide guidance on establishing supply chain movement that works for a business.
- A sales process, where DSS software is a "crystal ball" that helps managers theorize how changes will affect results.
- Other specialized processes related to a field or industry.
DSS can help manage inventory
DSS can come in handy by evaluating stock held in a facility, or any other type of business asset that can be moved around or otherwise optimized. This is often one way a business can profit from "itemizing" its assets with DSS.
DSS can aid sales optimization and sales projections
Decision support technology can also be a tool that analyzes sales data and makes predictions, or monitors existing patterns. Whether it's big picture decision support tools, active or passive solutions, or any other kind of DSS tool, planners often tackle sales numbers using a variety of decision support resources.
Utilize DSS to optimize industry-specific systems
There are other uses for this powerful software option, to make good projections on the future for a business or to get an overall bird's-eye view of events that determine a company's progress. This can come in handy in difficult situations where a lot of financial projection may be necessary when determining expenditures and revenues.
Examples of DSS
DSS operates on several levels, and there are many examples of day-to-day use of decision support systems. For instance, GPS is used to determine the best and quickest route between two points. This task is completed by comparing and analyzing multiple possible routes. GPS systems may also include features such as traffic avoidance, which monitors the traffic conditions between the two points, allowing you to avoid congestion.
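The route-comparison logic a GPS-style DSS performs can be sketched with Dijkstra's shortest-path algorithm. The road network, point names and travel times below are purely hypothetical:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: compare alternative routes, keep the quickest."""
    queue = [(0, start, [start])]  # (total minutes, current point, path so far)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return float("inf"), []  # goal unreachable

# Hypothetical travel times (in minutes) between points A-D.
roads = {
    "A": {"B": 5, "C": 12},
    "B": {"C": 4, "D": 10},
    "C": {"D": 3},
}
print(shortest_route(roads, "A", "D"))  # (12, ['A', 'B', 'C', 'D'])
```

The direct-looking options (A→C→D at 15 minutes, A→B→D at 15) lose to the longer-looking three-hop route, which is exactly the kind of comparison a navigation DSS automates.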
One of the easiest ways to understand how DSS works is to consider your computer use; every time you log on and use a search engine, you’ve used a DSS to organize a massive amount of information and transform it into images, videos and text files in order to choose the information that best suits your search. Other ways DSS is used include:
- Farmers use crop-planning tools to help determine the best time to plant, when to fertilize and when to harvest.
- When DSS is used in medicine, it is known as clinical DSS. Clinical DSS may be used to manage details and complex information for a wide range of tasks, such as maintaining research information about chemotherapy protocols, preventative care, order tracking and follow-up care. The system is often used for cost control, avoiding duplicate tests and monitoring medication orders. DSS is also used in medical diagnosis software, which helps medical personnel diagnose illnesses.
- Some states have used DSS to provide information about potential hazards, such as floods. The system includes real-time weather conditions and may include information (current and historical) about floodplain boundaries and county flood data.
- Real estate companies often use DSS for information about properties, including current data such as neighborhood comparison prices, acreage and future planning.
- Universities and colleges rely on DSS to track current enrollment, which allows them to predict how many additional students are needed in particular courses or in the overall student population to ensure enrollment covers the institution’s costs.
Bank of England base rate: what are negative interest rates?
Rebecca Cattlin March 12, 2021 7:20 PM
Throughout the Covid-19 crisis, the Bank of England’s talk of negative interest rates has sparked concerns among banks, investors and consumers. But what exactly would negative interest rates mean and why would the Bank of England consider them?
What are negative interest rates?
Negative interest rates refer to when the bank rate, handed down from the Bank of England (BoE) to commercial banks, falls below zero.
The bank rate is the amount these institutions receive per year in interest for deposits held in the bank. For example, under a bank rate of 0.1%, a £1000 deposit would earn £1 per year in interest. Under a negative interest rate, banks would instead pay an interest charge on their deposits with the BoE. For example, if the rate fell to -0.1%, the same £1000 deposit would be worth £999 after a year.
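The arithmetic above can be captured in a few lines. Annual compounding is assumed for simplicity; real deposit products, and the extent to which banks pass the rate on, differ:

```python
def deposit_value(principal, annual_rate, years):
    """Value of a deposit compounded once per year at `annual_rate`."""
    return principal * (1 + annual_rate) ** years

print(round(deposit_value(1000, 0.001, 1), 2))   # 1001.0 under a +0.1% bank rate
print(round(deposit_value(1000, -0.001, 1), 2))  # 999.0 under a -0.1% bank rate
```

The sign of the rate flips the deposit from slowly growing to slowly shrinking, which is precisely what makes saving unattractive under negative rates.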
Why would the Bank of England consider negative interest rates?
Negative interest rates occur in deflationary periods, when people hold too much capital rather than spending it. In theory, by taking interest rates below zero, it would become undesirable to save money.
In March 2020, the BoE cut the bank rate to a historic low of 0.1% in an attempt to stimulate the economy amid the coronavirus pandemic; this rate was held throughout the year and into 2021. As the BoE’s Monetary Policy Committee had already pursued other means of bringing inflation back toward the 2% target, it announced that it was reviewing the possibility of using negative interest rates.
In February 2021, the Bank of England asked that banks and building societies be ready to implement negative rates by the end of the summer. Although Andrew Bailey, Governor of the BoE, made it clear that these preparations were not an indication that the MPC intends to set negative rates – it’s just another tool in their arsenal.
While negative rates are an extremely uncommon tool and are used sparingly, the UK wouldn’t be the first country to resort to the measure. Sweden, Switzerland, Japan and the eurozone have all taken interest rates below zero at some point.
What would negative interest rates mean?
Negative interest rates would mean that banks earn less money on deposits with the BoE. This would hit bank earnings by reducing the profit margin between the money they make on loans and what they pay to savers.
Any rates below zero mean that banks would have to charge consumers to use bank accounts in order to make money, making it fairly pointless to hold savings in a bank account. Although in recent years, commercial banks have been reluctant to pass on negative rates to customers in order to remain competitive.
However, borrowers would be credited for taking out loans – such as mortgages – rather than having to pay interest to lenders. That being said, there are a lot of other factors that are taken into account when assessing individual loans, so a negative bank rate won’t necessarily be passed on to borrowers.
How to trade interest rate changes
Trading on price movements in interest rates allows you to diversify your positions. If you think interest rates will rise, you can buy or go long on a market, and if you think they’re going to fall, you can sell or go short.
Learn more about interest rate trading with us.
Globally, life expectancy is rising; the average has increased by 5.5 years during this century, to its current 72 (WHO, 2016). Although a great development in and of itself, the so-called ‘greying of society’ comes with societal challenges as well. Ordinarily, parents take care of their children when they are young, who in turn look after their parents when they grow old; the societal equivalent is the working population bringing in tax money that helps ensure a minimal financial standard of living for the retired. In order for such a system to work, a balance has to be maintained between the fiscal costs of living of the retired and the capital brought in by those still part of the labour force.
Worldwide, in countries in which such a governmental social service exists, it is common that a fixed retirement age is put in place. Currently, the Netherlands is rated to have the best, most financially secure system worldwide (Melbourne Mercer Global Pension Index, 2019). However, even in the Netherlands, the debate regarding the retirement age has been a hot issue for years, with a policy of a gradual increase of the pre-determined retirement age having faced heavy resistance. Nevertheless, such an increase will be required in order for the system to maintain its financial stability. Particularly when you consider that by 2040, in the Netherlands, 25% of all people will be above retirement age. Similarly, for the countries lower in the ranking, one of the largest points of critique already currently is a need to increase the retirement age in order for the system to be financially sustainable.
For something as unique as someone’s working life, it is interesting to notice how little individual differences are taken into account when considering the end-point of one’s career. Across all countries in which a policy framework for retirement has been set-up, the only individual difference that is taken into account in a few of the countries is gender. Paradoxically, the countries that make a distinction based on gender all have a lower retirement age for women as opposed to men, despite the average life expectancy of women being higher all across the world. Overall, it can be seen that the end-point of one’s career is determined mostly based on sociological – abstract – measures, such as the supply of labour force and fiscal costs of ageing; overstepping the individual reality of the worker.
When considering such a reality, a set end-point poses a constraint, which can heavily influence the way an individual looks at the remainder of his or her career. Theoretically, Future Time Perspective theory describes the way individuals look upon their future. Cognitively, the perception of the opportunities remaining to grow in one’s job has an influence, together with the difficulties and challenges expected in the time left. Generally speaking, the less time one perceives to have left, the narrower the mind becomes, as there is less time remaining to grow and develop.
Such diminishing motivation towards the later stage of a career can be seen in both the employer and the employee. For the employee, prior research has established lower promotion focus, lower motivation to grow in one’s job and lower motivation to continue working in general (Kooij & Bal, 2014). With a set end-point approaching, and your environment expecting you to retire anytime soon, you start working towards the end. Unfortunately, accumulated experience thereby goes to waste. Similarly, from the perspective of the employer, it becomes more valuable to invest in training younger personnel than to provide managerial training to individuals who are set to retire at a fixed end-point either way.
However, when purely considering the effect of age on work attitudes and performance, evidence of a different picture can be found (Kooij et al., 2011). Specifically, an association has been established between age and beneficial work attitudes as well as performance. For example, an increase in age has been shown to be associated with lower amounts of burn-outs, higher job engagement and involvement, higher trust and commitment towards the organization one works for as well as higher feelings of perceived fairness at work.
Similarly, on the performance side, an effect on aspects such as higher commitment to safety measures, less absenteeism, less counterproductive work behaviour and overall higher core task performance has been found. Taken together, it can thus be said that older employee’s in general hold more favourable work attitudes with better performance, whereas an approaching end-point of a career limits perceived motivation for further growth and lowers further desire to continue working.
Therefore, it is worth asking whether a general end-point based on chronological age is sensible, or whether a more individual-based approach should be considered. When the end of a career becomes a question rather than a given, the motivation to invest in training programs, together with mental and physical resilience programs for employees, could be greatly enhanced. Investing in the health of ageing employees can secure their employment, providing the company with committed, experienced workers. Such an approach could contribute towards a healthier greying of the population.
At the same time, financial stability could potentially be improved with an individual-based approach, in which a higher level of control over the end-point of one’s career is given to the individual. Rather than a fixed end-point, retirement could become possible once an individual has reached an age corresponding to 80% of the average life expectancy, i.e. 80% of 81.5 in the Netherlands, thus 65.2 years of age. Not only could such a flexible end-point be more durable as average life expectancy increases over time, it could also give individuals who feel motivated and capable of continuing to work the opportunity to do so.
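The proposed rule is simple arithmetic. In the sketch below, only the Dutch life-expectancy figure of 81.5 comes from the text; the function name and the one-decimal rounding are illustrative choices:

```python
def flexible_retirement_age(life_expectancy, fraction=0.8):
    """Retirement eligibility at a fixed fraction of average life expectancy."""
    return round(fraction * life_expectancy, 1)

print(flexible_retirement_age(81.5))  # 65.2, matching the Dutch example in the text
```

Because the threshold is expressed as a fraction rather than a fixed age, it adjusts automatically as average life expectancy rises, without requiring repeated and politically contested policy changes.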
At the same time, rather than limiting the time left, this could motivate employers as well as governments to look for ways to keep an individual employed. For example, someone who has worked a life of manual labour and can no longer continue at an older age, but who would rather be given a task than sit at home in retirement, could be offered training to utilize the acquired experience in different ways. Potentially, the resulting increase in growth opportunities in the later stages of a career could have a positive effect on the balance between the retired and those still part of the labour force.
When paying attention to the unique individual rather than the overall group he or she belongs to, unique opportunities otherwise not there can be seen. Rather than being limited regardless of individual strengths and capacities, such strengths can be made useful, attention to the individual thereby being potentially beneficial to the overall society.
Kooij, D. T., De Lange, A. H., Jansen, P. G., Kanfer, R., & Dikkers, J. S. (2011). Age and work‐related motives: Results of a meta‐analysis. Journal of Organizational Behavior, 32(2), 197-225.
Kooij, D. T., Bal, P. M., & Kanfer, R. (2014). Future time perspective and promotion focus as determinants of intraindividual change in work motivation. Psychology and ageing, 29(2), 319.
Melbourne Mercer Global Pension Index (2019) Melbourne Mercer Global Pension Index, Monash Centre for Financial Studies, Melbourne.
World health statistics overview (2019) Monitoring health for the SDGs: sustainable development goals. Geneva: World Health Organization, WHO/DAD/2019.1. Licence: CC BY-NC-SA 3.0 IGO. Source of picture: https://nl.pinterest.com/pin/203928689363934757/
Solar energy may already be cheaper than coal, at least in New Mexico, USA. According to Bloomberg: “First Solar, the world’s largest maker of thin-film solar panels, may sell electricity at a lower rate than new coal plants earn.”
“El Paso Electric Co. agreed to buy power from First Solar’s 50-megawatt Macho Springs project for 5.79 cents a kilowatt-hour (…) That’s less than half the 12.8 cents a kilowatt-hour for power from typical new coal plants.”
Having solar at grid parity to coal would be a real game changer, a paradigm shift. New coal plants would totally lose their interest, and thus nobody would build them anymore.
On the other side of the world, leading Chinese solar PV manufacturers believe that by 2015 they will cut the cost of solar panels by another 30 percent, reaching 42 cents per watt.
According to Greentech Media:
The cost of producing a conventional crystalline silicon (c-si) solar panel continues to drop. Between 2009 and 2012, leading “best-in-class” Chinese c-Si solar manufacturers reduced module costs by more than 50 percent. And in the next three years, those players — companies like Jinko, Yingli, Trina and Renesola — are on a path to lower costs by another 30 percent.
(…) “Clearly, the magnitude of cost reductions will be less than in previous years. But we still do see potential for significant cost reductions. Going from 53 cents to 42 cents is noteworthy,” says Shayle Kann, vice president of research at GTM Research.
In honour of the open-sourcing of HTSS-Lib (https://github.com/getamis/alice), developed by AMIS, we revisit this topic: Hierarchical Threshold Signature.
A digital signature is the digital analogue of a pen-and-ink signature on a physical document. Its purpose is to solve the following scenario: Alice has a digital document and wants to attach some “proof” that she approved it. The digital signature can thus be recognized as the analogue of her handwritten signature on an ordinary document.
It is therefore critical to be able to confirm who signed a contract. To prevent forged digital signatures, we apply public-key cryptography: a cryptographic system that uses pairs of keys, public keys, which may be disseminated widely, and private keys, which are known only to the owner. Using these cryptographic primitives, the basic idea of a digital signature is that a person signs a message with his private key, and anyone can use the associated public key to verify the signed message. An ideal digital signature should have the following properties:
- A signed message can unambiguously be traced back to its originator, since a valid signature can only be generated with the unique signer’s private key. Only the signer can produce a signature on his behalf.
- Anyone can use the public key to convince himself/herself that the signer has actually approved this message.
- It is computationally infeasible to derive the private key, even with full knowledge of the public key.
There are three popular public-key families, based respectively on integer factorization, discrete logarithms over finite fields, and discrete logarithms over elliptic curves. Each of these allows us to construct an ideal digital signature: RSA, DSA, and ECDSA. Among these, RSA is the most widely used in practice. However, to attain the same security level as the other schemes, RSA requires the longest public-key bit-length, which means more space is needed to store the necessary data. Bitcoin and Ethereum therefore both adopt ECDSA, whose signatures are the shortest of the three.
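The sign/verify asymmetry described above can be illustrated with a textbook RSA signature in a few lines of Python. The primes here are tiny and padding is omitted, so this is purely illustrative and deliberately insecure:

```python
import hashlib

# Toy textbook RSA signature. The primes are tiny and padding is omitted,
# so this is for illustration only -- never use textbook RSA in practice.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                # public exponent
d = pow(e, -1, phi)   # private exponent (modular inverse; Python 3.8+)

def h(msg: bytes) -> int:
    # Hash the message and reduce it into the RSA modulus range.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # Only the holder of the private exponent d can produce this.
    return pow(h(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # Anyone holding the public key (n, e) can check the signature.
    return pow(sig, e, n) == h(msg)

sig = sign(b"pay Bob 1 BTC")
assert verify(b"pay Bob 1 BTC", sig)
assert not verify(b"pay Bob 1 BTC", (sig + 1) % n)  # tampered signature fails
```

Real schemes such as ECDSA replace the modular-exponentiation trapdoor with elliptic-curve arithmetic, but the sign-with-private / verify-with-public pattern is the same.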
In the architecture of digital signatures, anyone holding a private key can sign transactions. Key management therefore plays a significant role in blockchain technology regarding the protection of digital assets. Practically speaking, losing private keys leads to great losses, and improper key management or poor system implementation increases the risk of assets being transferred maliciously. As an extreme case that actually happened, a principal died suddenly, no one was able to recover the keys, and the entire asset was frozen. To solve these problems, experts have proposed two solutions to reduce the risk of key management: multi-signatures and threshold signature schemes (abbrev. TSS). The purposes of both are (ref. https://en.bitcoin.it/wiki/Multisignature):
- Avoiding a single point of failure makes it much harder for an attacker to transfer the asset.
- An M-of-N backup, where loss of a single seed does not lead to loss of the asset.
Multi-signature requires multiple private keys to authorize a transaction, rather than a single signature from one key. In detail, a multi-signature can be of t-of-n type: money can be transferred once the transaction carries signatures from t private keys out of n in total. For example, a 2-of-3 multi-signature might have your private keys spread across a cold wallet, a laptop, and a smartphone, any two of which are required to move the money, while the compromise of any one key cannot result in theft. However, the main flaw of multi-signature is that it is not blockchain-agnostic: similar logic has to be re-implemented separately on each blockchain.
Threshold Signature Scheme:
To solve this problem, TSS has come into view. Let n be the number of participants and 1<t<n. A t-of-n threshold signature scheme means that a private key constructed by the scheme is divided into n parts called “shares”, and at least t shares are required to create a signature. In detail, a threshold signature scheme includes four phases, as follows:
- Key generation: each participant first chooses a secret value. All participants then run a protocol together to determine the private key, the public key, and their own private shares based on these secret values.
- Signing a transaction: each participant takes his/her private share and the public message to be signed as input. The participants exchange the necessary data so that each produces a partial signature and broadcasts it. Combining these partial signatures yields the digital signature. Crucially, the process ensures that no secret share is leaked and the full private key never appears anywhere.
- Verification: the verification algorithm of TSS is identical to the ordinary one. Anyone who knows the public key and the message can verify the correctness of a signature.
- Share refresh: refreshing the shares means changing the values of the shares without altering the public key. Periodic refreshes reset the number of compromised shares to zero. Assuming old, un-compromised shares are erased, the refreshing process makes it harder to reach a state where the number of simultaneously compromised shares surpasses the threshold.
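The key-generation phase above relies on secret sharing, so that the full private key never exists in one place. A minimal sketch of the underlying idea, Shamir's t-of-n secret sharing over a small prime field (a toy illustration, not the production code from HTSS-Lib):

```python
import random

P = 2**31 - 1  # a small Mersenne prime; real schemes use the curve group order

def make_shares(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    # Random polynomial f of degree t-1 with f(0) = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse (Fermat's little theorem).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=2, n=3)
assert reconstruct(shares[:2]) == 123456789  # any 2 of the 3 shares suffice
assert reconstruct(shares[1:]) == 123456789
```

In a real TSS the secret is never reconstructed like this; the signing protocol evaluates the interpolation "inside" the signature so the key stays distributed.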
Compared to multi-signature, TSS offers shorter signatures and better privacy. Most importantly, TSS does not store a private key on the server and enables risk control as well as separation of duties. TSS may seem a fabulous solution, but some problems remain. For example, an important contract may not only require enough shares to sign, but also need to be signed by a manager. Although vertical access control can be realized at the application layer and tracked via an audit log, once a hack happens, plain TSS gives no way of telling whom to blame.
Hierarchical Threshold Signature:
To address this scenario, Professor Tassa introduced the Hierarchical Threshold Signature Scheme, which assigns a different rank to each share such that any valid signature must include the share of the manager (in plain TSS, all shares have the same rank). A naive application of HTSS to cold wallets is as follows. Assume t=2 and n=3. We generate two high-ranked shares into different cold wallets and one low-ranked share on a cell phone or computer (i.e. a riskier place). After all shares are generated, one of the cold wallets can be kept as a backup. To sign a transaction, we need either one high-ranked share combined with the low-ranked share, or two high-ranked shares. In typical applications, two situations need to be considered:
- Loss of low-ranked shares: no matter how many low-ranked shares are lost, no one can generate a valid signature, thanks to the key advantage of HTSS: any signature must involve at least one high-ranked share.
- Loss of a cold wallet (i.e. a high-ranked share): the user uses the backup cold wallet to transfer the assets to a new address. Alternatively, a new share can be added and all shares refreshed.
Therefore, if an illegal signature appears, we can confirm that at least one high-ranked share was involved, which is the so-called “partial accountability”.
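The resulting access policy can be modelled with a toy check: a signing set is authorized only if it reaches the threshold and contains at least one high-ranked share. (This models only the authorization rule; Tassa's actual scheme enforces it cryptographically via Birkhoff interpolation, not via an explicit check like this.)

```python
# rank 0 = high-ranked share (e.g. cold wallet); rank 1 = low-ranked (e.g. phone)
def is_authorized(ranks, t):
    """Toy 'at least t shares, at least one of them high-ranked' policy."""
    return len(ranks) >= t and min(ranks) == 0

# The 2-of-3 setup from the text: two cold wallets (rank 0), one phone (rank 1).
assert is_authorized([0, 1], t=2)      # cold wallet + phone: valid
assert is_authorized([0, 0], t=2)      # two cold wallets: valid
assert not is_authorized([1, 1], t=2)  # low-ranked shares alone: never valid
assert not is_authorized([0], t=2)     # below threshold: invalid
```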
- HTSS preserves flexibility between partial accountability and privacy.
- A rank of each share represents different power in business models.
- Compared to TSS, distributing extra new low-ranked shares to users is less risky (an important merit for currency exchanges), because low-ranked shares alone cannot generate a valid signature.
- Shamir’s Secret Sharing
- An Introduction to Mathematical Cryptography
- Understanding Cryptography
- Hierarchical Threshold Secret Sharing
This article was provided by ChihYun Chuang of AMIS.
Supply chains for many commodities and products are more out of whack today than they were a year ago. This is surprising to those of us who thought that over the course of a year of pandemic life, things would be more sorted out. But here we are today, and a number of items are now harder to get, or at least more expensive. Why?
There are different reasons for different shortages, but a common answer is that COVID-19 and its effects have pushed supply chains past the limits of their flexibility. While the first shortages we felt last spring (toilet paper, hand sanitizer, KN95 masks) were the result of pantry stocking and panic-consumption, many of the shortages of today are a result of lasting imbalances in supply and demand, changing consumer trends, and the fact that it can take years for certain industries to add new capacity.
There are numerous examples. Take, for instance, cardboard. The pandemic has resulted in a boom in ecommerce – according to Internet Retailer, ecommerce sales jumped 43% in 2020, nearly three times the average growth rate of the prior seven years. More ecommerce requires more cardboard boxes for shipping, and the corrugated industry has struggled to keep up. Mills have been running at full capacity, but it takes time to make new boxes. As a result, cardboard prices have surged, and some businesses have recently been forced to switch to plastic packaging. (More on plastics in a bit.)
Another example is semiconductors. At the start of the pandemic, consumer spending plummeted – auto sales were almost cut in half from last February to last April. At the same time, however, sales of electronics boomed as people built out home offices and bought new devices for their children’s remote schooling. Consequently, many semiconductor manufacturers switched from making chips for cars to making chips for electronics. When auto sales recovered, not only was there more demand for semiconductors in total, but less of the industry was making chips for auto manufacturing. Unfortunately, semiconductor factories are incredibly complex, and can take billions of dollars and years to build. A number of auto companies and electronics companies have felt the pinch, including Volkswagen, Fiat Chrysler, Nissan, and Samsung. General Motors, Ford, Honda and Toyota have been forced to cut production due to the shortage of semiconductors.
Another contributing reason for surging prices and shortages in a number of products and materials is the Federal Reserve’s reaction to the pandemic: cutting interest rates to nearly zero. Low interest rates have sparked a surge in real estate prices, home remodeling, and home building. According to the Wall Street Journal, lumber prices have more than doubled and have never been higher, and prices for granite, insulation, concrete blocks and bricks have all hit record highs. As with cardboard and semiconductors, home building materials suppliers just can’t catch up.
Global shipping hasn’t caught up yet either, contributing to issues across many other supply chains. In addition to the shortage of shipping containers in Asia that my colleague Marshall Schleifman wrote about in this space earlier this month, a surge of imports has resulted in a traffic jam at the California ports of Long Beach and Los Angeles, resulting in dozens of colossal cargo ships waiting at anchor until space opens, delaying their deliveries.
But changing consumption patterns aren’t the only culprit in this mess; mother nature hasn’t been kind either. Last month’s cold snap and storms in Texas knocked out power to numerous chemical plants and factories in the region, resulting in a doubling (or more) in the prices of the plastics polyethylene, polypropylene, and PVC.
Supply chain issues have already lasted much longer than some of us initially anticipated they would at the start of the pandemic and threaten to last still longer. But try to take solace in the big picture. Yes, the supply chain is going to be a mess for longer than expected, but COVID-19 vaccines were developed and made available last year faster than any vaccine in history. I’ll take that tradeoff every day of the week. I just hope the recent shortage of syringes doesn’t prevent me from getting my shots.
Household debt in the U.S. is the highest it has ever been. According to the New York Fed, total household debt reached $13.67 trillion in the first quarter of 2019, an increase of 0.9% from the fourth quarter of 2018 and nearly $1 trillion above its previous peak in 2008.
These statistics make sense in the larger context of the national economy. In general, household debt decreases during recessions, and increases during economic booms. That’s because banks often tighten borrowing requirements during recessions, making it difficult for consumers to take out loans.
With the U.S. economy more than ten years into its longest-ever expansion, it isn’t surprising that total household debt has increased for 19 consecutive quarters.
Debt isn’t a bad thing in itself, since debt can finance a variety of purchases, like homes, cars and education. But debt can become problematic if the borrower can’t repay the loan. Across all households, the percentage of loan balances in serious delinquency (90+ days late) is currently 4.5% for auto loans, 7.8% for credit card payments, 1.1% for mortgages and 11.4% for student loans.
Delinquencies also follow the ups and downs of the economy. During recessions, more people tend to skip out on their debt repayments; during periods of economic expansion, delinquencies tend to decrease. At the peak of the Great Recession in late 2009, 11.8% of total loan balances were at least 30 days delinquent. Compare that to the beginning of 2019, when that number was just 4.6%.
In other words, more Americans today effectively manage their debt and pay their bills on time.
“The key to managing debt is planning . . . Before you take on debt, you need to know three things: why you plan to take on the debt, how you are going to repay it and the date by which you will repay it,” explains Rod Griffin, Director of Consumer Education and Awareness at Experian.
In organisations, pay rates are influenced by several factors: the intrinsic value of the job, internal and external relativities, and market inflation.
Every job should be valued so that it can be given a rate of pay. The intrinsic value of a job is based on the amount of responsibility, the level of competences, and the degree of skill required to perform it (Armstrong and Murlis, 2007). The intrinsic value also depends on other characteristics, such as the number of resources managed, the amount of power job holders possess, the degree of flexibility in making decisions, and the extent to which they receive guidance on how to perform their duties (Armstrong and Murlis, 2007).
It has been argued that it is impossible to find a definite value for a job, since the value of anything is always relative to something else (Armstrong and Murlis, 2007). Jobs are therefore compared either to other jobs in the organisation (internal relativities) or to similar jobs outside the organisation (external relativities) (Milkovich and Newman, 2002).
Internal relativities are based on determining the value of a job compared to other jobs in the organisation (Milkovich and Newman, 2002). The internal differentials between jobs are based on information about input requirements and the use of different levels of knowledge and skills; differentials can also be connected to the added value jobs create (Milkovich and Newman, 2002). External relativities, by contrast, are based on determining the value of a job according to the market rate (Milkovich and Newman, 2002).
However, market rates are not very accurate, since market surveys reveal salaries of organisations that reflect their unique circumstances in terms of organisational structure, numbers of people employed and pay policies (IRS Employment Review, 2006). Nevertheless, organisations that compete to recruit a high calibre of employee need to adjust their salaries based on market rates (IRS Employment Review, 2006). In some cases, specific individuals whose talents are unique in the marketplace may command their own individual market value, based on what the market is willing to pay for their services; head hunters are usually interested in these people (Armstrong and Murlis, 2007).
Organisations strive to achieve a balance between the internal relativities ensuring equity and the external relativities ensuring a competitive position in the market. However, organisations seeking this balance will always face a degree of tension when creating it (Armstrong and Murlis, 2007).
Inflation and market movement will certainly influence the pay rates of any organisation (People Management, 2007). Other factors can also influence pay rates: the budget, the pay strategy, and pressure from trade unions (Perkins and White, 2007).
Job evaluation is a systematic approach that establishes internal relativities among jobs by assessing their intrinsic values, to be later placed in grade and pay structures (CIPD, 2002d). Job evaluation is important to ensure consistency, equity and transparency (Milkovich and Newman, 2002). Nevertheless, many researchers argue that job evaluation is very time consuming, inflexible and has no relation to salaries since salaries are set in reference to market rates (Armstrong and Brown, 2009).
Job evaluation can be both analytical and non analytical. The former makes decisions based on the analysis of the extent to which a set of predetermined factors are present in a job (Burn, 2002). The factors should be selected carefully and agreed on by the management. The same factors are used in the evaluation of other jobs (Armstrong and Cummins, 2011).
In a point factor rating, jobs are broken down into several factors and certain levels are allocated to these factors reflecting the extent to which the factor is present in the job. The levels should be carefully described ensuring fair judgements are made. Following that, all points are summed up together resulting in a total score for the job (Burn, 2002). Ranks are then arranged sequentially in reference to the total scores of the jobs which are divided later into grades. Pay ranges will be attached to these grades taking into account the external relativities (Burn, 2002). This evaluation might be very time consuming therefore it is possible to evaluate several general roles and then mach the rest to these grades. The other analytical approach is the factor comparison that compares jobs, factor by factor using a scale of values.
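The point-factor arithmetic described above can be sketched as a simple weighted scoring exercise. The factor names, weights per level, and grade boundaries below are invented for illustration; a real scheme would derive them from management agreement and job analysis:

```python
# Hypothetical factor weights (points per level) and grade boundaries.
FACTOR_WEIGHTS = {"responsibility": 60, "skills": 50, "knowledge": 40}
GRADE_BOUNDARIES = [(0, "Grade 1"), (200, "Grade 2"),
                    (350, "Grade 3"), (500, "Grade 4")]

def evaluate_job(factor_levels):
    """Total score: the level (1..5) allocated to each factor times its weight."""
    return sum(FACTOR_WEIGHTS[f] * level for f, level in factor_levels.items())

def grade_for(score):
    """Place a total score into the highest grade whose boundary it meets."""
    grade = GRADE_BOUNDARIES[0][1]
    for boundary, name in GRADE_BOUNDARIES:
        if score >= boundary:
            grade = name
    return grade

score = evaluate_job({"responsibility": 3, "skills": 2, "knowledge": 2})
assert score == 360                    # 3*60 + 2*50 + 2*40
assert grade_for(score) == "Grade 3"   # 350 <= 360 < 500
```

Summing the same agreed factors for every job is what makes the method consistent and defensible; the grade boundaries then slice the resulting rank order into grades.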
In comparison, non-analytical job evaluation compares jobs as a whole, rather than factor by factor as in the analytical approach, and then places them in a grade or rank order. It is therefore a simple and easy process, yet a very subjective one, since it is not based on any fixed standards (Armstrong and Cummins, 2011). There are several non-analytical methods, as follows: job classification, where jobs are placed in a grade by matching the job description with the grade description, without any numerical values (Burn, 2002); job ranking, which compares whole jobs to one another and arranges them according to their perceived size; and paired comparison ranking, which compares each job as a whole, separately, with every other job in the organisation (Burn, 2002).
A third, market-based approach to job evaluation falls under neither the analytical nor the non-analytical scheme; it encloses the following two tools: market pricing and job/role matching (internal benchmarking) (Burn, 2002).
Market pricing is the process of determining the rate of pay for a certain job according to supply and demand in the labour market (Corby, 2008). However, it neglects internal relativities, which may result in dissatisfaction among employees who put the same effort into different jobs but are paid unequally due to differing market rates (Corby, 2008). Nevertheless, internal relativities might still be achieved if all jobs in the organisation are market driven (Armstrong and Murlis, 2007).
These rates should be accurate and up to date. Market information sources should compare jobs that are similar in region, sector, organisation size, industry and job value (Milkovich and Newman, 2002). There is no absolutely correct market rate for a certain job, since it is not possible to track similar jobs everywhere (IRS Management Review, 2006); surveys therefore provide only approximate values. Organisations must decide their market posture and stance, in other words the extent to which their pay levels should relate to market rates (Milkovich and Newman, 2002).
Spot rates are rates given to a certain job taking into consideration the market relativities and the market value of an individual (Fisher, 1995). They are usually applied when paying professionals, senior managers, and manual workers. This type of pay does not consist of grades and pay ranges; therefore there is no scope for pay progression (CIPD, 2010). Any increase produces a new spot rate for the person (Berger, L. and Berger, D., 2000). Spot rates are often used by small organisations that do not have any formal graded structure, and by organisations that seek a maximum amount of flexibility in their pay (Armstrong and Murlis, 2007).
Individual job grades are similar to spot rates with a defined pay range in both sides. A scope of progression is available to reward employees without the need to upgrade their jobs (Corby, 2008).
4.1. Narrow graded structure (conventional graded structure) consists of 10 or more grades arranged sequentially, with pay ranges attached to each grade; the maximum of a pay range is typically 20%-50% above the minimum, and differentials between pay grades are usually around 20% (Perkins and White, 2007). Grades are described by points based on analytical job evaluation, by grade definitions, or by the types of jobs slotted into the grades (Fisher, 1995). Mid-points are defined between the minimum and the maximum of each grade, based on market rates and the organisation’s market policy, which explains the relation of its pay levels to market rates.
The mid-point represents the rate for a fully proficient person (Berger, L. and Berger, D., 2000). This structure ensures equal pay for work of equal value, since all jobs are placed in the structure according to their job evaluation points (Milkovich and Newman, 2001). Nevertheless, it may result in an extended hierarchy that does not suit flat and flexible organisations, and because grades and pay ranges are very narrow, it does not provide enough scope for progression (Armstrong and Murlis, 2007). A common problem that arises is grade drift, where jobs are upgraded merely to reward people who have already reached the maximum of their grade and have no further scope for progression within it (Fisher, 1995).
This type of structure fits mostly in large and bureaucratic organisations with rigid structures and in which great amount of control is required.
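The range arithmetic above can be made concrete. Assuming, purely for illustration, a 40% spread between the minimum and maximum of each grade, the mid-point halfway between them, and 20% differentials between successive mid-points:

```python
def grade_range(midpoint, spread=0.40):
    """Min/max of a grade whose maximum is `spread` above its minimum,
    with the mid-point halfway between them: mid = min * (1 + spread/2)."""
    minimum = midpoint / (1 + spread / 2)
    return round(minimum), round(minimum * (1 + spread))

# Four grades whose mid-points rise by a 20% differential from a £20,000 base.
midpoints = [20000 * 1.20**i for i in range(4)]
for mp in midpoints:
    lo, hi = grade_range(mp)
    print(f"mid £{mp:,.0f}: range £{lo:,} - £{hi:,}")
```

Varying `spread` and the differential shows the trade-off the text describes: narrow spreads limit progression within a grade, while small differentials blur the distinction between grades.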
4.2. Broad banded pay structure is a structure where a number of different grades are all put together in a small number of wider bands (Milkovich and Newman, 2001). The number of bands is usually 5 to 6 bands with a width typically around 50% and 80%; the number of bands is based on the levels of responsibility and accountability in an organisation (Martocchio, 2001). In this type of structure; the pay is based more on market relativities (Caruth and Handlogten, 2001). Jobs are slotted in bands either by reference to their market rate or by reference to both: their job evaluation and the market rates (Armstrong and Murlis, 2007). Bands are described by the type of jobs slotted (senior management) in them and the similar roles they enclose (administration and support) (Armstrong and Murlis, 2007).
Broad band pay structures offer great flexibility in pay that could result in increased labour costs and high employee expectations. Therefore reference points that are aligned to market rates can be added in order to guide pay progression rather than to control it as in narrow graded structure (Milkovich and Newman, 2002). Ranges for progression exist as zones around the predefined reference points (CIPD, 2010). It is easy to update and modify this type of structure because many jobs can be placed in one band (IRS Management Review, 2001c).
Additionally, a broad banded structure facilitates cross team working since it allows lateral movement between bands (Corby, 2008).This type of pay structure puts a great load on line managers and HR especially in appraisals and pay progression, therefore equal pay problem may arise (Berger,L. and Beger,D., 2000). It is difficult to explain for employees the basis upon which salaries and pay progression are set. It has also been argued that wider bands will result in placing jobs that are different in value altogether, since this structure depends more on market rates than job evaluation (Martocchio, 2001).
This structure fits in flexible and delayered organisations where a scope of flexibility within pay is required. Furthermore, it fits in organisations that focus on lateral development rather than vertical career growth and promotions (Corby, 2008).
4.3. Career and Job Family structures in these types of structures jobs are grouped into families. Career families group jobs in either functions or occupation, such as: marketing, operations and engineering (People Management, 2000a). They are grouped based on the activities they execute and the fundamental knowledge, skills and competencies needed to perform the job, but in which levels of responsibility, knowledge, skills and competences differ (Armstrong and Murlis, 2011, p.219). The number of levels within a family is often 6 to 8 levels and pay ranges are attached to levels. Nevertheless, these numbers are not fixed and might vary between career families. However, career families provide clear and well defined career paths for progression within levels (Zinghein and Schuster, 2000). So organisations must inform all employees with the criteria needed for progression. It is a single graded structure since the same specific grade might be placed in several families; furthermore, jobs in the same level have the same value and the same pay range across all families which in return ensure equity. It perfectly fits in organisations that aim to achieve career development and that already have a comprehensive competency framework for all their jobs. Nevertheless, it has been argued that progression occurs in an occupational ‘silo’ (Armstrong and Murlis, 2011, p.221).
Job families are very similar to career families, but jobs are grouped according to their common proposes, processes and skills like business support (People Management, 2000a).This structure consists of 3-5 separate families. It also consists of levels that might vary between families (People Management, 2000a). Levels are defined depending on their responsibilities, knowledge and the levels of skills and competences required (Corby, 2008). In other words, role profiles are often set that could also be used for progression between levels (IRS Management Review, 2001b). This structure facilitates market differentials between families since there are different market pay structures for each family (IRS Management Review, 2001a).Moreover, jobs in the same level among families may differ in their size and their pay ranges unless they are slotted in families based on job evaluation. Consequently, this structure might appear very divisive and unfair if it isn’t linked to job evaluation (Corby, 2008).
Job families could be designed either by the use of job evaluation or by the modelling process that is developed by hay group. The job evaluation process relies on using the analytical job evaluation that produces a rank order of jobs that are then divided into families. Levels within families are then defined either by the knowledge, skills and competences needed or by job evaluation points(Armstrong and Murlis, 2007). After defining the groups and levels, organisations can match role profiles with levels definitions.
Modelling involves identifying job groups whose work is similar but performed at different levels, followed by the definition of general levels. General job roles and profiles are then produced and matched against the level definitions in order to place them into the appropriate levels. The matching process is validated by job evaluation, after which pay ranges are determined based on market rates (Armstrong and Cummins, 2011).
This structure fits in organisations that consist of different groups of job families and a huge number of professionals (People Management, 2000a). It also fits in organisations that emphasise on planning career paths and routes. It is highly recommended when a specific job family needs to be rewarded and paid distinctively and differentially (Personnel Today, 2009).
The mixed model, which combines broad-banded and job family pay structures, arranges job families within bands. It provides flexible scope for promotion and pay progression since it encloses the benefits of both structures (IRS Management Review, 2001a).
4.4. Pay Spines is a series of incremental pay points enclosing all jobs in an organisation. Progression is usually linked to the length of service and increments are fixed between 2.5%-3%. Public sectors and voluntary organisations usually use this type of structure. Furthermore, it is used by organisations that are unable to evaluate fairly the different performance levels of employees (CIPD, 2009).
Job evaluation is recently used in some structures like the broad band and job family to validate only the positioning of roles into bands and jobs into levels. ‘Reports of the death of job evaluation have been overstated, but its lifestyle has changed’ (People Management, 2000a). The role of job evaluation is now minor and only supportive rather than being the solid base pay structure as it has always been.
Every organisation has its own context and circumstances that makes one specific structure more suitable than another. The type of structure depends on the organisational strategy, structure, culture and size (People Management, 2000a). It also depends on the type of industry, the type of people employed and the budget available. Other external factors can influence the selection process, as follows: the competitive pressure, the demographic trends and employment legislation (Corby, 2008). For more statistics regarding pay structures, see appendix (4).
It is suggested to replace the current pay structure with a job family pay structure. Job families should be identified and the levels should be defined carefully either based on the job evaluation points or the level of skills and competences required for each level. Jobs are later placed in different levels within each family. It is advised to have wider pay ranges to allow a scope for progression (CIPD, 2010). Differentials between pay ranges, in other words between the mid point of one level and the mid-point of the neighbouring level, should provide enough scope to recognise the increases in job sizes among levels (CIPD, 2010). Large differential gaps will cause problems for people on borderlines waiting for long period of time, while small gaps will result in many appeals and arguments regarding the fairness of the pay structure. It is also recommended to have an overlap between pay ranges to allow flexibility and to recognise the contributions of well experienced employee on the top of his level that might be more beneficial for the organisation than a new employee in the next higher level (CIPD, 2010). It is highly recommended to design the job family pay structure based on the job evaluation that is already in place to ensure equity among families rather than the modelling process where job evaluation exist only to validate the allocation of jobs within families.
Reward strategies should be aligned with other HR strategies so that they balance and support one another (Armstrong and Murlis, 2007). For instance, implementing job family structure will provide basis for planning career progression and career paths and setting the general framework for managing performance. Pay is one of the main factors that attracts candidates to join an organisation through offering them excellent packages. Pay policies are also linked to training and development, since many organisations reward their employees upon their possession of skills and the development of their competencies (Armstrong and Murlis, 2007).
Every organisation strives to achieve a balance between internal relativities, ensuring internal equity, and external relativities, ensuring competitive benefits. Many organisations try to replicate best practices in reward structures, neglecting the fact that each organisation operates in a distinct context and a unique culture. Organisations must look for the best fit rather than the best practice, as there is no one best model structure that fits all organisations (Armstrong and Murlis, 2007). Furthermore, it is impossible to achieve 100% pay satisfaction in any organisation; as O’Neil, cited in Armstrong and Murlis (2007, p.9), stated: “it is not possible to create a set of rewards that is generally acceptable and attractive to all employees since there is no right single set of solution that can address all the business issues and concerns”.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9480708837509155,
"language": "en",
"url": "https://reliefweb.int/report/south-sudan/east-africa-key-message-update-crisis-ipc-phase-3-or-worse-outcomes-remain",
"token_count": 704,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.035888671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c2d1ec86-a15b-49b0-9036-4771b0e9c24c>"
}
|
The severity of acute food insecurity is highest in Yemen, South Sudan, and Sudan, where several areas are in Emergency (IPC Phase 4) or Crisis! (IPC Phase 3!). In South Sudan, Catastrophe (IPC Phase 5) is likely among pockets of households in Jonglei and Upper Nile, who are extremely vulnerable to conflict- or flood-related disruptions to key food and income sources. Protracted conflict and civil insecurity coupled with long-term macroeconomic deterioration remain the primary drivers of food insecurity in these countries; however, the economic impacts of the COVID-19 pandemic are contributing to further declines in household income and access to food. Additionally, recent floods and desert locusts have caused crop production losses in parts of Sudan and South Sudan. While not the most likely scenario, a risk of Famine (IPC Phase 5) persists in Yemen and in South Sudan.
In the nine countries that FEWS NET monitors in the East Africa region, confirmed COVID-19 cases and deaths as of August 31st exceeded 111,500 and 2,900, respectively, led by Ethiopia, Kenya, and Sudan. The economic impacts of COVID-19 are primarily driven by movement restrictions and vary across countries, including reductions in formal and informal business activity, migratory and local labor activities, sales of crops and livestock, remittance inflows, and tourism; minimal to moderate reductions in crop production; a slowdown in regional trade flows; and constrained capacity to deliver in-kind food assistance. These impacts on household income and food sources have increased the scale of the population that is experiencing Crisis (IPC Phase 3) or Stressed (IPC Phase 2) outcomes, particularly in urban and peri-urban areas and among displaced populations.
In Sudan, South Sudan, Ethiopia, and Somalia, significantly above-average staple food prices are making it more difficult for poor households to purchase their minimum food needs. Rising food prices are attributed to multiple factors and vary across each country, but include reductions in export earnings and local currency depreciation that make food imports increasingly expensive; the impacts of below-average crop production on market supply; and the impact of COVID-19 movement restrictions and other preventive measures on regional and domestic supply chains. According to data collected in key reference markets in July, sorghum prices exceeded the five-year average by 150-250 percent across Sudan, 50-240 percent across South Sudan, 85 percent in Addis Ababa, Ethiopia, and 20-55 percent in southern Somalia.
The October to December (OND) 2020 rains are expected to be below average in the greater Horn of Africa, according to the consensus of ensemble forecast models that include the North American Multi-Model Ensemble. The impact of below-average rainfall on crop and livestock production is likely to drive high food assistance needs through at least early 2021 in Somalia, southern and southeastern Ethiopia, and northern and eastern Kenya, which depend on the OND rains. Crop losses, reduced demand for agricultural labor, rising staple food prices, declines in milk availability, increased expenditures on water and livestock feed, and increased resource-based conflict are among the negative impacts that will likely lead to an increase in the population facing Crisis (IPC Phase 3) or Emergency (IPC Phase 4) outcomes. Furthermore, there is an elevated likelihood of below-average rainfall in the March to May 2021 season. Two consecutive below-average seasons would likely result in rapidly worsening acute food insecurity.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9576420783996582,
"language": "en",
"url": "https://searchcio.techtarget.com/news/252448753/Blockchain-solutions-and-disruption-pondered-at-EmTech-2018",
"token_count": 1358,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.072265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c4ea5ad4-b87a-4767-86ca-c765844be7e3>"
}
|
CAMBRIDGE -- The World Bank, one of the most powerful financial institutions on the planet, is experimenting with blockchain as a tool to track agricultural goods and raise capital.
Gideon Lichfield, the editor in chief of the MIT Technology Review, found some irony in that.
"This technology that was invented by somebody whose true identity we still don't know -- Satoshi Nakamoto -- specifically to take power away from financial institutions and put currency in the hands of the people is now being used by the ultimate, central, financial institution," Lichfield told an audience at EmTech 2018, a conference focused on big data, artificial intelligence and technology.
The crowd gathered at MIT's Media Lab had just heard from two thinkers in the increasingly mainstream field of blockchain, a method of distributed ledgers that can dramatically alter how transactions are made and verified.
Ledgers themselves date back to cuneiform records etched into tablets 7,000 years ago at the dawn of civilization, said Michael Casey, an author and senior advisor to the Digital Currency Initiative at Media Lab. If blockchain solutions decentralize financial ledgers in the future, that change could disrupt the flow of money into the world's financial hubs. Using the 21st century version of the ledger, governments and other institutions could invest the money they save on financing in other causes.
Michael Caseysenior advisor to the Digital Currency Initiative, MIT Media Lab
"If they could raise money more cheaply, you'd have a lot more funds to put into education, to put into health," Casey said. "Why should [the cost of financing] go into the hands of a large investment bank when it could be going back to the poor?"
Blockchain solutions could also help the so-called underbanked and unbanked gain access to financial services. Distributed ledgers accrue credibility by replicating transaction records across a network of computers. Casey said that credibility could benefit people in places like Nairobi, Kenya, who have difficulty leveraging value from their real estate because banks distrust their property records.
"The lack of trust in the record-keeping function has a huge impact on the world," he said.
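The credibility Casey describes comes from the way a distributed ledger chains each block of records to a hash of the previous one, so any tampering is detectable by every node that holds a copy. A minimal, illustrative sketch (not the design of any production blockchain; the record fields echo the World Bank's oil-palm example):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form so every node computes the same digest.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    # Link each new block to the hash of the previous one (or a zero genesis).
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "transactions": transactions}
    block["hash"] = block_hash({"prev": prev, "transactions": transactions})
    chain.append(block)
    return chain

def verify(chain):
    """True only if every block's hash and back-link are intact."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        if block_hash({"prev": block["prev"],
                       "transactions": block["transactions"]}) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, [{"from": "farm", "to": "mill", "goods": "oil palm"}])
add_block(chain, [{"from": "mill", "to": "exporter", "goods": "palm oil"}])
assert verify(chain)
chain[0]["transactions"][0]["goods"] = "something else"   # tamper with a record
assert not verify(chain)
```

Because each block embeds the previous block's hash, altering any historical record invalidates every later link, which is what lets record-keeping be trusted without a central authority.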
World Bank experiments with blockchain solutions
The altruistic applications of blockchain were a focus of Casey's EmTech talk with Prema Shrikrishna, who works on blockchain projects at World Bank Group.
Teaming up with the International Finance Corporation, the World Bank is currently designing a blockchain architecture to track oil palm from the farm to mills, where it becomes palm oil -- an agricultural staple in everything from chocolate to candles. By tracking the origin of the raw material, most of which is produced in Indonesia, blockchain could reward farmers for sustainable practices, according to Shrikrishna.
Among other World Bank experiments with blockchain:
Education. The World Bank is developing a system for rewarding students playing an educational game called Evoke, which is designed to teach skills for success in modern society, Shrikrishna said.
Vaccine management. In December, Oleg Kucheryavenko, a public health professional who works with the World Bank, wrote on the institution's blog that blockchain could provide a "cost-effective solution" for vaccine distribution. Vaccines have a shelf-life, Kucheryavenko wrote, and the supply chain is "too complex to be taken for granted, with vaccines changing ownership from manufacturers to distributors, re-packagers and wholesalers before reaching its destination."
Financing. In August, the World Bank sold blockchain-enabled bonds through the Commonwealth Bank of Australia, which raised about $80.5 million, according to Reuters.
Blockchain's best use cases
Members of the audience at the talk had varying aspirations for blockchain's use.
Rahul Panicker, chief innovation officer at Wadhwani Institute for Artificial Intelligence, which focuses on technological solutions to large-scale societal problems, believes blockchain can be harnessed for humanitarian causes.
"It was very encouraging to see an organization like the World Bank being willing to look at these frontier technologies, and especially a technology like blockchain that has the ability to reduce friction in the financial system," said Panicker, after attending the talk. "The whole purpose of blockchain is actually to minimize the burden of trust. The cost of trust is especially high in the developing world, so the fact that organizations like the World Bank are willing to look at this can mean big things for the disempowered."
Tom Hennessey, an attendee, posited that financial settlement was the most readily available application.
Tomas Jansen, of Belgium's Federal Agency for the Reception of Asylum Seekers, said a lot of refugees arrive in Europe without identification papers because they belong to a marginalized group or lost their documents. Jansen wanted to hear ideas from the blockchain experts on how to address those problems.
Shrikrishna sidestepped the political ramifications, but she noted that World Bank has a program called Identification for Development that is working on integrating ID databases and creating an identity that would be "portable across borders."
She said the World Bank is "technology agnostic" in seeking to solve problems around the globe, and stressed that the financial institution's approach with blockchain has been both "very cautious" and "very experimental."
World Bank is hardly alone in its exploration of blockchain solutions to solve problems and change how business is done. Analysts expect blockchain to have a major impact on businesses, which are eyeing its potential to manage supply chains, verify documents, and trade securities. The firm Gartner estimates blockchain will add $3.1 trillion to the world economy by 2030. Some industry sectors have been quicker than others to start experimenting.
Describing blockchain as at an "inflection point," a recent report by the consultancy Deloitte found that financial services executives are "leading the way in using blockchain to reexamine processes and functions that have remained static for decades," and emerging players are using blockchain to challenge traditional business models.
Meanwhile, blockchain's most developed use case -- bitcoin -- is driving most of the interest in the technology, while taking those invested in the cryptocurrency on a roller coaster ride.
So far development of a "stable coin" has been a "difficult nut to crack," according to Casey, who used to cover currencies for The Wall Street Journal.
To stabilize the tender, a coin could be pegged to other metrics, or it could be backed by a reserve of funds to try to create more stability, Casey said. One way or another, he predicted, developers will find success.
"Something's going to work. Something's going to break as well," Casey said.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9203007221221924,
"language": "en",
"url": "https://socialnomics.net/2017/08/31/big-data-is-leading-a-greater-demand-for-stem-students/",
"token_count": 918,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.01806640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6c0899b1-4f72-45b0-911a-43ed071467fa>"
}
|
Big Data Is Leading a Greater Demand for STEM Students
Big data, the huge volume of data collected daily from the transactions and operations of a business, is becoming increasingly relevant in the contemporary business world. This data has to be processed into meaningful information that can be applied, in near-real time, to explain and understand business scenarios such as market trends, profit and loss, product development and risk analysis.
STEM (Science, Technology, Engineering, and Math) is becoming critical in the analyses of big data. Through empirical procedures, the gigantic volume of data can be processed into meaningful information. The corporate world is now turning to this body of scientific insight for a better understanding of big data. This has, in turn, created a growing demand for STEM students. Below we’ll be exploring reasons for this ever-growing demand.
Cost-Benefit of Big Data
In 2010, 13 exabytes of data were stored by users across various industries of the global economy, with an estimated value of $700 billion to the end user. This data is projected to enable a decrease of an estimated 50% in production costs. In terms of labor needs, between 140,000 and 190,000 workers with deep data-analysis skills will be needed to analyze this information in the US, and over 1.5 million managers will have to acquire skills in digital media as well. This transformation will touch multiple sectors.
Various sectors of the economy have been examined to ascertain how big data impacts economic value:
- Healthcare in the US — focused on efficiency and quality, $300 billion could be generated annually.
- Manufacturing and personal location data globally — could tap into an excess $600 billion in consumer surplus.
- The public sector in Europe — operational efficiency enhancements could save an excess of approximately $149 billion.
- Retail in the US — a possible increase in operating margin by more than 60%.
Challenges of Big Data Analysis
To effectively process big data, we have to understand the challenges that exist within the data that prove problematic to the processing stages. They are:
- Heterogeneity – data arrives mixed together in no particular order or format, even within similar industries and across different users.
- Scale – the sheer amount of data collected and stored. Managing and organizing this rapidly exploding volume is challenging in any field, particularly emerging ones.
- Timeliness – large amounts of mixed-up data require considerable time to sort, extract, understand and process into meaningful information.
- Human collaboration – despite the commendable efforts that have gone into developing powerful computing capabilities, it remains a huge hurdle for computers alone to process data into information. Human intelligence is still relied upon to identify minute variations in data patterns that significantly alter the analytical processing.
- Privacy – there is a growing concern directed at the collection, storage and manipulation of private public data. Despite law-enforcement policies regulating this aspect, adequate and satisfactory privacy policies will have to be managed through both technical and sociological paradigms.
Processing Big Data
The processing of big data into meaningful information is a multi-phase procedure that employs technical and scientific analysis skills. At every phase, new challenges emerge that have to be addressed for a meaningful conclusion to be reached.
The stages involved in the process are:
1. Acquisition and recording: In this stage, all the information pertaining to the particular corporate phenomenon is captured and stored. Information is generally gathered into a huge data bank with no immediate analysis or selection of particular information for storage.
2. Extraction and cleaning: The collected data will not be in a format ready for processing; it is too general and wide for any meaningful analysis to be conducted. In this stage, you identify which variables you want to examine and apply code to pull out the particular data.
3. Integration and representation: Heterogeneous data will remain irrelevant if we apply the usual data collection and processing procedures. Metadata has to be created for the results from big data to be understood and reused in subsequent analyses.
4. Querying and mining: To query and mine big data, we must apply and rely upon more complex scientific procedures than those used in simple data processing and analysis. Coordination between different database platforms is vital for effective querying and mining.
5. Interpretation: For the analysis of big data to be meaningful, users have to understand the results. Interpretation of the results has to be made by a human decision-maker; instead of relying on computer analysis alone, there is a need for human insight to understand and verify the results from the computer.
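As a concrete illustration of the extraction and cleaning stage, the sketch below normalises heterogeneous raw records into a single schema before any analysis; the field names and formats are invented for the example:

```python
# Illustrative extraction/cleaning step: heterogeneous raw records
# (two dict shapes plus a pipe-delimited log line) are mapped onto one
# (date, amount, region) schema. All field names here are assumptions.

from datetime import datetime

RAW_RECORDS = [
    {"ts": "2017-08-31", "amount": "1,200.50", "region": "us-east"},
    {"date": "31/08/2017", "amt": "980", "region": "US-EAST"},
    "2017-08-31|430.25|us-east",          # pipe-delimited log line
]

def extract(record):
    """Map one raw record, whatever its shape, to (date, amount, region)."""
    if isinstance(record, str):
        date_s, amount_s, region = record.split("|")
    elif "ts" in record:
        date_s, amount_s, region = record["ts"], record["amount"], record["region"]
    else:
        date_s, amount_s, region = record["date"], record["amt"], record["region"]
    # Normalise the two date formats seen in the raw data.
    fmt = "%d/%m/%Y" if "/" in date_s else "%Y-%m-%d"
    date = datetime.strptime(date_s, fmt).date()
    amount = float(amount_s.replace(",", ""))   # strip thousands separators
    return date, amount, region.lower()

clean = [extract(r) for r in RAW_RECORDS]
total = sum(amount for _, amount, _ in clean)
print(total)  # 2610.75
```

Only after this normalisation do the later stages (integration, querying, interpretation) become possible, which is why heterogeneity is listed first among the challenges.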
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.956826388835907,
"language": "en",
"url": "https://taxguru.in/income-tax/agricultural-income-analysis.html",
"token_count": 4706,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:12a8df05-f109-48db-acc7-9fa558018ec1>"
}
|
Definition – Section 2(1A) – Bare Act:
“agricultural income” means—
(a) any rent or revenue derived from land which is situated in India and is used for agricultural purposes;
(b) any income derived from such land by—
(i) agriculture; or
(ii) the performance by a cultivator or receiver of rent-in-kind of any process ordinarily employed by a cultivator or receiver of rent-in-kind to render the produce raised or received by him fit to be taken to market; or
(iii) the sale by a cultivator or receiver of rent-in-kind of the produce raised or received by him, in respect of which no process has been performed other than a process of the nature described in paragraph (ii) of this sub-clause;
(c) any income derived from any building owned and occupied by the receiver of the rent or revenue of any such land, or occupied by the cultivator or the receiver of rent-in-kind, of any land with respect to which, or the produce of which, any process mentioned in paragraphs (ii) and (iii) of sub-clause (b) is carried on :
(i) the building is on or in the immediate vicinity of the land, and is a building which the receiver of the rent or revenue or the cultivator, or the receiver of rent-in-kind, by reason of his connection with the land, requires as a dwelling house, or as a store-house, or other out-building, and
(ii) the land is either assessed to land revenue in India or is subject to a local rate assessed and collected by officers of the Government as such or where the land is not so assessed to land revenue or subject to a local rate, it is not situated—
(A) in any area which is comprised within the jurisdiction of a municipality (whether known as a municipality, municipal corporation, notified area committee, town area committee, town committee or by any other name) or a cantonment board and which has a population of not less than ten thousand; or
(B) in any area within the distance, measured aerially,—
(I) not being more than two kilometres, from the local limits of any municipality or cantonment board referred to in item (A) and which has a population of more than ten thousand but not exceeding one lakh; or
(II) not being more than six kilometres, from the local limits of any municipality or cantonment board referred to in item (A) and which has a population of more than one lakh but not exceeding ten lakh; or
(III) not being more than eight kilometres, from the local limits of any municipality or cantonment board referred to in item (A) and which has a population of more than ten lakh.
Explanation 1.—For the removal of doubts, it is hereby declared that revenue derived from land shall not include and shall be deemed never to have included any income arising from the transfer of any land referred to in item (a) or item (b) of sub-clause (iii) of clause (14) of this section.
Explanation 2.—For the removal of doubts, it is hereby declared that income derived from any building or land referred to in sub-clause (c) arising from the use of such building or land for any purpose (including letting for residential purpose or for the purpose of any business or profession) other than agriculture falling under sub-clause (a) or sub-clause (b) shall not be agricultural income.
Explanation 3.—For the purposes of this clause, any income derived from saplings or seedlings grown in a nursery shall be deemed to be agricultural income.
Explanation 4.—For the purposes of clause (ii) of the proviso to sub-clause (c), “population” means the population according to the last preceding census of which the relevant figures have been published before the first day of the previous year;
Income is Agricultural only if it is from the following 3 sources:
1. Any rent or revenue derived from land which is situated in India and is used for agricultural purposes.
2. Any income derived from such land by agricultural operations, including processing of the agricultural produce raised or received as rent-in-kind by any process ordinarily employed by the cultivator or receiver of rent-in-kind so as to render it fit for the market, or the sale of such produce.
3. Any income derived from any building owned and occupied by the assessee, receiving rent or revenue from the land, by carrying out agricultural operations.

In each of the three cases, the following conditions must be satisfied:
- The land should either be assessed to land revenue in India or be subject to a local rate assessed and collected by officers of the Government.
- Where such land revenue is not assessed or the land is not subject to a local rate, the land should not be situated within the jurisdiction of a municipality. **
- The revenue must not include any income arising out of the transfer of such land.
** – should not be situated within the jurisdiction of a municipality (whether known as a municipality, municipal corporation, notified area committee, town area committee, town committee or by any other name) or a cantonment board, and which has a population of more than ten thousand (according to the last preceding census which has been published before the first day of the previous year in which the sale of land takes place); or it should not be situated:
- more than 2kms. from the local limits of any municipality or cantonment board and which has a population of more than 10,000 but not exceeding 1,00,000; or
- not being more than 6kms. from the local limits of any municipality or cantonment board and which has a population of more than 1,00,000 but not exceeding 10,00,000; or
- not being more than 8kms. from the local limits of any municipality or cantonment board and which has a population of more than 10,00,000.
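For land situated outside municipal limits, the population-banded distance test above reduces to a simple lookup (land inside the jurisdiction of a municipality with a population of 10,000 or more is urban in any case). A sketch, to be verified against the current statutory text:

```python
# Sketch of the section 2(1A)/2(14) distance test for land situated
# OUTSIDE the local limits of a municipality or cantonment board.
# Distances are aerial, measured from those local limits; population is
# that of the municipality per the last preceding published census.

def is_rural_agricultural_land(population, aerial_distance_km):
    """True if the land falls outside the specified limits (rural)."""
    if population <= 10_000:
        return True                       # no distance test applies
    if population <= 100_000:
        return aerial_distance_km > 2     # >10k and up to 1 lakh
    if population <= 1_000_000:
        return aerial_distance_km > 6     # >1 lakh and up to 10 lakh
    return aerial_distance_km > 8         # more than 10 lakh

assert is_rural_agricultural_land(9_000, 1) is True
assert is_rural_agricultural_land(50_000, 1.5) is False   # within 2 km
assert is_rural_agricultural_land(50_000, 3) is True
assert is_rural_agricultural_land(2_000_000, 7) is False  # within 8 km
```

The classification matters because income from rural agricultural land enjoys the exemption, while land treated as urban may attract tax on transfer.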
A direct nexus between the agricultural land and the receipt of income by way of rent or revenue is essential. (For instance, a landlord could receive revenue from a tenant.)
The essential elements are:
- Existence of land
- Usage of land for agricultural operations
- Cultivation of land is a must
- Ownership of land is not essential
Usage: Agricultural operations mean the efforts made to induce the crop to sprout out of the land. The ambit of agricultural income covers income from agricultural operations, which includes processes undertaken to make the produce fit for sale in the market. Both rent or revenue from the agricultural land and income earned by the cultivator or receiver by way of sale of produce are exempt from tax only if agricultural operations are performed on the land.
Cultivation: Some measure of cultivation is necessary for land to have been used for agricultural purposes. The ambit of agriculture covers all land produce like grain, fruits, tea, coffee, spices, commercial crops, plantations, groves, and grasslands. However, the breeding of livestock, aqua culture, dairy farming, and poultry farming on agricultural land cannot be construed as agricultural operations.
Ownership: In the case of rent or revenue, it is essential that the assessee has an interest in the land (as an owner or a mortgagee) to be eligible for tax-free income. However, in the case of agricultural operations, it is not necessary that the cultivator be the owner of the land. He could be a tenant or a sub-tenant. In other words, all tillers of land are agriculturists and enjoy exemption from tax. In certain cases, further processes may be necessary to make a commodity marketable out of agricultural produce. The sales proceeds in such cases are considered agricultural income because the producer’s final objective is to sell his products.
(Supreme Court Decision in CIT vs Raja Benoy Kumar Sahas Rpy (1957) 32 ITR 466)
2 types of Operations should be carried out on Land:
- Basic operations: cultivation, tilling, sowing, planting, etc., demanding labour and skill, and directed to make the crop sprout from the land.
- Subsequent operations: weeding, digging, removal, tending, pruning, cutting, prevention from insects, pests, cattle, etc., performed after the crop sprouts, for the efficient production of crops.
The cultivation of the land does not comprise merely of raising the products of the land in the narrower sense of the term like tilling of the land, sowing of the seeds, planting, and similar work done on the land but also includes the subsequent operations set out above all of which operations, basic as well as subsequent, form one integrated activity of the agriculturist and the term “agriculture” has got to be understood as connoting this integrated activity of the agriculturist. One cannot dissociate the basic operations from the subsequent operations, and say that the subsequent operations, even though they are divorced from the basic operations can constitute agricultural operations by themselves. If this integrated activity which constitutes agriculture is undertaken and performed in regard to any land that land can be said to have been used for “agricultural purposes” and the income derived therefrom can be said to be “agricultural income” derived from the land by agriculture.
EXCEPTIONS – NOT AGRICULTURAL INCOME:
- If a person sells processed produce without carrying out any agricultural or processing operations, the income would not be regarded as agricultural income.
- Likewise, in cases where the produce is subjected to substantial processing which changes the very nature of the product (for instance, canning of fruits), the entire operation is not considered as an agricultural operation. The profit from the sale of such processed products will have to be apportioned between agricultural income and business income.
- Income from trees that have been cut and sold as timber is not considered as an agricultural income since there is no active involvement in operations like cultivation and soil treatment.
- Income from sale of Forest Trees of spontaneous or natural growth where only forestry operations in the nature of these subsequent operations are performed would, therefore, not be agricultural income. (Maharjadhiraj Sir Kameshwar Singh vs. CIT (1957) 32 ITR 587 (SC)). However, where fresh trees have been planted in old forests, the income attributable to such plantation activity would be Agricultural Income.
SOME QUERIES ANSWERED:
- Is the Income earned from “Contract Farming” business taxable?
- If you are a farmer cultivating a crop for a company or firm, then the income is not taxable; but if you are a company or a firm getting crops cultivated by farmers under a “Contract Farming” agreement, then all the income is taxable. In a recent judgment on petitions by Namdhari Seeds Pvt. Ltd., a Division Bench of the Karnataka High Court held that such income of agri-business firms comes under the purview of business income, which attracts tax under the provisions of the Income Tax Act. In this case, the firm had claimed as agricultural income the amount generated by it from the sale of hybrid seeds grown on land belonging to various persons under “contract farming” agreements. As per the Income Tax Act, according to the firm, income derived from agricultural land qualifies as agricultural income and it is not necessary to own land to derive agricultural income. Though the Income Tax Department had rejected this claim, the Income Tax Appellate Tribunal had, in its 2006 order, treated 90 per cent of the firm’s income as agricultural income. However, the High Court pointed out that the entire terms of the agreement would indicate that the foundation seeds grown by the farmer would be purchased by the firm at the end for a certain price, provided the seeds fulfilled the specifications as per the agreement. “It (agreement) is nothing short of a fertile womb being offered by a surrogate mother for the growth of a child of someone else. The assessee firm supervises and oversees the sowing, cultivation right from the process of sowing till the end to get the qualified foundation seeds to carry on its trade in selling certified seeds. The firm also provides scientific advice. However, the firm is not carrying out any of the normal activities of farming. “Such input or scientific method in giving advice to the farmer cannot be termed as either basic agricultural operation or subsequent operation ordinarily employed by the farmer or agriculturist.
If the basic operations of agriculture are not carried out by the firm, then the harvested foundation seeds purchased by it, and converting them to certification seeds also cannot be termed as an integral part of the foundation activity of agriculture,” the court said while treating the entire income generated by the company as business income.
- Advanta India Ltd., Bangalore vs Assessee, 10 May 2012 (ITA No. 819 & 820/Bang/2010): If we examine the operations carried out by the assessee in the previous year relevant to the assessment year in appeal, we find that the production of Basic Seeds as well as Hybrid Seeds is the result of basic agricultural operations carried on by the assessee company on its own land as well as on leasehold land. The method of contract farming does not take away the character of the basic operations carried out by the assessee company, which are agricultural in nature. The assessee company procures germplasm and sows it in its own fields, carries on all agricultural operations, and produces the Basic Seeds. The Basic Seeds so harvested are again put through agricultural operations intimately connected with leasehold land to finally bring out the Hybrid Seeds. Merely because the Basic Seeds are sown in leasehold land and the manpower required is arranged through contract farming, it does not mean that the operations carried out by the assessee company are not agricultural operations. As a matter of fact, the assessee company has carried out basic as well as secondary agricultural operations. Therefore, without any fear of contradiction, it is possible for us to hold that the entire such income of the assessee is agricultural in nature and is to be excluded from the computation of total income.
- Does interest on arrears of rent qualify as agricultural income and will this be exempt from tax?
- Sometimes, a tenant could slip up on rent or revenue payments (either in cash or kind) and have to pay arrears. If the landlord charges interest on such arrears, the income would not be considered as an agricultural income, but would be deemed income by way of interest and would, hence, be chargeable to tax. While ‘rent’ presupposes periodical and pre-determined payment (either in cash or kind), ‘revenue’ implies a sharing arrangement that depends on the actual agricultural produce. In either case, ownership of agricultural land or interest in such land is essential, which means, the owners of agricultural land, tenants who are given a sub-lease, and people who are mortgagees of agricultural land, all enjoy tax-free agricultural income.
- If agricultural produce is processed to make it marketable at a place other than the agriculture land, then the amount charged for such processing will be an agricultural income or not?
- Any processing done on agricultural produce to make it marketable is a part of agricultural operations, and the amount recovered for such processing will be treated as agricultural income. For example, threshing of wheat, mustard, etc. is part of agricultural operations, and the amount recovered will be treated as agricultural income, no matter whether the processing takes place on the land itself or at some other place. But in certain cases like tea, coffee and sugarcane, where major processing (a change in the very nature of the product) is being done, some part of the income from the processed produce (tea, coffee & sugar) is taxed as non-agricultural income and the rest is exempt as agricultural income.
- What if agriculture operation is carried on urban land?
- If agricultural operations are carried out on land, either urban or rural, the income derived from sale of such agricultural produce shall be treated as agricultural income and will be exempt from tax.
- If any industrial organization grows crops and sells half of the produce as raw material in the market and remaining (further processed) as finished goods, what will be the tax treatment?
- Agricultural income is exempt from income tax. It does not matter whether the agricultural operations are done by an industrial organization or an individual. If any industrial organization grows crops and sells half of the produce as raw material in the market and the remaining (further processed) as finished goods, the income earned on the first half of the produce (sold in the market as raw material) is totally exempt from tax. In the case of the remaining produce which is further processed, a scheme of presumptive taxation is applicable. Rules 7, 7A, 7B & 8 of the Income Tax Rules deal with such types of income. Rule 7A deals with income from the manufacture of rubber, 7B deals with income from the manufacture of coffee and Rule 8 deals with income from the manufacture of tea. Rule 7 says that in cases where income is partially agricultural in nature and partially from business, the market value of the agricultural produce which has been raised by the assessee or received by him as rent in kind, and which has been utilised as a raw material, shall be deducted from the sale receipts and will be treated as agricultural income. The remainder will be considered as non-agricultural income.
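To make the mechanics of Rule 7 and the presumptive Rules concrete, here is a minimal sketch. The function names are ours, and the percentage shares shown for tea, rubber and cured coffee are the commonly cited presumptive splits; treat them as indicative only and verify against the current text of the Rules before relying on them.

```python
# Rule 7 (general): deduct the market value of the agricultural produce used
# as raw material from the sale receipts; that market value is agricultural
# income, and the remainder is business income.
def split_income_rule7(sale_receipts, market_value_of_produce):
    agricultural = market_value_of_produce
    business = sale_receipts - market_value_of_produce
    return agricultural, business

# Rules 8, 7A, 7B (presumptive): a fixed share of the composite income is
# treated as business income; the rest is exempt agricultural income.
# Shares below are the commonly cited ones -- verify before use.
PRESUMPTIVE_BUSINESS_SHARE = {
    "tea": 0.40,           # Rule 8
    "rubber": 0.35,        # Rule 7A
    "coffee_cured": 0.25,  # Rule 7B (coffee grown and cured)
}

def split_income_presumptive(composite_income, product):
    business = composite_income * PRESUMPTIVE_BUSINESS_SHARE[product]
    agricultural = composite_income - business
    return agricultural, business
```

On this reading, a composite income of Rs. 1,00,000 from growing and manufacturing tea would split into Rs. 60,000 of exempt agricultural income and Rs. 40,000 of taxable business income.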
- In my agriculture farm, I have 5 cows in Pune (Maharashtra). The product being milk is the main produce, and not a byproduct. Is this income an agriculture income or a taxable income? (This milk is sold to dairy product plant in nearest Co-op Society).
- No, income from dairy farming is not agricultural income.
- Why rent on land is treated as agricultural income?
- Rent received from agricultural land used for agricultural purpose is treated as agricultural income. This is prescribed by the law.
- Can Interest on Crop Loan be claimed as an exemption?
- The interest earned on a crop loan cannot be claimed as exempt by the provider of the loan, since the relaxation that ownership of land is not essential holds true only if the assessee has an interest in the land. The provider of the loan may not have an interest in the land because it may be his ordinary business to provide crop loans. However, the farmer to whom the crop loan is provided can claim the interest paid as a deduction while computing his tax liability.
- If an assessee sells the fruits of the trees planted by him around his home, will the income so earned be agricultural income?
- The trees planted by him should be on a land which can be classified as an agricultural land by fulfilling the conditions mentioned earlier in this article. If the land is agricultural, then the income earned by selling of fruits can be treated as agricultural income.
- I have taken certain agricultural land on lease and crops have been grown on the said land for many years. Now the said land, along with the growing crops, has been acquired by the Govt. The Govt. paid separate compensation for the land and the crop. Is the compensation received in lieu of the crop agricultural income or not? Note also that the assessee has not reinvested the amount received as compensation against the crop in agricultural land.
- The compensation paid for the crops by the Govt. can be considered to be as good as income earned by purchase of standing crop, which is not an agricultural income. Hence the compensation against crop is taxable in the hands of receiver of the compensation.
- Whether income earned from export of agricultural produce is exempt from income tax?
- The conditions for considering the income as agricultural in nature have to be satisfied if the agricultural produce has to be exempt from income tax. Middlemen dealing in trade of agricultural produce are generally not entitled to exemption due to lack of satisfaction of the conditions.
- I have an income of Rs.1,45,000 from my business and an agricultural income of Rs. 8,40,000. Do I need to file the return of income?
- The process of computation of tax liability is followed only if the assessee’s non-agricultural income is in excess of the basic exemption slab. In this case, the income from business of the assessee is lower than the basic exemption limit. However, a return still has to be filed to disclose the agricultural income.
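The partial-integration scheme behind this answer can be sketched as follows. The slab rates here are made-up illustrative figures, not current law; only the mechanics matter: agricultural income raises the applicable rate only when non-agricultural income exceeds the basic exemption and agricultural income exceeds Rs. 5,000.

```python
def slab_tax(income, basic_exemption=250000):
    # Illustrative slabs (assumed, not current law): nil up to the basic
    # exemption, 5% on the next 2.5 lakh, 20% thereafter.
    tax = 0.0
    if income > 500000:
        tax += (income - 500000) * 0.20
        income = 500000
    if income > basic_exemption:
        tax += (income - basic_exemption) * 0.05
    return tax

def tax_with_partial_integration(non_agri, agri, basic_exemption=250000):
    # Partial integration applies only if both thresholds are crossed.
    if agri <= 5000 or non_agri <= basic_exemption:
        return slab_tax(non_agri, basic_exemption)
    # Tax on the aggregate, minus tax on (agricultural income + exemption):
    combined = slab_tax(non_agri + agri, basic_exemption)
    notional = slab_tax(agri + basic_exemption, basic_exemption)
    return combined - notional
```

With the figures from the question (Rs. 1,45,000 of business income and Rs. 8,40,000 of agricultural income), the non-agricultural income is below the basic exemption, so the computed tax is nil, though the return must still disclose the agricultural income.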
- An assessee wants to buy farms which bear coconut trees, on a lease for a period of one year. State whether sale of coconuts is said to be an agricultural income or not?
- The land on which the coconut trees are planted should be an agricultural land which can be classified by fulfilling the conditions mentioned earlier in this article. If the land is agricultural, then the income earned by selling of coconuts can be treated as agricultural income.
- I had sold an agricultural land in a rural area, which is outside jurisdiction of the Municipal Authority. Whether the sales proceeds are exempt or taxable?
- The scope of agricultural income excludes the revenue which is earned by transfer of agricultural land not falling under the definition of Capital assets u/s. 2(14). By definition of a capital asset under Section 2(14), an agricultural land in an area falling out of jurisdiction of the Municipal Authority (which has a population of more than 10,000), is not a capital asset. Section 10(37) allows income from transfer of such a land to be classified as a capital gain via clause (i). Under Section 54B, a capital gain arising out of this transaction will be exempt provided the conditions (mentioned earlier in this article) are satisfied.
- Is receipt from sale of rubber trees an agricultural income?
- Yes, receipt of sale of rubber trees is an agricultural income if the conditions for land being agricultural in nature are satisfied.
This document is meant for the recipient for use as intended and not for circulation. The information contained herein is from the public domain, company published data or sources believed to be reliable. The information published is analyzed by the respective analyst publishing the report. The data contained herein doesn’t represent any view that is intended to influence any decision making by the person reading the content of this report. We do not guarantee the accuracy, adequacy or completeness of any Data in the Report and is not responsible for any errors or omissions or for the results obtained from the use of such Data.
CA. Khyati B. Vasani
Vasani & Co.
Level 5 ,Vini Elegance, Above Tanishq,
L. T. Road, Borivali (W), Mumbai – 400 092.
Tel: (022) 2899 8888 (Multiple Lines)
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9148106575012207,
"language": "en",
"url": "https://urbanwired.com/business-process-analysis/",
"token_count": 1299,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.055908203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:dc7062cf-4abb-4a6c-9429-ac866f2c1afa>"
}
|
Business process analysis (BPA) is a complex of actions aimed at the examination of a business with a view to study the procedures and make the operations more efficient and effective. It reveals the processes used, people participating, data shared and documents created.
The primary goal of BPA is to improve processes by eliminating the activities that don’t add any value to production of goods and services. Taking into consideration that processes influence the business performance significantly, following that approach is a better way to increase competitiveness of an enterprise, satisfy needs of customers and increase sales.
According to the methodology of BPA, the main object of its investigation is an operation consisting of procedures or processes, which transform initial resources (inputs) into final products (outputs). Inputs comprise raw materials, labor force, equipment, etc. Outputs mean goods and services, as well as products that act as inputs for another process.
Stages of Business Process Analysis
The initial step to optimize a process is its examination to find out the operations, their interrelationships and indicators of their productivity. In this sense, BPA usually includes the following stages:
- Defining the scope of the process with exact start and end points to distinguish it from other processes.
- Constructing the process flow chart that will show all the operations within the process and their interconnections.
- Determining the input of each phase into the process. Calculating the corresponding values (e.g. KPI).
- Identifying the bottlenecks representing the phase that gives the smallest contribution to the process.
- Evaluating other limitations to find out the extent to which the bottleneck influences the process.
- Making decisions on the basis of the analysis to optimize the process.
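The bottleneck-identification stage can be illustrated with a minimal sketch; the stage names and throughput figures below are invented for the example:

```python
# Measured throughput of each stage of a process, in units per hour.
stages = {
    "receive order": 40,
    "pick items": 25,
    "pack": 30,
    "ship": 35,
}

# The bottleneck is the stage with the lowest throughput: it caps the
# output of the whole process, so improving any other stage adds nothing
# until this one is improved.
bottleneck = min(stages, key=stages.get)
process_throughput = stages[bottleneck]
print(bottleneck, process_throughput)  # pick items 25
```

Here "pick items" contributes the least, so optimization effort should start there, exactly as the stages above prescribe.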
Benefits of BPA
Business process analysis helps in obtaining the following results:
- Documentation of all the processes. As a result of business process analysis, an organization gets all its operations documented, including implicit activities not reflected in process manuals. This side effect affords grounds for elaborating more precise job descriptions for employees. Also, additional functional instructions can be implemented so that equipment is used more effectively during its operation.
- Identification of problematic areas slowing down the production process, such as fulfillment of excessive and repetitive actions or filling up unnecessary forms.
- Revelation of opportunities for optimization, such as reducing transaction costs, improving quality control, etc.
Having conducted analysis and improvement of business processes, an enterprise can achieve better performance indicators including:
- Higher productivity;
- Better quality of goods and services;
- Reduced costs;
- Lower error rates and number of problems arising within working operations;
- Reduced delays of projects;
- Less cycle time.
Moreover, regular process analysis necessarily leads to more effective planning and controlling of all the business projects.
Business Process Analysis Tools
BPA tools are techniques used by organizations to document and analyze business processes. Through the use of different models, BPA tools allow end users to obtain visual representation of existing operations, so that decision makers can assess their scalability, level of automation and efficiency.
The basic BPA tool is a flow chart – a diagram illustrating the sequence of operations within a process.
Rectangles in a chart usually mean tasks, triangles or rhombs represent inventory, circles or ovals imply data storage, arrows signify flows, which cover both material and information streams.
Nowadays, modeling of processes can be carried out directly in CRM systems, which are usually used in large organizations. For that purpose, the bpm’online system contains a special feature called the Visual Process Designer, with the help of which core business processes can be automated. The system lets you create a flow chart of various activities using the templates available in the program. Using the Designer, you can plan tasks, streamline workflow, and organize work with external service providers.
Besides the flow chart method, some other tools can be used, such as:
- Failure Mode Effects Analysis (approach for identification of any process failures, examination of the consequences and elimination of their reiteration in the future);
- Mistake-detecting (use of techniques that exclude the occurrence of errors altogether or allow an error to be spotted right after it happens);
- Spaghetti Diagram (a visual illustration of the full path of an activity within a process to identify overlapping in operations and to look for opportunities to speed up process flow).
Role of Business Process Management (BPM) Software
There are also a lot of BPA tools used in IT. Such tools are usually an integral part of BPM software and allow managers to analyze processes on the base of diagrams, statistics, performance indicators calculation, benchmarking, modeling and other methods. Using these resources, business leaders make more accurate decisions, which either maintain current business processes or change them to improve future operations.
Business process management software plays an important role in documentation of multiple business processes in enterprises with a complex architecture helping to trace value chains to achieve better business performance.
The interface of bpm’online package also allows conducting business process analysis in several aspects. You can become familiar with a brief description of the analytical functions by the following link: https://www.bpmonline.com/crm/business-process-analysis. Should you have any issues related to the usage of the BPM tools, please feel free to ask questions on the page of the official community. All you need to do is to describe your problem with the business process management system precisely, and the specialists will answer you shortly.
To keep up with the fast development of technology, organizations (especially those with complex processes) should have progressive BPA tools. Modern BPM systems offer business cloud technology, which gives managers an opportunity to audit all the processes accurately and to expose problem areas in a timely manner, owing to the possibility of seeing process results reflected in real time. The visual reporting of such BPM systems also helps business leaders to expose process bottlenecks, resource overuse and other performance inefficiencies.
- Everything you wanted to know about BPM system
- 5 BPM Software Tools That Can Help Your Business Stay Ahead Of The Competition
- 5 business process software tools to auto-pilot your company operations
- 5 top-notch BPM software tools to leave your competitors behind
- BPM prices: why are they different
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9645385146141052,
"language": "en",
"url": "https://interestingengineering.com/a-closer-look-at-hydrogen-fuel-cell-truck-projects",
"token_count": 1817,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.017578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:09450976-bf45-444d-9a40-efb35750d0b6>"
}
|
Electric vehicles are growing significantly in market share, up to about 2.5% globally in 2019. However, with EV sales expected to increase significantly in coming years, questions still remain around their functionality in a variety of areas.
For example, until charging networks are as common as gas stations, charging electric passenger vehicles on long trips will continue to be a hassle. Compounding on this issue is the length of time it takes to charge electric vehicles. While supercharging and fast charging are getting faster all the time, it's still not as fast as simply filling up a tank.
Both of these issues, involving time and charging availability, are central in the current debate around which types of commercial vehicles are practical for electrification. The trucking and commercial industries need long-haul vehicles that can fill up fast with ease. Electricity isn't there yet, but Hydrogen fuel cells may be the answer.
Hydrogen fuel cells convert hydrogen gas into electricity onboard the vehicle, releasing it through a chemical reaction in the cells. This allows the vehicles to refuel in a more traditional way — at a pump. This solves one of the problems surrounding electrification, the speed at which vehicles can be filled up.
However, anyone that has been to a gas station lately likely realizes that hydrogen isn't yet available at your neighborhood pump station. Still, this problem can be solved more easily than adding electric charging stations, in that hydrogen is a shippable and storable product, much like traditional petrol, whereas constructing electric charging stations requires significant investment in infrastructure.
So, is hydrogen the golden goose of alternative fuels for the trucking industry? Perhaps. But first, we need to understand the technology in greater detail.
What are hydrogen-electric trucks?
Toyota, Honda, Hyundai, and other automakers have all invested in hydrogen fuel cell medium and heavy-duty vehicles. This particular segment of the industry used to be one dominated only by startups with lofty goals (such as the now struggling Nikola), but it now finds itself rife with big investment from legacy automakers.
Fuel cells allow trucks to run on electric motors, benefiting from instant torque and a scarcity of moving parts. They offer a relatively long range and allow trucks to be filled up rapidly, in a traditional manner, at a pumping station. And finally, the only exhaust that exits a hydrogen fuel cell vehicle is water.
In essence, hydrogen fuel cells make up quite possibly the perfect middle child to bridge fossil fuels and electrified vehicles.
A company by the name of US Hybrid Inc., a California-based company, has been developing fuel cells for heavy-duty trucking applications for some time now. In fact, the company has started licensing and selling their fuel cell engines commercially, to clients like the U.S. Air Force, vehicle startup Faraday Future, and several public transit agencies.
Vehicle startup Nikola Motor Co. is developing the Nikola Two semi, a hydrogen-fuel-cell truck that can produce 1,000 HP, and travel for up to 750 miles (1,207 km) on one tank of hydrogen.
Startups like Nikola might also help advance the adoption of hydrogen fuel cell technology, as one of their goals is to develop a hydrogen fueling network across the US, something that is notably lacking today. In fact, most hydrogen vehicles are in use in California because this state has the majority of hydrogen fueling stations in the US. Outside of California, there's one station in Connecticut, and the vast expanse between these two states is more or less devoid of anywhere to fill up on hydrogen.
Toyota is developing Project Portal, a class 8 truck that is powered by a hydrogen fuel cell. The truck is being tested in California with the primary goal of using it to shuttle containers between the ports of Los Angeles and Long Beach, a 70-mile (112 km) journey.
Honda has also recently partnered with Isuzu on fuel-cell development.
With all of these companies investing in the tech, what actually sets hydrogen fuel cells apart and how do they work?
Hydrogen fuel cell engineering
Fuel cells are composed of cathodes, anodes, and an electrolyte membrane. By passing the hydrogen fuel through the anode and oxygen through the cathode, the molecules can be broken down into electrons and protons with the help of a catalyst. The protons then pass through the electrolyte membrane and the electrons are forced through a circuit, which generates a current. On the cathode side; the protons, remaining electrons, and oxygen then combine to create water molecules. This water is the only emission from fuel cell vehicles.
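Written out, the half-reactions described above (for a proton-exchange-membrane cell) are:

```latex
\begin{aligned}
\text{Anode:}   &\quad 2\,\mathrm{H_2} \rightarrow 4\,\mathrm{H^+} + 4\,e^- \\
\text{Cathode:} &\quad \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2O} \\
\text{Overall:} &\quad 2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O}
\end{aligned}
```

The electrons forced around the external circuit at the anode are what drive the motor; the water formed on the cathode side is the only exhaust.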
The best part about fuel cells is that there are no moving parts, they operate silently, and they are incredibly reliable. The downside, at least currently, is that the process of converting hydrogen into electricity within a fuel cell is inherently inefficient. Currently, only about 30% to 50% of energy actually makes it to the wheels of a fuel-cell vehicle because of the energy needed to produce the hydrogen.
As for the parts in fuel cell vehicles, the unique components include the fuel cell stack, which is a grouping of electrode membranes that make up the fuel cell. There are also hydrogen fuel tanks that store hydrogen gas for the fuel cell to use. There are several power and thermal control systems required as well. There's the power electronic controller, which manages the flow of electricity from the fuel cell to power the electric motors for the wheels. There's then the thermal system that maintains ideal operating temperatures of the fuel cells and the motor.
Of course, there are also batteries onboard fuel cell vehicles to store excess energy from the fuel cell, along with energy created from regenerative braking.
The biggest thing you might notice about the setup of fuel cells and fuel cell vehicles is the lack of moving parts. Most of the processes that occur in fuel cells are chemical in nature, meaning that there are fewer things in the car that can mechanically break or need repairs compared to traditional internal combustion engine (ICE) vehicles.
Advantages of the technology
We've already gone through several different advantages of hydrogen and how it can serve as a sturdy bridge into the world of electrification, but there's more to the story.
We have touched on this already, but one of the biggest selling points of fuel cells is their emissions, and not just from a strictly environmental perspective. From a marketing perspective, being able to tell customers that the only emission from their vehicles is pure water is a strong selling point.
Hydrogen fuel cells also work in extreme conditions, even in very cold environments, a place where battery-only vehicles struggle.
The efficiency of hydrogen fuel cells is around 30 to 50 percent, according to the U.S. Department of Energy, which is significantly worse than battery-powered cars, which are 70-80% efficient. These efficiencies are measures of how much of the potential energy in a fuel the engine or cell is able to convert into usable power.
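To see where that gap comes from, multiply out the losses along each pathway. The step efficiencies below are rough illustrative figures chosen to land inside the ranges quoted in the article, not measured values:

```python
from math import prod

# Hydrogen pathway: electricity -> hydrogen -> electricity -> wheels.
# Each value is an assumed, illustrative step efficiency.
fcev_steps = {
    "electrolysis": 0.70,
    "compression_and_transport": 0.85,
    "fuel_cell": 0.55,
    "motor_and_drivetrain": 0.90,
}

# Battery pathway: electricity -> battery -> wheels.
bev_steps = {
    "charging": 0.90,
    "battery_round_trip": 0.95,
    "motor_and_drivetrain": 0.90,
}

fcev_total = prod(fcev_steps.values())  # roughly 0.29
bev_total = prod(bev_steps.values())    # roughly 0.77
```

Chaining even modest per-step losses compounds quickly, which is why the hydrogen pathway ends up near the bottom of the 30-50% band while the battery pathway sits in the 70-80% band.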
Scalability is also being debated. Because there's already a gasoline fuel station every few miles globally, there's infrastructure in place that can be adapted for hydrogen. However, installing hydrogen fuel cell infrastructure nationally could be expensive compared to the expansion of the EV charging infrastructure, mainly because there is already an electrical grid in place in most areas. It is also possible to charge EVs at home, whereas hydrogen vehicles would require a trip to a refueling station.
Hydrogen fuel cells themselves, being quite adaptable, do allow for a variety of use cases, from small vehicles to large ones. A small hydrogen fuel cell for use in a lawnmower has the same basic operation as a larger one that would be used in a semi truck.
So, why isn't hydrogen more popular? The answer to that is multi-pronged. Currently, the vast majority of hydrogen produced in the United States occurs through the process of steam reforming. This process utilizes natural gas to produce hydrogen, and that's a fossil fuel. The hydrogen fuel would also need to be transported to the pumping stations. So in essence, hydrogen fuel cell vehicles right now just offload their carbon output to the hydrogen manufacturing and transport process.
There are ways to produce green hydrogen through the process of electrolysis, which breaks water down into its base elements (oxygen and hydrogen) utilizing an electric current. If this current is supplied by renewables, the hydrogen produced becomes quite green. However, this process isn't as scalable, due to the lack of wide-scale renewable proliferation in the US, and it's also not as cost-effective, making the fuel more expensive to purchase.
Hydrogen, as it currently stands, comes from predominantly non-green sources and is about 50 percent more expensive than gasoline. It's this non-green origin of hydrogen and the cost that keep it from becoming widely adopted, but the fundamentals are there.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9697185158729553,
"language": "en",
"url": "https://nancylawrenceregency.com/2018/04/16/u-s-taxes-and-the-bank-of-england/",
"token_count": 564,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.07763671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d76594af-8ce6-4eab-941c-570beb737c33>"
}
|
I’m thinking about money today, because I’m getting ready to file my income tax return for 2018 (the deadline for filing is tomorrow here in the U.S.).
As I bid farewell to an admittedly small amount of money that I have to pay with my return, I was reminded (by my “On This Date” calendar) that today is the birthday of Charles Montague, first Earl of Halifax and founder of the Bank of England. He was born on this day in 1661.
My small neighborhood bank (which temporarily holds the money I’ll be paying to the IRS) is all steel and glass. It simply doesn’t have the imposing presence the Bank of England had in Regency London.
The images in this post show how the Bank appeared in Jane Austen’s time, although Jane was never a customer of the Bank of England. Instead, she deposited her hard-earned money at Hoare’s Bank in Fleet Street.
Whenever I decide to visit my money in person, I go to my local branch, where the first thing I see on entering is a wide open area, containing neat rows of desks and a line of teller windows. By contrast, here’s the Bank of England’s Doric Vestibule, as it appeared in 1803.
My poor little neighborhood bank simply cannot compete with the Bank of England. The Bank’s magnificent exterior, the Great Hall, the vast Rotunda—they were all designed by architect Sir John Soane to portray wealth and elegance. It was an imposing building, meant to inspire trust and awe.
I think Mr. Soane accomplished his purpose. Here’s a bird’s-eye-view drawing of the Bank of England after it underwent an expansion in 1810, under Mr. Soane’s direction. After the expansion, the Bank of England covered over three acres of prime London real estate. In the drawing you can see the various courts and interior buildings contained within the Bank’s impressive outer walls.
It would be a treat to tour the Bank of England as John Soane designed it. Unfortunately, the magnificent Bank of England was remodeled in 1933 by architect Sir Herbert Baker. In the remodel process, much of the original building was demolished, which, according to architectural writer Nikolaus Pevsner, was “the greatest architectural crime in the City of London, of the twentieth century.”
Despite that, I’d still like to see the Bank of England, and I hope to do so one day. Whatever it looks like now, I have a feeling its design is more inspiring than the dreary but efficient steel and glass design of my little neighborhood bank.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9153249859809875,
"language": "en",
"url": "https://privpapers.ssrn.com/sol3/papers.cfm?abstract_id=957148",
"token_count": 373,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.076171875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ecb223ea-268f-45bb-8a0e-93e2569e245e>"
}
|
HANDBOOK OF THE PHILOSOPHY OF SCIENCE: PHILOSOPHY OF ECONOMIC, pp. 641-690, Uskali Mäki ed., Amsterdam: Elsevier, 2012
76 Pages Posted: 15 Jan 2007 Last revised: 6 Jul 2011
Date Written: January 14, 2007
Behavioral economics is the effort to increase the explanatory and predictive power of economic theory by providing it with more psychologically plausible foundations. Behavioral economics, which recently emerged as a bona fide subdiscipline of economics, raises a number of questions of a philosophical, methodological, and historical nature. This chapter offers a survey of behavioral economics, including its historical origins, results, and methods; its relationship to neighboring fields; and its philosophical and methodological underpinnings. Our central thesis is that the development of behavioral economics in important respects parallels the development of cognitive science. Both fields are based on a repudiation of the positivist methodological strictures that were in place at their founding and a belief in the legitimacy of making reference to unobservable entities such as beliefs, emotions, and heuristics. And both fields adopt an interdisciplinary approach, admitting evidence of many kinds and using a variety of methods to generate such evidence. Moreover, there are in fact more direct links between the two fields. The single most important source of inspiration for behavioral economists has been behavioral decision research, which can in turn be seen as an integration of ideas from cognitive science and economics. Exploring the parallels between the two endeavors, we attempt to show, can shed light on the historical origins of, and the specific form taken by, behavioral economics.
Keywords: Behavioral Economics, Cognitive Science, History, Philosophy, Methodology
JEL Classification: B20, B40, D00, D60
Suggested Citation: Suggested Citation
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.971408486366272,
"language": "en",
"url": "https://weeklygravy.com/healthfitness/want-to-get-fit-make-money-your-motivator/",
"token_count": 521,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1962890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:55a00740-e127-4b69-86e6-2caf78945e7d>"
}
|
Researchers at the University of Pennsylvania School of Medicine in the U.S. have been examining whether financial incentives may help increase the levels of physical activity among overweight and obese adults.
In a new study, 281 participants were each given the goal of reaching 7,000 steps per day for a 26-week study period. The average daily step count among American adults is 5,000.
During the first 13 weeks, the participants were assigned to four groups. One had no financial incentive, while the “gain” group received $1.40 for every day the goal was achieved – or $42 per month.
There was also a “lottery” category – where people were entered into a daily draw with a prize that averaged $1.40 each day.
Lastly, there was a loss incentive group, where the participants started with $42 each month, and the researchers subtracted $1.40 for each day the aim wasn’t achieved.
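The gain and loss arms pay out identically per successful day; only the framing differs. A quick sketch (function names are ours, assuming a 30-day month):

```python
def gain_payout(days_met):
    # Gain framing: earn $1.40 for every day the step goal is met.
    return 1.40 * days_met

def loss_payout(days_met, days_in_month=30):
    # Loss framing: start the month with $42 and lose $1.40 per missed day.
    return max(0.0, 42.0 - 1.40 * (days_in_month - days_met))

# The two schedules are economically identical for any number of goal days...
for d in range(31):
    assert abs(gain_payout(d) - loss_payout(d)) < 1e-9
# ...so any difference in behavior between the arms comes from framing alone.
```

The equality of the two schedules is what makes the result interesting: purely psychological framing, not economics, drove the almost 50 per cent improvement in the loss group.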
For the final 13 weeks of the study, participants received feedback on their performance but were not offered any financial incentives.
Participants’ progress was tracked through a mobile app on their smartphones.
Results from the first half of the study showed that offering a daily reward or lottery was no more effective than offering no reward at all.
Participants in those groups only achieved the goal approximately 30 to 35 per cent of the time.
However, those who risked losing the reward they had already been given achieved the goal nearly 45 per cent of the time. This equated to an almost 50 per cent increase over the control group.
Senior study author Dr Kevin Volpp believes the findings demonstrate that the potential of losing a reward is a very powerful motivator.
“(The study) adds important knowledge to our understanding of how to use financial incentives to encourage employee participation in wellness programs,” he said.
The authors noted that 96 per cent of participants were still actively enrolled in the study even three months after stopping incentives, which may have important implications for the role that smartphones could play in deploying these programs on a broader scale.
“Our findings reveal how wearable devices and apps can play a role in motivating people to increase physical activity, but what really makes the difference is how you design the incentive strategy around those apps,” said Dr David Asch.
The authors added that future studies might compare the effectiveness of incentives when combined with other motivators such as team-based designs that rely on peer support and accountability.
Results were first published in the Annals of Internal Medicine.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9644427299499512,
"language": "en",
"url": "https://wittysparks.com/career-opportunities-with-degree-in-finance/",
"token_count": 1341,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.04443359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:81d8139f-0d0b-4e87-8b10-0a6f4743e1bf>"
}
|
Finance is one of the most rewarding fields for people who want strong job prospects and a high earning bracket. A report by the U.S. Bureau of Labor Statistics states that financial analysts earn an average of $84,300 per year, while financial managers earn more than $125,080 per year.
This indicates that finance is a suitable choice for you if you want to have a highly successful career.
Having a degree is necessary if you want to move up the corporate ladder at your company, as it is a prerequisite for most jobs at the managerial level. If you are good with numbers and calculations, then you have a chance to land a lucrative job.
There are numerous advantages to holding a finance degree, some of which are mentioned below:
1. A Wide Diversity of Career Possibilities
One benefit of having a finance degree is that it gives you a lot of options regarding which career path to pursue in the financial world. With the number of options available, you can choose one that fits your skills and that you are passionate about.
Examples of career paths in finance include commercial and investment banking, insurance, hedge and mutual funds, financial planning and so on. One of the best aspects of this is that you can switch between these jobs if you do not like your current one and gain relevant experience in another.
Experience across different roles is valuable, since it builds your standing as a financial expert within the financial world. Finance is a global field, so you will have the chance to work on current trends in the global market, prepare financial reports for individuals and research various corporate tax laws.
One thing to note is that you need to stay constantly alert to new trends in the financial markets so that you can polish your skills, for example by undertaking CACS papers or pursuing the CFA.
2. Earn a Competitive Salary
Earning a high salary is an attractive prospect for those in the early stages of their careers, and not many fields can offer the package you expect from your employer. Luckily, finance is one of those careers that comes with a lucrative salary even at entry level.
The average entry-level salary lies between $55,000 and $73,000 for different kinds of jobs, and your salary may rise drastically as you gain experience in your field. According to Forbes, finance offers some of the highest salaries for recent graduates, and pay rises with continual learning in the field, helping to ensure you a financially stable future.
3. Moderate to Quick Job Growth
Holding a finance degree will not only give you a huge variety of career options in the financial world, but will also give you a high probability of finding a new job if you have to quit your current one for some reason. This is because finance is one of the few fields that has experienced constant growth, with the number of jobs increasing every year.
Managing financial data is crucial if any company wants to be successful. You can rest assured that financial experts will always be in demand at firms that handle enormous financial inflows and outflows every year.
By utilizing your degree effectively, you will face little threat to your job security in the financial world for many years. This is a major advantage, since the field will always need skilled practitioners, in contrast to sectors where technology is reducing the number of employees required.
Organizations will always need a financially apt person to operate the system.
4. Potential to Work in Other Fields
Although a finance degree can lead you to a successful career in any financial institution, you can always choose to work in another sector if you wish. Some of the fields where you can utilize your financial knowledge are real estate, non-profit organizations, entrepreneurship, etc.
If you feel that working in a financial institution is not the best thing to do, you have the option to work anywhere else, which is a choice that is not offered by every other career major.
For example, if you cannot find a job at the company you desire, you can work temporarily in another field and wait for a vacancy instead of sitting at home unemployed. You can work at any organization that shares your beliefs and principles, or start your own business.
5. Managing Your Own Money as a Benefit of Financial Skills
Although the degree brings several professional benefits, there are personal benefits too. Developing the skills required to budget your money effectively and invest it wisely so that you can increase your wealth is not as easy as it sounds. Not everyone who earns has the skills required for proper budgeting.
Instead of hiring a professional to handle your money, you will have the financial skills required to manage it efficiently yourself. You will also be aware of the latest financial tools available to improve and diversify your investment portfolio.
Saving money and investing in the right instruments will significantly raise your wealth. You will know better which insurance plans suit your family or which stocks pay higher dividends.
6. High Networking Potential
The potential for establishing a quality network is relatively high if you have a finance degree, as it provides useful contacts who can help you advance your business. They can also give you access to information that would not be available otherwise.
Networking is highly recommended if you want a successful career, especially in finance. This is because people will only advise you if they believe that you are credible and have firm knowledge of your respective field, which in this case is finance.
The Wrap Up
A finance degree brings numerous benefits, both professional and personal, as you will be entering a profession that offers many opportunities to succeed and enjoy a comfortable career. In the financial world you will constantly experience growth in your job, have a chance to earn a lot of money, and work in a competitive environment where your skills are judged on merit.
You will also get the chance to work in a variety of other fields, gaining experience that will help you later in your career and significantly increase your wealth. Although it is difficult to choose a degree that will be beneficial for you, a focus on finance will go a long way toward a prosperous career and a comfortable lifestyle.
Featured image source: Freepik
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9610036015510559,
"language": "en",
"url": "https://www.creditcards.com/credit-card-news/buy-crypto-with-credit-card/",
"token_count": 1283,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.283203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ef118e8d-61e6-40d3-a3d1-855c525cd46d>"
}
|
Many Americans are intrigued by the idea of investing in Bitcoin and other cryptocurrencies, but doing so with a credit card is risky. Read on as experts weigh in on the pros and cons of charging your cryptocurrency purchases.
A recent study by global investment platform eToro found 43 percent of millennial online traders trust crypto exchanges – platforms where you can buy and sell cryptocurrencies — more than the U.S. stock exchange. Also, 71 percent of millennials who don’t invest in cryptocurrencies would do so if a traditional bank offered the option.
With crypto sparking so much interest, some exchanges offer the convenience of buying cryptocurrencies with a credit card.
However, some experts say “Not so fast.”
Investing in cryptocurrency: What you should know
Cryptocurrencies are virtual currencies that can be used to pay for goods and services just as you might use dollars, euros or pesos. They fluctuate in value, so if you buy a cryptocurrency there’s a chance it can rise in value, making you a lot of money, or it could lose value, leaving you in possession of something worthless.
At its height, one Bitcoin – the most well-known cryptocurrency – was worth $20,000; today the price hovers in the $5,900 range. When you buy cryptocurrencies, they are stored and tracked in a digital wallet until you’re ready to sell them. A “blockchain” is a record of all cryptocurrency transactions that are made.
Unlike other currencies, cryptocurrencies aren’t backed by any governments; nor are they regulated by financial institutions. In fact, some governments have warned their citizens about the risks of investing in cryptocurrencies for that very reason.
However, some disagree, pointing out that the speculative nature of cryptocurrencies makes them an appealing component of a long-term investment strategy.
“Cryptocurrencies are very volatile; however, this also means they have a good chance to appreciate,” says Kirill Bensonoff, a crypto advocate and entrepreneur who has been an active member of Boston’s blockchain community. “If you have an appetite for risk, crypto should be in your portfolio.”
If you decide to buy cryptocurrencies, you’ll need to find an exchange, many of which let consumers pay with a credit card. If you’re thinking of going that route, here’s what you should know.
Finding a crypto-friendly credit card
While crypto exchanges such as Coinmama, CEX.IO and Bitstamp let consumers use a credit card to buy cryptocurrencies, finding a credit card issuer in the U.S. that will let you buy them is another matter.
Last year, several of the biggest card issuers, including Bank of America, Chase, Citigroup, TD Bank and Capital One all banned the purchase of cryptocurrencies via their credit cards. Wells Fargo joined the list last June and Discover’s then-CEO David Nelms last year cited fraud concerns as one of the reasons the card issuer has banned crypto purchases using credit cards.
However, you may have better luck with some smaller banks or credit unions that have not officially banned the practice of buying cryptocurrencies with a credit card. American Express allows you to purchase cryptocurrencies, but in very limited circumstances, says representative Melissa J. Filipek.
“In a similar way our card members can link their card to certain digital wallets, they can also link their U.S. consumer cards to an Abra wallet and load a modest amount of money,” Filipek says. “The limit is $200 a day, up to $1,000 a month. Abra’s wallet can, in turn, be used to purchase Bitcoin in U.S. dollars.”
Weighing the pros and cons
If you’re thinking about investing in anything with a credit card, it’s important to go in with your eyes wide open, experts say. The U.S. Securities and Exchange Commission in 2018 issued an alert warning investors that using a credit card to invest comes with a number of risks. For example, some scammers pressure investors to use credit cards to fund their investments, the SEC says.
The pros of buying cryptocurrencies with a credit card include being able to invest regardless of how much cash you have on hand and being able to take advantage of rewards earned through your spending.
However, there are many downsides.
For one, the interest owed if you don’t pay off the balance at once could eat into your investment returns. If your credit card issuer charges a transaction fee, that too could take away from your profits. Then there is the possibility of damage to your credit score if you find yourself unable to pay off the balance or make payments on time.
Some may think they can avoid credit card interest by using a 0 percent promotional offer, but that too can be problematic, says Melinda Opperman, executive vice president at Credit.org.
“You would need the investment to pay off quickly before introductory rates expire and you’re hit with those high rates,” Opperman says.
Another drawback is that a lot of crypto exchanges charge hefty service fees, Bensonoff points out. For example, CEX.IO charges an extra 2.99 percent to make crypto purchases using a credit card, while Bitstamp charges 5 percent on top of whatever your card issuer might charge.
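To see how quickly these surcharges add up, here is a hypothetical back-of-the-envelope calculation. Only the 2.99 per cent exchange fee comes from the article; the 5 per cent card fee, 25 per cent APR and six-month carried balance are assumptions chosen purely for illustration:

```python
# Hypothetical cost of a credit-card crypto purchase. Only the 2.99% exchange
# surcharge is from the article; the card fee, APR and months carried are
# illustrative assumptions.

def total_cost(purchase: float, exchange_fee: float,
               card_fee: float = 0.05, apr: float = 0.25,
               months_carried: int = 6) -> float:
    """All-in dollar cost: percentage surcharges up front, then simple
    (non-compounding) monthly interest on the full balance."""
    charged = purchase * (1 + exchange_fee + card_fee)
    interest = charged * (apr / 12) * months_carried
    return round(charged + interest, 2)

# $1,000 of Bitcoin: roughly $1,215 all-in under these assumptions, so the
# coins must appreciate over 21% just for the buyer to break even.
print(total_cost(1000, 0.0299))
```

Under these assumed numbers, fees and interest alone consume more than a fifth of the stake before any price movement, which is the arithmetic behind the warning that interest and fees eat into investment returns.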
There are also scams surrounding cryptocurrencies. According to crypto security firm Ciphertrace, crypto-criminals made off with more than $356 million from exchanges in the first quarter of this year.
Last year, the Tennessee Department of Commerce & Insurance warned consumers there is no guarantee a cryptocurrency will increase in value and to beware of cryptocurrency investment opportunities that are being aggressively marketed through social media.
Bottom line: Investing is risky by nature. Do you want to risk your credit profile along with your money?
“If you put a short ticking clock on your investments by gaming the credit card system, you’re setting yourself up for painful failure,” Opperman says.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9477807283401489,
"language": "en",
"url": "https://www.eias.org/news/greener-pastures-southeast-asias-potential-for-a-post-covid-19-green-recovery/",
"token_count": 2177,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.051513671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:24212c69-c013-4339-b674-eaf371b32154>"
}
|
Whilst the COVID-19 pandemic has brought many societies and economies to a standstill across the world, it has also given governments an opportunity to assess their nations’ sustainability. The pandemic has shown how consequential one action can be on the rest of the world and how vulnerable our economies can be when faced with unprecedented circumstances. This year, Southeast Asia has been no stranger to this. The larger economies in the region have slumped in the second quarter of 2020 with the Asian Development Bank (ADB) reporting double-digit GDP contraction in Malaysia, Singapore, and Thailand. For many of the states, this has been the worst economic period since the 1997 Asian Financial Crisis, leading the likes of the Philippines and Indonesia to fall into their first recessions in over 20 years.
Almost 11 months into the pandemic, the grave extent of its effects has prompted a reimagining of a Southeast Asian economic strategy to propel growth in 2021. A ‘green recovery’ would encompass not solely sustainability of natural resources and climate resilience but also an inclusive effort from all sections of society. This is even more imperative for this region given that five of the fifteen countries most affected by climate change are members of the Association of Southeast Asian Nations (ASEAN): Myanmar (2nd), the Philippines (4th), Vietnam (6th), Thailand (8th), and Cambodia (12th). However, achieving a green recovery will prove to be a challenge given the differing interests in the region for each of the 10 ASEAN members and the variation in both energy demand as well as infrastructure. As a result, a green recovery will require a pronounced and coordinated effort from all states, even more so given that the ASEAN nations are all falling behind in achieving the UN’s 17 Sustainable Development Goals by 2030; facing notable slow progress in ‘Clean Water and Sanitation’, ‘Sustainable Cities and Communities’ as well as ‘Climate Action’.
The Progress So Far
Before the pandemic, the United Nations’ International Labour Organization (ILO) projected that 14.2 million green jobs could be created in Asia as a whole by 2030, if the region moved towards measures to slow global warming, such as boosting the use of renewable energy or electric vehicles. If investment and policy developments are realised, then Southeast Asia can become an innovative hub of green employment. The 10 countries of the ASEAN bloc have already shown their political willingness in doing so and collectively committed to meeting 23% of their primary energy needs from renewable sources by 2025. However, achieving this will require a collective annual investment of USD 27 billion from ASEAN members.
As one of the wealthier nations with a flourishing green finance hub, Singapore is starting to unveil initiatives that will boost green economies across Southeast Asia. In response to its economic recession and decline in trade, a ‘Green Collar Portal’ initiative has been set up to post jobs in sectors ranging from renewable energy to climate change and sustainable farming. The portal currently has listings in Singapore, Malaysia, and Thailand, with plans to gradually invest in job opportunities in other parts of the region. Considering Singapore’s economic strengths, capacity to build green infrastructure and close links to the region, the Singapore government is well placed to take a principal role in attracting more green and sustainable private financing into ASEAN nations. Given its wealth of experience in the sector, Singapore could take the lead in ASEAN in setting up a platform featuring local power and transport projects available for investment, whilst also conducting transparent Environmental and Social Impact Assessments to monitor the differing effects on each state. Encouraging measures have also been seen in its close neighbour Malaysia, as Prime Minister Muhyiddin Yassin has emphasised sustainable initiatives in the government’s recent 2021 budget announcement. This has included an allocated RM 2 billion for its successful Green Technology Financing Scheme, which encourages the private sector, mainly the manufacturing and services industries, to participate in green technology development and application. The government has also invested RM 2.7 billion in rural infrastructure projects, as well as RM 20 million in a sustainable palm oil certification programme; an industry that has long received international criticism.
For a region that relies so heavily on international tourism as a part of its GDP, hotspots such as Bali and Phuket have been left deserted and devoid of their usual influx of masses. Travel restrictions and border closures have disrupted international tourism, transportation, and trade. According to data from the Bali Tourism Agency, 3,000 workers in the sector have been dismissed from their jobs, with further dismissals expected this year, whilst the provincial government has lost USD 679 million in monthly tourism revenue. The likes of Cambodia and Indonesia have taken it upon themselves to develop a growing ecotourism sector targeted at domestic tourists. The Indonesian town of Nglanggeran has built an ecotourism business around the reforestation of an ancient volcanic area, whilst the community of Tmatboey has fought environmental destruction to become Cambodia’s leading spot for birdwatching. According to Neth Pheaktra, Secretary of State and Spokesperson for the country’s Ministry of Environment, Cambodia earned USD 25 million from ecotourism services during the first nine months of 2020. This is an increase on both 2018 and 2019, and much of this income has come from over 500,000 national tourists visiting the ecotourism areas located in 22 communities in 12 protected areas across Cambodia.
The Importance of Regional Cooperation
Nonetheless, whilst there are evident positives to governments employing domestic strategies, achieving a green recovery may be too large a challenge without a collaborative effort. This depends on several factors. Notably, a basis of grey infrastructure is a critical area to focus on, given the scale of impacts, both positive and negative, that green infrastructure development can have. The shift from grey to green infrastructure can bring with it inequality, as wealthier areas can end up as the sole beneficiaries of green infrastructure, while less developed areas lag behind in the development timeline due to their lack of pre-existing, functioning grey infrastructure.
Given the spectrum of dominant political views in the region as well as the lack of green political parties in national governments, there are varying levels of priority given towards sustainable measures. The lack of coordination amongst government agencies and the private sector in ASEAN is hindering the implementation of renewable energy priorities and policies. For Southeast Asia to establish a combined effort towards a green recovery, there must be a consensus over what sustainable infrastructure projects should look like: ASEAN member states need to agree on common definitions for what are considered “green” or “sustainable” activities and investment strategies. This must be all done whilst taking into consideration the variation in wealth, industry, and levels of urbanisation across the region. Nations such as Cambodia, Laos, and Myanmar are all more vulnerable to the effects of climate change whilst simultaneously lacking in nationwide basic infrastructure provisions. Currently, only about 40 to 50% of the population have access to electricity in Myanmar. While in rural areas, this figure is as low as 20%. National interests across the region differ and these factors do pose the question of whether such emerging nations can afford the privilege of developing green infrastructure in this economically volatile environment.
ASEAN as a network can facilitate a cooperative effort and investment to empower less developed nations. It has already shown its desire to move forward with a green economy through the ASEAN Green Bonds Standards; an initiative through the Asian Capital Market’s Forum (ACMF) that facilitates ASEAN capital markets in accessing green finance solutions to support sustainable regional growth and galvanise investor interest for green projects. This is of particular interest given the need for greater private sector involvement in green infrastructure. Currently, government financing contributes to 90 percent of infrastructure expenditure in Asia, compared to a worldwide average of 40 percent.
The EU’s Role in Climate Diplomacy
Given the position the EU holds as ASEAN’S second largest trading partner as well being the largest provider of Foreign Direct Investment to the ASEAN region, the EU must utilise its position to build on and further develop the range and impact of EU-ASEAN programmes it holds. A key example is the ASEAN Catalytic Green Finance Facility (ACGF), to which the EU contributes 50% of the EUR 1.2 billion mobilised by the facility. Under the ASEAN Infrastructure Fund, the ACGF will provide loans and technical assistance to green infrastructure projects on sustainable transport, clean energy, and resilient water systems. It aims to catalyse private capital by mitigating risks through innovative finance structures. Establishing a reliable clean energy source in the region is also essential given the projected rates of rapid urbanisation and consequently rising rates of energy demand.
In the past, the EU has shown that green diplomacy can have a positive effect on the actions of Southeast Asian states. Concerning Illegal, Unreported and Unregulated (IUU) fishing, Vietnam, Philippines, and Thailand have all made substantial developments to improve their legal framework and implement a more sustainable fishing infrastructure following receiving ‘yellow cards’ from the EU. Since then, the ASEAN Network on Combating IUU Fishing has also been established. Tools such as the Enhanced Regional EU-ASEAN Dialogue Instrument can be utilised to improve the level of ASEAN-EU cooperation and discussion on ecological issues, and we have already seen this during the 2nd ASEAN meeting on combating illegal, unreported and unregulated (IUU) fishing, held in partnership with the EU.
The EU has the leverage to engage with ASEAN on a progressive green recovery. The EU can use its development of Free Trade Agreements (FTAs) with ASEAN nations as a mechanism for green growth and sustainable development, as it has already done with Singapore and Vietnam. The EU is currently holding FTA negotiations with Indonesia and the Philippines and has confirmed its eagerness to resume FTA talks with Thailand. Introducing green measures as part of the agreements would go hand in hand with the European Green Deal and the ASEAN-EU Plan of Action (2018-2022), but the idea of a green recovery needs to be accepted widely across the region before it can be implemented. For Southeast Asia, sustainable development must rise to the top of the policy agenda across all states. Through more equal contributions from both government and private investments in clean energy and resource-efficient infrastructure, this can cater to the demands and needs of future generations coming out of this pandemic.
Author: Jason Fernandez, Junior Researcher EIAS
Photo Credits: Pixabay
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8460397720336914,
"language": "en",
"url": "https://www.emerald.com/insight/content/doi/10.1108/S1049-258520170000025009/full/html?skipTracking=true",
"token_count": 473,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.3125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:848728cb-f57a-401e-b475-549d8b72c069>"
}
|
This chapter assesses the extent to which historical levels of inequality affect the creation and survival of businesses over time. To this end, we use the Global Entrepreneurship Monitor survey across 66 countries over 2005–2011. We complement this survey with data on income inequality dating back to the early 1800s and on the current institutional environment, such as the number of procedures to start a new business and countries’ degree of financial inclusion, corruption and political stability. We find that, although inequality increases the number of firms created out of need, it reduces entrepreneurial activity overall: in net terms, businesses are less likely to be created and to survive over time. These findings are robust to different measures of inequality across different points in time and regions, even when excluding Latin America, the most unequal region in the world. Our evidence thus supports theories arguing that early conditions, crucially inequality, influence the development path.
We acknowledge financial support from the Spanish Ministry of Science and Innovation (reference ECO2010-21668-C03-02 and ECO2013-46516-C4-1-R) and from the Generalitat of Catalunya (reference 2014SGR-1279). We thank Fabrice Murtin for having shared with us the historical estimators on income distribution shown in this chapter. We thank Isabel Busom, Cristina López-Mayan, Adam Pepelasis, Xavi Ramos, Francesc Trillas and the participants of the EDIE workshop, the GEM-Barcelona conference, UAB PhD seminar, Universidad Tecnológica Metropolitana de Mérida, the LACEA/IADB/WB/UNDP Research Network of Inequality and Poverty for their comments and suggestions on earlier drafts of this chapter.
Gutiérrez-Romero, R. and Méndez-Errico, L. (2017), "Does Inequality Foster or Hinder the Growth of Entrepreneurship in the Long Run?", Bandyopadhyay, S. (Ed.) Research on Economic Inequality (Research on Economic Inequality, Vol. 25), Emerald Publishing Limited, Bingley, pp. 299-341. https://doi.org/10.1108/S1049-258520170000025009
Emerald Publishing Limited
Copyright © 2018 Emerald Publishing Limited
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9599287509918213,
"language": "en",
"url": "https://businessandfinance.expertscolumn.com/what-managerial-economics-how-it-important-business-managers",
"token_count": 1433,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.06787109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:baf881ba-0599-4cde-a4b9-9d6ca0a1335e>"
}
|
Managerial economics helps to develop the leadership qualities that every business needs. It aids effective decision making, thereby benefiting the company. Here are some of the ways economics contributes to the development of professionals at all levels.
A business manager is essentially involved in the processes of decision making as well as forward planning. Decision making is an integral part of management; management and decision making are inseparable. It is an intellectual process and a purposeful activity which at various times takes in hand all the managerial activities, such as planning, organizing, staffing, directing and controlling. It is the process wherein an executive, by taking into consideration several alternatives, reaches a conclusion about how a given situation should best be handled. Thus, being a continuous activity, decision making is regarded as the heart of management.
Decision making is nothing but choice-making and the importance of choice-making emerges due to the fact that a business faces the changes in the conditions in which it operates and there arise unforeseen contingencies.
The survival and growth of a business in such situations is directly determined by the decision making process. Decision making can be defined as selecting the best of the available alternatives, which presupposes that two or more alternatives exist. According to George Terry, "Decision making is the selection of a particular course of action, based on some criteria, from two or more possible alternatives." Decision making is thus choosing the best course of action out of the available options while aiming at the achievement of particular organizational objectives.
Since a business organization has limited resources, such as capital, land and labor, a business manager needs to select the best alternative among the options and employ these resources in the most efficient manner so as to attain the desired results. Once a particular decision is made relating to resources, plans about production, pricing and materials are to be implemented. In this way, decision making and forward planning go hand in hand.
A business entity operates under uncertainty about the future, and changes in the business environment add complexity to business decisions. Since no certain information or knowledge about future sales, profits or costs is available to a business executive, decisions are to be made on the basis of past data as well as forecast approximations. For the decision making process to be carried out efficiently in such conditions, economic theory is of great value and relevance, as it deals with production, demand, cost, pricing, etc. A business manager therefore needs to understand the concepts of managerial economics so that he may apply economic principles to the business and appraise the relevance and impact of external factors on it.
Regarded both as microeconomics and as the economics of the firm, managerial economics applies economic theory to business with the objective of solving business problems and analyzing business situations and the environment in which a business operates. Spencer and Siegelman define it as "the integration of economic theory with business practice for the purpose of facilitating decision making and forward planning by management."
Managerial economics serves many purposes and helps managers make decisions about the internal environment. It develops the economic theory of the firm while facilitating decisions about sales, profits, and appropriate production and inventory policies for the future. As a branch of economics applied to almost all business decisions, it supports risk analysis, production analysis aimed at production efficiency, and capital budgeting. At its best, it seeks to make successful forecasts that minimize the risks involved. It addresses questions such as how much cash should be kept available and how much should be invested, guiding the choice of processes and projects while assessing the economic feasibility of various production lines.
A business produces goods that are sold in the market on the basis of consumer demand. Demand can be defined briefly as the quantity of goods that consumers are willing to buy at given prices. Demand-related decisions are of great significance for managers, because sales must be estimated and forecast before production begins. Demand analysis is therefore an essential part of managerial economics: it enables managers to analyze demand determinants and produce forecasts, a process that involves considerable value judgment. Beyond that, by considering whether competition is likely to increase or decrease, a manager applying managerial economics can assess demand prospects as well as the social behavior that may expand or reduce sales of the firm's products.
Pricing is one of the most critical decisions a manager faces, since pricing determines the firm's inflow of revenue. Managerial economics covers pricing methods, product line pricing and price forecasting. It also deals with the cost estimates that inform management decisions: a manager must undertake production analysis and determine economic costs for profit planning and cost control. Since the general objective of a business is to generate profits, profit is the chief measure of success. Here managerial economics covers profit policies and the techniques of profit planning, notably break-even analysis (also called cost-volume-profit analysis), which assists significantly in profit planning and cost control with a view to maximizing profits.
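The break-even calculation mentioned above can be illustrated with a short sketch. The figures below are made up for illustration and are not taken from the text:

```python
def break_even_units(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Units that must be sold before total revenue covers total cost."""
    contribution_margin = price_per_unit - variable_cost_per_unit
    return fixed_costs / contribution_margin

# Illustrative figures: $50,000 in fixed costs, a $25 selling price,
# and a $15 variable cost per unit.
units = break_even_units(50_000, 25, 15)
print(f"Break-even volume: {units:,.0f} units")  # 5,000 units
```

Selling fewer units than this produces a loss; every unit sold beyond it contributes $10 toward profit.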
Managerial economics plays a significant role in business organizations. It aids management in decision making and forward planning for internal operations, providing a clear understanding of market conditions along with analytical tools for studying competition and predicting market behavior. It enables managers to analyze information about the business environment, and it supports systematic business planning by making forecasts possible. Managerial economics contributes to the profitable growth of a business and to effective solutions of business problems by turning the economic scenario into feasible business opportunities, enabling managers to optimize decisions and engage efficiently in forward planning.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9447405338287354,
"language": "en",
"url": "https://hillpost.in/2013/04/green-buildings-can-help-india-minimise-power-shortages/69611/",
"token_count": 572,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.052978515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:825bff08-83f3-400b-8e0b-8896e180e7f8>"
}
|
New Delhi, April 19 (IANS) The Indian government must set stricter norms and push for the development of energy-efficient buildings so that the country can minimise power shortages and curb burgeoning petroleum import bills, say experts.
“Power bills can be cut by almost 30 percent just by upgrading the buildings to energy efficient standards,” Frances Beinecke, president, New York-based Natural Resources Defense Council, told IANS.
She said green buildings were not only good for the environment but also made a sensible business proposition. "It's a good profitable investment. Our study shows that payback period for investment in energy-efficient technology is nearly five years. After that, it's all your saving," Beinecke said.
As India continues to urbanize, its building-occupied area is estimated to climb sharply, from eight billion square metres in 2005 to a projected 41 billion square metres in 2030, according to a McKinsey & Company study.
Beinecke said as per the study, 70 percent of the buildings in India in 2030 would be new structures.
“Implementation of green technology is easier in the new buildings. It’s a great opportunity for India to develop green buildings, save energy and reduce dependence on oil imports,” she said.
Anjali Jaiswal, director of NRDC’s India initiative, said a case study conducted on a building in Mumbai showed that installation of energy-efficient technology can help cut electricity bill by 28 percent. The money spent on new technology can be recovered in less than five years through savings in electricity bill.
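The payback arithmetic behind that claim can be sketched in a few lines. Only the 28% savings rate comes from the case study described above; the cost and bill figures below are illustrative assumptions:

```python
def payback_years(upfront_cost, annual_bill, savings_rate):
    """Years for energy-bill savings to recoup the retrofit investment."""
    annual_savings = annual_bill * savings_rate
    return upfront_cost / annual_savings

# Hypothetical retrofit: 1.4 million spent on a building with a
# 1 million annual electricity bill, cut by 28% as in the case study.
years = payback_years(1_400_000, 1_000_000, 0.28)
print(f"Payback period: {years:.1f} years")  # Payback period: 5.0 years
```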
“With India’s energy crisis worsening, scaling up energy efficiency in buildings will be critical to ensuring that businesses and cities can continue to grow in a sustainable way,” she said.
Jaiswal said NRDC in association with other organisations like Shakti Sustainable Energy Foundation, Administrative Staff College of India and Confederation of Real Estate Developers Association of India (CREDAI) was trying to sensitise developers, common people as well as the policy makers about the benefits of green building technologies.
Jaiswal said most of the power is being used in buildings and adoption of energy efficient technology can help minimise the demand-supply gap of electricity in the country.
According to Central Electricity Authority, India faced power deficit of over 12,000 MW during the peak hours in the financial year 2012-13. Total supply of power was 123,294 MW in 2012-13 against the peak hour demand of 135,453 MW.
Urban Development Secretary Sudhir Krishna said the government is offering various incentives to promote the use of green building technology.
(Gyanendra Kumar Keshri can be contacted at [email protected])
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9410322308540344,
"language": "en",
"url": "https://sia-tickets.com/qa/quick-answer-what-is-your-annual-premium.html",
"token_count": 1273,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.044677734375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:12de2c5b-7963-46d1-bc94-ad1d8c2d0fae>"
}
|
- What is total policy premium?
- Is premium yearly or monthly?
- What factors determine your insurance premium?
- What is an example of a premium?
- How is premium percentage calculated?
- How do you calculate annual premium?
- Is it cheaper to pay insurance monthly or annually?
- Is it better to pay upfront or monthly?
- Do you have to pay insurance upfront?
- What is a premium?
- Why are insurance premiums so high?
- What is an annual insurance premium?
- How is homeowners insurance premium calculated?
- Can I pay my insurance monthly?
- Can you pay mortgage insurance yearly?
- Is premium and deductible the same?
- How often do you pay a premium?
What is total policy premium?
Total Policy Premium means the level annual premium amount for the Participant’s Coverage that is projected to result in the Policy qualifying as a Permanent Policy if the annual premium amount is paid for each of the scheduled Premium Payment Years.
Is premium yearly or monthly?
Premiums can be paid monthly or annually. Monthly premiums are paid once a month, on the date of your billing cycle. While splitting up the premiums is better for some budgets, missing payments can risk a policy lapse. Annual premium payments mean that you only pay one lump sum to your insurer each year.
What factors determine your insurance premium?
Several factors affect your car insurance premium: the driver’s age, the vehicle you drive, where you park your car at home, your insurance excess, market value insurance, the regular driver, the type of insurance you take out, and whether or not there’s finance on the vehicle.
What is an example of a premium?
Premium is defined as a reward, or the amount of money that a person pays for insurance. An example of a premium is an end-of-year bonus. An example of a premium is a monthly car insurance payment. It can also mean a sum of money or bonus paid in addition to a regular price, salary, or other amount.
How is premium percentage calculated?
Price premium can be calculated using market shares. As an example, if a brand has a 25% revenue market share and a 20% unit market share, then its price premium would be 25% / 20% = 1.25, indicating that it has a 25% price premium over the marketplace average.
How do you calculate annual premium?
Total annual premium = bodily injury premium + property damage premium + comprehensive premium + collision premium. Use Tables 18-5 and 18-6 to find the annual premium for an automobile liability insurance policy in which the insured lives in territory 1, is class A, and wishes to have 50/100/10 coverage.
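Since the textbook's rating tables are not reproduced here, the sketch below just shows the summation with stand-in premium amounts:

```python
def total_annual_premium(bodily_injury, property_damage, comprehensive, collision):
    """Total auto premium as the sum of its coverage-part premiums."""
    return bodily_injury + property_damage + comprehensive + collision

# Stand-in values for the Table 18-5 / 18-6 lookups (territory 1,
# class A, 50/100/10 coverage); the real figures are in the tables.
premium = total_annual_premium(210.0, 160.0, 95.0, 325.0)
print(premium)  # 790.0
```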
Is it cheaper to pay insurance monthly or annually?
Paying your insurance premiums annually will always be the least expensive option. Most companies offer discounts for paying yearly because it costs the insurance provider more when the policyholder pays the premium monthly.
Is it better to pay upfront or monthly?
If the interest rate is less than what you’d pay on a credit card or other loan to pay the balance up front, then it makes sense to use the monthly method. If the rate is more than you’d pay from other financing, then you should borrow using that alternative financing source and make a single annual payment.
Do you have to pay insurance upfront?
No company will insure you without some kind of upfront payment – either a down payment or the first monthly payment that acts as a down payment. Virtually every car insurance company requires that you pay at least one month ahead on a six-month policy.
What is a premium?
The amount you pay for your health insurance every month. In addition to your premium, you usually have to pay other costs for your health care, including a deductible, copayments, and coinsurance.
Why are insurance premiums so high?
Premiums often increase each year to reflect: the higher risk of a claim being lodged as the insured (you or your pet) gets older; changes to government taxes; and any other factor the insurer believes is relevant to their risk.
What is an annual insurance premium?
Definition: The total amount of premium paid annually is called the annualized premium. Description: Any insurance policy comes up with many premium payment options. Premium can be paid monthly, quarterly, semi annually and annually.
How is homeowners insurance premium calculated?
Homeowners insurance premiums are determined by many factors: the age of the home (newer homes can be cheaper to insure), home square footage (larger homes are more expensive to rebuild and carry higher premiums), and the number of primary inhabitants (larger households increase potential liability).
Can I pay my insurance monthly?
When you buy (most) car insurance policies, there are two ways you can pay: annually or monthly. If you pay annually, you pay the whole thing in one lump sum. If you make monthly payments, you’ll set up a direct debit. But paying monthly for car insurance is also more expensive than paying annually.
Can you pay mortgage insurance yearly?
FHA borrowers are required to pay for MIP, and there are two types: upfront MIP, which is paid at closing, and annual MIP, which is paid each year in 12 monthly installments that are added to their mortgage payments. In most cases, MIP must be paid for the life of an FHA loan, while PMI can eventually be cancelled.
Is premium and deductible the same?
A premium is the amount of money charged by your insurance company for the plan you’ve chosen. … A deductible is a set amount you have to pay every year toward your medical bills before your insurance company starts paying. It varies by plan and some plans don’t have a deductible.
How often do you pay a premium?
Policyholders may choose from a number of options for paying their insurance premiums. Some insurers allow the policyholder to pay the insurance premium in installments (monthly or semi-annually), while others may require an upfront payment in full before any coverage starts.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9558513760566711,
"language": "en",
"url": "https://www.directenergy.ca/learn/energy-choice/alberta-major-players-energy-market",
"token_count": 1114,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5edf12f7-a8f4-4fd2-ad50-4cdf0500df11>"
}
|
When you think of the energy market in Alberta, you probably think of your gas retailer or retail electricity provider. After all, these are the companies you do business with on a monthly basis to ensure your gas and electricity needs are met. But these service providers are only a single component of the broader market as a whole.
This article aims to provide you with insight into the gas and electricity markets in Alberta so you can learn more about how the system works and who the major players are.
Alberta's gas market has been deregulated since 1985, and while some of the players on this list predate that period, others have been directly created by the development of energy choice. Here are some of the major players you need to know to better understand how Alberta's gas market works.
The Alberta Utilities Commission regulates the natural gas and electricity markets in Alberta. According to its website, the commission's goal is to "protect social, economic, and environmental interests in Alberta where competitive market forces do not." Regulatory functions are administered through written and oral proceedings, and the commission also monitors the building, operation, and decommission of electric facilities to ensure the process is handled in an environmentally friendly way.
The Alberta Energy Regulator is responsible, along with the Department of Environment, for regulating the production of natural gas in Alberta, including the province's 181,300 wells and 415,000 kilometers of pipelines. The AER also has the authority to extend approvals under public lands and environment statutes relating to energy resource activities.
Gas producers are responsible for just that: producing the gas that is sold on the energy market. Gas producers sell the gas at the wellhead, a gas processing plant, a gas transmission system, a storage facility, or directly to the consumer.
NOVA is the gas pipeline network used to transport natural gas from the wellhead to processing plants. The NOVA network extends across the province and the rates and tariffs placed on the system are regulated by the Alberta Utilities Commission under the provisions of the Gas Utilities Act and the Pipeline Act.
The NGX in Alberta and the New York Mercantile Exchange (NYMEX) in the United States are the locations where future gas prices are set, based on the laws of supply and demand. These platforms don't set the prices themselves, but instead provide an open exchange floor for prices to be determined.
Working as the consumer's representative, the Office of the Utilities Consumer Advocate is responsible for handling complaints made against utilities companies or energy retailers. The office works to protect consumers and to provide them with information ahead of time to assist citizens in making smart choices in the deregulated energy market.
Your natural gas retailer is the company responsible for providing you with your natural gas service. When you sign a contract with a natural gas retailer, you will pay a fixed or variable rate per gigajoule for the length of the contract. Your retailer purchases gas from the gas supplier and ensures it's delivered to you while handling billing and customer service responsibilities. At the end of your contract, you will have the choice to sign a new contract with your current natural gas retailer, shop for a new natural gas retailer in the deregulated energy market, or you may choose to receive gas services from a regulated provider.
Like the natural gas market, the electricity market in Alberta is also regulated by the Alberta Utilities Commission, and consumers are represented by the Office of the Utilities Consumer Advocate. But there are other players unique to the electricity market, and here is a list of those you should know.
The AESO is responsible for operating the power pool in the province. The power pool is the wholesale energy market where the price is set each and every hour based on supply and demand. The AESO was established by the Alberta Electric Utilities Act, and the majority of electricity in the province flows through the pool it operates.
Power generators are responsible for harvesting the electricity used by consumers and businesses in the province, and while they are regulated by the AUC, they are allowed certain freedoms. The AUC ensures that power generators comply with environmental, design and safety standards, but power generators are free to generate power where they please and set their own rate of return. Power generators also must compete against one another to sell their electricity in the power pool, which helps control prices.
The power-generating companies create the power, but it is the responsibility of power distributors and transmitters to see that the electricity reaches consumers. Known as "the wires business," there are actually two facets to the industry. Power transmitters harness power from the generators and transfer it to load centers. From there, power distributors provide the electricity to appropriate paying customers. While the majority of the grid is owned by private, for-profit companies, the ASEO handles planning and operation. Distribution systems are often owned by the area municipalities and their public utilities departments.
Just as they do in the deregulated natural gas market, retail electricity providers ensure energy service is provided to consumers. It is your retail electricity provider who handles your billing, sets up your service and provides customer service for any questions you have along the way. Consumers in the Alberta electricity market can either choose to sign a fixed plan with a retail electricity provider, or they can choose not to sign a contract and do business with the default supplier in their region.
These are the major players in the gas and electricity markets for the province of Alberta. You can learn more about the deregulated Alberta energy market by visiting the Office of the Utilities Consumer Advocate website.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9664643406867981,
"language": "en",
"url": "https://www.dnb.com/business-directory/industry-analysis.education-sector.html",
"token_count": 240,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.08642578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c9a212dd-4c47-4400-aa21-0d15e00656c6>"
}
|
Institutions in this industry provide instruction and training to students enrolled in elementary through high schools, colleges and universities, and training centers that offer industrial, professional, and vocational programs. Institutions include public, private, and nonprofit as well as for-profit businesses.
Funding-related challenges have a major bearing on competition at every level of the education sector. Public K-12 school districts are vulnerable to state budget cuts, declines in local property tax revenue, and increasing federal support for alternatives to traditional public schools. Sharp declines in enrollment, caused in part by rising tuition costs, are stifling revenue growth at many colleges and universities. Career and technical education providers are benefiting from growing community investment in vocational training, but tighter government regulation of for-profit trade schools has forced several institutions to close in recent years.
Products, Operations & Technology
In the US, about 90% of kindergarten through 12th grade (K-12) students attend public schools; 10% are enrolled in private schools. At the postsecondary level, about 60% of students are enrolled in public colleges and universities, while about 30% attend private nonprofit schools and roughly 10% go to private for-profit institutions.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9630782008171082,
"language": "en",
"url": "https://www.efeedlink.com/contents/09-09-2003/b07f10c3-dbcb-4120-a6b0-450c723f625d-1092.html",
"token_count": 489,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.259765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e5d3018f-a7ad-4eff-a7c7-a3d9e6afffa2>"
}
|
September 9, 2003
South African Beef Production Seen around 650,000 Tons
Livestock and meat production have been very stable over the past few years, as weather conditions have been generally favorable. Unfortunately, the absence of statistical information makes meaningful analysis of minor trends impossible. The cattle population is steady at around 13.5 to 13.7 million, while slaughter figures vary between 3.1 and 3.2 million. Live imports of between 100,000 and 150,000 head annually mainly originate from Namibia.
Beef production is around 650,000 tons while imports and exports are very small, usually less than 4% of the total. The main source of beef imports is again Namibia supplying 65 to 75%. The meat is imported free of duty as Namibia is a member of the Southern African Customs Union. South African meat prices are low by world standards which inhibits imports from other sources. The volatile Rand/Dollar exchange rate also aggravates trade uncertainties.
In spite of much publicity, South Africa's beef export drive is not really getting off the ground, currently running at between 10,000 and 12,000 tons annually. The strong Rand is a factor making exports less profitable. The industry is, however, in the process of complying with the high international requirements for traceability and the like, which could open markets when the Rand weakens.
Beef prices nonetheless showed increases up to May 2003 reflecting the still high feed prices. Since the onset of winter, prices have decreased, however.
Although much has been said about South Africa's beef export drive, the total numbers are small constituting less than 2% of production, but the intention is to increase exports substantially.
The emphasis would mainly be on the Middle East and Southeast Asia, as EU tariffs are high. The industry is nonetheless preparing for a serious export drive. It is also preparing to sell more to the EU and is looking at the following aspects to increase exports. Traceability has become the single most important marketing tool in the developed markets. It is a vital part of disease control, public safety, quality control and product identification, and the industry is looking at its wider introduction. Both Botswana and Namibia, which are extremely successful in the EU market, are implementing traceability programs.
Natural production also presents an entry into very lucrative niche markets and can be easily attained. Namibia exports all its best cuts to the EU, albeit duty free, and imports South African feedlot produced beef.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9561472535133362,
"language": "en",
"url": "https://www.investopedia.com/terms/c/commerce.asp",
"token_count": 831,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.08642578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:0bf1a380-7de0-4320-85ba-59899d32f959>"
}
|
What Is Commerce?
Commerce is the conduct of trade among economic agents. Generally, commerce refers to the exchange of goods, services, or something of value, between businesses or entities. From a broad perspective, nations are concerned with managing commerce in a way that enhances the well-being of citizens, by providing jobs and producing beneficial goods and services.
- Commerce has existed from the early days of human civilization when humans bartered goods to the more complex development of trade routes and corporations.
- Today, commerce refers to the macroeconomic purchases and sales of goods and services by organizations.
- Commerce is a subset of business that focuses on the distribution aspect of business as opposed to the production side.
- The buying or selling of a single item is known as a transaction, whereas all the transactions of that item in an economy are known as commerce.
- Commerce leads to the prospering of nations and an increased standard of living, but if left unchecked or unregulated, it can lead to negative externalities.
- E-commerce is a variant of commerce in which goods are sold electronically via the Internet.
Commerce has existed from the moment humans started exchanging goods and services with one another. From the early days of bartering to the creation of currencies to the establishment of trade routes, humans have sought ways to exchange goods and services and build a distribution process around the process of doing so.
Today, commerce normally refers to the macroeconomic purchases and sales of goods and services by large organizations at scale. The sale or purchase of a single item by a consumer is defined as a transaction, while commerce refers to all transactions related to the purchase and sale of that item in an economy. Most commerce is conducted internationally and represents the buying and selling of goods between nations.
It is important to note that commerce does not have the same meaning as "business" but rather is a subset of business. Commerce does not relate to the manufacturing or production process of business but only the distribution process of goods and services. The distribution aspect encompasses a wide array of areas, such as logistical, political, regulatory, legal, social, and economic.
Implementation and Management of Commerce
When properly managed, commercial activity can quickly enhance the standard of living in a nation and increase its standing in the world. However, when commerce is allowed to run unregulated, large businesses can become too powerful and impose negative externalities on citizens for the benefit of the business owners. Many nations have established governmental agencies responsible for promoting and managing commerce, such as the Department of Commerce in the United States.
Large organizations with hundreds of countries as members also regulate commerce across borders. For example, the World Trade Organization (WTO) and its predecessor, the General Agreement on Tariffs and Trade (GATT), established rules for tariffs relating to the import and export of goods between countries. The rules are meant to facilitate commerce and establish a level playing field for member countries.
The Rise of E-Commerce
The idea of commerce has expanded to include electronic commerce in the 21st century. E-commerce describes any business or commercial transaction that includes the transfer of financial information over the Internet. E-commerce, unlike traditional commerce between two agents, allows individual consumers to exchange value for goods and services with little to no barriers.
E-commerce has changed how economies conduct commerce. In the past, imports and exports conducted by a nation posed many logistical hurdles, both on the part of the buyer and the seller. This created an environment where only larger companies with scale could benefit from export customers. Now, with the rise of the Internet and e-commerce, small business owners have a chance to market to international customers and fulfill international orders.
Companies of all shapes and sizes can engage in international commerce. Export management companies help domestic small businesses with the logistics of selling internationally. Export trading companies help small businesses by identifying international buyers and domestic sourcing companies that can fulfill the demand. Import/export merchants purchase goods directly from a domestic or foreign manufacturer, and then they package the goods and resell them on their own as an individual entity, assuming the risk but taking higher profits.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9547029137611389,
"language": "en",
"url": "https://www.road2college.com/expert-explains-how-student-loans-work/",
"token_count": 1574,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2001953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1b6fc6c9-dfd9-4aca-b0ac-2f86ba4fc6e4>"
}
|
An Expert Discusses How Student Loans Work
Having to learn the basics of student loans at the point in the college admissions process when every parent is "just, plain exhausted" can seem like cruel and unusual punishment.
But, if you know you will need to borrow money for your child’s college education, understanding everything you can about loans will only serve to help you make the most educated and financially sound decision for you and your family.
So, it’s worth it!
Angela Colatriano, Chief Marketing Director of College Ave Student Loans joined us for a Facebook Live recently, and she provided us with what every family needs to know about how student loans work.
Types of Federal Student Loans
Filling out the FAFSA is the only way you can gain access to federal loans. In addition, some schools use this information when they consider your student for merit aid. So, fill it out, regardless.
Unsubsidized and Subsidized Direct Student Loans are the most common types of student loans.
These loans are both in the student’s name, with annual limits that rise by year: up to $5,500 the first year, $6,500 the second year, and $7,500 for each of years three and four.
Those annual amounts add up to $27,000 over four years; the federal aggregate limit caps a dependent undergraduate’s total Direct Loan borrowing at $31,000.
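To make the year-by-year limits concrete, here is a small arithmetic sketch using the annual base amounts quoted above (a dependent undergraduate on a four-year timeline; students who take longer can keep borrowing, subject to the federal aggregate cap):

```python
# Annual Direct Loan base limits for a dependent undergraduate,
# as quoted above: year 1, year 2, then years 3 and 4.
annual_limits = [5_500, 6_500, 7_500, 7_500]

# Running total the student could have borrowed after each year.
cumulative = []
total = 0
for limit in annual_limits:
    total += limit
    cumulative.append(total)

print(cumulative)  # [5500, 12000, 19500, 27000]
```

Seeing the running total year by year makes it easier to plan how much of the cost federal loans alone can cover before other funding is needed.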
A family’s financial need determines whether a student receives a Subsidized or Unsubsidized loan.
The U.S. Department of Education pays the interest on a Direct Subsidized Loan while you’re in school at least half-time.
Students with Direct Unsubsidized Loans are responsible for paying the interest during all periods.
Both of these loans typically offer the lowest interest rate your student will find—approximately 4.53% with a 1% origination fee.
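The subsidized/unsubsidized distinction matters in real dollars. As a rough sketch (simple interest, the approximate rate quoted above, and a hypothetical four-year in-school period), here is the interest that accrues on an unsubsidized first-year loan while the student is enrolled:

```python
principal = 5_500        # first-year Direct Loan amount from above
annual_rate = 0.0453     # approximate rate quoted in the article
years_in_school = 4      # hypothetical enrollment period

# On a subsidized loan, the government pays this interest while the
# student is enrolled at least half-time; on an unsubsidized loan,
# it accrues to the borrower (simple-interest approximation).
accrued_interest = principal * annual_rate * years_in_school
print(round(accrued_interest, 2))  # 996.6
```

Nearly $1,000 of interest on just the first year’s loan is one way to see why subsidized eligibility is worth having.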
College Ave Tip: Because of the low interest rate, subsidized and unsubsidized federal student loans should be considered before any other type of loan. They will be your best starting point.
How Do Parent PLUS Loans Work?
If your student finds they will need additional funding, the next type of loan you might want to consider will be the Parent PLUS loan, from the federal government.
The maximum PLUS loan amount you can borrow is the cost of attendance at the school your child will attend minus any other financial assistance they might receive.
As of July 1st, 2019 the interest rate on the Parent PLUS loan is 7.08% with a 4.248% origination fee.
Any parent or legal guardian may borrow the Parent PLUS loan as long as there are no bankruptcies or prior loan defaults in their credit history.
There may be some other requirements, but none that would be as stringent as that of private loans.
If a parent is denied a Parent PLUS loan, the student becomes eligible to borrow additional money through the Direct Loan program, typically an extra $4,000 to $5,000.
A Parent PLUS loan is solely the parent’s financial responsibility.
It is in the parent’s name, and it affects the parent’s credit history.
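Using the 2019 figures quoted above, this sketch shows how the origination fee reduces what actually arrives at the school, plus the standard amortized monthly payment on a hypothetical $10,000 PLUS loan repaid over 10 years (the loan amount and term are assumptions for illustration):

```python
loan_amount = 10_000       # hypothetical borrowing need
origination_fee = 0.04248  # PLUS fee quoted above
annual_rate = 0.0708       # PLUS rate quoted above
term_months = 120          # standard 10-year repayment

# The fee is deducted up front, so less than the face amount disburses.
disbursed = loan_amount * (1 - origination_fee)

# Standard amortization formula for the fixed monthly payment.
r = annual_rate / 12
payment = loan_amount * r / (1 - (1 + r) ** -term_months)

print(round(disbursed, 2), round(payment, 2))
```

Note that the parent owes payments on the full $10,000 even though only about $9,575 was disbursed, which is why the origination fee deserves as much attention as the interest rate.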
College Ave Tip: If you have any questions, don’t hesitate to call the school’s financial aid office or visit https://studentaid.ed.gov/
How Do Private Student Loans Work?
If you are researching Parent PLUS loans, it would also be a good idea to start looking into private loans.
In many instances, depending upon a parent’s credit score, private loans can offer a lower interest rate.
In addition, if a parent is not comfortable taking on the sole responsibility of paying for a child’s education, a private loan may offer more options than a Parent PLUS loan.
One of those options is having a cosigner. Because 18-year-olds generally have little or no credit history, they usually cannot obtain a loan on their own.
For that reason, they will need someone to cosign their loan.
Choosing that person often comes down to who has the best credit rating and can therefore qualify for the lowest interest rate.
Keep in mind that this person agrees to repay the loan if the student is unable or unwilling to make the loan payments.
A positive aspect of a private student loan is that it gives the student some “skin in the game.”
A negative aspect of this type of loan is that if the student is late with a payment or defaults on the loan, it damages the credit history of both the borrower and cosigner (you).
Parents who cosign loans for multiple children take on substantial risk and should think seriously about what that means for their financial future.
When taking out a loan of this type, it is prudent to check whether the lender offers a “cosigner release,” a clause that allows the cosigner to bow out of responsibility once the student is deemed creditworthy.
How Do Private Parent Loans Work?
Private parent loans are similar to Parent PLUS loans in that the parent alone takes on the borrowing responsibility.
The student’s name is not on this loan.
Not every lender offers this type of loan—College Ave Student Loans is one of the few that does.
College Ave Tip: Shop around! If you have a strong credit score, you very well may be able to get a better rate on a private loan than on a federal loan.
What Is Student Loan Pre-qualification?
When you prequalify for a loan, the lender performs what is called a “soft” credit check to evaluate your creditworthiness.
This will not affect your credit, so it’s highly recommended to prequalify if you can.
Not all lenders offer this option.
(College Ave does have a prequalification tool.)
College Ave Tip: If you do shop around for lenders, do it within a 30-day period, so the credit bureau will understand your shopping behavior and not “ding” you adversely.
What’s the Difference Between Fixed vs. Variable Interest Rates?
Federal loan programs offer only fixed-rate loans.
Private lenders tend to offer a choice of either a fixed interest rate, which stays the same for the life of the loan, or a variable rate which will fluctuate (go up or down) over time, depending on the market.
A variable rate usually starts lower than a fixed rate; however, there is no guarantee it will stay that way.
Anyone who is averse to taking risks and more comfortable with consistency should not consider a variable rate loan.
It is also more difficult to create a dependable budget when one takes on a variable rate loan.
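A hypothetical sketch of why the risk cuts both ways: the same $10,000 private loan at a fixed rate versus a variable rate that later rises. All three rates below are made up for illustration, and for simplicity each payment is computed as if that rate applied for the full term (real variable loans recast the payment when the rate changes):

```python
def monthly_payment(principal, annual_rate, months):
    """Standard amortized payment for a given rate over a term."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

principal = 10_000
months = 120  # 10-year term

fixed = monthly_payment(principal, 0.065, months)           # fixed for the life of the loan
variable_start = monthly_payment(principal, 0.045, months)  # variable loan, initial rate
variable_later = monthly_payment(principal, 0.085, months)  # same loan if the rate rises

print(round(fixed, 2), round(variable_start, 2), round(variable_later, 2))
```

The variable loan starts about $10 a month cheaper than the fixed one, but if rates climb it can end up about $10 a month more expensive, which is exactly the budgeting uncertainty described above.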
To the uninformed, the student loan process can be intimidating. But with some education, research, and introspection, a parent (or guardian) can make a sound decision that helps pay for their child’s education while keeping the family’s financial future intact.
Once Angela completed her “Basics of Student Loans” parent orientation, many of the parents watching this informative Facebook Live had the opportunity to ask additional questions.
For more insight in to Student Loans, watch the video here.
This post is sponsored by College Ave Student Loans.