| meta (dict) | text (string, 224–571k characters) |
|---|---|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9378975629806519,
"language": "en",
"url": "https://www.renniecenter.org/research/reports/smart-school-budgeting-resources-districts",
"token_count": 447,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.06103515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:02591cb9-1a51-4daa-89e5-17d0bdf2a906>"
}
|
In an era of aggressive public education reform, school districts face increasing pressure to produce higher levels of student performance with increasingly limited resources. The economic downturn has forced many districts to tighten their belts, and careful thought must be given to how each and every dollar is spent. Optimally, district leaders should work with stakeholders in their communities to set goals, analyze current spending, provide transparency in their budgeting, and consider cost-saving and reallocation strategies.
The Rennie Center has created a toolkit, Smart School Budgeting: Resources for Districts, aiming to assist district leaders in decision-making about school budgeting. Smart School Budgeting is intended to push school leaders to take a more deliberative approach to school budgeting. The resources presented in the toolkit act as a starting point for districts examining their own budgeting processes. The document is designed as a user-friendly summary of existing literature and tools on school finance, budgeting, and resource allocation that directs district leaders and school business officials to practical and useful information to shape resource decisions. Each section includes an overview of a critical topic in school budgeting, summaries of useful documents and resources, relevant case studies (if available), and a resource list with hyperlinked documents for easy access. The toolkit is organized around the following topics: introduction and context for school budget analysis; setting goals; types of budgets; strategies for analyzing spending; tools for budget analysis; and cost-saving strategies.
This toolkit was released at a public event on October 3, 2012.
Below is an interactive map exploring school spending in Massachusetts school districts—select the image to begin exploring. The map presents per-pupil spending data for Massachusetts and offers an opportunity to compare spending across school districts and spending categories. It also exemplifies the type of critical analysis of school spending promoted through the Smart School Budgeting guide.
Too often, budget critiques focus on total per-pupil spending without an understanding of the deeper context of school budgeting or its inputs. Ideally, districts should use comprehensive information to examine how spending is allocated across categories and develop data-driven budgets that link inputs and desired outcomes. The purpose of this map is to highlight how spending in educational categories varies across Massachusetts districts.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.954323410987854,
"language": "en",
"url": "https://www.watereducation.org/aquapedia/monterey-amendment",
"token_count": 469,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1982421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:35c47548-171a-407c-a7af-42f7615bba14>"
}
|
The Monterey Amendment, a 1994 pact between the Department of Water Resources (DWR) and State Water Project (SWP) contractors, helped ease environmental stresses on the Sacramento-San Joaquin Delta.
As part of a large-scale restructuring of water supply contracts, the Monterey Amendment allowed excess flows during wet years to be stored in groundwater banks and surface storage reservoirs. This stored water could then be used during dry periods or to help the Delta.
The amendment also included a rate-stabilization provision to cushion contractors against sudden rate increases in dry years, and added flexibility for contractors that draw water from Castaic Lake and Lake Perris in Southern California.
More controversially, the Monterey Amendment included a conditional commitment by DWR to make reasonable efforts to complete SWP facilities, the transfer of ownership and operation of the Kern Water Bank from the state to the Kern County Water Agency, and the elimination of a provision in the original contracts that allowed for proportional water allocation adjustments if the system was declared in permanent shortage.
Like many other controversial water deals, the Monterey Amendment ended up in court, challenged by the Planning and Conservation League, Citizens Planning Association of Santa Barbara County, and a small SWP contractor, the Plumas County Flood Control and Water Conservation District.
In 2000, a state appeals court agreed with the challengers that the Environmental Impact Report for the amendment did not analyze provisions for completion of the SWP or permanent water shortages.
In 2003, a settlement was reached that called for preparation of a new EIR, more detailed reporting of the project’s actual delivery capability and public participation on any project amendments.
DWR in 2007 released a draft EIR, which discusses the project alternatives, growth inducement, water supply reliability, as well as potential areas of controversy and concern. The final EIR was released in 2009. DWR decided to continue to operate the SWP under the existing Monterey Amendment to the SWP long-term water supply contracts, including the Kern Water Bank transfer, and under the Settlement Agreement entered in PCL v. DWR. DWR’s decision was challenged by two groups of plaintiffs on issues relating to the adequacy of the EIR and the validity of the Monterey Amendment. The cases are currently being heard by the trial court. Final resolution of the issues is likely to take a number of years.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9236453771591187,
"language": "en",
"url": "http://www.industrialheater.cn/newsinfo/800840.html?templateId=1133604",
"token_count": 854,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.08447265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d6a88de5-120d-4991-9d61-f18038ee0291>"
}
|
The Green Future of Power Generation | Hydrogen Power
As power generation looks for greener options, Europe is banking on hydrogen. The European Commission published its 2030 hydrogen strategy, marking its official commitment to hydrogen power and doubling down on renewable energy. Goldman Sachs estimates that the green hydrogen market could be worth $12 trillion (USD).
The push for greener power comes from a combination of growing power demands and greater environmental concern. There is a call to arms to update the world’s power sources to a more sustainable model, while still being able to supply a growing demand for power.
E-hydrogen has the potential to meet these needs. The commission anticipates that producing it at scale could double power demand in Europe, making hydrogen production the largest single consumer of electricity.
One of the biggest hurdles for e-hydrogen has been building the infrastructure required to implement and produce it at scale. With massive buy-in occurring in Europe, those needs are being met. This creates a clear opportunity for the e-hydrogen and e-fuels market to make a significant impact on, if not replace, traditional fuel sources.
In addition to the European Commission’s commitment to hydrogen power generation, future planning is also being championed by individual countries.
Germany has unveiled a plan to invest over $10.5 billion (USD) into green hydrogen as part of a larger commitment for climate initiatives. This transition to e-hydrogen is intended to put an end to Germany’s reliance on coal power.
The plan goes beyond the adoption of green hydrogen and aims to make it competitive. To do so, Germany is building a home market, pushing for rapid scaling, reducing costs, and training personnel.
Why the Hydrogen Boom is Occurring in Europe, and Not the USA
Although hydrogen is showing promise, and is considered the best option for a scalable, zero-emission energy source, the USA isn't buying in just yet.
Part of this is due to the fact that e-hydrogen isn't necessarily clean. Producing it is intensive, requiring water and electricity. Where enough infrastructure is in place to generate that electricity from wind or solar, the result is clean, green hydrogen.
Where renewable energy isn't available, hydrogen production relies on steam reforming, a process that still requires natural gas. In the US, 95% of hydrogen is produced through steam-methane reforming.
In addition to the slower adoption of renewable sources at scale, the US also has a much greater availability of natural gas. This offers greater energy security than gambling on a new energy source: not only is renewable energy less abundant, but natural gas is readily available and affordable.
Furthermore, the US government hasn’t had the same level of commitment to clean energy. While the official climate plan addresses green hydrogen, it does not make investments or infrastructure plans. Whereas European nations have made clear commitments and invested billions of dollars.
Without government buy-in, industry is less incentivized to participate in green energy. Unfortunately for the US, this could leave it years behind in an energy race that could change the face of power generation on a global scale. Of course, some US firms, such as General Electric, are looking forward and making significant advances in hydrogen power.
Electric Heaters in Hydrogen Power Generation
Electric heaters are no stranger to green energy. Wattco immersion heaters are already in use in solar power systems to improve the generation and use of solar energy.
In the green hydrogen market, they stand to provide even greater service. The market relies on two factors:
- Zero carbon emissions
- Maximizing efficiency
Electric heaters already meet the need for zero-emissions, using green-friendly technology to deliver power effectively and sustainably. As well, their superior performance ensures faster heat-up times, lower operational costs, and greater reliability. These factors will make them a necessity in the rapid implementation and scaling of e-hydrogen over the coming decade.
Wattco custom-manufactures electric heaters. Our team of experts help you choose the precise designs, wattages, materials, and configurations to meet your specific application and budget.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9264867901802063,
"language": "en",
"url": "https://change.walkme.com/business-process-management/",
"token_count": 5191,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.05712890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a573c9f5-7526-456e-a154-ef5e0cce5896>"
}
|
This guide explores business process management from top to bottom.
Among other things, we will look at:
- A definition of business process management (BPM)
- How business process management compares to related disciplines
- Frequently asked questions about business process management
- The business process management life cycle
- Tips, strategies, and best practices
…and much more.
To start with, let’s define a few important terms, starting with business process management itself.
Business Process Management (BPM): Important Definitions and Differences
Here are some important concepts that can help us better understand how business process management fits into the organization.
Business Process Management: A Definition
Business process management is a business discipline designed to organize and optimize individual employees’ workflows.
It does this through a complex life cycle aimed at assessing needs, designing an optimal workflow, then testing and improving that workflow over time.
The purpose of this approach is to:
- Understand how business processes are operating
- Increase efficiency, effectiveness, and productivity
- Improve organizational performance
- Lower error rates
- Save costs
Effective business process management can also carry a number of other positive side effects.
When business processes are made more efficient, for instance, employee engagement can also increase alongside productivity.
Workflow Management vs. Business Process Management
Though workflow management and business process management both tackle the same area – business processes – they each take a different approach.
Business process management takes a big-picture view of a process in the context of an organization. Its purpose is to coordinate various processes and improve the efficiency of organization-wide procedures, programs, and processes.
Workflow management, on the other hand, takes a local view of the same topic, focusing on people, instructions, and tasks. The aim is to improve the efficiency of individual processes.
As we will see later, there are software applications for both disciplines.
Business Process Analysis vs. Business Process Management
Business process analysis is the practice of analyzing business processes to find and address problems within them.
According to some, business process analysis is a specialization within business process management, which includes four steps:
- Identify the process
- Collect key data
- Analyze the process as it is
- Design what the process should look like
From this perspective, business process analysis is an important step in business process management – but it is only one step, not the same as business process management itself.
Project Management vs. Business Process Management
Project management is a management discipline dedicated to coordinating, organizing, and managing a discrete business project.
For example, a business may choose to adopt a new CRM platform.
This project would have…
- A start date, a timeline, and an end date
- Specific goals and objectives related to adoption, such as employee performance and time-to-competency
- Roles and responsibilities, such as super users who would coach team members
Among other aims.
Business projects are certainly related to a business, but unlike business processes, they are not ongoing. And they may or may not be related to core business functions.
Service Management vs. Business Process Management
Service management is often used as shorthand for IT Service Management.
IT service management is a specific IT function designed to perform tasks such as:
- Manage changes to IT services
- Minimize service impacts
- Mitigate risk
- Increase efficiency of service changes
This discipline is a specific function within IT and although it shares certain characteristics with business process management, they are two separate functions.
Program Management vs. Process Management
In business, programs are composed of multiple projects. Each program has an aim, goals, and objectives.
For example, an organization may choose to undertake a digital transformation initiative.
Digital transformation initiatives are complex programs that can involve many aspects, such as:
- The adoption of new technology
- Employee training efforts
- New business processes, products, or services
- Organizational culture change
Naturally, project would be far too limiting a term to apply to such a scenario.
Instead, an organization would execute a complete program – a portfolio of interdependent change projects designed to transform the organization.
Frequently Asked Questions (FAQ) About Business Process Management
To better understand business process management, let’s explore some of the most frequently asked questions about this topic.
Why does business process management matter?
As mentioned earlier, effective business process management offers a number of benefits for the organization, such as:
- Improved process outcomes
- Increased business process efficiency
- Better organizational performance
It can also have a number of indirect – yet positive – effects on the workplace.
When business processes are made more efficient, for example:
- Processes tend to operate more smoothly, which can improve worker engagement and satisfaction
- Decreased complexity benefits the workplace by simplifying the employee experience
- Organizations that operate more efficiently tend to be more agile
Ultimately, business process management is a way to improve organizational effectiveness and performance.
And those gains, in turn, have a positive impact on the organization’s bottom line.
How do you measure the value of business process management?
Any time an organization sets out to improve a business process, it will set goals and objectives.
One of the best ways to measure business process management efforts is by comparing those goals against process improvement efforts.
That is, process improvement objectives should be measured against factors such as:
- Financial costs
- The time it takes to achieve specific objectives
- The actual outcomes of the improvement efforts
This type of data can be quantified and compiled, then distilled into bottom-line profitability metrics.
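For illustration, here is a minimal sketch of how such data might be distilled into a single bottom-line metric. The function and all figures below are hypothetical assumptions, not drawn from any particular BPM tool:

```python
# Hypothetical sketch: distilling process-improvement data into a
# bottom-line metric. All figures are illustrative assumptions.

def improvement_roi(baseline_annual_cost: float,
                    new_annual_cost: float,
                    project_cost: float) -> float:
    """First-year return on a process improvement:
    (annual savings - project cost) / project cost."""
    annual_savings = baseline_annual_cost - new_annual_cost
    return (annual_savings - project_cost) / project_cost

# Example: invoice processing cost $400k/year before the improvement,
# $310k/year after, and the improvement project itself cost $60k.
roi = improvement_roi(400_000, 310_000, 60_000)
print(f"First-year ROI: {roi:.0%}")  # First-year ROI: 50%
```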
Who is in charge of business process management?
The larger the organization, the more likely it is to have resources specifically dedicated to business process management.
In these cases, business process managers and specialists will most likely be in charge.
However, smaller organizations may delegate the same duties to a cross-functional team or to another department, such as operations.
Outsourcing is yet another option.
Business process management consultancies are firms that help organizations streamline business processes, align resources, and improve organizational effectiveness.
Naturally, a business should weigh the pros and cons of outsourcing against insourcing.
A high-growth firm, for instance, may choose to develop its own business process management function rather than hiring outside help.
An organization that has no such aspirations, however, may choose to hire a consultancy.
What are the main software tools used in business process management?
Business process management platforms are one-stop solutions that offer a range of important functionality.
According to a Forrester report, these solutions are essential to driving successful digital transformation in business.
Forrester claims that the effective use of these platforms can:
- Fuel customer-centric innovation
- Increase organizational agility
- Encompass both the front and the back-office
Some of the most heavily weighted aspects of these platforms include:
- Low-code development
- Process data virtualization
- Guiderails and governance
- Mobile engagement
- User interface integration
Today, there are plenty of vendors operating in this space, including Oracle, IBM, Appian, and Red Hat.
These platforms are becoming more and more critical for scaling organizations.
However, to be successful, it pays to remember that business process management is first and foremost a business discipline and a method.
The Business Process Management Life Cycle
The business process management life cycle is a way to categorize the activities of business process management.
Here are a few stages that are commonly included in this cycle:
Strategy

The business process strategy should, among other things…
- Align with the organization’s goals
- Be designed to achieve specific aims, such as targeted organizational improvements
- Outline the necessary processes that should be implemented or altered, including primary, secondary, and management processes
- Define an organizational change plan
Analysis

Once a strategy is in place, an analysis is required to provide more insight into the current state of the business.
There are a number of techniques that can aid the business process manager, such as a value-added analysis, a gap analysis, a process simulation, and so forth.
Objectives of business process analysis include:
- Understanding the current state of targeted business processes
- Identifying weak points and growth opportunities
- Gaining insight into how existing processes impact organizational performance and the designated areas of change
The information gained in the business process analysis is necessary for the next step – designing improved or restructured business processes.
Design

The information gathered earlier will be incorporated into the design stage, helping professionals understand whether or not existing processes are sufficient.
During this phase, business managers will…
- Optimize existing processes
- Redesign when necessary
- Design new ones
Modeling

Going hand-in-hand with the designs are models, which represent those designs in a concrete form.
As mentioned earlier, business processes and workflows are closely connected.
Modeling accomplishes several objectives, such as:
- Introducing multiple variables into the design to model how the design might change under different circumstances
- Documenting the design as a representation that can be easily grasped
- Providing a reference for use by employees and stakeholders
- Defining steps, stages, roles, responsibilities, and timelines
In short, modeling provides a roadmap, or a concrete structure, that transforms the design into a solid action plan.
Implementation

Inevitably, the implementation of new business processes involves change.
For that reason, it pays to:
- Carefully monitor and manage the change program
- Stay agile
- Tackle obstacles to change
- Take a systematic approach to implementing the new processes
As we cover elsewhere on this blog, change management is important in order to successfully implement change.
Monitoring

Throughout the implementation of the process change, managers should:
- Continuously track KPIs and metrics
- Analyze the performance of the process
- Stay agile, adaptable, and data-centric
Regular reviews of performance metrics are essential to maintaining an objective perspective on process improvements.
Optimization

No organizational change project is perfect at the outset, so business process managers should expect to spend time optimizing their new process. Optimization involves:
- Collecting and monitoring data, as mentioned above
- Using those insights to develop new solutions
- Making course corrections as necessary
There are several purposes to monitoring and optimizing process changes – they help managers determine whether a process is working as intended, what may need changing, and how to make course corrections.
Reengineering

If a new process is not operating efficiently enough or as expected, business process managers may elect to reengineer the process completely.
Business process reengineering is a holistic cycle that follows several steps:
- Identifying problems
- Reviewing, updating, and analyzing as-is processes
- Designing “to-be” processes
- Testing and implementing the to-be processes
Like business process management, business process reengineering is a discipline unto itself.
However, since reengineering is not always a necessity, many organizations choose to start with business process management, an approach that is less risky and more economical.
How Technology Is Changing Business Process Management
Digital innovation and disruption are driving digital transformation across every industry in the world, including business process management.
Here are a few technology trends that are impacting the way business process management is done:
The Internet of Things (IoT)
The Internet of Things integrates the internet with physical devices.
Alongside the increase in sensors, chips, and electronics comes an increase in business process complexity.
Business process analysis, management, and reengineering can all lead to the simplification of business processes.
Arguably, such simplification is a necessity, since too much complexity in business processes and work environments can lead to disorganization, inefficiencies, or worse.
Cloud Computing

Cloud computing is another technology trend that is affecting the way business process managers do their jobs.
Oracle says that business process management combined with cloud computing means several things:
- With cloud-based business process management solutions, processes can be managed anywhere, anytime
- Cloud-based management flexibility adds more resourcing options, such as outsourcing
- Process optimization options should be as rich in the cloud as in the enterprise
- Distributed systems offer several advantages, but also require greater security
…to name just a few.
In short, by using cloud- and web-based software, organizations can achieve a number of advantages over platforms that strictly operate from within the enterprise.
Artificial Intelligence (AI)
AI is another technology that shows promise when it comes to business process automation.
Machine learning algorithms allow computers to automatically recognize patterns and “learn” from data.
By combining AI with business process automation, organizations can tap into business data to increase efficiency across many areas of the organization.
For instance, if a marketing team has to frequently parse and analyze a customer data set in order to identify potential sales opportunities, AI can be taught to automatically perform the same process.
This type of automation draws closer to cognitive automation, which, in the coming years, will become yet another important tool in the business process manager’s toolbox.
Digital Transformation

Digital transformation is the process by which an organization leverages technology to improve business processes, services, products, or other areas of the business.
Whether an organization intends to become more digitally mature or overhaul its customer experience program, business process management plays an important role in digital transformation efforts. Here's why:
- Digital transformation programs entail organizational change
- Organizational change typically requires the introduction of new processes, process reengineering, or process optimization – or all of the above
- Managing that change requires the management of people, processes, and more
The more that a business undertakes digital transformation, the more it will need to design, manage, and optimize its business processes.
Business Process Management Software
Business process management software is, as the name suggests, designed to assist with the management and optimization of business processes.
According to a report by Forrester, these platforms “innovate, modernize, and continuously improve a process.”
They also point out that business process management platforms are now called “DPA-deep” – that is, Digital Process Automation platforms that target deep, complex processes.
However, given the fast-paced nature of technology, not all vendors and experts use the same definitions.
Many still refer to business process management platforms as “BPM platforms.”
This array of terminology can be complex and difficult to navigate at times, which is perhaps why Forrester makes a distinction between different types of business process management tools.
Below, we will explore a few of these platforms in depth.
As mentioned above, business process management platforms vary in terms of their complexity and scope.
To make matters even more complicated, many people conflate different terms.
For instance, certain websites put project management tools and business process management platforms under the same umbrella. These are then included in the same category as low-code app development platforms.
Since each platform offers different types of functionality – and will have different capabilities – it pays to investigate these options thoroughly.
Here are a few examples:
Business Process Management Tools
Business process management tools, generally speaking, are designed to help business process managers analyze, design, and optimize workflows.
As mentioned above, Forrester categorizes these platforms into:
- Robotic process automation (RPA)
- Digital process automation (DPA)-deep
- Digital process automation (DPA)-wide
The question most enterprise architects are asking themselves, says Forrester, is which types of platforms can enable which types of functionality.
Workflow Management Tools
Workflow management tools take a tactical approach to managing workflows, or sequences of tasks.
Common features include:
- Task automation
- Low-code design capabilities
- Integration with other enterprise tools
- Data and analytics
- Mobile access
These workflow tools can be extremely useful for improving workflow efficiency.
However, organizations that need to automate more complex tasks or organize multiple business processes should examine some of the other tools listed here.
Project Management Tools
Project management tools are specifically designed to manage business projects, not processes.
Wrike, Freedcamp, and similar project management tools include functions that are perfectly suited to project management, such as calendars, kanban boards, and collaboration tools.
However, it is important not to confuse these tools with business process management tools, since they are very different.
Project management tools are designed to help project managers…
- Collaborate more effectively with their teams
- Keep projects organized
- Set goals, milestones, and deadlines
- Track project performance and progress
It is important to note that these tools differ considerably from other business process management tools.
However, since certain sources include them under the same category as business process management platforms, they have been included in this list.
Robotic Process Automation
Robotic process automation (RPA) is, as the name suggests, specifically designed to automate business processes and workflows.
Compared to other process automation solutions, robotic process automation focuses on low-level, tactical workflows.
Data entry and other tasks that require little cognitive effort can all be automated with these solutions.
There are a number of benefits to using robotic process automation tools, including:
- Improved business process efficiency
- Decreased costs for a specific process or workflow
- Faster turnaround times
- Lower error rates
Implementing robotic process automation can result in significant performance gains across the entire organization.
However, there are other forms of automation that can be used to improve business process efficiency.
Digital Process Automation
Forrester and others point out that business process management (BPM) tools have shifted to digital process automation (DPA) tools.
And, as mentioned above, this next generation of tools is evolving into:
- Digital process automation-deep, which is designed to transform and improve narrow, deep business processes
- Digital process automation-wide, which is oriented more toward the everyday business user
- Dynamic case management, or solutions that are specifically designed to help case workers customize workflows easily and efficiently
For the large-scale enterprise, it may be necessary to implement all of these tools – or it may not.
As mentioned earlier, since these tools’ functionality overlaps to a certain extent, organizations may be able to achieve the same objectives with a smaller technology stack.
Digital Adoption Platforms (DAPs)
Digital adoption platforms (DAPs) are specifically designed to automate product onboarding, training, and workflows.
Leading platforms include features such as:
- In-app walkthroughs. Product walkthroughs take users through a series of actions – that is, through a workflow. These are ideal training mechanisms and very useful for business process managers. After all, whenever a new business process is introduced, users must learn that process. And unless training is delivered effectively, productivity will fall behind.
- Contextualized guidance. Interactive, context-based guidance outperforms traditional training approaches. In the digital workplace, this is especially valuable – not only is automated contextual guidance more scalable than human-led training, users will be much more likely to retain what they have learned when they actually experience it.
- Software analytics. Software analytics monitor users’ interactions with a software tool or a set of tools. That behavior can then be analyzed in order to identify training needs, common errors, and behavior patterns. Over time, this information can help organizations create more robust training programs – which in turn helps them adopt new business processes more effectively.
- Automation. Automation, as we saw earlier, is an essential facet of the modern workplace. Since software and bots can perform many tasks better than humans – and at a lower cost – effective automation can grant a competitive advantage.
These solutions mesh seamlessly with the digital workplace, augmenting employees’ day-to-day workflows with automated tasks, interactive training, and in-software assistance.
The result: a platform that improves efficiency and productivity by combining employee training with low-code automation tools.
5 Tips and Strategies for Better Business Process Management
Business process management is a continually evolving business discipline.
Staying competitive and relevant requires more than just the right software – it requires the right mindset, strategies, and tactics.
Here are five tips that can help business process managers keep up in today’s evolving digital landscape:
1. Build a digitally mature business
Digital transformation cannot succeed without business process management.
Becoming digitally mature requires effective business process management – and at the same time, effective process management requires a degree of digital maturity.
There are several reasons for this, many of which we have covered above:
- Modern-day business process management makes heavy use of digital technology
- The ability to use digital software, especially business process management tools, is fundamental to successful organizational change and business process improvement
- Digitally mature businesses will be more successful in general, which should naturally be one of the overriding concerns of any business manager
The link between digital maturity and business process effectiveness demonstrates the clear value of digital transformation.
Business process managers should work closely with leadership to design and implement changes that prioritize the organization’s digital growth.
2. Take a structured approach to change management
Technology does not operate in a vacuum.
In fact, digital transformation and technological change are more about people than about the tools themselves.
Therefore, when implementing new business processes, it is critical to focus on change at the individual level.
An effective change management strategy focuses on the human side of the equation.
For instance, most change management frameworks emphasize the need to:
- Communicate clearly and effectively with employees
- Build awareness of the need for change
- Generate desire for change
- Provide employees with the necessary skills and tools they need to change
- Maintain accountability
- Reinforce change to ensure that it remains permanent
Change management is a discipline unto itself, so if necessary organizations should hire or outsource the required expertise.
3. Create a unified technology stack
As we saw above, a single business process management platform is not enough.
To enable effective process management in a large-scale enterprise, it is necessary to employ multiple tools.
These should include an appropriate mixture of:
- Business process management platforms
- Digital process automation solutions
- Robotic process automation tools
- Digital adoption platforms
It is important to implement the right tools and the right functionality.
However, it is just as crucial to ensure that those tools operate as a seamless, integrated stack.
For that reason, business process managers should ensure that they fuel transformation efforts with a comprehensive digital adoption strategy.
4. Stay data-driven
Data can help managers make decisions that are grounded in real-world information. Decisions, designs, and models will be more objective, more efficient, and more effective.
Business process managers will need to access and utilize a breadth of data from across the organization.
For this reason, efforts should be made to:
- Democratize data
- Collaborate closely with other departments
- Work to instill data-driven practices into the organization
- Cultivate a data culture
In short, the effective use of data can help companies earn far more from their business process improvements.
5. Take a lean, agile approach to process implementation
In the digital era, speed is a weapon.
The faster organizations can innovate, produce, and transform, the more quickly they will be able to capitalize on growth opportunities.
Here are a few tips for staying agile and lean:
- Stay user-centered. User-centered design means deriving designs from user feedback, input, and data. This approach helps to keep process designs more relevant and useful – and, as a result, more effective.
- Be agile and adaptable. Agility means focusing on responsiveness rather than static processes. Change and fluidity should be a standard part of business process management, but this isn’t always the case. Staying responsive means not only reengineering processes when necessary, it also means being willing to adapt one’s own approach to business process management.
- Collaborate with stakeholders frequently. Frequent collaboration is a hallmark of both agile and lean. Agile software developers, for instance, will meet on a regular basis to ensure that teams are in sync. And lean practitioners will constantly tap user input to learn and adjust their services.
- Make frequent incremental improvements. Optimization, as discussed above, is one of the key stages in the business process management life cycle. Business process managers that are willing and able to continually optimize processes will have a better chance of keeping up with the dynamic and fast-paced digital economy.
In most cases, these business approaches require new mindsets.
Rethinking organizational strategy and business practices may be difficult, but the benefits are well worth it.
Implementing these ideas can result in greater organizational agility, better organizational performance, and more effective, efficient business processes.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9264892935752869,
"language": "en",
"url": "https://china-turning.com/how-much-do-you-know-about-investment-casting/",
"token_count": 2122,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.080078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d9df8314-498e-4d91-9f72-19a92e788e98>"
}
|
1. The characteristics of investment casting
Investment casting is also called precision casting or lost-wax casting. It uses fusible materials (wax, plastics, etc.) to make precise fusible models. The models are coated with several layers of refractory coating, which are dried and hardened into a whole shell. The shell is then heated to melt out the model, then fired at a high temperature to become a refractory shell. Liquid metal is poured into the shell, and the casting forms after cooling.
Mold material → wax mold → mold assembly → mold repair → coating → sanding → demolding → roasting → pouring → cooling → shakeout → cleaning
Compared with other casting methods, the main advantages of investment casting are as follows:
Castings have high dimensional accuracy and low surface roughness, so complex-shaped castings can be produced. The general tolerance grade can reach 5–7, and the surface roughness can reach Ra 25–6.3 μm;
It can cast thin-walled castings and low-weight castings. The minimum wall thickness of investment castings can reach 0.5mm, and the weight can be as small as a few grams;
It can cast fine patterns, characters, castings with fine grooves and curved pores;
The external shape and cavity shape of investment castings are almost unlimited: parts with complex shapes that are difficult to manufacture by sand casting, forging, cutting, and other methods can be produced, and with slight structural improvements, some assemblies and weldments can be cast directly as integral parts, thereby reducing part weight and production costs;
There are almost no restrictions on the types of casting alloys; the process is commonly used to cast alloy steel, carbon steel, and heat-resistant alloy castings;
There is no limit on production batch size: anything from a single piece to mass production is feasible.
The disadvantage of this casting method is that the process is complicated, the production cycle is long, and it is not suitable for the production of castings with large outline dimensions.
2. Mold material types and performance requirements
(1) Classification of mold materials. With the development of investment casting technology, there are more and more types of mold materials with different compositions. Mold materials are usually divided into high-temperature, medium-temperature, and low-temperature mold materials according to their melting point.
The melting point of low-temperature mold material is below 60°C; the 50% paraffin wax–50% stearic acid mold material currently widely used in China belongs to this category;
The melting point of high-temperature mold material is above 120°C; a compound of 50% rosin, 20% ozokerite, and 30% polystyrene is a typical high-temperature mold material.
The melting point of medium-temperature mold material lies between those of the other two types. The medium-temperature mold materials currently in use fall into two basic categories: rosin-based and wax-based.
(2) Basic requirements for mold material performance
Thermophysical properties: suitable melting temperature and solidification interval, small thermal expansion and contraction, high heat resistance (softening point), and the mold material should have no precipitates in the liquid state, and no phase change in the solid state;
Mechanical properties: mainly strength, hardness, plasticity, flexibility, etc.;
Process performance: mainly include viscosity (or fluidity), ash content, coating properties, etc.
3. Molding process
According to the specified composition and ratios of the mold material, the various raw materials are melted, mixed and stirred uniformly, filtered to remove impurities, and cooled to a paste from which the investment mold can be pressed. Pressing is the most common way to produce investment molds. It allows the use of liquid, semi-liquid, solid, and semi-solid mold materials: liquid and semi-liquid mold materials are pressed under low pressure, called injection molding; semi-solid or solid mold materials are pressed under high pressure, called extrusion molding. For either method, the trade-offs of filling and solidification must be considered.
(1) Pressure injection molding. The wax injection temperature in pressure injection molding is mostly below the melting point, so the mold material is a slurry or paste with liquid and solid phases coexisting. In slurry-like mold material, the liquid phase significantly exceeds the solid phase, so the fluidity of a liquid is retained. Injected in this state, the investment mold has a low-roughness surface, and surface defects caused by turbulence and splashing are unlikely. Paste mold material is at a lower temperature than the slurry and has lost its fluidity; although it produces few surface defects, the surface roughness is higher.
When injection molding, the lowest mold material temperature and pressing temperature that still ensure good filling should be used. Higher pressure is not always better: although high pressure reduces the shrinkage rate of the investment, excessive pressure or injection speed roughens the surface of the investment, produces “bubbles” (air pockets under the surface of the investment), splashes the mold material, and causes cold-shut defects. During molding, a parting agent should be used to prevent the mold material from sticking and to improve the surface finish of the investment mold, especially with rosin-based mold materials.
(2) Extrusion molding Extrusion molding squeezes the mold material in the low-temperature plastic state into the cavity and forms it under high pressure to reduce and prevent the shrinkage of the investment mold. The mold material during extrusion molding is in a semi-solid or solid state. The mold material is relatively hard under normal conditions, but can flow under high pressure, and is characterized by high viscosity. Therefore, the pressure during extrusion depends on the viscosity of the mold material and the flow resistance in the injection hole and cavity. The greater the viscosity of the mold material, the smaller the injection hole diameter, the larger the cavity size, the smaller the cross-sectional area, and the longer the mold material stroke, the greater the resistance of the mold material to flow, and the higher the extrusion pressure is required. The semi-solid mold material is used for extrusion molding, and the solidification time of the investment mold is shortened, so the productivity is increased, and it is especially suitable for the production of thick and large section castings.
4. Shell-making process
Shell making includes two steps: coating and sanding. Before coating, the investment mold needs degreasing treatment. The dip-coating method should be used. During the coating operation, the surface of the investment mold should be coated evenly to avoid bare spots and local paint accumulation; welds, fillets, corners, and grooves should be painted evenly with a brush or special tools to avoid bubbles. Floating sand from the previous layer should be cleaned off before applying the reinforcement-layer coating, and the coating should be stirred regularly during the process to control and adjust its viscosity.
Sand is sprinkled after coating. The most common methods are fluidized sanding and rain-drench sanding. Usually, after the investment mold is taken out of the paint tank, sand can be sprinkled once the remaining paint flows evenly and no longer drips continuously, which indicates that the paint flow has stopped and gelling has begun. Spreading sand too early easily causes paint accumulation; spreading it too late leaves the sand particles poorly adhered. When sanding, the investment mold should be continuously rotated and turned over.

The purposes of sanding are to fix the coating layer with sand particles; to increase the thickness of the shell and obtain the necessary strength; to improve the air permeability and collapsibility of the shell; and to prevent cracks when the shell hardens. The particle size of the sand is chosen according to the coating layer and matched to the viscosity of the coating. The surface coating has low viscosity, so its sand must be fine to obtain a smooth surface; generally, a particle size of 30 or 21 sand can be selected for the surface layer. The reinforcement layers use coarser sand, preferably coarser with each successive layer. After each layer of coating and sanding, the shell must be fully dried and hardened.
5. Defects and prevention methods
Investment casting defects fall into two groups: surface and internal defects, and out-of-tolerance size and roughness.

Surface and internal defects include under-casting, cold shuts, shrinkage porosity, gas porosity, slag inclusions, hot cracking, and cold cracking. Out-of-tolerance size and roughness mainly involve elongation and deformation of the casting.
The surface and internal defects are mainly related to the pouring temperature of the alloy liquid, the baking temperature of the shell and the preparation process, the pouring system and the design of the casting structure.
Excessive deviations in casting size and roughness are mainly related to the design and wear of the pressing die, the structure of the casting, the firing and strength of the shell, and the cleaning of the casting.
For example, when investment castings are under-cast, the cause may be that a low pouring temperature and mold shell temperature reduce the fluidity of the molten metal, that the casting wall is too thin, that the gating system design is unreasonable, that the mold shell is insufficiently baked or has poor air permeability, or that the pouring speed is too slow and the pour is incomplete. In each case, the problem should be solved and the defect eliminated according to the specific structure of the casting and the related processes.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8380156755447388,
"language": "en",
"url": "https://exceljet.net/formula/check-register-balance",
"token_count": 370,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.056884765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e4d2f860-f848-4686-b704-3cf7c1275cec>"
}
|
To create a check register formula that calculates a running balance, you can use a formula based on simple addition and subtraction. In the example shown, the formula in G6 is:

=G5-E6+F6

The value in G5 is hard-coded. The formula picks up the value in G5, then subtracts the value (if any) in E6 and adds the value (if any) in F6. When the credit or debit values are empty, they behave like zero and have no effect on the result.
When this formula is copied down column G, it will continue to calculate a running balance in each row.
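For example, with a hypothetical starting balance of 1,200 in G5 (the debit and credit figures below are illustrative, not from the original worksheet), the copied-down formula produces:

| Row | Debit (E) | Credit (F) | Formula in G | Balance |
|---|---|---|---|---|
| 6 | 200 | | =G5-E6+F6 | 1,000 |
| 7 | | 500 | =G6-E7+F7 | 1,500 |
| 8 | 150 | | =G7-E8+F8 | 1,350 |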
Dealing with blank values
To display nothing in the balance column when the credit and debit columns are empty, you can use the IF function with AND and ISBLANK like this:

=IF(AND(ISBLANK(E6),ISBLANK(F6)),"",G5-E6+F6)
The IF function runs a logical test and returns one value for a TRUE result, and another for a FALSE result. For example, to "pass" scores above 70: =IF(A1>70,"Pass","Fail"). More than one condition can be tested by nesting IF functions.
The Excel AND function is a logical function used to require more than one condition at the same time. AND returns either TRUE or FALSE. To test if a number in A1 is greater than zero and less than 10, use =AND(A1>0,A1<10).
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9363628029823303,
"language": "en",
"url": "https://learnwithanjali.com/microeconomics/the-supply-curve/",
"token_count": 745,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.09912109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:fcaf9757-58a6-4441-bd3c-15f159f0739a>"
}
|
The Supply Curve
The way we have done the demand related concepts, in the same way, we will be discussing the supply related concepts. In this post, we will just focus on the basic concepts related to the supply curve. We will discuss the following:
- What is “Supply”?
- What is “Quantity supplied”?
- Types of supply
- What is “Supply schedule”?
- What is “Supply curve”?
Supply is the exact opposite of demand. Let’s understand this concept by breaking it into smaller basic questions.
What is Supply?
Supply refers to the quantity of a commodity that a seller is willing to sell at different prices during a given period of time.
For example, a seller is willing to supply 1,000 Pepsi bottles at a price of Rs. 25 each.
What is Quantity Supplied?
Quantity supplied refers to the amount of a commodity offered for sale at a specific price at a particular point in time.
Taking the same example, the quantity supplied is 1,000 bottles at a price of Rs. 25.
Types of Supply
Supply is of 2 types:
- Individual Supply: The supply of a particular commodity by an individual firm at different prices in the market is called individual supply.
- Market Supply: The supply of a particular commodity by all the firms at different prices in the market is called market supply. Market supply is the sum of the individual supplies.
What is Supply Schedule?
A supply schedule is a tabular presentation of the various quantities of a commodity offered for sale at different possible prices of that commodity. It shows the positive relationship between price and quantity supplied: if the price increases, the quantity supplied of the commodity will increase.
It is of 2 types:
1. Individual Supply Schedule
It means the tabular presentation of various quantities that a seller is willing to sell at different prices. As shown below:
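For example, a hypothetical individual supply schedule (the figures are illustrative) might be:

| Price (Rs. per unit) | Quantity supplied (units) |
|---|---|
| 10 | 100 |
| 20 | 200 |
| 30 | 300 |

As the price rises from Rs. 10 to Rs. 30, the quantity supplied rises from 100 to 300 units, showing the positive relationship described above.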
2. Market Supply Schedule
It means the tabular presentation of the various quantities that all the sellers are willing to sell at different prices.
What is Supply Curve?
A supply curve is a graphical representation of the supply schedule, showing the various quantities of a commodity offered for sale at different possible prices of that commodity.

It shows a positive relationship between the price of a commodity and its quantity supplied.

The supply curve slopes upward. For a firm, the rising portion of its marginal cost curve is its supply curve.
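As a worked illustration (the cost function here is assumed for the example, not taken from the post): suppose a price-taking firm has marginal cost MC(q) = 2 + q. Since the firm produces where P = MC, its supply curve is q(P) = P − 2 for P ≥ 2. Each one-rupee rise in price raises the quantity supplied by one unit, which is exactly the upward slope described above.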
It is of 2 types:
1. Individual Supply Curve:
The graphical representation of the relationship between price and the quantity of a commodity supplied by an individual firm.
2. Market Supply Curve:
The graphical representation of a relationship between price and the market supply of a commodity by all the firms is called the market supply curve.
It is the horizontal summation of the individual supply curves, as shown below:
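For illustration, with hypothetical figures for two firms, horizontal summation adds up the quantities supplied at each price:

| Price (Rs.) | Firm A (units) | Firm B (units) | Market supply (units) |
|---|---|---|---|
| 10 | 20 | 30 | 50 |
| 20 | 40 | 60 | 100 |
| 30 | 60 | 90 | 150 |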
Thank You for reading.
You can read the following related posts:
- What is production function?
- Terms related to production concept
- Law of diminishing returns to a factor
- Total cost, Total variable cost and Total fixed cost
- The relation between TC, TVC and TFC
- Average total cost
- The demand curve
Feel free to join our Facebook group and subscribe to this website to get daily educational content in your mailbox.
Disclosure: Some of the links on this website are ads, meaning at no additional cost to you, I will earn a commission if you click through or make a purchase. Please support so that I can continue writing great content for you.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9527202844619751,
"language": "en",
"url": "https://www.infobloom.com/what-is-sustainable-economic-development.htm",
"token_count": 586,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1474609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:01872be3-6804-4976-a9cd-6213d517771b>"
}
|
"Sustainable economic development" refers to the balance of economic growth with social and environmental needs. In order for economic growth to be considered sustainable, it must not compromise resources or environmental factors for the future. Sustainable economic growth has become a topic of considerable discussion in the 21st century, but many economic and environmental experts believe there is a long road to be trekked before economic growth stops compromising societal and environmental health.
It may be easy to think that society, the economy, and the environment exist in their own vacuums of function and need, but proponents of sustainable economic development insist that these three pillars of human existence are interdependent. A factory that spews toxic waste into the air and water can pose risks to global health and do irreparable damage to the environment. Similarly, if the environmental stores of a resource fully vanish, industry may be decimated for want of usable supplies, thus depriving the public of necessary goods. The principles of sustainable economic development focus on creating a balance between the needs of these three concerns; only when economic growth can be obtained without unduly harming society or the environment can it truly be considered sustainable in the long term.
One of the biggest issues facing the promotion of sustainable economic development is a disconnect between the goals of environmentalists and the goals of many economic groups, such as businesses. In general, the goal of a business is to make as much profit as possible, to ensure its economic future and retain its shareholders. Unfortunately, many alternative energy, alternative farming, and sustainable resource management technologies are either still untested or not cost-effective for businesses. Though arguments in favor of sustainable economic development point out that without air to breathe, water to drink, and resources to use, industry will have no customers, this argument alone seems insufficient to spur change. Some economists suggest that the drive toward sustainable growth will occur only when green technology becomes cheaper than traditional methods, and when consumers drive the market toward sustainability through social change.
Another major issue preventing the spread of sustainable economic development is the lack of environmental regulations in developing nations. Many developing nations are desperately in need of economic stimulus, and are thus willing to allow major sacrifices in wage, labor, and environmental laws in order to bring in new industry. If a company based in the Western world can have goods made for significantly less money, with fewer regulations and virtually no chance of violating environmental standards, there is little incentive to manufacture in a developed country.
Still, proponents of sustainable development argue that those who do not adapt to sustainable practices will destroy their own markets, just as a fish farm that harvested all of its fish for sale would have none for the next year, essentially destroying its long term survival. Unfortunately, the effects caused by non-sustainable growth do not occur in a vacuum, meaning that ecosystems, species, and even human society as a whole can be damaged and endangered by unsustainable practices.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9427157640457153,
"language": "en",
"url": "https://www.nap.edu/read/4759/chapter/198",
"token_count": 686,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0303955078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:638f3ebd-58c9-41a5-9303-3b58e9100bcd>"
}
|
(basic unit plus four added children). At one extreme, Louisiana increases its $138 benefit for the basic two-person unit by 32 percent on average ($44) for each additional child. At the other extreme, Alaska increases its much higher benefit of $821 for the basic unit by only 12 percent ($102) for each additional child. The median value that is added on average to the basic unit benefit for each added child is 23 percent.[20]

In looking at the shape of the equivalence scales for AFDC benefits, five states have a regular pattern whereby, within 1 or 2 percentage points, they add the same amount to the basic unit benefit for each additional child; 10 other states have a regular pattern within 6 percentage points. Ten states have a declining pattern, whereby they add progressively less for each child after the second or third. In contrast, 10 states add more for the third and fourth child than for either the second or fifth. Finally, 16 states have erratic patterns. For instance, they may add more for the third and fifth children than for the second and fourth. In this, they resemble the equivalence scale implicit in the current U.S. poverty measure, in which the second child adds 17 percent to the two-person (one-adult/one-child) poverty threshold, the third child adds 31 percent, the fourth child adds 23 percent, and the fifth child adds 20 percent.[21]

The type of equivalence scale that we recommend for the poverty measure would increase the benefit for a one-adult/one-child family the most for the second child, with declining percentages for each additional child to reflect household economies of scale. Depending on the value of the scale economy factor, our proposed equivalence scale would add an average of 27 percent (using a factor of 0.75) or an average of 22 percent (using a factor of 0.65) to the basic unit benefit for each additional child.

Trends in Need Standards and Benefits

Looking at trends over the last two decades, it appears that relatively few states have increased their need standard or maximum benefit to keep up with inflation. Relatively few states have statutes that require them to adjust their standards for inflation, and even those states that have such requirements do not always heed them in periods of budget stringency. As of 1988, seven states had statutory requirements for adjusting their need standard to keep up with inflation, one state had a requirement to update its benefit level, and three

[20] Note that the ratios of the benefit for an added child to the benefit for the basic AFDC unit are not comparable to equivalence scales expressed in terms of a one-person family or household. Such scales can be constructed for January 1994 from U.S. House of Representatives (1994:368-369).

[21] The average value added per child to the U.S. poverty threshold for the two-person (one-adult/one-child) family is 23 percent, the same as the median value for the 50 states and the District of Columbia.
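As a hedged illustration of the recommended scale's arithmetic: the panel's equivalence scale is usually written as (adults + 0.7 × children) raised to the scale economy factor F. The 0.7 child weight is an assumption here, taken from the general form of the proposed scale rather than stated in this excerpt; with it, a short Python sketch reproduces the 27 and 22 percent averages quoted above:

```python
# Proposed equivalence scale: (adults + 0.7 * children) ** F,
# where F is the scale economy factor (0.65 or 0.75 in the text).

def scale(adults, children, f):
    return (adults + 0.7 * children) ** f

for f in (0.75, 0.65):
    base = scale(1, 1, f)  # the basic one-adult/one-child unit
    increments = [
        (scale(1, k, f) - scale(1, k - 1, f)) / base
        for k in range(2, 6)  # the 2nd through 5th child
    ]
    print(f"factor {f}: average added per child = {sum(increments) / 4:.0%}")

# factor 0.75: average added per child = 27%
# factor 0.65: average added per child = 22%
```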
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.928259015083313,
"language": "en",
"url": "https://www.projectaccelerator.co.uk/who-values-value/",
"token_count": 895,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.042724609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f2e940e6-dadd-40f5-b06c-ecfe27a4b9cb>"
}
|
Projects and programs are undertaken to enable an organisation to achieve part of its strategy, usually by creating new or better ways of working. The fundamental reason any organisation chooses to undergo this type of change in its operations is to realise or create ‘value’ for some or all of its stakeholders.
Project managers are key people in this overall value chain; they create the 'outputs' that enable the organisation to change. If the project's deliverables are used, the intended outcomes should be achieved and benefits realised. Finally, if the benefits support the organisation's strategy, value is created. But what is value, and how can it be assessed and measured?
If a charity initiates a fundraising project to upgrade its mobile soup kitchen, then after the project is successfully completed, the charity will be able to deliver more meals to more homeless people at an increased weekly operating cost for the 'soup and coffee'. The cost of operations has increased (there is a negative cash flow), and the value proposition of more disadvantaged people getting a hot meal in the evening is nearly impossible to quantify in financial terms. Value has been created, but it is not measurable in terms of 'financial returns'. The concept of benefits should be expanded to include both financial benefits and other stakeholder requirements.
A useful definition of value is the ratio between the satisfaction of needs (benefits, expectations and requirements), which may be tangible or intangible, and the use of resources (money, people, time, energy and materials), which will normally be definable in terms of cost.
V (value) ∝ B (benefits) / $ (cost)
However, the units of measure are often unrelated, so the equation is shown as a proportionality rather than an equality – it is difficult to directly align the cost of the mobile kitchen and its supplies against 'full stomachs' and the potentially increased status of the charity.
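One way to make the proportionality operational is to score benefits on an agreed, unitless scale and compare options by their benefit-to-cost ratio. The sketch below is only an illustration with invented scores, not a prescribed method:

```python
# Comparing options by a benefit-to-cost ratio. The benefit scores are assumed,
# unitless stakeholder ratings (not dollars), which is why V = B / cost is a
# proportionality rather than an equality.

options = {
    "upgrade mobile kitchen": {"benefit_score": 80, "cost": 40_000},
    "extend opening hours":   {"benefit_score": 55, "cost": 25_000},
}

for name, option in options.items():
    ratio = option["benefit_score"] / option["cost"]
    print(f"{name}: value ratio = {ratio:.5f}")
```

Ratios like these only support a decision once stakeholders have agreed what the benefit scores mean – which is exactly the 'language of value' problem discussed below.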
Managing the overall concept of value creation to maximise ‘value’ for the organisation’s stakeholders requires a coordinated approach by the whole organisation, the key elements are:
- Developing a strategy that is ‘value oriented’.
- Portfolio management selecting the 'most valuable' projects and programs for the organisation to undertake. Even in commercial businesses, this requires ways of assessing 'total value', not just financial returns.
- Project managers need to keep maximising benefits realisation and value creation in mind when making project decisions.
- The organisation’s change management needs to be effective and aligned to ensure the intended benefits are actually realised.
- Finally, the organisation's governance systems need to require management to report on the final outcomes in terms of the total value realised from the original decision to invest in a project or program.
This framework is relatively easy to describe; the difficult issue is creating a 'language' that describes value from the perspective of the organisation and its stakeholders. For the charity, value may be defined as serving more meals cost-effectively, as reaching more people in need, or as being seen to be the leading 'soup kitchen' in the area (i.e., achieving elevated prestige) – different concepts of what is 'valuable' can shift the focus of both the project and the way the project's deliverables are used.
One language that may help define the full scope of ‘value’ is Dr. Edward de Bono’s ‘Six Value Medals‘; each of the medals represents a different concept of ‘value’.
In commercial situations the challenge is deciding what value is attached to options such as:
- A mining project spending additional resources on environmental protection in excess of the minimum required by law to achieve a better outcome?
- A project expending resources on enhancing its stakeholder engagement effort?
- A project manager spending budget on clerical support to help implement project management processes more effectively?
The answer will always be based on the specific context of the organisation, its ethics and its culture – valuing the value of value is not straightforward! What matters is making sure the understanding of value is consistent, agreed by the organisation's governors and its key stakeholders, and incorporated in the way the organisation works.
Are you discussing 'real value' with your stakeholders?
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9589115381240845,
"language": "en",
"url": "https://www.re-nuble.com/blogs/re-nuble/coronavirus-the-food-supply-chain",
"token_count": 1098,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1953125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:347e9e32-543a-42b4-b865-1aa92966e515>"
}
|
Written by Riyana Razalee
When news of the coronavirus spreading in China first broke out, stories about people stocking up on their groceries and panic buying seemed slightly foreign to many people across the world. While everyone could understand the situation, many could not quite envision it – until it hit our shores here in the US. It was then that we began to see the effects of a constrained food supply chain, brought to light by the coronavirus.
Overdependence on food imports
Under normal circumstances, if there is ever a shortage in the food supply chain, the US can rely on imports to fill the demand gap. In fact, the US is known for importing a significant amount of its fruits and vegetables from overseas. According to the FDA, "More than 200 countries or territories and roughly 125,000 food facilities plus farms supply approximately 32% of the fresh vegetables [and] 55% of the fresh fruit that Americans consume annually." During this pandemic, however, the import option has been eliminated. One by one, we began to see ports across the world close, whether in Honduras or Guatemala, through which a bulk of the US' imported coffee passes, or Kazakhstan, one of the world's biggest wheat flour exporters. It therefore came as no surprise that the US government deemed agricultural workers "essential workers" in order to accommodate the food supply needs of the country.
“More than 200 countries or territories and roughly 125,000 food facilities plus farms supply approximately 32% of the fresh vegetables [and] 55% of the fresh fruit that Americans consume annually. During this pandemic however, the import option has been eliminated."
Shifting consumer behaviour
When it comes to consumers – the end point of a "linear" food supply chain – two things happened when the coronavirus hit. Firstly, many consumers started to pay more attention to their health once more, thereby reassessing the food they were consuming. Secondly, consumers started looking at their finances more closely due to the financial uncertainties that come with the pandemic. With these two factors in play, a shift in consumer behaviour should be expected. As Erica Carranza, VP of Consumer Psychology at Chadwick Martin Bailey, explains, although habits that drive consumption behaviour are generally difficult to change, when something as monumental as a pandemic happens, causing negative emotions, it can shift consumer behaviour. Michael Barbera, Chief Behaviour Officer of Clicksuasion Labs, also notes that as consumers are reminded by the CDC and World Health Organization about the necessity of proper handwashing, they are likely to move towards healthier lifestyles, which include sustainable and healthier food products. The results are therefore twofold: (1) attention will be placed on the nutritional value of what they are consuming; (2) there may be even less reliance on restaurants as consumers choose to cook more in order to save on unnecessary expenses.
"Although habits that drive consumption behaviour are generally difficult to change, when something as monumental as a pandemic happens, causing negative emotions, it could shift consumer behaviour."
Using the coronavirus to fix the food supply chain
Firstly, we need to move away from the idea that a food supply chain is linear and instead approach it from a closed-loop perspective. By the time the food gets to the consumer, we need to understand how to recover useful resources from the food that is disposed of, or from any of its by-products. Next, we need to strengthen local production of food. The coronavirus has shown us the fragility of reliance on food supply chains that span the globe. While there are certainly benefits to food trading, this cannot be our main source of supply anymore. Lastly, we need to find ways to provide nutritious food that is accessible to all. This is where communication across the various players along the food supply chain, as well as innovation and collaboration, comes in.
As Rahul Bhansali, COO of Re-Nuble, explains, "Indoor farming methods like hydroponics are basically superfactories of food production, using 13X less water, producing 11X more food per square foot, and producing year round, almost completely independent of weather patterns. The problem has generally been that these farms are expensive to build and operate. By cutting their costs and selling more profitable organic food, indoor farms can become significantly more financially viable and scalable within the food supply chain. We can decentralize food production reliably and powerfully and make pure, nutritious foods just miles from need. That equals a food supply resiliency like never before, to protect us from systemic risks like those uncovered by COVID."
"We can decentralize food production reliably and powerfully and make pure, nutritious foods just miles from need. That equals a food supply resiliency like never before, to protect us from systemic risks like those uncovered by COVID" - Rahul Bhansali, COO, Re-Nuble
For Re-Nuble, we have made it our mission to sustainably manage our local communities’ food waste streams. By transforming food waste into plant-based technologies for both soil based and hydroponic cultivation, we strive to play a part in the solution by enabling closed loop food once again, and making nutritious food accessible for all. Can we help you strengthen your position in the food supply chain? We'd love to explore this further.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9310796856880188,
"language": "en",
"url": "https://bergmill.com/2017/04/17/e-waste/",
"token_count": 698,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.30859375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:940d4b9f-a80b-45c9-bb64-13bde2cfd026>"
}
|
The speed of technological innovation is rendering electronics obsolete more quickly every year, exponentially expanding the global stockpile of electronic scrap, or e-waste, as consumers trash the old to buy the latest tech.
It may alarm some that, in 2014, 41.8 million metric tons of outdated and broken electronics were tossed globally, only 16 percent of which was recycled. To the entrepreneur, this figure represents the potential to expand markets and ramp up revenue generation through the sale of recycled e-waste and electronic components.
In fact, the 2014 global e-waste market was worth $11.03 billion dollars, according to a market report published by Transparency Market Research. This figure is predicted to triple, reaching $34.32 billion by 2022. Precious metals, ferrous and non-ferrous metals, plastics, and glass are all raw materials that can be extracted from electronic scrap, making e-waste economically enticing.
To get a better idea of whether selling e-waste is the right business decision for you, let’s discuss what e-waste actually is and some of the challenges facing the electronics recycling sector.
What is e-waste?
E-waste is a blanket term used to describe consumer and business electronics that have surpassed their usefulness and must be thrown away. It broadly includes, but is not limited to, computers, televisions, VCRs, stereos, copiers, and fax machines. In addition, e-waste includes handheld devices, such as smartphones and tablets. There really is no clear definition. Some may even categorize microwaves, stoves, air conditioners, and similar appliances as e-waste.
E-waste recycling challenges
Federal and state regulations may act as obstacles to businesses attempting to widen their profit margins through electronic scrap recycling. Because electronic waste is considered hazardous, there are regulations on end-life handling imposed by the federal Resource Conservation and Recovery Act or a state’s Health and Safety Code laws.
Extraction processes to recover raw materials from electronic scrap and certain raw materials themselves pose significant environmental and health risks when improperly handled. Hazardous materials found in e-waste include lead, mercury, and cadmium. Exposure to these metals can damage the nervous and reproductive systems and cause cancer and kidney damage. Certain extraction processes, such as open burning and washing components in drinking sources, can also release environmental toxins that negatively impact human health.
Do the benefits level the obstacles?
Through safe handling practices and partnerships with businesses that engage in responsible e-waste disposal and sales, the economic and environmental benefits far outweigh the challenges of recycling e-waste. Reusing and recycling old electronics helps reduce pollution, lower greenhouse gas emissions, conserve energy in electronics manufacturing and the extraction of virgin materials, and conserve natural resources by reducing the need to extract raw materials from the earth's crust.
E-waste may even prove to be a richer source of precious metals than ores extracted in the mining process. For example, 1 ton of computer circuit boards yields a greater amount of gold than 17 tons of gold ore!
If you run a recycling or buyback center, handle large volumes of e-waste, or, perhaps, are making large-scale updates to your technological equipment, contact us! Berg Mill Supply has the industry know-how to help your business offload electronic waste in a financially and environmentally smart way.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.952057957649231,
"language": "en",
"url": "https://dirt.asla.org/2011/08/03/from-one-crisis-to-the-next-congress-must-pass-a-transportation-bill-for-all-users/",
"token_count": 1170,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.283203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:dcb03abb-7c9d-4681-af22-27c350233543>"
}
|
As Congress wraps up its work on a debt ceiling deal that will avert a world-wide financial catastrophe, another crisis is looming down the road – literally. In less than 60 days, our nation’s surface transportation law will expire on September 30th, leaving the country’s highways, roads, streets, bridges and other infrastructure vulnerable. Today, our infrastructure is crumbling and in dire need of repair, congestion is clogging our transportation arteries, impeding commerce and economic development, and families are incurring major costs to travel to and from daily destinations. Congress must take action to pass a comprehensive transportation bill that will not only repair our current infrastructure and better expedite the movement of goods and services but also meet the current demands of American households.
Congress passed the last omnibus transportation bill, SAFETEA-LU, in 2005 and has “kicked the can” down the proverbial street by merely extending the measure multiple times since its first expiration in 2009. But now Congress’ dawdling on the issue is accruing a significant price tag for the nation. A recent report by the American Society of Civil Engineers calculates that “the cost of failing to invest more in the nation’s roads and bridges would total $3.1 trillion in lost GDP growth by 2020. For workers, the toll of investing only at current levels would be equally daunting: 877,000 jobs would also be lost.” Already, the report found, that “deficient and deteriorating surface transportation cost us $130 billion in 2010.”
More importantly, individual households are feeling the economic pinch from the lack of a comprehensive transportation policy that fits the needs of today’s American family. Currently, many Americans are forced to take costly automobile trips for all their daily activities, including routine activities less than one mile from home. Schoolchildren cannot safely walk or bicycle to and from school and instead must rely on lengthy school bus trips that many school districts can no longer afford. Now, more than ever, Americans are clamoring to get out of their cars and have more transportation options than the car-centric approach first envisioned and deployed in the 1950s is providing. Recent studies have shown that an increased number of communities want nearby bicycle and pedestrian paths not only for recreational opportunities, but also to provide accessible networks to transit, shopping, school, work, and other daily routines. Not only will this save individuals and families thousands of dollars in transportation costs each year, it will also increase the value of their homes and other nearby real estate, and attract economic development.
The U.S. Conference of Mayors recently surveyed its members about transportation infrastructure priorities, revealing that 75 percent of the polled mayors would support an increase in the gas tax if a greater share of the funding were invested in bicycle and pedestrian projects. These mayors went on to disclose that the lack of funding for bicycle and pedestrian projects was the biggest challenge to using transportation as part of their communities’ broader strategies to reduce congestion, improve livability, and increase economic competitiveness.
Further, with the nation’s unemployment rate still hovering at nine percent, the impact of bicycle and pedestrian projects on job creation must be underscored. According to a recent Political Economy Research Institute study, bicycle and pedestrian projects create about 11.4 jobs for every one million dollars spent compared to 7.8 jobs created through road projects.
The federal Transportation Enhancements (TE) program, first established in 1992 as part of the surface transportation law known as ISTEA, is the major source of dedicated federal funding to create bicycle and pedestrian projects. Given the needs of today’s communities, a robust TE program must be a critical component of any comprehensive surface transportation bill. Since its inception, the TE program has provided communities across the country with dedicated funding to design and construct bicycle and pedestrian projects. But clearly more is needed. The TE program is oversubscribed in many states, with requests about three times the available funds. Moreover, the Alliance for Bicycling and Walking reported that bicycling and walking make up 12 percent of all trips made in the United States, but receive less than 2 percent of federal funding.
Recently, both the House and Senate unveiled blueprints for a new transportation policy. House Transportation and Infrastructure Chairman John Mica’s (FL) proposal is a six-year bill with a 35 percent across-the-board cut to existing transportation programs and the elimination of dedicated funding for bicycle and pedestrian programs, including the successful Transportation Enhancement program. Senate Environment and Public Works Chair Barbara Boxer (CA) released an outline of her bipartisan Moving Ahead for Progress in the 21st Century (MAP-21), a two-year reauthorization that would consolidate several core transportation programs, leaving the fate of TE unclear.
When Congress returns from its recess in September, it must immediately return to “crisis mode” and focus its attention on crafting a well-balanced surface transportation policy that can repair our nation’s crumbling infrastructure and meet the present-day needs of the citizenry, all while spurring economic development and creating much-needed jobs. A final bill must include policies and programs that promote the efficient movement of cars and other motor vehicles, invest in transit, and strengthen our bicycle and pedestrian networks. Continuing the Transportation Enhancements program will go a long way in achieving these and other national transportation goals.
Now is the time to contact your legislators to urge them to support the Transportation Enhancements program in the next reauthorization of the surface transportation bill.
This guest post is by Roxanne Blackwell, Esq., Federal Government Affairs Manager, American Society of Landscape Architects (ASLA).
Image credit: Wydown Boulevard. Clayton, Missouri / APA Great Places in America: Streets
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8879196047782898,
"language": "en",
"url": "https://financial-dictionary.thefreedictionary.com/two-part+tariff",
"token_count": 305,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.1025390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:90c7f204-ccac-4bac-b438-20594aaed956>"
}
|
A set fee assessed with a purchase along with a per-unit charge. For example, a credit card carries a two-part tariff if it has an annual fee plus a charge on each purchase. A two-part tariff is not necessarily an import tariff.
Farlex Financial Dictionary. © 2012 Farlex, Inc. All Rights Reserved
two-part tariff: see TARIFF.
Collins Dictionary of Business, 3rd ed. © 2002, 2005 C Pass, B Lowes, A Pendleton, L Chadwick, D O’Reilly and M Afferson
two-part tariff: a pricing method that involves a charge per unit of GOOD or SERVICE consumed, plus a fixed annual or quarterly charge to cover overhead costs. Two-part tariffs can be used by PUBLIC UTILITIES or firms to achieve the benefits of MARGINAL-COST PRICING while raising sufficient revenues to cover all outlays (so avoiding a deficit and problems of financing it). Simple two-part tariffs are presently used to charge customers for gas, electricity, telephones, etc., although more sophisticated multipart tariffs can be adopted to reflect the different marginal costs involved in offering products like electricity and transport services at peak and off-peak periods. See also AVERAGE-COST PRICING, NATIONALIZATION, PEAK-LOAD PRICING.
Collins Dictionary of Economics, 4th ed. © C. Pass, B. Lowes, L. Davies 2005
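To make the method concrete, here is a minimal sketch; the standing charge and unit price are invented figures, not drawn from the definitions above:

```python
# Two-part tariff: total charge = fixed fee + unit price * quantity consumed.

def two_part_tariff(fixed_fee, unit_price, quantity):
    return fixed_fee + unit_price * quantity

# e.g. a utility with a 15.00 quarterly standing charge and 0.12 per kWh:
print(two_part_tariff(15.00, 0.12, 350))  # 57.0
```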
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9183189272880554,
"language": "en",
"url": "https://flowit.ee/en/11-top-rpa-use-cases-in-2020/",
"token_count": 2477,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0023651123046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:631cead1-9f88-47d6-bf63-7309da83fdcd>"
}
|
Robotic Process Automation brings several benefits to organisations. Here are the top 11 processes RPA can execute for business.
- What is robotic process automation?
- Why do companies invest in RPA?
- Eleven RPA use cases
What is robotic process automation?
Robotic process automation (RPA) is the use of specialised computer programs, known as software robots. These robots automate and standardise repeatable business processes that are typically carried out by employees.
The sector is huge: according to research from Forrester, the industry is expected to reach $2.9 billion by 2021.
Imagine a robot sitting at a desk using the same applications and performing the same tasks as a human employee would.
Robotic process automation does not involve physical robots. Instead, a virtual robot mimics human activities by interacting with applications in the same way an employee would – except that it works at the speed of a computer system.

For example, the most efficient software robot Flowit has developed performed the tasks of approximately 1,100 human worker equivalents every single day.
Supporting as a virtual robotic assistant, these bots take on monotonous tasks, freeing up time for workers to engage on more revenue-generating tasks.
Furthermore, RPA integrates well with existing IT infrastructure, even working across different platforms, applications and departments – including legacy applications, which are extremely costly to update.
Companies do not have to heavily invest in automating processes, yet those who do witness surmountable benefits.
Why do companies invest in RPA?
Robotic process automation is both cost-effective and user-friendly. Its advantages are drawing interest from organisations across several industries.
These benefits include:
- Increased accuracy. Bots are incredibly accurate and consistent – they are much less prone to making mistakes or typos than a human worker. Virtual robots handle routine processes like adding or removing user accounts, copying information from one system to another, onboarding employees, or populating a form based on information from other systems.
- No interruption of workflows. Virtual robots operate 24/7 without staff initiation or further interruption.
- Meet regulatory compliance. Configured bots follow instructions and provide an audit trail for each step. Bots can also re-play their past steps in case a process needs reviewing. The controlled ‘robotic’ nature of their work makes them ideally suited to meet strict compliance standards.
- Work within existing parameters. Traditional automation initiatives need extensive IT resources to integrate across multiple applications. Robots do not; they work across the layers of existing applications as a person would. This is particularly useful for legacy systems where APIs are not available, or for organisations that do not have the resources to develop deep-level integration with existing legacy applications.
- They improve employee morale and experience. Employees invest more time and their talents into more engaging and strategic work. Bots enable workers to offload manual tasks like filling out forms, data entry and searching for website information. Employees can focus on strategic, revenue-producing activities instead.
- Increased productivity. Process cycle times are more efficient and completed faster compared to manual processes.
RPA technology has advanced significantly and is adding more value all the time.
Advanced cognitive capabilities like artificial intelligence and machine learning allow bots to interpret the interfaces they work across intelligently. Virtual bots are better able to handle errors and sift through unstructured data.
Machine learning allows bots to recognise patterns over time, meaning that when a process requires human intervention, a bot learns and acts autonomously when the situation arises again.
Eleven RPA use cases for business
Maintaining data consistency across enterprise-level platforms is a very tedious task. Sales representatives need to spend their time entering data into both a CRM system and an ERP system.

Finance analysts then have to replicate that data and enter it into another system or module.
By eradicating time-consuming tasks, employees focus on their primary tasks, generating more revenue.
Marketing: Lead Generation
Lead generation is an essential part of today's marketing processes. Marketing teams create new CRM entries for potential leads gathered from outside sources.

Some CRM platforms offer their own built-in data upload tools, but most legacy platforms need users to enter each new lead's information by hand, decreasing the time staff have for other tasks and increasing the chance of error.
Take, for example, a firm that attends industry conferences to gather potential leads. Staff would enter each prospective customer manually into their CRM system.

With RPA, users can program software to import the data from their spreadsheets – faster and with a higher level of accuracy than by hand. Once again, staff can focus on engaging with prospects instead of data entry.
Invoice processing often contains repetitive manual tasks, resulting in delayed and inaccurate payments.
Timely payments can deliver quality goods and services from the vendor faster.
High-volume invoice processing has many challenges, including varying invoice formats, data from multiple sources, reconciliation procedures and entry into a single database.

RPA automates invoice processing – formatting the input data, reconciling errors, and even making routine, rules-based decisions – minimising human intervention.
RPA automates the end-to-end process from receipt to payment.
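A minimal sketch of one reconciliation pass, matching invoices to purchase orders by PO number and flagging amount mismatches for human review; the data shapes and tolerance are assumptions:

```python
# Match invoices against purchase orders and route exceptions to a human.

purchase_orders = {"PO-1001": 250.00, "PO-1002": 980.50}
invoices = [
    {"po": "PO-1001", "amount": 250.00},
    {"po": "PO-1002", "amount": 985.50},  # does not match the PO
]

for invoice in invoices:
    expected = purchase_orders.get(invoice["po"])
    if expected is None:
        print(invoice["po"], "-> no matching purchase order")
    elif abs(expected - invoice["amount"]) > 0.01:
        print(invoice["po"], "-> amount mismatch, route to a human")
    else:
        print(invoice["po"], "-> auto-approved for payment")
```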
Processing Refunds Faster
A company’s reputation depends on how fast it can remedy its errors, and refunds are one example. Customers demand this process to be seamless, fast and pain-free; however, this is easier said than done.
Complaints and return requests generate much data that can be tiresome to sort through. RPA deals with the matter and processes the refund without delay. Improving customer satisfaction and having a positive impact on a brands reputation.
Processing payroll every month is a time-consuming, repetitive task for the HR team in every organisation.
Payroll involves a significant volume of data entry, often resulting in inaccuracies that cause delays in payment and employee dissatisfaction.
RPA can verify employee data consistency across multiple systems, validate timesheets, load earnings and apply tax deductions. Virtual robots can automate salary slips and administer taxable benefits and other reimbursements.

RPA automates payroll-related transactions from end to end to avoid inaccuracies and delays.
Financial Reports & Accounting
At the month’s end, or after each quarterly period, is a stressful time for those working in finance and accounting, who are frantically compiling various sources of information for companies.
RPA can analyse past historical and current market trends to make forecasting assessments of the company’s financial health, providing variance reports. Furthermore, RPA can download monthly sales data and calculate sales commissions owed, make payments and record all financial data.
RPA automates the aggregating of financial data in a fraction of the time, leaving accountants to leverage that information for insights and forecasting.
Fast responses are what today’s customers expect, with a solution following quickly behind. RPA makes it possible to deliver top customer satisfaction and what customers want.
Automated customer care systems can filter queries and offer initial responses to customers. RPA categorises queries and sends them to the right department, such as the tech department, service department or sales.
Sorting ensures that the right customer care agent is selected for a quick resolution. There is no need to transfer a customer’s call from one customer service agent to another.
Customer service contains several rules-based processes that could be streamlined. According to recent research, 70% to 80% of rules-based processes could be automated, and it is a good idea to begin with customer service.
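A toy keyword router illustrates the categorisation step; the keywords and departments are invented, and a production bot would use richer rules or a trained classifier:

```python
# Route an incoming query to a department queue based on simple keywords.

ROUTES = {
    "refund": "billing",
    "error": "tech_support",
    "upgrade": "sales",
}

def route(query):
    q = query.lower()
    for keyword, department in ROUTES.items():
        if keyword in q:
            return department
    return "general"  # fall back to a human triage queue

print(route("I keep getting an error on login"))  # tech_support
```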
Looking at RPA use cases, bots can easily store, sort, organise and make accessible all kinds of business information, ranging from contact information, purchase history and preferences to HR information like birthdays or contracts.

When data is sensitive, it needs to be obfuscated automatically to address data privacy concerns. The data does not need to be hidden, merely anonymised.
This data can then be either locked or displayed according to the privileges of various job roles.
So customer care agents, salespeople, HR and senior management can equally access, but not share, sensitive or obfuscated data. There is no need to re-enter this information or worry about its accuracy or sensitivity.
Storing information is one of the most labour-intensive jobs and can cause much stress. RPA use cases have reduced repetitive tasks by up to 80%.
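A minimal sketch of one simple form of obfuscation, masking sensitive fields before records are shared across roles; the field names and masking rule are illustrative only:

```python
# Mask all but the first two characters of each sensitive field.

def mask(value, keep=2):
    return value[:keep] + "*" * (len(value) - keep)

record = {"name": "Jane Smith", "iban": "EE382200221020145685"}
safe = {field: mask(value) for field, value in record.items()}
print(safe)  # {'name': 'Ja********', 'iban': 'EE******************'}
```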
Businesses have to make purchases in bulk to manufacture products or provide services.
The cost of these items impacts a company’s revenue or profits. Company staff always research online to make an informed and hopefully cost-cutting decision.
Research is time-consuming and complicated, which is why price comparison appears as an RPA use case in many companies. Virtual bots compare offers from different vendors by price, quality and product attributes. Businesses can then buy the best products at the most competitive prices.
RPA can also help with recruitment, streamlining the process. It can source resumes from various platforms, assess their value, and wade through spam or unwanted applications.
RPA streamlines recruitment by a considerable margin. Reducing the stress of recruiters and allowing them to thoroughly assess every applicant.
Virtual robots could administer 90% to 95% of vital recruitment processes like screening, assessing, measuring and onboarding.
IT: Adding New Users
IT departments spend more time setting up user accounts for new employees and application-specific profiles for current employees.
This kind of work ties up highly skilled workers doing low-value, repetitive tasks. Also, some systems do not include the functionality to run back-end automation scripts.
For example, an IT department may already have an automated system in place for new user account provisions.
In essence, a user submits a request for an account on a particular system. Once the system administrator approves the request, the account is created. For the process to continue, an automated email is sent to the administrator to confirm user permissions. Finally, IT can configure user security permissions.
To speed up the process, a virtual bot can perform the administrator’s actions during the configuration step. The only manual work IT has to do is to approve the request.
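A minimal sketch of that configuration step; the in-memory "directory" is a stand-in for a real identity-management system, not an actual API:

```python
# Once a request is approved, the bot creates the account and applies
# the role's permissions, exactly the same way every time.

ROLE_PERMISSIONS = {
    "analyst": ["read_reports"],
    "engineer": ["read_reports", "deploy_code"],
}

directory = {}  # username -> set of granted permissions

def provision_account(username, role):
    directory[username] = set()  # create the account
    directory[username].update(ROLE_PERMISSIONS[role])  # grant role permissions

provision_account("jdoe", "engineer")
print(directory)  # {'jdoe': {'read_reports', 'deploy_code'}}
```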
Processing HR Information
Storing and processing HR information is challenging. It takes a lot of time and can be a tedious process.
A successful business generates vast amounts of employee data that is challenging to filter and organise. RPA can collect and organise all the information an HR department requires.

Employee history, payroll, reimbursements and training levels can all be sorted through using RPA. It can handle the day-to-day tasks and allow HR employees to focus on human-to-human interaction.
HR personnel can prioritise improving employee productivity, workplace culture, and finding new talent.
Extract Data in Different Formats
Data appears in varying formats, ranging from editable text to scanned documents and handwritten notes. Data entry workers grapple with reading the information and inserting it into the system.

RPA can use Optical Character Recognition (OCR) technology to read information in these various formats. Once the text is scanned and processed, a bot can enter it into databases.

Employees spend around 10% to 20% of their working hours on repetitive computer tasks; automating them frees all that time for something more productive.
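A minimal sketch of the OCR step, assuming the pytesseract and Pillow packages and a locally installed Tesseract engine; the file name is a placeholder:

```python
# Read the text out of a scanned document before database entry.

from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("scanned_invoice.png"))
print(text)  # raw text, ready for parsing and validation
```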
Automating this process stores information accurately and delivers a significant return on investment for an organisation.
RPA can be applied in a wide range of ways to save both time and money while increasing job satisfaction.
At Flowit, we encourage businesses to adopt Robotic Process Automation. We recommend implementing system-wide automation from the very outset.
Alternatively, start small and automate one process at a time to determine if it is the right choice for an organisation.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9588261246681213,
"language": "en",
"url": "https://news.hipaaspace.com/Article/Show/Prices%20of%20Prescription%20Drugs%20would%20be%20Reduced%20by%20the%20CREATES%20Act/WYZQQZEAU1UDO6O6",
"token_count": 577,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.404296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:845afda7-8106-469a-a669-f580ba916b55>"
}
|
Prices of Prescription Drugs would be Reduced by the CREATES Act
The passage of the CREATES Act by Congress would allow consumers, patients, hospitals, and others to be able to get improved care at affordable prices while prices of prescription drugs are reduced
Every individual should have access to affordable health coverage and care. Insurance providers, businesses, doctors, health systems, etc. continue to find new methods to reduce healthcare costs so as to enable patients get better in quick manner. A major challenge to ensuring patients obtain quality care at reduced prices is the cost of prescription drugs. Prescription drugs costs continue to be an obstacle to healthcare as the prices continue to increase annually.
Passing the Creating and Restoring Equal Access to Equivalent Samples (CREATE) Act would be a huge step forward in making prescription drugs more affordable, cutting premiums and other healthcare costs for businesses and their employees.
The CREATES Act addresses several techniques used by brand-name drug makers to keep less expensive generic medications off the market. Today, the Food and Drug Administration mandates that drug manufacturers comply with a Risk Evaluation and Mitigation Strategy (REMS) as part of the approval process for new medications. For generic drug makers, the REMS process requires them to get product samples of the brand-name drug so they can conduct comparisons to ensure patient safety, as well as drug efficacy.
Certain manufacturers of brand-name have however been taking advantage of loopholes in the process to withhold those essential product samples from generic drug makers. This effectively denies generic drug companies the chance to get their medications approved and allows much more expensive brand-name drugs to keep a hold of their market monopoly.
The CREATES Act would change that by holding brand-name drug manufacturers responsible if they were to deny access to their samples to keep a generic competitor off the market. By encouraging greater competition, the CREATES Act would offer more choices and reduce drug costs drastically for patients and consumers. As drug prices decreases, so will premiums for the health plans that cover them.
It has been discovered by the Congressional Budget Office that the CREATES Act would save the federal government more than $3 billion over the next ten years and help reduce patient out-of-pocket costs. These anti-competitive behaviors are currently projected to cost patients more than $5 billion in extra drug costs annually.
This is why the CREATES Act, a bipartisan piece of legislation, enjoys a great deal of strong support from both Republicans and Democrats, as well as from consumers, patients, hospitals, physicians and health insurance providers.
The time to pass the CREATES Act is now. The Senate Judiciary Committee approved the CREATES Act with a strong, bipartisan majority on the 14th of June. Congress can build on this momentum and advance this common-sense and market-based approach that will speed generic drug availability and reduce costs for consumers.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9348751902580261,
"language": "en",
"url": "https://sidsenergy.wordpress.com/category/islands/",
"token_count": 2315,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.06494140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:4cb55f79-5534-4f47-b779-58d41355ae5f>"
}
|
Caribbean economies suffer from some of the highest electricity prices in the world. Despite an abundance of renewable energy sources, Cayman has a relatively low level of renewable energy penetration; the economy continues to spend a large proportion of its GDP on imported fossil fuels.
The Caribbean Transitional Energy Conference (CTEC) is about building our resilience as a small nation, about diversifying our energy sector and the way that we do business.
It is about ensuring sustainable social and economic growth through strong leadership, recognising the threat of climate change and the vulnerability of islands across the world and voicing our commitment to take the measures that we can take now. More
Comments Off on Caribbean Transitional Energy Conference
Small island developing states (SIDS) are spread across the globe in the Caribbean, the Pacific, Atlantic and Indian Oceans, and the Mediterranean and South China Sea.
In addition to the common difficulties faced by developing countries, SIDS face a further series of challenges that require special assistance from the international community.
These challenges were highlighted in the 1994 Barbados Programme of Action (BPOA) and the Mauritius Strategy of Implementation (MSI) of 2005, both of which stated that the difficulties SIDS face in the pursuit of sustainable development are particularly severe and complex.
Recognition of these issues was reinforced in September of 2014 when Member States of the United Nations officially adopted the Small Island Developing States Accelerated Modalities of Action, known as the SAMOA Pathway.
The challenges that SIDSs face are varied, but all conspire to constrain their development processes.
They typically do not have a wide base of resources available to them, and thus do not benefit from cost advantages that this could potentially generate.
Coupled with small domestic markets, they experience difficulties in profiting from globalisation and trade liberalisation and are cripplingly reliant on external and remote markets with limited opportunities for the private sectors.
The costs of providing energy, infrastructure, transport and communication are high and, along with high population densities, create increased pressure on these already limited markets.
These developing countries generally have a heavy reliance on tourism and services; however, as a consequence of their low resilience and location, they are also heavily affected by disasters due to frequent natural hazards.
The unique characteristics and vulnerabilities facing SIDS were first addressed by the international community at the Earth Summit (United Nations (UN) Conference on Environment and Development) in Brazil in 1992.
The SIDS case was specifically addressed in Agenda 21, a non-binding, voluntarily implemented plan of action of the Summit, which committed to addressing the problems of sustainable development of SIDS.
This plan involved adopting methods to enable SIDS to function and cope effectively with environmental change, and to mitigate the impacts and reduce the threats posed to their marine and coastal resources.
Following Agenda 21, the Barbados Programme of Action was introduced in 1994, in an effort to provide further aid and support to SIDS. Similarly, its ultimate aim was to improve sustainable development.
It highlighted the challenges of converting Agenda 21 into precise strategies, movements and procedures at the national, regional and international level and listed fifteen areas of priority for specific action.
Six further areas were selected by the UN General Assembly in 1999, recognising their urgency. These six were: climate change, as the rising sea level could render some low-lying SIDS submerged; natural and environmental disasters and climate variability, with an emphasis on improving disaster preparedness and recovery; freshwater resources, preventing water shortages as demand increases; coastal and marine resources, promoting the protection of coastal ecosystems and coral reefs; energy, developing solar and renewable energy in order to lessen dependence on imported oil; and finally tourism, focusing on the management of the growth of the tourism industry and the protection of the environment and cultural integrity.
The 2005 Mauritius Strategy of Implementation further complemented the BPOA.
It gave recognition to the challenges that are unique to SIDS, and proposed further action towards their sustainable development.
The MSI emphasised the location of SIDS in the most vulnerable regions of the world with respect to natural and environmental disasters and their rapidly increasing impact.
It called for a global early warning system covering threats such as tsunamis, storm surges and cyclones, and stressed that some major adverse effects of climate change are already being observed.
Further, the MSI recognised the importance of international trade for building resilience and sustainable development in SIDS, and established the necessity for international institutions, including financial ones, to pay more specific attention to the structural drawbacks of SIDS.
The MSI went further on matters of trade, stating that “most small island developing states, as a result of their smallness, persistent structural disadvantages and vulnerabilities, face specific difficulties in integrating into the global economy”.
More recently, in September 2014, the Small Island Developing States Accelerated Modalities of Action, also known as the SAMOA Pathway, was adopted. As in the case of the previous adoptions, the strategy recognises the need to support and invest in SIDS so that they can achieve sustainable development. Distinguishing the Samoa Pathway slightly from the BPOA and the MSI is the idea of investing in the education and training of the people of SIDS.
The aim of this idea was to create “resilient societies and economies, with full and productive employment, social protection and decent work for all”, and to provide “full and equal access to quality education at all levels”, the latter which is a vital ingredient for achieving sustainable development.
The promotion of education for sustainable development is especially crucial for SIDS that are under direct threat from climate change, as it will “empower communities to make informed decisions for sustainable living rooted in both science and traditional knowledge”. Finally, the SAMOA Pathway supports efforts “to promote and preserve cultural diversity and intercultural dialogue, which provide a mechanism for social cohesion and, thus, are essential in building blocks for addressing the challenges of social development”.
Many SIDS have recognized the need to embrace sustainability through their own internal processes, however, without external aid from the international community, the required change will not come quickly enough. Following on the adoption of the Samoa Pathway, 2015 is rapidly becoming a watershed year for global processes of importance to SIDS.
Convergence is occurring across a broad spectrum of activities: this year the international community has deliberated on the post-2015 framework for disaster risk reduction, which culminated in the adoption of the Sendai Framework, and new agreements are expected in the post-2015 development agenda, with Sustainable Development Goals replacing the Millennium Development Goals. New agreements are also expected on how development is financed, and there remains the expectation of a new international agreement on climate change.
Given their far reaching impact, these developments are critical, particularly when viewed from the perspective of the small island developing state.
Notwithstanding the global consensus, serious challenges remain for SIDS, and for the foreseeable future they will remain a special case for sustainable development.
However, with a global consensus and an avid commitment to the advancement of sustainable development in these countries, positive change is most certainly on the horizon.
George Nicholson is the Director of Transport and Disaster Risk Reduction and Anastasia Ramjag is the Research Assistant of the Directorate of Transport and Disaster Risk Reduction of the Association of Caribbean States.
Note: the opinions expressed in Caribbean Journal Op-Eds are those of the author and do not necessarily reflect the views of the Caribbean Journal. More
The Cayman Islands Airports Authority (CIAA) has unveiled the interior conceptual drawings for the multi-million dollar expansion project at Owen Roberts International Airport (ORIA).
Commenting on the design created by Florida-based firm RS&H Group, CIAA's CEO Albert Anderson said, "The interior design is very impressive and I am confident that once completed the new expanded airport will be a first-class terminal facility."
The CI$55 million expansion project should take around three years to complete and will nearly triple the current space at the airport. Construction on the first phase of the project is expected to begin this summer.
Here is the Cayman Islands Government's chance to save money and show their support for alternative energy. Covering the roof and parking lots with solar panels, and using LED lighting would set an example for Caymanians and Caymanian businesses to follow. Editor
The Global Ocean Commission and the Permanent Mission of Sweden to the United Nations are happy to invite you to their side eventon Wednesday 21 January, lunchtime, on the margins of the UN BBNJ negotiations.
Side Event: The Ocean We Need for The Future We Want
David Miliband, Co-chair Global Ocean Commission, President and CEO of the International Rescue Committee (IRC) and former UK Foreign Secretary
Lisa Emelia Svensson, Ambassador for Ocean, Seas and Fresh Water, Ministry of the Environment, Government Offices of Sweden
Shorna-Kay Richards, Minister and Deputy Representative, Permanent Mission of Jamaica to the UN
Max Diener, Legal Advisor, Ministry of Foreign Affairs of Mexico
The Global Ocean Commission report (www.globaloceancommission.org) released in June 2014 contains eight proposals directly related to the governance, sustainable use and conservation of marine biodiversity in Areas Beyond National Jurisdiction. The convening of the Global Ocean Commission came from the realization that the context of modern ocean governance had changed markedly since UNCLOS was negotiated.
This side event will consider these solutions and proposals which the Global Ocean Commission has tabled for a future healthy ocean in the context of the BBNJ negotiations and the potential new implementing agreement.
The Co-chair of the Global Ocean Commission will give insights to their deliberations drawn from the diverse backgrounds of the Commissioners, and will reflect on the compelling evidence which lead them to advocate strongly in their report for a new UNCLOS Implementing Agreement for the high seas.
The other eminent speakers will focus on the intimate linkages between the BBNJ process and the potential impact the outcome of these negotiations will have on the other ocean issues.
The event is the preeminent meeting place for international leaders and energy experts at the forefront of the clean energy movement. Securing energy independence and developing a clean energy industry that promotes the vitality of our planet are two reasons why it is critical to reaffirm already established partnerships and build new ones throughout the Asia-Pacific region and the world. The summit will provide a forum for the high-level global networking necessary to advance this emerging clean energy culture.
Join a broad international community of over 1500 attendees from over 25 countries!
Keynote speakers include:
Neil Abercrombie, Governor, State of Hawai‘i
Major General Anthony Crutchfield, US Army, Chief of Staff, US Pacific Command (PACOM)
Kyle Datta, General Partner, Ulupono Initiative
Captain James Goudreau, Director, Navy Energy Coordination Office, US Navy
Rahul Gupta, Principal, Public Service Practice, Sustainability, and Cleantech, PricewaterhouseCoopers
Mike Howard, President & CEO, Electric Power Research Institute (EPRI)
Taholo Kami, Regional Director, IUCN Oceania Regional Office (ORO)
Richard Lim, Director, State of Hawai‘i, Department of Business, Economic Development & Tourism (DBEDT)
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.970960259437561,
"language": "en",
"url": "https://www.bondora.com/blog/when-can-you-peak-financially/",
"token_count": 1012,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0458984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:09350fab-1c41-4fa8-acea-44fa7d9590ae>"
}
|
We all make assumptions about how our financial life will play out. We will first get an education, learn a skill or trade, begin our working careers, and work our way up the ladder to a higher income where we can afford the life we want. But is this really true? At what point do most people actually reach their financial peak? Let’s look at what the statistics say about how your financial life is likely to play out.
The 20s and 30s
You will most likely spend more money than you earn. Expenses can take the form of credit cards, car loans, or student debt. Obviously, spending more than you earn is not a sustainable way to build your wealth, but at this age, you understand that it can often be a necessary evil. As a result, you might still be relying on some help from your parents to make ends meet and survive the early stages of your career.
At this stage, you may have just begun your working career. But just because your career is only getting started doesn’t mean you can’t quickly work your way up the income ladder. Most people will see their biggest wage increase in their 20s and 30s, with women’s incomes increasing slightly faster than men’s.
Unfortunately, women often face prejudice in the workplace. Having children and taking time off from work can hold back their earning potential. Dominie Moss, the founder of the London-based executive search firm The Return Hub, says, “Once women come back to work it can often be assumed they are no longer as focused, ambitious or committed as their male counterparts and [are] therefore not given the best accounts [or] juiciest deals to work on.”
The 40s and 50s
If you are a woman with a college degree, you will most likely reach your peak earning potential at 40 years old. For men, peak earning happens later, generally in the early to mid-50s. This also depends on your career path. For instance, physical laborers will likely see their earnings peak at an earlier age and subsequently decline as their bodies age and can no longer keep up with the physically demanding work. Alternatively, those seeking executive positions like CEO might see their earnings peak in their late 50s or later, as they finally reach the pinnacle position of their careers.
As you approach the peak of your income, you could be making 127% more than when you first started your career. This might sound like a significant sum, but it’s to be expected given that you have been working for several decades.
At this age, you are more likely to take on the biggest expense of your life: a home purchase. By your 40s, you have saved enough for a down payment on a house and can reasonably afford to make a monthly mortgage payment. Just don’t overextend yourself by purchasing a house you can’t afford.
The 60s and 70s
You have finally reached your highest net worth as an individual. Your retirement accounts have had decades to accumulate compound interest, and you might be close to or have already paid off your house in full. With your children out of the house and your expenses slowing, you can begin to think about retirement.
What does it all mean?
It’s all well and good to look at these facts by themselves, but what do they mean for your financial future? Let’s take a look at some things we can learn from looking at these numbers:
- Speed up your savings earlier – As your earnings quickly rise in your 30s, you should consider increasing your savings even more. Your income will not increase this fast in the coming decades, so it’s better to save now to get ahead.
- Compound interest is your friend – If you want to hit peak financial wealth in your 60s, you'd better take advantage of compound interest early on. The magic of compound interest is that you will begin earning interest on top of your already earned interest, thereby exponentially growing your wealth over time (see the sketch after this list).
- Affording retirement – When you finally reach the golden years of your life, will you be financially prepared to retire? Can you expect that your biggest expenses (like your mortgage) will be completely paid off, and your retirement accounts will be enough for the rest of your life? It’s best to think about these questions early in your career so you can budget and plan for the future.
- Time can be your friend or your enemy – As you can see, when you lay out the decades of your life, it becomes apparent that time can work for or against you. If you save early, budget for the future, and work your way up the income ladder, you should have no problem planning a secure financial future. But should you not take these steps seriously, you may wake up one day wondering why you can’t afford the life you really wanted.
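To make the compound interest point concrete, here is a minimal sketch in Python; the 7% annual return and $5,000 yearly contribution are illustrative assumptions, not figures from this article.

```python
def future_value(annual_saving, rate, years):
    """Future value of a fixed annual contribution with yearly compounding."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_saving) * (1 + rate)  # interest earned on interest
    return balance

# Saving $5,000/year at a 7% return, starting at 25 vs. waiting until 35:
early = future_value(5_000, 0.07, 40)  # age 25 to 65
late = future_value(5_000, 0.07, 30)   # age 35 to 65
print(f"Start at 25: ${early:,.0f}")   # roughly $1,068,000
print(f"Start at 35: ${late:,.0f}")    # roughly $505,000
```

Ten extra years of compounding roughly doubles the final balance, which is why saving earlier matters more than saving more later.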
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9383636713027954,
"language": "en",
"url": "https://www.entrepreneur.com/encyclopedia/primary-market-research",
"token_count": 570,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.10888671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3b518654-596e-4913-a4ef-2cb126b2ba4d>"
}
|
Primary Market Research
When conducting primary market research, you can gather two basic types of information: exploratory or specific. Exploratory research is open-ended, helps you define a specific problem, and usually involves detailed, unstructured interviews in which lengthy answers are solicited from a small group of respondents. Specific research, on the other hand, is precise in scope and is used to solve a problem that exploratory research has identified. Interviews are structured and formal in approach. Of the two, specific research is the more expensive. Figure 3.1 provides a sample cost analysis form for different research methods.
When conducting primary research using your own resources, first decide how you'll question your targeted group: by direct mail, telephone, or personal interviews. If you choose a direct-mail questionnaire, the following guidelines will increase your response rate:
- Questions that are short and to the point;
- A questionnaire that is addressed to specific individuals and is of interest to the respondent;
- A questionnaire of no more than two pages;
- A professionally-prepared cover letter that adequately explains why you're doing this questionnaire;
- A postage-paid, self-addressed envelope to return the questionnaire in. Postage-paid envelopes are available from the post office;
- An incentive, such as "10 percent off your next purchase," to complete the questionnaire.
Even following these guidelines, mail response is typically low. A return rate of 3 percent is typical; 5 percent is considered very good. Phone surveys are generally the most cost-effective. Some telephone survey guidelines include:
- Have a script and memorize it-don't read it.
- Confirm the name of the respondent at the beginning of the conversation.
- Avoid pauses because a respondent's interest can quickly drop.
- Ask if a follow-up call is possible in case you require additional information.
In addition to being cost-effective, speed is another advantage of telephone interviews. A rate of five or six interviews per hour is typical, but experienced interviewers may be able to conduct more. Phone interviews also can cover a wide geographic range relatively inexpensively. Phone costs can be reduced by taking advantage of less-expensive rates during certain hours.
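As a rough illustration of why phone surveys are often the more cost-effective choice, the sketch below compares cost per completed response; every figure in it is an assumption chosen for illustration, not data from the text.

```python
# (cost per attempt in dollars, completion rate)
methods = {
    "direct_mail": (1.50, 0.03),   # postage and printing; ~3% return rate
    "telephone":   (4.00, 0.10),   # interviewer time per dialed number
}

for name, (cost_per_attempt, completion_rate) in methods.items():
    cost_per_response = cost_per_attempt / completion_rate
    print(f"{name}: ${cost_per_response:,.2f} per completed response")
# direct_mail: $50.00, telephone: $40.00 under these assumptions
```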
One of the most effective forms of marketing research is the personal interview, which can take either of these forms:
- A group survey. Used mostly by big business, group interviews or focus groups are useful brainstorming tools for getting information on product ideas, buying preferences, and purchasing decisions among certain populations.
- The in-depth interview. These one-on-one interviews are either focused or nondirective. Focused interviews are based on questions selected ahead of time, while nondirective interviews encourage respondents to address certain topics with minimal questioning.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.936201810836792,
"language": "en",
"url": "https://www.sphvalue.com/post/investing-in-financially-feasible-renewable-energy-projects",
"token_count": 2586,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1884765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6f1577cf-78f0-4106-b3cd-bd233d4e31e4>"
}
|
Investing In Financially Feasible Renewable Energy Projects
By: Thomas Pastore
A June 2010 article in the Financial Times.com discusses the debate over global warming. Ms. Jane Lubchenco, the administrator of NOAA (US National Oceans and Atmospheric Administration) states that, “the average temperature in the world has increased by 0.56 degrees Celsius (or 1 degree Fahrenheit) over the past 50 years. The rise may seem small but it has already altered our planet…glaciers and sea ice are melting, heavy rainfall is intensifying, and heat waves are more common.”
One of the foremost concerns of citizens in the United States and worldwide is reducing carbon emissions. Environmental and climate concerns have resulted in government agencies and businesses making significant capital expenditures in the implementation of renewable energy projects. Some economists have called this the next industrial revolution. Renewable energy projects represent a paradigm shift in energy consumption planning for any organization. This article focuses on the implementation of a solar photovoltaic ("PV") system in Southern California by a non-profit organization. In general, however, these analyses can also be applied to businesses as well as to other renewable energy or hybrid systems.
Federal and State incentives along with a number of different financing structures can help make the implementation of renewable energy systems feasible.
SUCCESSFUL ENERGY PLAN IMPLEMENTATION
The following major steps need to be taken when putting together a successful energy consumption strategy:
1. Assessing the energy consumption under the existing overall infrastructure, such as building insulation, equipment age, and types of light bulbs, just to name a few. This is called a demand side audit.
2. Implementing the necessary changes, as a result of the demand side audit, to minimize energy consumption and make the overall infrastructure most efficient.
3. Installing efficient and cost effective renewable energy central plants, including PV systems.
4. Continuously monitoring energy consumption levels and patterns.
5. Developing a curriculum program and ongoing training around the implementation and operation of a renewable energy central plant.
Conducting proper due diligence is essential to evaluate installation, maintenance and contractual obligations. Due diligence procedures entail various analyses of proposed PV systems as follows:
1. Engineering analyses of design proposals, installation sites, and ongoing maintenance.
2. Financial analyses of a PV system’s implementation costs, financing costs, operating costs, and maintenance costs.
3. Legal analyses of proposed contracts between a non-profit organization, the PV system installer, and the investor who becomes the owner once the PV system is energized.
4. Project management and analyses from the perspective of the non-profit organization.
LIFE-CYCLE FINANCIAL ANALYSIS OF A PV SYSTEM
The life-cycle analysis must encompass all cash flows during the life of a PV system, from the preliminary design stage through the removal of the PV system once it ceases operations.
Several different designs may be presented from the original preliminary design to the ultimate one that meets an organization’s current and anticipated near future energy needs.
Considerations important to this analysis include:
A PV system may be fully financed or upfront capital investment may be required.
Applications for all available incentives, both federal and state.
Structuring a power purchase agreement (“PPA”) or an equipment lease agreement with a third party that commences once the PV system is energized.
Maintenance of the PV system, along with production guarantees from the maintenance provider, for a negotiated time period, usually of 20 years or less.
Current energy costs escalated periodically to reflect expected energy costs could be used as the baseline for calculating savings during the life of the PV system.
Once the PV system stops operating, it has to be replaced or removed, also known as decommissioning costs.
It is important to note that maintenance costs are relatively minimal since the PV panels are usually guaranteed for 20 years, and the inverters are guaranteed for 10 years. A PV system could operate for as many as 25 to 40 years.
Monetary incentives are available from both federal and state programs to assist with the cost of installing PV systems. Federal incentives are provided by the National Energy Policy Act of 2005, while state incentives are usually provided through the local utility company servicing the area and the California Public Utility Commission (“CPUC”).
Federal incentives include an Investment Tax Credit (“ITC”) or a Treasury Cash Grant (“TCG”) equal to 30% of eligible costs. Another incentive comes from the IRS’s Modified Accelerated Cost Recovery System, under which businesses can recover investments in solar, wind, and geothermal property placed in service after 1986 over a five-year schedule of depreciation deductions. Since the economic life of such property is 25 to 40 years, this incentive allows for relatively rapid recovery of deductable depreciation of an investment compared to the expected economic life of the property installed.
The California Solar Initiative (“CSI”), which is regulated by the CPUC, offers an incentive to further reduce the cost of installing PV systems. The CSI is a performance based incentive (“PBI”) that is calculated based on projected kilowatt hours produced by a PV system. Different PV system size limits exist under each utility company. In addition, the CSI is composed of a number of declining steps, where the PBI rebate rate decreases as the number of MW installed increases by certain increments.
Incentives may change considerably over time. It is important to keep abreast of changes in incentives and formation of new incentives. Information on all federal and state incentive programs around the country is available at the Database of State Incentives for Renewables and Efficiency, www.dsireuse.org/.
Non-profit organizations are not able to benefit from any tax credit or depreciation incentives since they do not generate taxable income. For-profit third party ownership allows non-profit organizations to indirectly benefit from all available incentives that would otherwise not be available. This benefit is passed through to the non-profit organization in the form of a lower payment under the chosen financing structure, as discussed next.
POWER PURCHASE AGREEMENTS
A PPA can be a contract between a non-profit organization and a third party, typically an investor, where the non-profit organization purchases power produced by a PV system based on a pre-determined price per unit, i.e., $/kWh produced. A PPA specifically for the purpose of providing a solar energy system is also known as a solar service agreement. A typical PPA term is 20 years. Such an agreement allows a non-profit organization, which cannot fully utilize all available incentives, to indirectly benefit from them through a lower PPA energy rate.
EQUIPMENT LEASE AGREEMENTS
Under an equipment lease agreement, the installer sells the PV system to a third party, typically an investor, which then leases the PV system to a non-profit organization. As the PV system owner, the lessor can apply for and receive the TCG. The lease payment is a fixed amount and, unlike a PPA, does not vary with production. A typical lease term is 15 years. Tax counsel should be consulted to assure that the terms of the lease meet the criteria of an operating lease. All available incentives are reflected in the form of a lower lease payment.
MEASURING SAVINGS FROM A PV SYSTEM
Determining if a PV system is financially feasible requires comparing the annual costs to the purchasing party (i.e., the non-profit organization) over the life of the PV system with the utility costs it offsets during that period.
The first step in calculating the utility cost that is being offset by the PV system production is establishing the appropriate utility rate per kilowatt hour, and then applying it to the PV system’s kilowatt hours produced. For example, Southern California Edison utility rates include charges for energy use, by customer, and by demand. Energy use charges involve delivery service and generation charges based on time of use (“TOU”), customer charges and related facilities, and a power factor adjustment. Demand charges are not TOU charges. Time related demand depends on TOU during summer (12 a.m. on the first Sunday in June through 12 a.m. of the first Sunday in October) and winter (the remainder of the year). TOU rates are based on three time periods, on-peak, mid-peak, and off-peak, with maximum demand rates established for each time period based on the maximum average kilowatt input recorded during any 15-minute interval during each month. On-peak hours are noon through 6 p.m. on summer weekdays, except holidays. Mid-peak hours are 8 a.m. to noon and 6 p.m. to 11 p.m. on summer weekdays, except holidays, and off-peak hours account for all remaining hours.
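A minimal sketch of this baseline offset calculation, assuming hypothetical TOU rates and PV production figures (these are illustrative and not actual Southern California Edison tariffs):

```python
tou_rates = {            # $/kWh by time-of-use period (assumed)
    "on_peak": 0.32,
    "mid_peak": 0.18,
    "off_peak": 0.11,
}

monthly_pv_kwh = {       # PV production attributed to each period (assumed)
    "on_peak": 2_400,
    "mid_peak": 3_100,
    "off_peak": 1_500,
}

# Offset utility cost = rate * kWh produced, summed across TOU periods
monthly_savings = sum(tou_rates[p] * monthly_pv_kwh[p] for p in tou_rates)
print(f"Estimated monthly offset: ${monthly_savings:,.2f}")  # $1,491.00
```

In a full analysis, these monthly savings would be escalated over time to reflect expected utility rate increases and used as the baseline for the life-cycle comparison.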
FINANCIAL FEASIBILITY ANALYSES
There are three primary methods of financial analyses.
The first is the net present value (“NPV”) method, which is the sum of the present values of the annual cash flows during the life of the PV system minus the present value of the investments. An appropriate discount rate accounts for the time value of money and uncertainties associated with the cash flows. This method is important, as it shows the net value of the PV system from year to year.
Another method is based on the internal rate of return ("IRR"), which is the discount rate that makes the project's cash flows and investments have a zero NPV. It is important to define a threshold IRR prior to evaluating the PV system. An IRR of 0% does not make a project financially feasible, as it fails to compensate an investor for the time value of money and the uncertainties associated with future cash flows.
The last method is the payback period, which is the length of time required to recover an initial investment through cash flows generated by the investment. The payback period is important when considering an organization’s financial ability to implement a PV system.
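All three tests can be captured in a few lines of code. The sketch below applies them to a hypothetical PV system (a $250,000 installed cost, $22,000 per year in offset utility costs over a 20-year term, and a 6% discount rate); none of these figures come from the article.

```python
def npv(rate, cash_flows):
    """cash_flows[0] is the (negative) initial investment at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Bisection search for the discount rate where NPV crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_years(cash_flows):
    """Years until cumulative (undiscounted) cash flow first turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None  # investment never recovered within the horizon

flows = [-250_000] + [22_000] * 20
print(f"NPV @ 6%: ${npv(0.06, flows):,.0f}")      # about $2,300 (marginally feasible)
print(f"IRR: {irr(flows):.1%}")                    # about 6.1%
print(f"Payback: {payback_years(flows)} years")    # 12 years
```

Note how sensitive the result is: under these assumptions the project barely clears a 6% hurdle rate, which is why the system cost fluctuations discussed below can swing a project from feasible to unfeasible.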
All financial feasibility analyses are highly dependent on a PV system’s cost, which in turn is subject to market price fluctuations of commodity type raw materials, such as PV panels and steel. If these price fluctuations cannot be controlled in the procurement process, there is the potential for a significant adverse impact. This could make a PV system financially unfeasible.
OTHER QUANTITATIVE BENEFITS
Additional quantitative benefits to the PV system owner include carbon credits, renewable energy credits (“RECs”), and possible employee health care savings as a result of a cleaner environment. Qualitative externalities include reduction of pollution and greenhouse gas emissions, reduced dependency on utility providers, and greater control over energy price volatility. In addition, PV systems can provide power during traditional power outages, whether due to natural disasters or any other reason.
Installing PV systems in parking lots and on rooftops or other existing structures provides shade while not infringing on an organization’s operations and not requiring the acquisition of additional space. Finally, minimal maintenance cost is associated with PV systems, with long-term reliability of 25 to 40 years.
Financial analysis is critical to assessing the feasibility of an energy consumption plan. A complete financial analysis includes all factors present during the life-cycle of a PV system. These factors include, but are not limited to, the financing structure terms, investment costs, available incentives, utility energy costs, and externalities. Proper application of financial analyses to determine the financial feasibility of a PV system provides a critical portion of the overall due diligence procedures in implementing a PV system.
ABOUT THE AUTHOR
Thomas Pastore, ASA, CFA, CMA, MBA
Mr. Thomas E. Pastore is Chief Executive Officer and co-founder of Sanli Pastore & Hill, Inc. Mr. Pastore is an Accredited Senior Appraiser (ASA), Business Valuation Discipline, of the American Society of Appraisers, a Chartered Financial Analyst (CFA) Charterholder, a Certified Management Accountant (CMA), and received his Masters in Business Administration (MBA). He has valued over 2,000 businesses during his career, including numerous energy and clean technology companies. He regularly testifies in court as an expert witness. Mr. Pastore frequently speaks on business valuation to professional organizations.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9553278088569641,
"language": "en",
"url": "http://classonline.org.uk/blog/item/first-100-days-eradicating-poverty",
"token_count": 1187,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.494140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c5133e7c-72ca-45c8-811b-3117cc1f542a>"
}
|
First 100 Days - Eradicating Poverty
Moussa Haddad is Senior Policy & Research Officer at Child Poverty Action Group.
The next government will inherit a child poverty crisis. According to Institute for Fiscal Studies (IFS) estimates, there are four million children living in poverty in 2014/15 after housing costs are taken into account. This is an increase of 400,000 since 2010, and 300,000 more are projected to be tipped into poverty by 2020.
Even as all the main parties remain committed to ending child poverty by 2020, we are moving dramatically in the opposite direction. This amounts to a costly failure: CPAG estimates that child poverty costs the country at least £29 billion a year in services that deal with the effects of poverty, and, in the longer-term, in losses to the economy from wasted potential. It is also a social and a moral failure, denying millions of children the childhoods they deserve. Focusing on child poverty requires us to address structural inequalities that produce poverty across society as a whole: it is an issue that affects and should concern us all. So what can a new government do about this widening social deficit?
The first thing to be said is that there is no silver bullet. Poverty is absolutely not inevitable – and looking at what has worked in other times and places is valuable – but nor is it something that can be eradicated overnight. For that reason, poverty must be tackled in three stages: immediate actions to make poverty the priority it needs to be; practical steps that can move us in the right direction over the course of the next Parliament; and constructing a long-term plan for ending child poverty once and for all.
The incoming government cannot do everything in its first 100 days, but it can use that time to strike the right tone and to set out its priorities. Given the extent of the looming crisis, one of its first steps must be to commit to making child poverty a national priority in its programme for government. A new government should use its first Spending Review to mandate preventative spending today to avoid the enormous costs of child poverty tomorrow, changing the way government thinks about public spending so that it takes a long-term view of costs and benefits. It must signal an end to the Robin Hood in-reverse that has characterised austerity politics. Research from LSE, Manchester and York universities has shown that the poorest half of the population have lost income over this Parliament while those in the richest half have gained, all without any overall impact on deficit reduction. More of the same is not socially sustainable.
One significant symptom of hardship is the rapidly rising use of food banks, with the Trussell Trust network alone giving emergency food to over 1 million people in 2014/15, more than a third of them children. CPAG’s experience in providing welfare rights advice in a food bank since 2013 – together with research looking into the experiences of more than a thousand food bank users across the country – has given us a rich understanding of the drivers of the phenomenon. In the majority of cases, food bank use is driven by an acute financial crisis caused by the failures of the benefits system. Urgent reforms are needed at two levels: first, technical changes to existing benefit rules and regulations, as well as improvements in administration within the Department for Work and Pensions, must be made to ensure that delays and errors in the benefit system – which are causing significant hardship to families – are minimised. A progressive government must also commit to an independent review of the benefits sanctions system, to ensure that sanctions are genuinely used as a last resort. Second, the next government must fix the system of emergency support – the safety net beneath the safety net of local welfare provision, short-term benefit advances, and hardship payments – that is designed to protect people when things do go wrong. This should include raising awareness of these provisions, simplifying their application procedures and ensuring that dedicated funding is in place to meet need.
Another immediate step the incoming government can make is to help protect families from rising living costs. Children’s benefits have been chipped away at over the course of the last Parliament, with child benefit losing 14 per cent of its value during that time. A ‘triple lock’ for children’s benefits, as for pensions, would ensure that they fall no further – and are gradually restored to their former value. Better work incentives under Universal Credit – particularly increased work allowances and new allowances for second earners – are a crucial aspect of making sure that Universal Credit’s poverty-fighting potential is realised.
Perhaps the most important thing the new government must do is reinvigorate the fight against child poverty with a concrete and credible plan for its eradication. This is not the place to write that plan, but we know from past experiences what works. During the 2000s, financial support for families with children, helping more parents into paid work and ensuring childcare provision, were key pillars of the success in reducing child poverty by over a million. To that we can add action on such structural issues as low pay, the housing crisis and education. Overcoming child poverty requires a truly cross-governmental approach, and a genuinley progressive government must be honest about the scale of the challenge – and use that as a driver in producing a plan for eradicating poverty that touches on all these areas, both tackling the long-term determinants of poverty and alleviating it in the here and now.
Whatever form the new government takes, it will contain parties and politicians who have loudly proclaimed their commitment to ending child poverty. Today, the economy is growing again and the fiscal deficit is falling – but a costly social crisis is looming. Now is the time for politicians to make good on those commitments, and give our children the childhoods they deserve. The hard work starts now.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9707416296005249,
"language": "en",
"url": "http://www.susannealleyn.com/ethereum/bitcoin-technology-training-you-to-deal-and-invest.html",
"token_count": 383,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5fd51dbc-40fb-4149-bc10-e64451972a10>"
}
|
Cryptocurrency is digital currency secured by encryption and managed by blockchain technology, which serves as the underlying financial infrastructure of cryptocurrency. Cryptocurrency does not come under the purview of any private or governmental financial institution and is managed independently. Many cryptocurrencies are in existence, such as Bitcoin, Ethereum, Ripple, and more: almost 2,000 in total, although a few of them cannot be mined. Each of these currencies runs on its own blockchain, which operates independently and has its own rules for exchange and mining, and these projects have also developed trading markets for the exchange of cryptocurrencies.
If you want to generate bitcoin, you can do so through a process known as mining. Mining is a process in which transaction records are added to bitcoin's public ledger. People who generate bitcoins are known as miners. Bitcoin can be generated by solving mathematical computations. However, the point to remember is that the computations get harder with the formation of every new bitcoin. Bitcoin generation does not require any kind of investment, but as a miner you may have to spend a good amount on installing the mining software.
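For intuition, here is a toy proof-of-work sketch of the kind of "mathematical computation" miners solve: finding a nonce so the hash of the block data falls below a target. Real Bitcoin mining hashes block headers at vastly higher difficulty, so this is illustrative only.

```python
import hashlib

def mine(block_data: str, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 hash is below the difficulty target."""
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce
        nonce += 1

# Raising difficulty_bits makes each solution exponentially harder to find,
# mirroring how the computations get harder as more bitcoins are created.
print(mine("demo-block", difficulty_bits=16))
```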
If you don't want to spend that, you can still earn bitcoin for free, but it will require some serious effort from you. You will have to put your efforts into solving the software's computations, and then you can collect your coins online.
However, whichever type of Bitcoin wallet you use, it is always important to take every possible safety measure to protect your wallet against theft or loss. If you created a wallet without taking any safety measures and are now worried about its protection, it is time to stop worrying and create a new, secure wallet. The practice of recreating wallets arose precisely because of many individuals' bad experiences with wallet theft and loss.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9269715547561646,
"language": "en",
"url": "https://chainreaction.anl.gov/projects/fast-charging-longer-distance-batteries-for-electric-vehicles/",
"token_count": 529,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0361328125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:0d880382-985a-474c-b1eb-db802767e459>"
}
|
Critical Need for this Technology
As the automotive industry shifts away from fossil fuels to renewable energy, the need for better storage solutions has grown. To meet these needs, there is a race to develop the next vehicle energy storage solution that is safe, cost-effective, and provides high energy density.
Current battery technology is cost prohibitive. The cost of a battery pack currently makes up about 40 percent of the total price of an electric vehicle. To make electric vehicles competitive to their internal combustion engine counterparts, several improvements need to be made to current state of the art batteries. Namely, the cost, energy density, charge rate and size are some of the most important features.
SiMat EnerTech's silicon-based batteries have an advantage over current technology because they provide very high capacity while reducing both the size and the charge time needed for operation, giving automotive designers more options.
Potential CO2 Reduction
Light-duty vehicle transportation is responsible for approximately 6,000 MtCO2e of emissions annually. Electric vehicles can reduce these emissions, and they cause between 28% and 72% of the emissions relative to an internal combustion engine vehicle. In practice, these emissions reductions will not be realized immediately due to the time required for wide adoption of new technologies. SiMat EnerTech’s technology has the potential to accelerate this adoption of electric vehicles through improved battery performance and lower cost. If SiMat EnerTech’s technology accelerates electric vehicle adoption by one year, compared to current predictions for electric vehicle adoption, the additional emissions reduced would average 65-125 MtCO2e per year over the next 30 years.
- Mainstream battery manufacturers, such as LG Chem and Panasonic.
- Startup battery manufacturers, such as Paraclete and Nanograf.
SiMat EnerTech's rechargeable batteries can apply to any application that uses batteries. The market available to SiMat's batteries ranges from consumer electronics all the way to electric vehicle batteries. Its goals are to bring high-energy, fast-charging and long-cycle-life batteries to vehicles such as hybrid electric vehicles, drones and electric planes.
R & D Status of Project
While still in the beginning stages of research, SiMat EnerTech has validated the feasibility of its technology in a laboratory environment. The company has tested its anodes in half cells and is planning full cell validation tests shortly.
Leon Shaw – Inventor, CEO and President
Chris Passolano – Chief Engineer
Primary industry: Energy Storage
Category: Rechargeable Lithium Ion Batteries
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9509537220001221,
"language": "en",
"url": "https://freestyleskaters.org/are-erc20-coins-forks-of-ethereum/",
"token_count": 1110,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.035888671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:48dc0c79-ec8c-48dc-97cf-5a5045dfdb69>"
}
|
What Is Ethereum (ETH)?
Ethereum is a decentralized open-source blockchain system that includes its own cryptocurrency, Ether. ETH works as a platform for numerous other cryptocurrencies, as well as for the execution of decentralized smart contracts. Ethereum was first described in a 2013 whitepaper by Vitalik Buterin. Buterin, along with other co-founders, secured funding for the project in an online public crowd sale in the summer of 2014 and officially launched the blockchain on July 30, 2015.
Ethereum's own stated objective is to become a global platform for decentralized applications, allowing users from all over the world to write and run software that is resistant to censorship, downtime and fraud.
Who Are the Creators of Ethereum?
Ethereum has a total of eight co-founders, an unusually large number for a crypto project. They first met on June 7, 2014, in Zug, Switzerland.
Russian-Canadian Vitalik Buterin is perhaps the best known of the lot. He authored the original white paper that first described Ethereum in 2013 and still works on improving the platform to this day. Prior to ETH, Buterin co-founded and wrote for the Bitcoin Magazine news website.
British programmer Gavin Wood is arguably the second most important co-founder of ETH, as he coded the first technical implementation of Ethereum in the C++ programming language, proposed Ethereum's native programming language Solidity and served as the first chief technology officer of the Ethereum Foundation. Before Ethereum, Wood was a research scientist at Microsoft. Later, he moved on to establish the Web3 Foundation.
Among the other co-founders of Ethereum are:
- Anthony Di Iorio, who underwrote the project during its early stage of development.
- Charles Hoskinson, who played the primary role in establishing the Swiss-based Ethereum Foundation and its legal framework.
- Mihai Alisie, who provided assistance in establishing the Ethereum Foundation.
- Joseph Lubin, a Canadian entrepreneur who, like Di Iorio, helped fund Ethereum during its early days and later founded an incubator for ETH-based startups called ConsenSys.
- Amir Chetrit, who helped co-found Ethereum but stepped away early in its development.
What Makes Ethereum Distinct?
Ethereum pioneered the concept of a blockchain smart contract platform. Smart contracts are computer programs that automatically carry out the actions needed to fulfill an agreement between multiple parties on the internet. They were designed to reduce the need for trusted intermediaries between contracting parties, thus lowering transaction costs while also increasing transaction reliability.
Ethereum's principal innovation was designing a platform that allowed it to execute smart contracts using the blockchain, which further reinforces the existing benefits of smart contract technology. Ethereum's blockchain was designed, according to co-founder Gavin Wood, as a sort of "one computer for the entire planet," theoretically able to make any program more robust, censorship-resistant and less prone to fraud by running it on a globally distributed network of public nodes.
In addition to smart contracts, Ethereum's blockchain is able to host other cryptocurrencies, called "tokens," through the use of its ERC-20 compatibility standard. This has been the most common use for the ETH platform so far: to date, more than 280,000 ERC-20-compliant tokens have been launched. Over 40 of these make the top-100 cryptocurrencies by market capitalization, for example, USDT, LINK and BNB.
How Is the Ethereum Network Secured?
As of August 2020, Ethereum is secured via the Ethash proof-of-work algorithm, which belongs to the Keccak family of hash functions.
There are plans, however, to transition the network to a proof-of-stake algorithm tied to the major Ethereum 2.0 upgrade, which launched in late 2020.
After the Ethereum 2.0 Beacon Chain (Phase 0) went live at the beginning of December 2020, it became possible to begin staking on the Ethereum 2.0 network. An Ethereum stake is when you deposit ETH (acting as a validator) on Ethereum 2.0 by sending it to a deposit contract, essentially taking on the role of a miner and thereby securing the network. At the time of writing in mid-December 2020, the Ethereum staking reward, or the amount of money earned daily by Ethereum validators, is about 0.00403 ETH a day, or $2.36. This number will change as the network develops and the number of stakers (validators) increases.
Ethereum staking rewards are determined by a distribution curve (based on participation and the average percentage staked): some ETH 2.0 staking rewards start at around 20% for early stakers, but will decline to settle between 7% and 4.5% annually.
The minimum requirement for an Ethereum stake is 32 ETH. If you choose to stake in Ethereum 2.0, it means that your Ethereum stake will be locked up on the network for months, if not years, until the Ethereum 2.0 upgrade is completed.
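As a back-of-the-envelope check on the staking figures above, here is a minimal sketch; the 0.00403 ETH/day reward and 32 ETH minimum come from the text, while the ETH price is an assumption chosen so the daily reward matches the quoted $2.36.

```python
daily_reward_eth = 0.00403   # quoted daily validator reward
stake_eth = 32               # minimum stake per validator
eth_price_usd = 586          # assumed price (~$2.36 / 0.00403)

annual_reward_eth = daily_reward_eth * 365
apy = annual_reward_eth / stake_eth
print(f"Annual reward: {annual_reward_eth:.3f} ETH "
      f"(about ${annual_reward_eth * eth_price_usd:,.0f})")
print(f"Approximate yield on a 32 ETH stake: {apy:.1%}")  # ~4.6%
```

The roughly 4.6% result sits at the lower end of the 4.5% to 7% range quoted above, consistent with rewards declining as more validators join.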
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9472193121910095,
"language": "en",
"url": "https://investinganswers.com/dictionary/e/equity-financing",
"token_count": 1684,
"fin_int_score": 5,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.056884765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:0be9fafb-4f6a-408e-9e08-02d7f8105dd1>"
}
|
What Is Equity Financing?
Equity financing occurs when a company aims to raise capital by offering investors partial ownership interest in the company. This type of financing allows the company to raise enough funds without taking out loans or incurring any debt. A business that wants to grow – but doesn’t have enough revenue or additional cash – may want to turn to investors to fund its growth.
How Does Equity Financing Work?
Equity financing involves the sale of the company's stock. A portion of the company’s ownership is given to investors in exchange for cash. That proportion depends on how much the owner has invested in the company – and what that investment is worth at the time of financing.
Ultimately, the final arrangement will be up to the company and investor. For smaller and private companies, the process typically involves some form of written agreement. For larger and public companies, the arrangement can be more complex (e.g. IPO offerings).
Is Equity Financing Long Term?
Equity capital is considered long-term. If the equity is publicly traded, investors may sell to other investors at any time, but the company can consider the equity to be long-term financing.
Common Types of Equity Financing
There are multiple ways that businesses can raise capital through equity financing:
1. Angel Investors
Angel investors are individuals who specifically provide funding for businesses. They typically have a sizable amount of cash on hand and are looking for good returns on their investments. Most angel investors look to fund startups or early-stage companies because they can shape the direction of the business from the beginning.
2. Mezzanine Financing
Mezzanine financing combines debt and equity financing. Typically, medium-sized businesses select this type of funding because it counts as equity on a company's balance sheet and provides businesses with a lower debt-to-equity ratio. The less a business relies on debt to fund its operations, the less risk there is. Therefore, this option can help attract more investors.
3. Royalty Financing
Also known as revenue commission, royalty financing occurs when investors provide cash for a company's expenses in exchange for a percentage of a product's sales. Since investors expect to receive immediate payments, the business needs to prove it's already generating revenue.
To determine whether it’s worth the investment, investors will want to look at proof such as profit and loss statements and/or a company’s balance sheet.
4. Venture Capital Firms
Venture capital firms are entities which provide funding to businesses in return for ownership or shares. Like angel investors, venture capital firms also look to invest in businesses that offer high rates of return. These firms use combined funds from multiple professional investors.
To “guarantee” growth, some venture capitalists will want to sit on the company board or take a mentoring approach to help its leaders. Venture capital firms will plan to exit from their equity position by selling the company to an acquiring company or by taking it public with its own IPO.
5. Initial Public Offering (IPO)
This type of financing occurs when a company chooses to offer shares on a publicly traded market for the first time. Since the company wants to transition to a publicly-traded company, it needs to comply with SEC guidelines.
Before shares become publicly available, companies will need to publish a prospectus, including detailed financial statements, to attract investors.
Crowdfunding involves selling shares of a company to the public. In other words, privately-held businesses attempt to raise money by selling part-ownership of a company to the public, usually through an unconventional channel such as social media.
Debt Financing vs. Equity Financing
Equity financing offers partial ownership of your business in return for a lump sum of money. The investor becomes a stakeholder in the company and therefore has a say in running it.
Debt financing on the other hand, occurs when business owners raise money by taking on loans. Investors who lend the company money become creditors and the company will pay the investors both the principal and a predetermined amount of interest.
Is Equity Financing More Expensive?
Equity financing is more expensive because the investor gains a claim on a portion of the business's future earnings. In debt financing, a business only has to pay back a loan, with interest, over a predetermined period of time.
How to Get Equity Financing
The two main methods of obtaining equity financing are by seeking private funding sources or offering up public shares of the company. Private funding methods tend to be simpler since they don’t require as many formalities as public offerings.
For instance, private sources of funding tend to require that the company strikes an agreement directly with the investor, while offering up public shares is a more complicated legal process. In either case, companies will need to create a business plan (including financial projections) to show potential investors that its leaders have the expertise to grow and sustain the company.
How to Calculate Equity Financing
The shareholder offers an initial amount for a percentage of the company, and the total amount of capital in the company grows. As the company's valuation continues to increase, so will the value of the shareholder's stake.
For example, a company is currently valued at $600,000 and an investor wants to invest $400,000 for a total company value of $1 million. The company owner(s) would then control 60% of the shares of the company, having sold 40% of the shares of the company to the investor through equity financing.
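A minimal sketch of this ownership math, using the numbers from the example above:

```python
pre_money_value = 600_000   # value of the company before the investment
investment = 400_000        # new capital from the investor

post_money_value = pre_money_value + investment
investor_share = investment / post_money_value
founder_share = pre_money_value / post_money_value

print(f"Post-money valuation: ${post_money_value:,}")   # $1,000,000
print(f"Investor owns {investor_share:.0%}; "
      f"owners keep {founder_share:.0%}")                # 40% / 60%
```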
Equity Financing Examples
Let’s take a look at some equity financing examples.
Equity Financing Example #1
Let’s say an investor offers $100,000 for a 10% stake in Company ABC. This means the current value of Company ABC would be $1 million ($100,000 * 10 = $1 million, or 100% of the company’s capital).
In five years, Company ABC is valued at $2 million. This would mean that the investor’s share would be worth $200,000 – twice the original funding amount.
Equity Financing Example #2
Company XYZ generates $1 million in product sales and wants to grow even more. An investor wants to create a royalty financing agreement where it provides Company XYZ with $50,000 for 6% of its revenue each year.
This means if Company XYZ generated $1 million, the investor would receive $60,000 each year. Or, if sales grew to $2 million annually, the investor would make $120,000 annually.
Pros of Equity Financing
There are plenty of benefits of equity financing:
Networking Opportunities
Companies may gain access to their investors' networks, which can potentially help them grow (whether that means attracting more investors or finding mentoring opportunities).
Less Risk
Getting money from shareholders is not as risky as incurring debt. If the business doesn't generate as much revenue (or worse, goes bankrupt), investors won't expect repayment.
Flexibility with Funds
Money from investors can be used by the company to grow or increase revenue. Depending on the arrangement, there may also be no obligation to pay it back in regular installments.
Cons of Equity Financing
Although equity financing can be beneficial to many companies, there are some pitfalls to watch out for:
More Time Spent on Bookkeeping
Since investors and/or stakeholders need to see how the company is faring, it's especially important that accounting and reporting are updated and accurate. In other words, investors want visibility into the company's finances, so the company must be able to provide it.
Loss of Control
To receive funding, companies will have to relinquish some control over how the business is run.
Table of Contents
- What Is Equity Financing?
- How Does Equity Financing Work?
- Common Types of Equity Financing
- Debt Financing vs. Equity Financing
- How to Get Equity Financing
- How to Calculate Equity Financing
- Equity Financing Examples
- Pros of Equity Financing
- Cons of Equity Financing
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.963001012802124,
"language": "en",
"url": "https://redesignmobile.com/tag/prepaid/",
"token_count": 217,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.224609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:59f1e484-5928-4326-845e-9a173dbd1db6>"
}
|
In November, the U.S. Consumer Financial Protection Bureau (CFPB) published proposed regulations seeking to protect consumers in a market that has mostly been ignored: prepaid banking products.
The prepaid product market has grown from a $1 billion business in 2003 to an estimated $100 billion in 2014, with no signs of slowing. About 8 percent of all U.S. households use prepaid cards and accounts, reflecting the growing number of people without bank accounts. Typically, prepaid accounts provide a way to pay bills electronically or purchase items online for those without credit or debit cards.
What exactly are prepaid banking products? Mostly preloaded and reloadable debit cards, but the proposed regulations extend beyond any plastic in your wallet. The CFPB also seeks to regulate electronic code or any device designed to store prepaid funds or capable of being loaded with funds. That means the regulation would cover not only physical prepaid cards such as those issued by employers to pay wages or those provided to pay government benefits (such as unemployment), but also electronic wallets such as PayPal, Google Wallet or Venmo.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9384814500808716,
"language": "en",
"url": "https://revision.co.zw/indemnity-subrogation-and-contribution/",
"token_count": 261,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.16015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:3effb459-cc73-480a-ae55-5241d2658094>"
}
|
The wreck belongs to the insurance company after compensation is paid. Image credit damagemax.com
ZIMSEC O Level Commerce Notes: Insurance: Indemnity: Subrogation and Contribution
Is the compensation paid to the insured after they have suffered a loss.
It restores the insured to his/her previous financial position before he/she incurred the loss.
The insurer will pay only the amount of loss suffered by the insured.
The indemnity principle does not apply to life insurance (indemnity in this instance would have required the insurance company to raise the dead person!)
The principle does not allow the insured to make a profit.
It includes contribution, where a risk is insured with two or more companies, i.e. if the insured takes out policies with two different companies over the same risk, when the risk occurs both companies contribute (using a ratio based on premiums paid by the insured) towards the settlement of that loss.
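A small illustration of the contribution principle; the premium figures and the loss amount below are invented for this example.

```python
premiums = {"Insurer A": 600, "Insurer B": 400}  # premiums paid by the insured
loss = 10_000                                    # loss suffered when the risk occurs

total_premiums = sum(premiums.values())
for insurer, premium in premiums.items():
    share = loss * premium / total_premiums      # contribution in ratio of premiums
    print(f"{insurer} contributes ${share:,.0f}")
# Insurer A contributes $6,000; Insurer B contributes $4,000.
# Together they pay exactly the $10,000 loss, so the insured cannot profit.
```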
The indemnity clause also includes subrogation:
this gives all the rights over the damaged goods/property to the company settling the claim.
The damaged property belongs to the insurer.
This principle prevents the insured from making a profit out of their loss.
The insurance compensates the insured and takes possession of the scrap.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.948957622051239,
"language": "en",
"url": "https://www.jmscapitalgroup.com/blog/the-potential-of-a-roaring-twenties",
"token_count": 829,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.045166015625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c8ccf874-3ae9-4b62-bd58-133ebfd19f13>"
}
|
The Potential of a Roaring Twenties
Last week we discussed optimistic economic growth forecasts for 2021, highlighted by a Goldman Sachs projection of 8% growth, which would be the highest US economic growth rate since 1951. But even more critically, there are reasons to believe that longer-term growth prospects may be brighter than they have been in recent decades. While nothing is guaranteed, it does appear that some longer-term economic headwinds may be winding down, with additional tailwinds forming to give growth a push.
Neil Irwin lays out the case for economic optimism in some detail at https://www.nytimes.com/2021/03/13/upshot/economy-optimism-boom.html. The maturation of the global economy, combined with the fruits of technological innovation and expansionary fiscal and monetary policy, may combine to produce more benevolent economic conditions than we have seen for some time.
The globalization of the economy over the past twenty years, while good for long-term global growth, brought a great deal of disruption to the US economy. Cheap labor from China, India, and Mexico, while enabling lower prices for consumer goods, hurt many US firms and workers in the short run, particularly in manufacturing. The expansion of the internet and communication services enabled increased outsourcing of US jobs. However, much of the pain from these global transitions has passed, and with a rapidly expanding middle class in emerging markets, cheap labor simply isn't as cheap as it used to be.
Technological innovation also stands potentially ready to improve life and the US economy. In recent years we have appreciated the Peter Thiel joke that “we wanted flying cars, instead we got 140 characters.” But there’s often a significant lag between the initial promise of technology and its realization. Back in the 1980s the Solow paradox espoused the idea that the computer age could be seen everywhere but in the productivity statistics. It took another decade before productivity ramped up.
Cheap batteries and driverless car technology look to be poised to revolutionize the transportation sector. Plunging solar energy prices will have an increasing impact on the energy sector. The flexibility offered by remote work and zoom meetings can still be utilized even in a post-pandemic world. And the mRNA technology that brought us COVID vaccines may have many more applications across health care.
Finally, fiscal and monetary policy makers have largely shrugged off inflation and debt fears, and have been fully invested in taking advantage of low interest rates to spur economic recovery and growth. President Trump appointed Jerome Powell to chair the Fed, and pulled his party away from austerity politics. President Biden has shifted his party away from deficit concerns and towards economic expansion. Jerome Powell has overseen a shift in the Fed’s philosophy from preemptively hiking interest rates to forestall incipient inflation, to giving the economy more slack to run, in case the natural rate of unemployment is lower than what Fed models had previously suggested.
In a way that would have been nearly unthinkable a decade ago, economics has been taking a hiatus from its role as the "dismal science." It's still possible that the 2020s US economy will disappoint (for example, high US infrastructure costs may limit our ability to take advantage of technological breakthroughs), and we believe that the case Irwin outlines is close to a best-case scenario. But for now, there are reasons to think the economy is better positioned than it generally has been over the past two decades.
JMS Capital Group Wealth Services LLC
417 Thorn Street, Suite 300 | Sewickley, PA | 15143 | 412‐415‐1177 | jmscapitalgroup.com
An SEC‐registered investment advisor.
This material is not intended as an offer or solicitation for the purchase or sale of any financial instrument or investment strategy. This material has been prepared for informational purposes only, and is not intended to be or interpreted as a recommendation. Any forecasts contained herein are for illustrative purposes only and are not to be relied upon as advice.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9487403035163879,
"language": "en",
"url": "https://www.moneydibs.com/how-does-property-tax-work/",
"token_count": 347,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.032470703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:26aa3351-49be-4897-8917-13e1dfa0f499>"
}
|
In the United States, although different states apply different property tax rates, the amount owners owe in property tax is determined by a broadly similar framework of U.S. law.
Basically, property tax is determined by multiplying the property tax rate by the current market value of the property in question. Most taxing authorities recalculate the tax rate annually. Almost all property taxes are levied on real property, which is legally defined and classified by the state apparatus. Real property usually includes the land, structures, and other fixed buildings.
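A minimal sketch of that calculation, with a hypothetical assessed value and tax rate:

```python
assessed_value = 350_000   # assessor's fair-market valuation (assumed)
tax_rate = 0.011           # 1.1% levy set by the municipality (assumed)

annual_property_tax = assessed_value * tax_rate
print(f"Annual property tax: ${annual_property_tax:,.2f}")  # $3,850.00
```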
Property owners are subject to the rates determined by the municipal government. A municipality will hire a tax assessor who assesses the local property, and the assessor will assign property taxes to owners based on current fair market values. Sometimes the assessor might be an elected official. Finally, this value will become the assessed value for the home.
The payment schedule of property taxes also varies by locality. Almost all local property tax codes provide the exact schedule that explains when the owner can discuss their tax rate with the assessor or formally contest the rate. If you do not pay your property taxes, the taxing authority may assign a lien against the property, and the buyers should always complete a full review of outstanding liens before purchasing any property.
Sometimes the property tax also includes the real estate tax. Many of us might have heard of real estate tax. The difference between the real estate tax and the property tax is that property tax can include both real property and tangible personal property, while real estate taxes are taxes on real property only. Tangible personal property might include personal belongings such as cars and boats. Thus, when calculating property taxes, the local municipality will also assess tangible personal property.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.8574703335762024,
"language": "en",
"url": "https://www.nu.edu/ourprograms/college-of-professional-studies/accounting-finance-economics/courses/fin675/",
"token_count": 411,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.04296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5805451d-0a53-4dd4-a1e7-4df1b4f176cd>"
}
|
This course emphasizes microeconomic concepts related to managerial decision-making. Students will learn to analyze the global business environment of industrialized and developing countries, and to think strategically, using micro and macroeconomics principles. Markets, consumers, producers, trade, distribution, welfare, tariffs, non-tariffs barriers, and monetary and macroeconomics issues of development and transitions will be discussed.
- Evaluate how output prices affect factor prices.
- Assess the impact of tariff on small and large countries.
- Analyze a country’s international investment position.
- Analyze the merits of arguments for international trade restrictions, including those based on infant industries, monopolies, strategic trade policy, externalities, the scientific tariff, competition with low-wage foreign suppliers, and so on.
- Evaluate the effect of economic growth on welfare.
- Appraise the activities of the foreign exchange markets.
- Evaluate the wide-ranging effects of opening international trade.
- Analyze the tariffs’ effects on resource allocation and income distribution.
Why Choose National University?
We’re proud to be a veteran-founded, San Diego-based nonprofit. Since 1971, our mission has been to provide accessible, achievable higher education to adult learners. Today, we educate students from across the U.S. and around the globe, with over 180,000 alumni worldwide.
Focus on one subject at a time — one month at a time — and finish your degree faster.
75+ Degree Programs
Choose from associate, bachelor’s, and master’s degrees, plus credentials and certificates.
On Campus or Online
Study when and where it’s convenient for you with evening, weekend, and 100% online classes.
Apply or transfer any time. Classes start monthly, and applications are accepted year round.
Attend class and learn onsite at one of over 20 locations in California.
As a Yellow Ribbon school, we offer tuition discounts to servicemembers and dependents.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.97515469789505,
"language": "en",
"url": "https://blog.hrdownloads.com/topic/financial-literacy",
"token_count": 203,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.25390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1352ecef-f8a9-46b6-b425-03700c7c232a>"
}
|
Canadians worry about money. More than two fifths of Canadians consider money to be their biggest stressor, and stories abound of people struggling under mortgages, bank loans, and student debts, or failing to save money for their future. Commentators ascribe some of this worry to problems of ‘financial literacy,’ the understanding people have (or lack) about money, especially about saving, spending responsibly, and investing.
Many Canadians now find themselves living from paycheque to paycheque, their financial situation so precarious that one emergency could be disastrous. A survey in the United States found that 47 percent of respondents would not be able to cover a $400 emergency, or would have to borrow money or sell something to cover it; another study found that more than a quarter of participants could not come up with $2,000 for an emergency in 30 days, and a further 19% could only come up with that much money in time if they pawned possessions or took out payday loans.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9557522535324097,
"language": "en",
"url": "https://finchmoney.com/blog/what-is-adjustable-gross-income-agi/",
"token_count": 1036,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0084228515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:532f0fca-cd9f-4911-bce8-042988320aa1>"
}
|
What is Adjusted Gross Income (AGI)?
Before filing your taxes, you must understand what adjusted gross income (AGI) is so you can potentially lower your taxable income. That means you’ll pay less in taxes. That’s a win in our book. In this article, we will talk about AGI, Deductions, and Tax Credits.
What is Adjusted Gross Income (AGI)?
Your adjusted gross income is the starting point for calculating income on which you’ll pay taxes. To get to this number, you will need to make subtractions from your gross income. Your gross income is all of the money you made or acquired during the year. Your gross income is not just limited to your job income but also includes dividends, capital gains, retirement distributions, and other income sources.
How Does Adjusted Gross Income Work?
Let’s dive deeper into how adjusted gross income works. Just like it sounds, your AGI is your gross income (the total amount of money you made or acquired over the year) adjusted. The allowable adjustments depend on your financial situation. That said, below are some common adjustments:
- Retirement account contributions
- Educator expenses
- Student loan interest paid
You can report each of these when you file your annual income tax return. These ‘above the line’ adjustments are subtracted from your gross income and result in your adjusted gross income.
What About Deductions?
While adjustments are commonly called deductions, there is a distinct difference between them. While adjustments get subtracted from your gross income, deductions get subtracted from your adjusted gross income. You can think of deductions as “step two” in your process of finding your final taxable income.
The IRS has a complete list of deductions, and these change often. You can choose to take the standard deduction or complete an itemized deduction. Common deductions include:
- Charitable contributions
- Home mortgage interest paid
- Health Savings Account (HSA) contributions
- Medical expenses
Do Tax Credits Work the Same Way As Deductions or Adjustments?
Tax credits do not work like adjustments or deductions. They are a “credit” back to the taxes you owe. The number of tax credits you can claim is dependent on your adjusted gross income. For example, childcare or the care of a dependent provides a certain amount of tax credit to individuals. This tax credit is based on your income levels.
Adjustments vs. Deductions vs. Credits
In summary, adjustments get subtracted from your gross income and result in your adjusted gross income. Then, deductions get subtracted from your AGI, and you can choose to itemize deductions or claim the standard deduction. Finally, tax credits provide you with a credit back to the total taxes you owe. All three of these components will reduce how much you owe in taxes for a given year.
How Do I Calculate Adjusted Gross Income?
The formula for adjusted gross income is as follows:
Gross Income – Adjustments = Adjusted Gross Income
Example of Calculating AGI
Let’s look at an example of how this might work. Lara Finch is reporting a total of $56,000 for her gross income. She also contributed $3,000 to a tax adjustable retirement account like an IRA. Also, she paid $250 in student loan interest. This means Lara Finch can subtract both of these qualified adjustments from her gross income resulting in an AGI of $52,750.
$56,000 - $3,000 - $250 = $52,750
She can further reduce her taxable income by claiming deductions and credits where she is eligible.
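As a minimal sketch of the two-step logic above (the standard deduction figure is a made-up placeholder, since the real amount depends on filing status and tax year):

```python
# Sketch of the AGI calculation from the Lara Finch example.
# The standard deduction below is a hypothetical placeholder.

def adjusted_gross_income(gross_income: float, adjustments: list[float]) -> float:
    """AGI = gross income minus above-the-line adjustments."""
    return gross_income - sum(adjustments)

agi = adjusted_gross_income(56_000, [3_000, 250])  # IRA + student loan interest
print(agi)  # 52750.0

# Step two: deductions come off AGI, not gross income.
ILLUSTRATIVE_STANDARD_DEDUCTION = 12_000  # placeholder figure
taxable_income = agi - ILLUSTRATIVE_STANDARD_DEDUCTION
print(taxable_income)  # 40750.0
```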
How Do I Report My AGI?
You will report your adjusted gross income when you file your income tax return each year. Most tax filing platforms will calculate this number for you based on the information you provide. Your final AGI will appear on the tax form, but where it appears depends on the type of tax form you complete.
For example, on the tax forms 1040 and 1040-SR, you can find your AGI on line 11. The line your AGI appears does change when the forms change, so your AGI for other years may be in different places. The IRS makes tweaks and updates to its forms each year.
If you need to find your previous adjusted gross income, you will need to review your tax return for that year. You can request this through the IRS.
What Is the Best Way To Lower My AGI?
One of the best ways to lower your AGI and benefit in the long term is by contributing more to your retirement accounts. In the example above, Lara Finch lowered her AGI by adjusting for student loan interest paid and retirement contributions. The student loan interest paid is great because she reduces her debt; however, the retirement contributions will set her up for long-term wealth.
If you are looking for ways to lower your AGI, focus first on your retirement plan, and see where you can increase your contributions.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9593682885169983,
"language": "en",
"url": "https://in.news.yahoo.com/best-currency-invest-2021-152621968.html",
"token_count": 877,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.36328125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:5d3937be-5a75-4f86-8767-f5874b36af98>"
}
|
The year that ended was horrible for many currencies. The global star, the US dollar, has lost value against many currencies, especially after the troubled presidential elections. However, it was not the only one to suffer the effects of the economic crisis caused by the pandemic.
This raises the question: what determines the value of a currency? The balance between the quantity of that currency and the quantity of goods or services that can be bought with it, that is, the so-called purchasing power parity model. If the amount of currency in circulation increases faster than the supply of goods and services, prices rise and the currency loses purchasing power, causing inflation. On the contrary, if the quantity of money is constant but the production of goods and the supply of services continues to grow, people start to give more value to money, leading to a general fall in prices (deflation) and in consumption.
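As a toy illustration of the purchasing power parity idea (all prices here are invented):

```python
# Purchasing power parity sketch: if the same basket of goods costs
# price_a in currency A and price_b in currency B, the implied PPP
# exchange rate is price_a / price_b. All prices are invented.

def implied_ppp_rate(basket_price_a: float, basket_price_b: float) -> float:
    """Units of currency A per unit of currency B under PPP."""
    return basket_price_a / basket_price_b

print(implied_ppp_rate(500.0, 100.0))   # 5.0 units of A per unit of B

# If the money supply in country A doubles and prices double with it,
# the implied rate doubles too: currency A has lost half its value.
print(implied_ppp_rate(1000.0, 100.0))  # 10.0
```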
Second question: who decides the amount of currency in circulation? Monetary policy is one of the “weapons” used by countries to stimulate their economies. As a result, central banks, such as the FED (United States), the ECB (European Union) or BOJ (Japan), determine the amount of currency that is issued and the interest rate at which they lend money.
For example, an expansionary monetary policy (a larger money supply) should, in theory, cause the value of that currency to fall. It must therefore be combined with the right interest rate, so that if rates fall while the economy is growing strongly, prices do not rise (inflation). Thus, indicators such as inflation, unemployment and gross domestic product are implicit in the capacity of demand, thereby influencing the value of the currency.
Of course, in a global world, this value is also driven by external factors (exports, foreign investment, tourism) and by demand at the international level. There are also countries that, as a rule, peg their exchange rate to other strong currencies, such as the US dollar or the euro. In a very simplistic way, it means that the central bank of these countries buys and sells its own currency in exchange for the currency to which it is pegged. But the fixed exchange rate regime is an exception.
Investment for adventurers
The short introduction we have just made to the world of currencies shows that the reasons currencies fluctuate are not that complex, but the variables to analyze are many. In addition, it is a market that operates 24 hours a day, five days a week. Thus, it is not enough to understand the possible implications of an increase in the interest rate and to master a set of data (current and future) about a country. You must always keep your eyes open, because it is possible to trade currencies at almost any time and from anywhere in the world.
The best strategies
Within the volatility associated with this type of investment, there are several ways to invest with greater or lesser risk. One of them is forex, an international market for the purchase and sale of foreign exchange. In the past, it was restricted to financial entities. Today, it is an open market not only for any investor, but also for traders and brokers, who sometimes take advantage of differing regulation across countries.
Getting started is very easy. Just follow these steps:
Register with a reliable broker.
Open a live chart for binary options.
Become familiar with a simple and successful strategy.
Another solution for investing in foreign currency is investment funds, either through treasury funds (we do not recommend it) or through medium-term bonds, taking into account our prospects for the evolution of the currency against the euro. In fact, we have some of these bond funds in our portfolio with the most defensive strategy.
The advantage of investing through a fund is to diversify across multiple currencies and to have someone who manages the investment for you.
In the midst of this forex adventure, the question that arises is which currencies are worth betting on. The forecast is not easy.
In our forecasts there are positive outlooks for the Swedish krona, the Japanese yen, the Norwegian krone, the Canadian dollar, the Brazilian real, the Swiss franc and the pound sterling.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9619100093841553,
"language": "en",
"url": "https://intempuspropertymanagement.com/can-all-electric-homes-be-cost-effective-in-california/",
"token_count": 1157,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.019775390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:71f29e5e-5205-485b-86f3-dab9c4977053>"
}
|
In California, residential electricity rates are among the highest in the nation, reports the Mercury News. However, average monthly utility bills are among the lowest because California homes use less electricity than those in other states. One of the reasons is that about 86 percent of California’s single-family homes, as well as most of the townhouses, mobile homes, and apartments, use natural gas for heating and cooking.
Moves to lower fossil fuel usage, including natural gas, because of global warming and climate change have inspired a lot of innovation in building fully electric homes. But this raises the question of whether the move will dramatically increase California residents’ utility bills.
Most likely, utility rates will go up, at least in the Bay Area. However, there are a lot of complicated variables at play and a difference of opinion, depending on whom you talk to.
Carmelita Miller, a lawyer with the Oakland-based social justice group Greenlining Institute, has expressed concerns about how efforts to fight climate change might penalize the state’s poorest residents. “It’s a really complicated topic. We want to make sure we end up with healthier people living in their homes and affordable access to clean energy,” she said.
Rock Zierman, who is CEO of the California Independent Petroleum Association (CIPA), said that moving to all-electric homes will be much more expensive than continuing to use natural gas. According to CIPA, U.S. households that use natural gas for heating, cooking, and drying their clothes save an average of $874 a year compared to homes that only use electricity.
However, Amber Mahone of the San Francisco-based environmental consulting firm Energy and Environmental Economics, said natural gas rates could potentially rise very quickly due to the costs of maintaining aging pipelines, particularly when spread among a shrinking number of households that use it. “There’s almost no future you can envision where gas rates don’t go up faster than we envisioned,” Mahone said.
Her organization released a study, funded by Southern California Edison, the Sacramento Municipal Utility District, and the Los Angeles Department of Water and Power, which found that electricity could become more cost-competitive. According to the study, new all-electric homes would actually see savings in the area of $100 per year.
In addition, said, Mahone, new buildings would save money by not having to include separate gas hookups for power in the construction.
In the Bay Area, price savings or increases with all-electric homes could go either way. Newly built homes and low-rise apartments could see net energy costs rise by about $200, or fall by the same amount annually, depending on resident use. New homes built in the Bay Area would gain air conditioning, something that’s less common in older Bay Area buildings, as part of electric heat pump heating and cooling units, which could add to costs.
Mahone argues that the real savings are in equipment. “You still see increases in bills relative to gas homes,” she said. “But they’re pretty small compared to the capital cost savings, which is why we look at the whole picture.”
Renters, on the other hand, won’t be so lucky. Mahone’s study found that new all-electric homes in the Bay Area would see an annual increase of $100 or less on renters’ utility bills. Smaller units, such as apartments, which tend to be more efficient to heat and cool, would see lower cost increases.
However, if electricity costs rise, new all-electric homes in the Bay Area might see cost increases of around $400 a year for houses and $200 for apartments.
Jon Switalski, who is the executive director of the natural gas advocacy group Californians for Balanced Energy Solutions, says that Mahone’s study is based on flawed electricity and gas rate assumptions. “Every family in California knows that their electricity bill far exceeds their bill for natural gas,” he said.
San Jose stands to be the largest city in the U.S. that could ban natural gas altogether.
However, officials wrote in a recent City Council memorandum that studies show removing natural gas could cause “an increase in the annual utility costs for all electric buildings.”
According to the study by Energy and Environmental Economics, switching to all-electric housing could deliver a significant reduction in greenhouse gas emissions, starting at 45 percent fewer tons of carbon dioxide in 2020, and increasing to 82 percent fewer tons in 30 years as more clean power replaces natural gas.
“I don’t think they should expect big savings or big costs,” argues Mahone. “They [consumers] should expect to see high-performance, good equipment and a comfortable home. The carbon savings is kind of a no-brainer — it’s an immediate carbon savings and will only increase every year as the grid gets cleaner.”
Work with the Local Experts in Real Estate and Property Management
At Intempus Property Management, our goal is to help you get the most from the Bay Area’s vibrant housing market. As the leading Bay Area property management firm, our award-winning services consistently get five-star reviews from our clients. We’re here to help you with every facet of your real estate and property management needs. So, whether you’re looking to buy, sell, or rent a property, contact us. One of our friendly team members will be happy to talk with you!
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9569530487060547,
"language": "en",
"url": "https://majenka.com/splittingzeros/part17",
"token_count": 353,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.042236328125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a7b2f232-e48d-4572-aadc-5dfeaa97ea70>"
}
|
Part 17. How quantitative easing works
We saw in Part 14 that when governments issue bonds and banks buy them, it increases the money supply. There is an analogue to this called quantitative easing, which increases reserves at banks. It works by the central bank buying government bonds from banks.
To demonstrate this, we will go back to our banks and accounts from Part 15, where the two government bonds issued are still in circulation, and get Central Bank to purchase the £20,000 government bond from 2nd Bank. We need to introduce a new account at Central Bank we will simply call Government Bonds. The bond purchase transaction is shown in Figure 17.1.
And the complementary double entry for this interbank transfer is the reserves double entry as shown in Figure 17.2.
The balance sheets of all banks are shown in Figure 17.3, and as can be seen, 2nd Bank's reserves are now increased by £20,000 — at the expense of its government bond, and Central Bank's assets and liabilities are increased by the same.
So, quantitative easing only increases the reserves at banks. It doesn't directly increase the money supply that people and businesses use. As seen previously, reserves facilitate bank lending. However, quantitative easing, doesn't necessarily mean banks will lend more, it just provides the means by which a bank can lend more. It also requires the availability of bonds to purchase.
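The mechanics can be sketched in a few lines of code, using the £20,000 figure from the example above (the account names are simplified, and only the accounts touched by this transaction are shown):

```python
# Sketch of the QE purchase above as balance-sheet entries.
# Central Bank buys a £20,000 government bond from 2nd Bank: the bond
# moves onto the central bank's assets, and 2nd Bank's reserve account
# at the central bank is credited by the same amount.

second_bank = {"government_bonds": 20_000, "reserves": 0}
central_bank = {"government_bonds": 0, "reserve_liabilities": 0}

def qe_purchase(amount: int) -> None:
    """Central bank buys bonds from the commercial bank."""
    second_bank["government_bonds"] -= amount      # bond leaves 2nd Bank...
    central_bank["government_bonds"] += amount     # ...onto Central Bank assets
    second_bank["reserves"] += amount              # 2nd Bank's reserves rise
    central_bank["reserve_liabilities"] += amount  # matching liability

qe_purchase(20_000)
print(second_bank)   # {'government_bonds': 0, 'reserves': 20000}
print(central_bank)  # {'government_bonds': 20000, 'reserve_liabilities': 20000}
```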
A government and central bank can work together to expand the money supply in an economy by the government issuing bonds and the central bank buying them. The caveat is that increasing the money supply without increasing the size of the economy can cause inflation or hyperinflation; that is, it can reduce the value of the underlying money and debt.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9610203504562378,
"language": "en",
"url": "https://pmpaspeakingofprecision.com/2014/10/28/the-atlantic-apprenticeships-why-germany-is-so-much-better-at-training-its-workers/",
"token_count": 929,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.3828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:35d5e28b-b61a-4e27-b7c9-1eb2e0bf5f78>"
}
|
The need for talent is a universal concern- in Germany and in North America. The German apprenticeship model is effective in Germany. But can it be successfully transplanted here?
The Atlantic recently posted an article discussing the German apprenticeship model.
They gave 3 key differences between German and US ideas of apprenticeships:
- The first thing you notice about German apprenticeships: The employer and the employee still respect practical work. German firms don’t view dual training as something for struggling students or at-risk youth. “This has nothing to do with corporate social responsibility,” an HR manager at Deutsche Bank told the group I was with, organized by an offshoot of the Goethe Institute. “I do this because I need talent.”
- The second thing you notice: Both employers and employees want more from an apprenticeship than short-term training. Our group heard the same thing in plant after plant: We’re teaching more than skills. “In the future, there will be robots to turn the screws,” one educator told us. “We don’t need workers for that. What we need are people who can solve problems”—skilled, thoughtful, self-reliant employees who understand the company’s goals and methods and can improvise when things go wrong or when they see an opportunity to make something work better.
- A final virtue of the German system: its surprising flexibility. Skeptical Americans worry that the European model requires tracking, and it’s true, German children choose at age 10 among an academic high school, a vocational track, or something in between. But it turns out there’s a lot of opportunity for trainees to switch tracks later on. They can go back to school to specialize further or earn a master craftsman’s certificate or train as a trainer in the company’s apprenticeship program—and many do.
The question that most North American businessmen have when discussing this issue is ROI- Return On Investment.
In Germany, according to the article, the State pays the training expense for each apprentice-
In the U.S., Companies will have to foot the bill for almost all expenses themselves.
Trained and credentialed employees will have the freedom to leave the employer, arguably before that employer can get any return on their training investment. See our post “What if I train them and They Leave?”
We think that the cost problem and the ROI problem can be solved, with work, here in North America.
But the problem that we need to solve first is what The Atlantic piece calls “the biggest obstacle:”
American attitudes toward practical skills and what Germans still unabashedly call “blue-collar” work. In America… we’re suspicious of anything that smacks of training.
I know, as a parent, there is a lot of social pride in having one’s children attend university.
But I am starting to see that the real pride is not about the university that one’s child attends; it is rather the fact that they got a job capable of offering a return on the investment of all those college expenses.
The real pride for parents these days is being able to say that their child in fact has a full time job. Is living independently. And is not overburdened with debt.
In North America, the way to accomplish this is to “earn as you learn”: pursue a degree after getting a well-paying career started. Often the employer provides tuition assistance.
Getting started in a well-paying career in advanced manufacturing can be as simple as a one-semester training program at a local community college. Not years and years of loans and expenses and fees with no immediate ROI. Earn as you learn makes ROI simultaneous with your efforts, not some dreamed-for, hoped-for outcome long in the distant future.
In September 2014, ~97% of respondents (76 of 78 companies) expect employment prospects to increase or remain the same over the next three months. Prospects for employment remain strongly positive.
What is going to be the key for adopting apprenticeships here in North America?
I think that it will be the realization by all affected- businesses, potential employees, parents of students, educators, government officials- that there truly exists a critical need for talent.
In Germany, everyone knows this. Over here, well, for sure the employers do. Everyone else? That is anyone’s guess.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9358945488929749,
"language": "en",
"url": "https://www.nwfsc.edu/ufaqs/federal-pell-grants/",
"token_count": 336,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.017333984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e9d647c2-4456-4328-bb72-8fc94ded8850>"
}
|
Federal Pell Grants
September 1, 2017
What is it?
The Federal Pell Grant Program provides need-based grants to low-income undergraduate and certain post baccalaureate students to promote access to postsecondary education. Grant amounts are dependent on: the student’s expected family contribution (EFC) (see below); the cost of attendance (as determined by the institution); the student’s enrollment status (full-time or part-time); and whether the student attends for a full academic year or less.
I’m enrolled at more than one school. Can I receive funds from both schools?
Students may not receive Federal Pell Grant funds from more than one school at a time.
How is my grant eligibility determined?
Federal Pell Grants are direct grants awarded through participating institutions to students with financial need who have not received their first bachelor’s degree or who are enrolled in certain post baccalaureate programs that lead to teacher certification or licensure.
Financial need is determined by the U.S. Department of Education using a standard formula, established by Congress, to evaluate the financial information reported on the Free Application for Federal Student Aid (FAFSA) and to determine the family EFC. After filing a FAFSA, the student receives a Student Aid Report (SAR) and the institution receives an Institutional Student Information Record (ISIR), which notifies the student if he or she is eligible for a Federal Pell Grant and provides the student’s EFC. By completing the FAFSA, you may also be eligible for other grants.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9674533605575562,
"language": "en",
"url": "https://www.unitedmortgageplus.com/2019/01/10/conventional-loans/",
"token_count": 617,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.028564453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d92f36d5-9da3-4cbc-ad28-1790fe1c5ccc>"
}
|
What Is A Conventional Loan
A conventional loan is a mortgage loan that is not insured or backed by the federal government. It is instead backed by private lenders, and any insurance is paid by the borrower. Conventional loans are the most common loan type, far more common than government-backed loans. Though conventional loans offer the buyer more flexibility, they carry more risk for the lender because they aren’t insured by the government, making them more difficult to qualify for.
What Is The Difference Between Conventional and Government-Backed Loans?
Government backed loans include options like FHA loans and VA Loans. FHA Loans are backed by the Federal Housing Administration, and VA loans are guaranteed by the Veterans Administration.
With an FHA loan, the required down payment is 3.5%, and the borrower must pay mortgage insurance as part of their monthly payment in case they default on the loan. To qualify for a VA loan you must be a current or previous member of the United States Armed Forces or an eligible surviving spouse. VA loans require no down payment, but a funding fee of between 1% and 3% of the loan amount must be paid.
The difference between these programs and a conventional loan is that if the borrower defaults, the lender is at risk. If you can no longer afford the payment, the lender will try to recover as much of the remaining balance as possible by selling the house. Because of this risk, you are required to pay private mortgage insurance on a conventional loan if you put less than 20% down.
What Are Different Types of Conventional Loans?
There are two types of conventional loans; conforming and non-conforming.
Conforming Conventional Loans
To be considered a conforming conventional loan, the loan must meet guidelines set by Fannie Mae and Freddie Mac; government sponsored entities that purchase mortgages from lenders.
One of the most important guidelines is the loan limit. The current loan limit for single-unit properties is $453,100. In certain high-cost areas the loan limit can increase to a maximum of $679,650.
Non-conforming Conventional Loans
Loans that exceed the loan limit are considered non-conforming. A non-conforming conventional loan is also known as a jumbo loan and is not purchased by Fannie Mae or Freddie Mac, but instead funded by lenders or private financial institutions.
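As a small sketch of the classification and insurance rules just described, using the limits quoted in this article (conforming limits change over time and by county, so treat the figures as illustrative):

```python
# Classify a loan as conforming or jumbo using the single-unit limits
# quoted in this article, and check whether PMI applies. Figures are
# illustrative; actual limits are updated annually.

CONFORMING_LIMIT = 453_100
HIGH_COST_LIMIT = 679_650

def loan_type(loan_amount: float, high_cost_area: bool = False) -> str:
    limit = HIGH_COST_LIMIT if high_cost_area else CONFORMING_LIMIT
    return "conforming" if loan_amount <= limit else "non-conforming (jumbo)"

def pmi_required(price: float, down_payment: float) -> bool:
    """PMI applies on a conventional loan when the down payment is under 20%."""
    return down_payment / price < 0.20

print(loan_type(400_000))             # conforming
print(loan_type(700_000, True))       # non-conforming (jumbo)
print(pmi_required(500_000, 50_000))  # True -- 10% down triggers PMI
```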
Benefits of A Conventional Loan
There’s a reason conventional loans are the most popular type of mortgage loan. They have several borrower-friendly features, such as:
- Low Interest Rates
- Fast loan processing times
- Diverse down payment options, they can be as low as 3% of the sale price
- Various term lengths of fixed rate mortgages that range from 10 to 30 years
Because there are so many different options, you must decide what fits your situation best. How much you can put down, how long you want your term to be, and how much house you can afford are major factors when deciding what type of mortgage you want.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9361897110939026,
"language": "en",
"url": "http://mexbiznews.com/how-an-infrastructure-plan-could-help-build-mexicos-future/",
"token_count": 340,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0250244140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:eca67cf7-b7d1-4b10-a99f-c8483bbf5060>"
}
|
Knowledge @ Wharton
Over the past decade, Mexico’s manufacturing output has steadily increased, especially in the automotive, auto parts and electronic sectors.
And yet Mexico currently ranks 64th of 148 countries in terms of infrastructure, according to the Global Competitiveness Index of the World Economic Forum. Economists agree that Mexico’s prospects for becoming a truly industrial economy will remain limited unless the country accelerates its construction of the roads, railroads, ports, energy plants and other physical infrastructure essential to any modern industrial economy.
According to Barbara Kotschwar, research fellow at the Peterson Institute for International Economics in Washington: “Now is the moment for Mexico to get serious about its infrastructure. Latin America is woefully underfunded in terms of its infrastructure, and studies cite its infrastructure weakness as a major reason for Latin American underdevelopment.”
With that goal in mind, the Mexican government last year published its National Infrastructure Program for 2014-2018, a comprehensive array of projects that would cost the public and private sectors a combined total of about $600 billion. Under the umbrella of the program, Mexico expects to upgrade not only its transportation sector, but also its communications networks, along with its energy sector — including power, oil and gas — water; health care; urban development and housing, and the infrastructure for tourism.
What are the prospects that implementation of the program might wind up falling short of its ambitious goals? Observers note that with oil prices continuing to weaken, Mexico’s public sector may not be able to adequately fund key elements of the program. Also, they question whether the country has the organizational capacity to pull off such an ambitious plan.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9310906529426575,
"language": "en",
"url": "http://www.dagarimpex.com/uncommon-article-gives-you-the-facts-on-what-is-a-coefficient-in-math-that-only-a-few-people-know-exist/",
"token_count": 975,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0908203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:65656d6f-7b3b-440d-bc55-247bc78faa74>"
}
|
There’s no particular order in which the properties ought to be applied. You’ll be supplied the trinomial, and in order to factor it you’ll want to work backwards to recover the two binomials. Some examples are below.
The key point to keep in mind is that each piece needs to be of precisely the same size as every other piece. A massive feature set that contains many unimportant features may have an adverse influence on a linear regression model. A circle with a bigger diameter is going to have a bigger circumference.
Some particular coefficients that occur frequently in mathematics have been given names. Though the Correlation Coefficient spends a reasonable amount of time in positive territory, it’s negative the bulk of the time. Quite simply, it’s a tangent function analysis.
When it is positive, the inequality will stay the same. It is among the most gorgeous ideas in mathematics. Factoring gives you the capability to find solutions to complex polynomials.
Ask them some questions regarding the bookmaker you’re interested in. Let’s look at an important example. A number by itself is known as a Constant.
Understanding how to read mathematics formulas is vital for optimum understanding and quick memory recall. Math gives out a great sigh. At this point you may be wondering how maths can be utilized in real-life applications.
There are a number of different methods by which the student can figure out this kind of equation. An individual has to be sound in mathematics as a way to begin machine learning. You have the ability to also assess employing the manipulative based on the student’s explanation.
Density plays a crucial role in our knowledge of the physical properties of Earth materials.
In order to genuinely diversify from stocks, it’s frequently essential to look outside the stock marketplace. To locate the correct style, you will need to take under consideration the style of your house, too. The idea of lift is really very straightforward.
Three quantities need to be known to have the ability to figure out the quantity of work. There are lots of elegant binomial sums. The beta coefficient can be useful in attempting to predict a specific stock’s tendencies and calculate the total risk.
Sensitivity to the data distribution might be utilized to a benefit. Statistical analysis of information is a significant tool for practically any small business. Then the procedure for cash-in follows.
The multiply test is a bit more interesting. The term with the highest degree is called the leading term, because it’s generally written first. It is impossible to gather all the information you could ever need, so there are occasions when you have to make intelligent assumptions to fill in the gaps.
In the very first stage, children need reinforcement they are doing well in the classroom so they can develop an awareness of industry. You will find a number of math tutors in your state. Incorporate the next five math vocabulary words at least one time into your discussion.
It’s essentially an infinite series. Global variable definitions must be initialized.
Just showing that an idea works in a great many cases isn’t sufficient to turn a notion into a theorem. Because there’s no direct way to measure gas, an indirect method must be used. The best way to understand any formula is to work a good example.
In statistics, the term correlation denotes the association between two variables. The level of negative correlation will probably vary over time. Unanticipated results shouldn’t be ignored.
In the event the points are scattered about then there could be no correlation. Any details that are pertinent to learn about the graph ought to be mentioned. Opt for any variable, but nevertheless, it must be the very same for both equations, and multiply all pieces of equation A so the chosen variables contain exactly the same number in front (4X).
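Since the correlation coefficient keeps coming up here without ever being computed, here is a minimal sketch of Pearson’s r on invented data:

```python
# Minimal Pearson correlation coefficient; the data points are invented.
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]
print(round(pearson_r(xs, ys), 3))  # 0.775 -- a positive correlation
```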
To summarize, if you’re searching for a significant practical application of Mathematics, have a peek at Linear Regression.
Understanding how to read mathematics formulas takes a simple comprehension of the formula vocabulary and the ability to recognize formula reading patterns. The two properties are thought to be negatively correlated. If a term does not include a variable, it is known as a constant.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9379023313522339,
"language": "en",
"url": "https://freedomandsafety.com/en/content/blog/we-must-stop-choking-ocean-plastic-waste",
"token_count": 1087,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.06982421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:56d4451a-d469-4d78-b4e2-4b6c096da516>"
}
|
The ocean can’t take any more. For centuries we have dumped our unwanted waste into the seas, creating a crisis that threatens not only marine life, but everything that depends on the seas. And despite our technological prowess, humans have so far been unable or unwilling to stem the tide of pollution entering our ocean.
This global problem demands a global solution; one that brings together the best technology, the most effective resources and, importantly, the most dedicated and influential partners to launch an innovative recovery plan.
Based on the concept of the circular economy, Dow supports a collaborative model in order to attract the institutional investment needed to scale integrated, profitable and sustainable recycling and waste management projects that not only provide a solution to marine debris, but also offer an attractive financial return to investors.
Plastics are one of the foundations of modern life. Nearly everyone on the planet benefits from its versatility and lightness. Everyday items – from eyeglasses and bike helmets and cellphones to shoes – are made more durable and lighter thanks to plastics.
Plastics also play an often unseen - but tremendously positive role - in sustainability. Plastics packaging, for example, greatly extends the shelf life of food and retards spoilage, helping move more food from farm to table and delivering fresh food to millions around the globe. Their low weight also reduces transportation costs. Those environmental benefits extend to today’s vehicles, too. By introducing light but durable plastic materials, vehicles are now more fuel efficient, produce far less carbon dioxide emissions and are safer and more comfortable than ever before.
As global populations – and incomes – have risen over the past several decades, the use of plastics has also risen. This is especially true in Southeast Asia, where a rising quality of life has fueled a greater use of plastics than anywhere else.
The growing demand for plastics has created unexpected and serious waste problems. Much of the plastic currently produced, for example, ends up in the ocean. The Ocean Conservancy estimates that as much as 150 million metric tons of plastics are circulating in our oceans right now - and humans are adding another 8 million metric tons of plastics to the ocean every year.
While part of that waste comes from fishing vessels and offshore gas and oil platforms, the vast majority – 80% – actually comes from land-based sources. Three-quarters of this pollution is uncollected waste while the remainder comes from gaps in the current collection and recycling systems.
This waste is placing a tremendous burden on aquatic life. Many fish species, for example, consume plastics debris, confusing it for real food. The Ocean Conservancy estimates that at least 600 different wildlife species are threatened by plastics waste in the ocean. Nearly a billion people worldwide rely on seafood as their primary source of protein, while countless others depend on healthy, clean oceans for their livelihood.
As Erik Solheim, former head of UN Environment, put it: “It is past time that we tackle the plastic problem that blights our oceans. We’ve stood by too long as the problem has gotten worse. It must stop.”
Dow – one of the world’s largest producers of plastics used in packaging – believes this problem does have a solution. But a global problem like this needs new thinking, new partners, and open collaboration. Everyone – from producers to consumers and NGOs to governments – must play a part.
We must do a better job of capturing and reusing our plastics waste. The use of plastics – especially in some developing countries – has outgrown current recycling capabilities. In fact, more than half of the plastics reaching the ocean come from just five Asian countries that have been overwhelmed by plastics waste. Better land-based waste collection and management practices in the most polluted regions would help stop the flow of trash into waterways.
We see a significant opportunity to reduce total global plastics leakage by 45% by 2025 with an aggressive set of integrated and targeted waste management investments. And we’re currently working with others – like Circulate Capital – to fund and develop infrastructure that captures the waste before it has an opportunity to reach our oceans.
The best opportunity we have to truly tackle this problem in the long-term is by applying the principles of a circular economy that values plastics waste for what it is: an opportunity for additional growth. By circulating plastics back into the manufacturing stream and reusing them in new applications, we can create a more sustainable and valuable recycling model. Our goal is to seed solutions-based entrepreneurs who can demonstrate the tremendous value-add of plastics - and ultimately to attract institutional investors to scale in a way that offers immense benefits to the environment as well as attractive financial returns.
Dow and other science-based companies must lead the way in helping to innovate technologies that make it easier to recycle plastics into new products. But it will also require new commitments from government agencies, NGOs and industry to help provide the appropriate grants, loans, and capital investments necessary to seed these efforts. And it will require a healthy dose of consumer education to introduce the concept of a circular economy and demonstrate the uplifting power of collaboration.
Now is the time to resolve the issue of marine debris. We must step forward – collaboratively – to find new answers, to innovate new solutions and create new value systems that end plastics waste forever. Our oceans can take no more.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9561060070991516,
"language": "en",
"url": "https://odi.org/en/events/the-role-of-growth-in-development/",
"token_count": 1976,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.03955078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:78faed60-d515-4b66-80c3-60ec427bbc2d>"
}
|
Professor Danny Quah - Professor of Economics and Head, Economics Department, London School of Economics (LSE)
Lord Adair Turner - Chair, ODI
Simon Maxwell - Director, ODI
Professor Quah opened the discussion with a brief summary of the role of economics as the discipline that provides the greatest traction in illustrating trade-offs. There are false issues in how growth is understood, and real issues of growth related to trade-offs. The points covered in the lecture were: 1. Quantitative significance of growth – inequality and poverty; 2. Sources of growth; 3. Global imbalances; 4. Tensions and risks.
1. Quantitative significance of growth
Within the last century, there has been a 50-fold increase in world income, but growth in income has not been equal. Although the world has experienced an unparalleled expansion of goods and services, in some countries people are living in conditions not completely different from those of 100 years ago.
The average person today lives on around $20 a day. In the rich world, that rises to around $30 a day. In poor countries the average person lives on around $10 per day. The very poorest people, however, continue to live on $1 a day.
The aggregate picture tells us that there has been an absolute reduction in the number of poor people in the world. Between 1981 and 2004 the number of people living on less than $1 a day fell by 500 million. But the aggregate picture does not tell us how this reduction in the number of people living on $1 a day has been achieved.
2. Sources of growth
The key question, is how the reduction in the number of people living on less than $1 a day is being achieved? Economic growth has been the main driver, but China has been the main contributor. If we look at China, the number of people living on less than $1 a day fell between 1981 and 2004 by around 500 million people. This figure correlates closely to the reduction of the total world poor, those living on less that $1 a day.
If we look at the world’s remaining poor, those living on less than $1 a day, we can see that between 1981 and 2004 their numbers actually rose slightly. Global growth over the last 25 years has doubled world income (measured using international dollars corrected for inflation and purchasing power parity), and the number of people living on less than $1 a day has fallen by a third, but poverty outside of China has actually remained unchanged, or even increased. This means we are meeting the Millennium Development Goals because of growth in China.
3. Global imbalances
If we look at shifts in per capita income over time and the number of people living on $1 a day from 1990 to 2004, there has been tremendous progress in China and the East Asia and Pacific region, but very little progress in sub-Saharan Africa. This has meant that more people have been pushed into absolute poverty within sub-Saharan Africa, with some contraction of per capita incomes over this period.
So what about inequality? Can we link growth and inequality together? China’s growth has increased inequality, although absolute poverty, measured as the number of people living on less than $1 a day, has decreased. Inequality in China now exceeds that of the US (a Gini coefficient of around 0.5).
4. Tensions and risks
In terms of what we know about growth and production functions, our understanding is increasing, but so too is our understanding of the problems associated with the growth process. The global landscape is changing. The US has a huge trade deficit. There are serious issues in terms of world financial systems, environmental concerns and global as well as within country inequality that need to be dealt with, particularly if the growth process is to be positive in the developing world.
Speaker:Lord Adair Turner, Chair of ODI
Why do some countries grow and why don’t others? Does growth really matter to happiness and health? Richard Layard (2007) points out that income matters up to a certain point for happiness. But when average per capita income levels reach around £20-40k, increased income doesn’t really bring much more happiness. Will people really be happier if the UK GDP grows by 0.9% or 2%? Does it really matter? Growth matters a lot more in low income countries.
In terms of taking a longer term perspective on growth, prior to the 18th century, or thereabouts, the world didn’t really grow (as measured by GDP). But around the 18th century something started to happen in Western Europe, the US and Australia, and later Japan, following the Meiji restoration. We have come to expect growth as the norm. We have doubled living standards every thirty years.
Theory suggests that growth rates should be higher in developing countries, permitting them to ‘catch-up’. This is due to the globalisation of trade and capital. But so far, catch-up has been a mixed picture. We have seen partial catch up by the East Asian NICs (to a lesser extent South East Asia), and now China and India. City states such as Singapore and Hong Kong have managed to catch up, with living standards broadly comparable with those in the West.
But, it is still not clear if all countries will catch up. Key questions are therefore, why do some countries catch up and why don’t others? The following factors are fundamental to the growth process.
- Growth accounting: The key lesson from the East Asian experience is that countries need a high savings rate in order to grow. They need to accumulate capital. China’s one-child policy has played a role in its ability to maintain savings rates (of around 40-50%). Demography is therefore not tangential but central to China’s growth process. (A stylized sketch of this capital-accumulation logic appears after this list.)
- Geography: As discussed by Sachs, geography has a key role to play in determining the economic possibilities of countries. That geography has given certain countries a set of natural harbours while leaving others landlocked is again not tangential but central to the growth process.
- Clusters: Once an industry gets going, it generates linkages and has self reinforcing cumulative effects. A lot of work has been undertaken analysing clusters ex poste, but not so much in terms of what gets them started in the first place.
- Culture: Does your population need a good work ethic in order to grow? How has the role of culture in East Asia contributed to its growth process? Although corruption exists in China, there is still a workable rule of law. How do culture and the type of governance particular to a country contribute to growth?
- Sequencing: With reference to Paul Collier and his most recent publication ‘The Bottom Billion’, over the medium term it has been argued that growth in sub-Saharan Africa will be made more difficult by the success of China, particularly in manufacturing. Economies of scale within manufacturing may mean it is more difficult for new entrants.
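As a stylized illustration of the growth-accounting point above, here is a textbook Solow-style toy model (this is not anything presented at the event, and all parameter values are invented):

```python
# Toy Solow-style capital accumulation: a higher savings rate raises
# the steady-state capital stock and output per worker. All parameter
# values are invented for illustration.

def output_after(savings_rate: float, years: int = 100,
                 alpha: float = 0.3, depreciation: float = 0.05) -> float:
    """Output per worker y = k**alpha after `years` of accumulation."""
    k = 1.0  # initial capital per worker
    for _ in range(years):
        y = k ** alpha
        k += savings_rate * y - depreciation * k  # invest savings, net of wear
    return k ** alpha

print(round(output_after(0.20), 2))  # modest savings rate
print(round(output_after(0.45), 2))  # East Asia-style high savings rate
```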
So what should our prescription be in order to get growth going? How does climate change impact on growth? How should our growth prescriptions adapt to take into account these new realities? With reference to the Stern report, if adapting to climate change means a reduction in economic growth of a few percentage points then the transition needs to start in the West. We have the technology, the resources and can afford to trade off a few percentage points of economic growth.
Question and answer session:
- How do we get knowledge clusters going? Danny Quah noted that in the case of East Asia clusters were driven by outsourcing mainly. Simon Maxwell remarked that the question that then arises is how the dynamic of clusters is sustained.
- What is the best way for sub-Saharan Africa to get growth? Danny Quah noted that Africa needs to be brought into the world trading system. Trade has a huge role to play in the growth story of China, which trades 70% of its GDP. In terms of the most recent growth performance of sub-Saharan Africa, commodity prices and services are central to the story. But whether this is the start of a longer term growth period for the continent remains to be seen, as does how this growth performance contributes to, for example, expanding education.
- Is sub-Saharan Africa frozen out of world markets for manufactured goods? Simon Maxwell asked whether, given the performance of the East Asian NICs and China, Africa is unable to use manufacturing as an industrialisation/growth strategy. Danny Quah argued that there’s a lump of labour fallacy: if you double the workforce this doesn’t necessarily mean jobs are taken from elsewhere; what it can mean is more growth. International trade and external linkages are a key issue. The system needs to remain open and transparent; if we don’t get this right then all other efforts will fail.
- The overall message from Danny Quah was that we should go for growth in all cases due to the positive first order effects, which are cumulative and self-sustaining.
The second meeting in the series will examine the role of growth in development, how thinking on this has changed over time, and where we are now conceptually. It will seek to answer questions such as: how should growth be promoted? How should growth reforms be prioritised? And what role is there for the political economy of growth? Professor Danny Quah, Head of Economics at the LSE will present his analysis in answer to these questions and Lord Adair Turner, Chair of ODI will offer his perspective.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9616572856903076,
"language": "en",
"url": "https://talbotspy.org/the-future-of-financeand-much-else-by-al-hammond/",
"token_count": 1542,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.48046875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:53adc956-0d74-4845-a0c7-231adb32756d>"
}
|
With mail-in voting poised to begin in Maryland next month, there have been concerns (mostly misplaced) about the security of that process. By the time of next presidential election in 2024, however, it’s likely that voting from home via a novel technology called “blockchain” will completely eliminate any chance of fraud. Blockchains are also poised to eliminate fraud from credit card purchases and simplify (as well as secure) cross-border financial transactions. So blockchains are likely to be part of your financial future. But what is this novel and still poorly understood technology and where did it come from?
The invention of the internet created a fundamental shift in how we access and share information. In effect, the internet digitized the sharing of information in ways that made email, web sites, and smart phone apps possible—in the process changing the way we live. Now another technological innovation, the blockchain, looks ready to digitize both how we store information safely and how we share value (via digital money or digital tokens that convey ownership of physical assets). Indeed, the CEO of IBM has said “What the internet did for communications, blockchain will do for trusted transactions.”
The first blockchain—the bitcoin blockchain—was created in 2009 by an anonymous inventor who was also a gifted programmer. A blockchain is just a digital record of transactions that is stored in a global network of computers in such a way that each page or “block” of transactions is unalterably linked to the previous one—forming a continuous set or “chain” of blocks of data. The computer network operates under consensus rules written into the blockchain software, so that all nodes of the network have to agree before a new block of transaction data can be added to the records. Because the records are stored in multiple places (distributed across thousands of nodes in the bitcoin network) and protected by advanced cryptography, no one person or institution has control of the data. That makes the data virtually impossible to alter—an intruder would have to take over more than half of the nodes simultaneously—and therefore much safer than your data stored at a credit agency, a merchant, or a credit card company (all of which can and have been hacked).
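The hash-linking idea can be sketched in a few lines (this omits consensus rules, signatures, and networking, so it is an illustration of the data structure rather than a working blockchain):

```python
# Minimal hash-linked chain: each block stores the hash of the previous
# block, so altering any earlier block changes every hash after it.
import hashlib
import json

def make_block(transactions: list[str], prev_hash: str) -> dict:
    block = {"transactions": transactions, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
block2 = make_block(["bob pays carol 2"], prev_hash=genesis["hash"])

# Tampering with the first block breaks the link to the second one:
genesis["transactions"][0] = "alice pays bob 500"
payload = json.dumps({"transactions": genesis["transactions"],
                      "prev_hash": genesis["prev_hash"]},
                     sort_keys=True).encode()
print(hashlib.sha256(payload).hexdigest() == block2["prev_hash"])  # False
```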
The bitcoin blockchain was intended to create a store of value that could not be manipulated by governments (by, for example, printing huge sums of money). But its invention stimulated a flood of ideas about how to apply the blockchain idea to other problems or opportunities. These innovations—involving many different consensus rules, but all using linked blocks of data distributed across many nodes—now seem poised to transform banking, credit cards, real estate transactions, and many other financial activities. Blockchains could even enable secure, fraud-free voting from home, while keeping the information about how each individual votes completely anonymous: indeed, the U.S. Postal Service has been issued a patent for just such a voting system.
Major financial institutions are adopting blockchain technology at a rapid pace. Fidelity and Morgan Stanley are preparing to offer their customers access to bitcoin and other digital or “crypto” currencies as well as stocks. The U.S. Office of the Comptroller of the Currency, which regulates banks, has just approved U.S. banks to store digital currencies for their customers (many European banks already do). And virtually every major bank is exploring blockchain applications. Mastercard is developing a blockchain replacement for debit and credit cards that could eliminate the growing and costly incidence of fraud and theft. Square, a financial firm that services small merchants, also enables individuals who use its Cash App to buy and sell bitcoin or use it to pay bills—resulting in $875 million of bitcoin revenue last year. Paypal is preparing to offer similar services to its 300 million users worldwide. Walmart and UPS are starting to use blockchains to track supply chains and facilitate cross-border transactions. At a global level, the Depository Trust & Clearing Corporation, which settles some $54 trillion in cross-border financial transactions a year, has already closed $10 trillion in transactions with a blockchain. China recently launched a national blockchain platform and with it a prospective national digital currency. So blockchains are rapidly going mainstream.
Much of the attention around these innovations has focused on digital or “crypto” currencies such as bitcoin, a form of digital money that is not issued by a government or managed by a financial institution and which can be instantly transferred from one person to another anywhere in the world. In effect, bitcoin is a kind of software, created by the consensus rules of its underlying blockchain. That blockchain permanently stores the complete history of every bitcoin transaction, and updates the information—verifying and adding new transactions on which all the nodes of the network agree—about every 10 minutes. The operators of the nodes are paid for their services by transaction fees charged to those making transactions and by a block grant of new bitcoin—created by the network’s governing consensus rules. Those rules also dictate that the supply of new bitcoin is cut in half every 4 years and will never exceed 21 million bitcoin. (About three-quarters of that amount has already been created.) So if demand increases while supply is limited, the price of bitcoin will rise—which accounts for its growing attractiveness as an investment. For that reason, bitcoin is often described as a potential digital gold, a secure (if volatile) store of value—and indeed, since its creation, the value of bitcoin has risen faster than gold or any other asset class, including stocks. In contrast, the purchasing power of the U.S. dollar has declined 20 percent since 2008.
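The 21 million ceiling follows directly from that halving schedule. A back-of-the-envelope check in Python (using the protocol's actual parameters of a 50-bitcoin initial reward and a halving every 210,000 blocks, but ignoring the integer-satoshi rounding the real protocol applies):

```python
def total_bitcoin_supply():
    reward = 50.0             # initial block reward, in bitcoin
    blocks_per_era = 210_000  # blocks between halvings (~4 years)
    total = 0.0
    while reward >= 1e-8:     # 1 satoshi is the smallest unit
        total += reward * blocks_per_era
        reward /= 2
    return total

print(total_bitcoin_supply())  # ~20,999,999.97 -- just under 21 million
```

The geometric series 50 + 25 + 12.5 + ... sums to 100 bitcoin per block-slot across all eras, and 100 × 210,000 = 21,000,000; scarcity by construction is what the “digital gold” analogy rests on.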
One limiting factor to widespread use of bitcoin and other digital tokens is that they are not yet exactly consumer-friendly. They are mostly bought or sold on digital exchanges (some of which have been hacked) and are typically stored in digital wallets that can be intimidating to use (since bitcoin sent to the wrong address is not recoverable). On the other hand, you can trade or send bitcoin to a friend 24/7, and the transaction typically takes only a few minutes—compared to as much as several days and much higher fees to send money across borders through the banking system. And improvements are coming: in ease of access and use, in faster transactions, and in secure third-party custody.
Adoption and use of bitcoin and other digital tokens is also accelerating. About 10 percent of the U.S. population are now believed to own some bitcoin. One analyst—noting that it took 10 years for 10 percent of the U.S. population to use the internet, but then adoption reached more than 70 percent in a second 10 years—predicts that bitcoin is following a similar timeline, with adoption driven both by increasingly institutional use and by millennials and still younger generations (who tend to be more comfortable with digital objects). More fundamentally, blockchain innovations are nearing commercial use in many different sectors of the economy—in the U.S., in Europe, and especially in Asia.
Al Hammond was trained as a scientist (Stanford, Harvard) but became a distinguished science journalist, reporting for Science (a leading scientific journal) and many other technical and popular magazines and on a daily radio program for CBS. He subsequently founded and served as editor-in-chief for 4 national science-related publications as well as editor-in-chief for the United Nation’s bi-annual environmental report. More recently, he has written, edited, or contributed to many national assessments of scientific research for federal science agencies. Dr. Hammond makes his home in Chestertown on Maryland’s Eastern Shore.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9498999118804932,
"language": "en",
"url": "https://vmt.net.au/the-significance-of-recycling-non-ferrous-metals/",
"token_count": 669,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0732421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:0b1a749f-bb3d-456e-bccb-15c75db4a62c>"
}
|
The Significance of Recycling Non-Ferrous Metals
14 January 2021
Non-ferrous metals are metals that contain little or no iron. Many industries use these metals, particularly manufacturing and construction, because of their high conductivity, non-magnetic properties, low weight, and corrosion resistance.
Some of the most common non-ferrous metals today are aluminium, copper, brass, nickel, and tin. Other more valued metals like platinum, silver, and gold are also known to be non-ferrous. The high demand for these metal materials has encouraged recycling businesses to prioritise them along with ferrous metals. After all, recycling non-ferrous metals can bring numerous benefits to a lot of people and businesses.
Conserves and Saves the Environment
Recycling collects products that can be reused or reprocessed instead of sending them to landfills. As more products are diverted from landfills, the amount of waste deteriorating in open sites is poised to fall significantly. The recycling of non-ferrous metals likewise helps promote a much cleaner environment.
Aside from reducing landfill waste, recycling non-ferrous metals benefits other elements of the environment as well. For one, raw non-ferrous resources no longer have to be extracted as aggressively, since non-ferrous products are recycled continuously. The energy consumed during raw extraction, as well as greenhouse gas emissions, can likewise be reduced thanks to the recycling process. Recycling non-ferrous metals can also help in reducing pollution.
Provides Healthy Economic Benefits
As mentioned, non-ferrous metals can be truly valuable, as they are found in a lot of products. Recycling these metals can therefore benefit the economy, especially businesses and livelihoods. The recycling industry alone can earn a substantial amount by selling recycled non-ferrous metals to businesses. The businesses that use reprocessed non-ferrous metals, in return, benefit from their lower price compared to raw materials.
With industries utilising recycled non-ferrous metals, more and more people can earn an income just by working at recycling centres. Alternatively, households and establishments that want to have their non-ferrous products recycled can make money by selling them to recycling centres. All these benefits allow the non-ferrous recycling industry to thrive and contribute to the general economy of a country.
Recycling with Victorian Metal Traders
Recycling non-ferrous metals can truly benefit both the environment and the economy. Without recycling, the environment will most likely continue to be degraded. Raw resources of metal materials might also be depleted as early as this decade if recycling is not practised.
For more information about recycling non-ferrous metals, feel free to contact us at Victorian Metal Traders. We are the leading scrap metal recycling company based in Victoria, Australia. We are buyers of all grades and all quantities of ferrous and non-ferrous metals. We buy scrap metal from all areas of Victoria and we export globally.
Optimized by: Netwizard SEO
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.964021623134613,
"language": "en",
"url": "https://www.enotes.com/homework-help/why-do-recessions-not-last-for-long-periods-of-2727375",
"token_count": 279,
"fin_int_score": 5,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0322265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:bc0029b0-2825-470f-a05c-35abb3565555>"
}
|
According to the classical perspective, interest rates are flexible. A recession leads to a decrease in consumer activity. As a result, consumers save more of their income instead of spending it. Higher savings mean there’s more money available to borrow. Hence, banks will lower interest rates to make loans more affordable. Consequently, investment will increase and the economy will return to normalcy. Classical economists also believe wages are flexible. Therefore, in the event of a recession where labor supply exceeds demand, market forces will push wages down so that firms can hire unemployed people. As more people join the workforce, their purchasing power improves and they can spend in the economy.
Keynesian economists have a different view of recessions. According to them, consumer confidence goes down during periods of uncertainty. Buyers become irrational and cannot make sound decisions. When that happens, it becomes the government’s responsibility to boost economic activity. However, since the government didn’t plan for such an expense, it has to borrow to spend on the economy. It can take weeks for the government’s borrowing to be approved. Once approved, the government prioritizes projects that can create employment and inject cash into the economy. It takes a while for government spending to have a positive effect on the economy, since companies have to bid for infrastructure projects and people have to apply for new jobs.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.944232165813446,
"language": "en",
"url": "https://www.investopedia.com/terms/d/debtgdpratio.asp",
"token_count": 1094,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0634765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:67984d7d-2aa0-4c1d-8392-40a0050a0c25>"
}
|
What Is the Debt-to-GDP Ratio?
The debt-to-GDP ratio is the metric comparing a country's public debt to its gross domestic product (GDP). By comparing what a country owes with what it produces, the debt-to-GDP ratio reliably indicates that particular country’s ability to pay back its debts. Often expressed as a percentage, this ratio can also be interpreted as the number of years needed to pay back debt, if GDP is dedicated entirely to debt repayment.
A country able to continue paying interest on its debt, without refinancing and without hampering economic growth, is generally considered to be stable. A country with a high debt-to-GDP ratio typically has trouble paying off external debts (also called “public debts”), which are any balances owed to outside lenders. In such scenarios, creditors are apt to seek higher interest rates when lending. Extravagantly high debt-to-GDP ratios may deter creditors from lending money altogether.
The Formula for the Debt-to-GDP Ratio Is
Debt to GDP = Total Debt of Country ÷ Total GDP of Country
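As a quick illustration (the figures below are hypothetical, not official data), the ratio is simply total public debt divided by GDP, usually expressed as a percentage:

```python
def debt_to_gdp(total_debt: float, gdp: float) -> float:
    # Public debt divided by gross domestic product, as a percentage.
    return total_debt / gdp * 100

# Hypothetical country: $21 trillion of debt against $20 trillion of GDP
print(f"{debt_to_gdp(21e12, 20e12):.1f}%")  # 105.0%
```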
What Does the Debt-to-GDP Ratio Tell You?
When a country defaults on its debt, it often triggers financial panic in domestic and international markets alike. As a rule, the higher a country’s debt-to-GDP ratio climbs, the higher its risk of default becomes. Although governments strive to lower their debt-to-GDP ratios, this can be difficult to achieve during periods of unrest, such as wartime, or economic recession. In such challenging climates, governments tend to increase borrowing in an effort to stimulate growth and boost aggregate demand. This macroeconomic strategy is a chief ideal in Keynesian economics.
Economists who adhere to modern monetary theory (MMT) argue that sovereign nations capable of printing their own money cannot ever go bankrupt, because they can simply produce more fiat currency to service debts. However, this rule does not apply to countries that do not control their own monetary policies, such as European Union (EU) nations, who must rely on the European Central Bank (ECB) to issue euros.
A study by the World Bank found that countries whose debt-to-GDP ratios exceeds 77% for prolonged periods, experience significant slowdowns in economic growth. Pointedly: every percentage point of debt above this level costs countries 1.7% in economic growth. This phenomenon is even more pronounced in emerging markets, where each additional percentage point of debt over 64%, annually slows growth by 2%.
- The debt-to-GDP ratio is the ratio of a country's public debt to its gross domestic product (GDP).
- If a country is unable to pay its debt, it defaults, which could cause a financial panic in the domestic and international markets. The higher the debt-to-GDP ratio, the less likely the country will pay back its debt and the higher its risk of default.
- A study by the World Bank found that if the debt-to-GDP ratio of a country exceeds 77% for an extended period of time, it slows economic growth.
Examples of Debt-to-GDP Ratios:
Debt-to-GDP Patterns in the United States
According to the U.S. Bureau of Public Debt, in 2015 and 2017, the United States had debt-to-GDP ratios of 104.17% and 105.4%, respectively. To put these figures into perspective, the U.S.’s highest debt-to-GDP ratio was 106% at the end of World War II, in 1946. Debt levels gradually fell from their post-World War II peak, before plateauing between 31% and 40% in the 1970s—ultimately hitting a historic low of 23% in 1974. Ratios have steadily risen since 1980 and then jumped sharply following 2007’s subprime housing crisis and the subsequent financial meltdown.
The Role of United States Treasuries
The U.S. government finances its debt by issuing U.S. Treasuries, which are widely considered to be the safest bonds on the market. The countries and regions with the 10 largest holdings of U.S. Treasuries are as follows:
- Taiwan at $182.3 billion
- Hong Kong at $200.3 billion
- Luxembourg at $221.3 billion
- The United Kingdom at $227.6 billion
- Switzerland at $230 billion
- Ireland at $264.3 billion
- Brazil at $246.4 billion
- The Cayman Islands at $265 billion
- Japan at $1.147 trillion
- Mainland China at $1.244 trillion
Limitations of Debt-to-GDP
The landmark 2010 study entitled "Growth in a Time of Debt", conducted by Harvard economists Carmen Reinhart and Kenneth Rogoff, painted a gloomy picture for countries with high debt-to-GDP ratios. However, a 2013 review of the study identified coding errors, as well as the selective exclusion of data, which purportedly led Reinhart and Rogoff to draw erroneous conclusions. Although corrections of these computational errors undermined the central claim that excess debt causes recessions, Reinhart and Rogoff still maintain that their conclusions are nonetheless valid.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9630401134490967,
"language": "en",
"url": "https://www.marottaonmoney.com/some-college-degrees-not-worth-the-investment/",
"token_count": 616,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.0859375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:241e867c-f21c-4ba1-b346-87bf17488b06>"
}
|
A typical college degree is worth up to a million bucks over a career, but that’s not true for every degree. Prospective college students need to do their homework beforehand because some degrees aren’t worth the investment.
Of the 1312 colleges evaluated in the PayScale College ROI Report, graduates from 58 institutions are estimated to be worse off after 20 years compared with those who skipped college and went straight to work. These 58 lackluster institutions make up 4.42% of all the colleges surveyed. The lowest grade goes to Shaw University in Raleigh, North Carolina, where PayScale estimates that grads will be $121,000 worse off after 20 years for earning a degree.
To calculate this estimate, PayScale uses an opportunity-cost measure it calls return on investment (ROI). After factoring in all net college costs, the report compares 20 years of estimated income for a college graduate against 24 years of income for a high school graduate who started working immediately and didn’t have to pay college expenses (or take loans).
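In simplified terms, the comparison looks something like the sketch below (the figures are invented, and the real report also accounts for financial aid, graduation rates, and wage growth):

```python
def twenty_year_net_roi(grad_annual_income, hs_annual_income, net_college_cost):
    # 20 years of graduate earnings vs. 24 years of high-school earnings
    # (the extra 4 years reflect working instead of attending college),
    # minus what the degree cost.
    grad_path = grad_annual_income * 20 - net_college_cost
    hs_path = hs_annual_income * 24
    return grad_path - hs_path

# Illustrative figures only
print(twenty_year_net_roi(65_000, 35_000, 120_000))  # 340000
```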
The full list is available here.
Future college students (and their parents) must realize that not all colleges are equal. The graduates from the lowest ranking schools report earning less income after graduation. The PayScale website is helpful because it allows you to see reported earnings of graduates from over a thousand colleges. I also assume that low-performing schools in this report tend to offer less financial assistance, which leaves their graduates with larger debt burdens.
However, the most highly endowed colleges can reduce their cost of attendance with grants and scholarships. For example, Stanford is one of the most expensive schools based on sticker price, but its financial assistance is typically generous. So the net cost is very competitive, and their ranking is number 4 based on the PayScale study.
Debt burdens are relative. A doctor’s salary can pay off a high-priced education loan more quickly than a teacher’s can. A good rule of thumb is to avoid incurring college debts that exceed half of your expected annual income. Limiting loans to no more than 50% of a future salary allows graduates to pay off their debts in about five years using 10% of their salary.
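The arithmetic behind that rule of thumb is easy to verify. A small sketch (salary and interest rate are assumed, illustrative values; with zero interest the payoff takes exactly five years):

```python
def years_to_repay(loan, annual_salary, payment_share=0.10, rate=0.05):
    # Repay a student loan with a fixed share of salary each year,
    # while interest accrues on the remaining balance.
    balance, years = loan, 0
    payment = annual_salary * payment_share
    while balance > 0 and years < 50:
        balance = balance * (1 + rate) - payment
        years += 1
    return years

# Loan equal to 50% of a $60,000 salary, repaid with 10% of salary per year
print(years_to_repay(30_000, 60_000, rate=0.0))   # 5 years with no interest
print(years_to_repay(30_000, 60_000, rate=0.05))  # 6 years at 5% interest
```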
Some students begin to realize their faulty economics only after they have enrolled. Not surprisingly, those schools with the lowest ROI also have the highest dropout rates in the country. One of the circles along the bottom of Figure 2 is Adams State, which has a 21% graduation rate and a 20-year net ROI of minus $20,143.
Figure 2. The 2014 college ROI data via PayScale is available here.
What should be clear from this data is the world of difference between the outcomes of graduates of highly rated schools and those near the bottom of the barrel. Attending a college with a poor ROI is not necessarily a mistake, but the financial aid package better be sweet. Like any investment, you need to do your homework too before you commit your time and money to an unknown outcome.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9715077877044678,
"language": "en",
"url": "https://www.realmediahub.com/heres-how-to-kickstart-financial-planning-for-youth/",
"token_count": 692,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.01165771484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:6b0c1354-3e12-48be-a164-a6c6357d1870>"
}
|
When it comes to financial planning, most people think of it as simply being able to pay their bills. Another common notion is that you need to be a finance maestro. Both ideas are far from the truth. All you need to succeed at financial planning is a solid command of addition and subtraction. More precisely, you need to know how much money you earn and how much you spend.
While school and college have taught all of us a lot of things, managing personal finances is not one of them. Unfortunately, that is exactly where money management should be instilled in young people. This lack of knowledge leads many young professionals to make poor financial choices at the start of their careers, such as unnecessary purchases and taking on too much debt. Yet this is the time that can set a strong foundation for savings and wealth generation in the future. Here are some tips you can follow for sound financial planning:
Control your spending
All of us remember being told by parents, relatives, and friends not to spend too much money. If all of us paid attention to this advice, there wouldn’t be a problem of poor finances. Hence, the first step you should take to get your finances in good shape is to control your spending.
While there is nothing wrong with treating yourself to a nice restaurant meal or buying a gadget you really like, you have to plan the purchase so that it doesn’t strain your finances. Ultimately, you have to spend the right amount at the right time so that you still have plenty left in your account.
Another thing you should do about spending is avoid doing it through credit cards. Credit cards are financial instruments that should be used only in emergency situations. However, many people use credit cards to make purchases regularly. Credit cards carry extremely high interest rates, so you end up paying back more than you borrowed.
Track your money
You cannot learn how to spend right until you know how much you are spending already. Hence, you should create a record of your regular expenditures. Additionally, you should record how many unplanned expenses you have made. Go through these records to find out which expenses are important and which of them can be cut. Moreover, it may be the case that you are spending the right amount of money in total, but it is just distributed inefficiently across different expenses.
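Such a record doesn't require special software; a spreadsheet, or a few lines of code like the illustrative sketch below (the categories and amounts are made up), can total spending by category and show where the money actually goes:

```python
from collections import defaultdict

# Hypothetical month of expenses: (category, amount)
expenses = [
    ("rent", 900), ("groceries", 240), ("dining out", 180),
    ("subscriptions", 45), ("dining out", 60), ("transport", 120),
]

totals = defaultdict(float)
for category, amount in expenses:
    totals[category] += amount

overall = sum(totals.values())
for category, amount in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category:13s} {amount:7.2f}  ({amount / overall:.0%})")
```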
Create emergency fund
Managing personal finances well looks different for everybody. There is no single approach that works for everyone when it comes to managing money. However, there is one unwritten rule of good financial planning: the biggest payment you ever make with your money should be to yourself.
Regardless of how careful you are in life, there is no telling when something bad will happen. As the famous saying goes, ‘Hope for the best and prepare for the worst’: part of the money you earn should be set aside to be used only in the case of an emergency.
Now that you have understood how to create a financial plan, what are you waiting for? Begin today and create a financial plan for a better, prosperous future. You can also take help from mutual fund experts who can guide you to create an optimum financial plan for your investment portfolio. Happy investing!
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9644177556037903,
"language": "en",
"url": "https://www.rebelyid.com/2016/05/the-difference-between-florence-alabama-and-florence-italy/",
"token_count": 677,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.482421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:ed1837f9-89bf-4025-b154-adfd6cd479e7>"
}
|
Inequality in American Life is not as easy to measure as you would think and probably even more difficult to make relevant. The common solutions from the left point more to reducing the wealthy than raising the poor, as if the results will be the same.
While there is a point where inequality can affect social stability, it is less relevant than economic growth and income mobility. I contend that the current obsession with inequality is a byproduct of lousy growth and ineffective growth policies.
The big flaw in studying inequality is that it measures groups, not individuals: individuals rise and fall and display more mobility than groups. America more than any other country celebrates the individual. How it is measured is also critically important. What years you start and end, what is included in income, whether it is measured pre- or after-tax, whether transfer payments are included, whether it is adjusted for hours worked, and whether it measures individual or household income can greatly affect the measure of inequality. Not surprisingly, many sources choose a measurement that exaggerates it.
Jeff Jacoby addresses inequality in Up and down — but mostly up — the income ladder
The 25th great-grandsons of medieval Florentine shoemakers and wool merchants may still be riding high, but things don’t work that way in America. Here, riches-to-rags stories are not uncommon. When Bhashkar Mazumder, an economist at the Federal Reserve Bank of Chicago, examined the earnings of thousands of men born between 1963 and 1968, he discovered that 17 percent of those whose fathers were in the top tenth of the income scale had dropped to the bottom third by the time they were in their late 20s or early 30s. Movement between income groups over the course of a lifetime is the norm for most Americans. The rich often get richer, but plenty of them get poorer, too. Though the top 1 percent makes a popular target, it’s actually a group no one stays in for very long. On the other hand, it’s a group that 11 percent of Americans will reach at some point during their working lives.
Affluence in America is dynamic, and our economic system is biased toward success. But bias isn’t a guarantee. Mobility — up and down — depends to a great degree on the choices that people make for themselves. Individuals who finish high school, marry before having children, don’t engage in criminal activity, and work diligently have a very high likelihood of achieving success. Those who don’t, don’t.
Of course, there are impediments to mobility that are beyond the control of any individual, and that are most likely to hurt those who start out in America’s poorest precincts. Broken public schools, for example. The normalization of single-parent households. Too-easy access to welfare benefits. Counterproductive mandates, like minimum-wage laws and stifling licensing rules. Would that our political demagogues and professional populists put as much effort into dismantling those barriers as they do into demonizing the rich and yapping about inequality.
Yappers notwithstanding, the American Dream is far from dead. This isn’t Florence. No one is locked out of economic success today because of their ancestors’ status long ago. America remains the land of opportunity. Make the most of it.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9105333089828491,
"language": "en",
"url": "https://www.routledge.com/Management-Accounting-in-Public-Service-Decision-Making/Prowle/p/book/9781138366176",
"token_count": 1692,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.140625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:f2ea2743-2cf5-4a76-83d7-9c40fff7dbf3>"
}
|
Radical changes to public service delivery have swept across many regions of the world. Management accounting methods are vital to support operational and strategic decision making in public services internationally. This book provides a comprehensive and “leading-edge” guide to the topic.
Written by an expert scholar with practical experience of public service delivery, the book takes account of key trends such as increased demand for public services, financial austerity, technological change and enhanced performance management.
A globally relevant book, informed by cutting edge academic research and benefitting from integrated case studies, this is essential reading for both students and practitioners involved with the financial aspects of public services management.
Table of Contents
About the author
List of figures
List of tables
List of case studies
List of abbreviations
PART A: Context of management accounting in public services
- Public service organisations and the public sector
- What are public services?
- What are the distinctive features of public services?
- Challenges facing public services;
- Aims and structure of the book
- Operations management of public services
- Aspects of public service operations
- Operations management activities in public services
- Relevance of management accounting
- The importance of strategy in public services
- The nature of strategic management
- Strategic change in public services
- The strategic management process in public services
- Management accounting and strategic decision making
- Public service reforms
- Why does public service reform take place?
- What are the objectives of public service reform?
- What constitutes public service reform?
- Success and failure in public service reform
- What role should management accounting play in public service reform?
- Leadership, management and decision making in public services
- Leadership and management in public services
- Decision making in public services
- The role of management accounting
- The relevance of management accounting in public services
- The nature and purpose of management accounting
- The modern roles of management accounting in relation to public services
- Behavioural aspects of management accounting
- Financial analysis and economic analysis in public services – the distinction
- The management accounting in modern public services
- The factors that drive the configuration of management accounting in public service organisations?
PART B: Management accounting practice in public services
- Costing and cost information for decision making in public services
- How cost information can be used to manage public services
- Approaches to the classification of costs
- Costing systems and cost models in public services
- Identifying and estimating costs in public services
- Difficulties and complexities of costing in modern public services
- Developing costing systems in public services
- Management accounting and operational/tactical decision making in public services
- Operational and strategic decision making: The distinction
- Operational decision making in public services
- Management accounting methods
- Management accounting and public service strategy
- Cost and income benchmarking
- Strategic capital investment appraisal
- Strategic cost improvement
- Programme/client group analysis and budgeting
- Strategic options analysis
- Pricing strategies
- Strategic financial forecasting
- Decision support models
- Strategic financial leadership
- Management accounting and management control in public services
- Key management tasks
- The nature and purpose of management control
- Operational/tactical management control
- Cash and working capital control and management
- Budgeting systems and budgetary control in public service management
- Strategic management control
- Management accounting and performance management/improvement in public services
- The nature of performance in public services
- Systems of performance management
- Using performance information
- Improving performance
- The management accounting contribution
- Management accounting and risk management in public services
- The distinction between risk and uncertainty
- Risk and risk management in public service organisations
- Risk management frameworks in public services
- The importance of organisational resilience
- Management accounting in an environment of risk and uncertainty
- Contemporary aspects of management accounting in public services
- Modern costing developments
- Inter-organisational cost management (IOCM)
- Environmental management accounting (EMA)
- Management accounting and modern operational processes
- Technology and management accounting
Malcolm J. Prowle is currently a professor at the Gloucestershire Business School with extensive experience of public services finance in the UK and overseas.
"This book comes when public services face three major challenges: the austerity legacy; the pandemic and climate change. These require new strategic thinking and make unprecedented demands on decision makers. Without sound information, good management accounting, decisions are made in ignorance. With its emphasis on strategic issues, it should be essential reading for public sector managers and accountants alike." — Roger Latham, CPFA, Former Chief Executive and County Treasurer, Nottinghamshire County Council, Past president of CIPFA
"This book offers an interesting and complete analysis of the crucial role Management Accounting should play in public services. It is written in a format for international relevance, both for students and public managers. Its contribution is very necessary in the current economic context of countries like Spain that are facing the challenge of optimizing their public spending." — Dr Carolina Pontones Rosa, Associate Professor, Public Sector Accounting, University of Castilla-La Mancha, Spain
"Professor Prowle utilises his wealth of practitioner and scholarly experience, advanced methods and applied case studies to explore and bring alive the impact of high quality management accounting on decision-making across the public sector. Traditionalist to contemporary, operations to strategic, performance to risk; this book compares and contrasts through expert-informed, well-researched and lived insight." — Dr Peter Cross, CPFA, Chief Financial Officer, De Montfort University, Leicester, Former NHS Finance Director
"An excellent book that covers wide areas of management accounting in public sector organisations. It is one of a few books that address the context of the public sector in which management accounting operates at the international level. With its practical applications, the book will be useful for both managers and students in my country." — Dr Ali Alyamoor, Senior Lecturer in Management Accounting, University of Mosul, Iraq
"Malcolm Prowle puts management accounting into the context of public services, and from there goes on to elucidate how best to apply state-of-the-art approaches at a practical level. This involves considering the various strategic and operational decisions that face public services and examining the potential relevance and applicability of management accounting methods in each case." — Rob Whiteman, CPFA, Chief Executive, CIPFA
"An easily accessible and informative book for anyone keen to develop knowledge and understanding of the role of management accounting in public sector decision making. A vitally useful source for all public service managers grappling with a plethora of financial and practical challenges, resulting from increased demand and supply-chain pressures, made more complex in our post-Covid 19 world." — Lynne Barrow, CPFA, Assistant Dean (International), Hull University Business School
"The title of Professor Prowle’s book does scant justice to its relevance. Consistent with the premise that management accounting is a vital tool for decision making across a broad range of public sector organisations and functions, the text itself is much more than an arid accounting textbook. Using straightforward language, and drawing on numerous examples from the author’s extensive practical experience, it is as much about management as it is about accounting. While undoubtedly an invaluable resource for members of the management accounting profession, this book is also essential reading for anyone who is, or aspires to become, a financially literate public sector manager.
Professor Prowle’s book is also highly relevant for those who are tasked with leading public services in the developing world. Without undue reliance on specialist language, it provides a clear explanation of key concepts in public sector management, describes the vital role that information can play in guiding both operational and strategic decision making, and explores the contributions that management accounting can make in support of efforts to improve or reform public services. As such, the book occupies an important ‘sweet spot’ between generic management textbooks and more technical material." — Philip Davies, Former Permanent Secretary, Fiji Government Ministry of Health & Medical Services, Former Deputy Secretary, Australian Government Department of Health & Ageing, Former Vice-Chair, WHO Executive Board
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9310162663459778,
"language": "en",
"url": "https://www.wallstreetmojo.com/full-form-of-cvv/",
"token_count": 1379,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.03564453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:99eb6a55-927c-46de-be7d-608aab0013e5>"
}
|
Full Form of CVV – Card Verification Value
The full form of CVV is the Card Verification Value. CVV (also termed CSC (card security code), CVN (card verification number), CVC (card verification code), or CVD (card verification data)) can be defined as a combination of security features used on debit, credit, and ATM cards to eliminate or minimize the possibility of fraud and to establish the identity of the card’s owner.
The purpose of a card verification code or CVV is to minimize the risk of fraudulent transactions by establishing the cardholder’s identity. Consumers widely use credit and debit cards for online shopping and for making other payments, such as bills.
Online portals are not allowed to save the card verification number printed on credit and debit cards. This means that even if a portal has all the other details of a cardholder’s debit or credit card, it still cannot access the card verification number unless the cardholder types it in separately to confirm the payment transaction. This makes it impossible for others to use the card information to initiate payments without the cardholder’s authorization, safeguarding him or her against potential fraud.
The characteristics of the CVV number are as follows:
- CVV is a three-digit number on MasterCard, Discover, and VISA debit cards, credit cards, and ATM cards while a four-digit number on American Express debit or credit cards.
- CVV is provided at the back of the MasterCard, Discover, and VISA debit cards, credit cards, and ATM cards while displayed at the front of the American Express debit or credit cards.
- A CVV acts as a security feature for “card not present transactions.”
The two types are as follows:
#1 – CVV1:
CVV1 is used in card-present transactions to verify if the data is valid and issued by a banking institution. It is provided in the card’s magnetic stripe.
#2 – CVV2:
CVV2, unlike CVV1, is a code printed on the card. It is used in card-not-present transactions, such as mail order/telephone order (MOTO) or internet purchases, and acts as an added security feature for preventing potential fraud.
A card verification value number can be found on debit and credit cards. On MasterCard, Discover, and VISA debit cards, credit cards, and ATM cards, the CVV is a three-digit number located on the back, under the magnetic stripe. In the case of an American Express debit or credit card, the CVV is a four-digit number on the front right side of the card.
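Payment forms typically enforce only the format of the code, since the correct value is known only to the card issuer. A minimal illustrative sketch of such a format check, based on the lengths described above:

```python
import re

def valid_cvv_format(cvv: str, network: str) -> bool:
    # American Express uses a 4-digit code; Visa, MasterCard and
    # Discover use a 3-digit code. Only the format is checked here;
    # whether the value is correct is known only to the card issuer.
    length = 4 if network.lower() == "american express" else 3
    return re.fullmatch(rf"\d{{{length}}}", cvv) is not None

print(valid_cvv_format("123", "Visa"))               # True
print(valid_cvv_format("1234", "American Express"))  # True
print(valid_cvv_format("12a", "MasterCard"))         # False
```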
A card verification value is of huge importance in today’s world, where most shoppers prefer to shop for everything online. This is why most businesses are selling their products online to cater to more and more customers and adjust to today’s requirements.
Beyond shopping, people also tend to pay their bills, like credit card bills, electricity bills, telephone bills, mobile recharges, and insurance premiums, online. This saves them a lot of time and hassle, since users can pay their dues and bills, or even shop, anytime and anywhere without needing to go out, stand in long queues, and wait for their turn.
But with growing digitalization and online transactions, the scope for fraud has also risen. In response, the security mechanisms used to execute online transactions have been constantly enhanced to safeguard users from being robbed. Debit and credit cards come with a CVV, which acts as an added security feature for online transactions.
Online portals cannot save this vital piece of cardholder information, and therefore users have to provide the CVV manually to initiate transactions online. Online portals can save other cardholder details but will not have access to the CVV provided in a previous transaction. This safeguards cardholders from potential fraud and secures their online transactions. A CVV code prevents fraud and protects cardholders from data breaches.
CVV vs CVV2
CVV and CVV2 differ from each other on various parameters.
- A CVV2 is a type of CVV used for card-not-present transactions, whereas a CVV in general covers both card-present and card-not-present transactions.
- CVV is the short form for card verification value, whereas CVV2 is the short form for card verification value 2.
- A CVV is a three-digit number displayed on MasterCard, Discover, and VISA debit cards, credit cards, and ATM cards (a four-digit number on American Express debit and credit cards), and it acts as an added security feature during online transactions. A CVV2, on the other hand, is generated through a second-generation process, which makes it more difficult to guess correctly.
The benefits of CVV are provided as follows:
- Minimizes Fraud: CVV minimizes the possibility of fraud and protects cardholders against it. Online card transactions cannot be executed without a CVV; hence, users have to provide the CVV manually to initiate a payment from their bank accounts.
- Safeguards from Data Breaching: CVV protects cardholders from data breaches. Online portals are not allowed to save shoppers’ CVVs, so shoppers must enter their CVV every time they make an online transaction. The portals can save other information, but not the CVV or the card’s four-digit PIN.
A CVV can be regarded as an added security feature in online transactions. It is a three-digit number printed on the back of MasterCard, Discover, and VISA debit and credit cards. On American Express-branded debit and credit cards, the CVV is a four-digit number displayed on the front. A CVV can be of two types: CVV1 and CVV2. CVV1 is used in card-present transactions, while CVV2 is used for card-not-present transactions.
This has been a guide to the Full Form of CVV and its definition. Here we discuss the characteristics, types, and location of CVV along with importance, benefits. You may refer to the following articles to learn more about finance –
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9233841300010681,
"language": "en",
"url": "http://five4.five4media.co.uk/construction-of-biogas-plants-in-serbia/",
"token_count": 1220,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.10205078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:b1080e80-1272-4445-91a5-64b2502e50d9>"
}
|
DEVELOPMENT CONDITIONS ACCORDING TO THE ENERGY DEVELOPMENT STRATEGY OF SERBIA BY THE YEAR 2030
The strategic direction of energy development in the Republic of Serbia is primarily determined by internationally assumed commitments. The Treaty Establishing the Energy Community is the first treaty between the Republic of Serbia and the European Union, whereby the Republic of Serbia has undertaken the commitment to implement EU regulations. Membership in the Energy Community (treaty entered into force in 2006) and the process of accession to the European Union are of vast significance. The importance of the Treaty Establishing the Energy Community was confirmed by the ratification of the Stabilization and Association Agreement in 2008.
The key priorities of the adopted strategy (until 2030) are the establishment of energy security, energy market development and an overall transition to sustainable energy.
RAW MATERIAL CONDITIONS AND POTENTIAL
According to verified reserves of oil and natural gas, unless significant discoveries are made, it can be expected that by 2030 the exploitation of these energy resources in the country will have dwindled or been entirely exhausted.
At present, the most significant domestic energy resource is coal – the reserves of which should, according to the projected level of consumption, remain sufficient for exploitation even after 2050.
The laws and bylaws that govern the majority of RES-related activities are:
– Energy Law (Official Gazette of the Republic of Serbia, No. 145/2014)
– Law on Efficient Use of Energy (Official Gazette of the Republic of Serbia, No. 25/2013)
– Decree on Incentive Measures for the Production of Electricity from Renewable Energy Sources and Highly Efficient Cogeneration of Electricity and Heat (Official Gazette of the Republic of Serbia, No. 56/2016).
– Decree on the Conditions and Procedure for Acquiring the Status of Privileged Electricity Producer, Temporary Privileged Electricity Producer and Producer from RES (Official Gazette of the Republic of Serbia, No. 56/2016)
TECHNICAL AND ECONOMIC CONDITIONS
The potential annual production of biogas is estimated from data on the annually available quantity of substrates. For this purpose, indicative literature data on biogas yields are used. Potential biogas yields are mainly expressed per unit of fresh substrate mass. The potential yield of biogas varies depending on the moisture content of the substrate and also depends on the share of organic dry matter. The shares of dry matter and organic dry matter in the substrate are determined for a more accurate estimate of annual biogas production.
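As a simplified illustration of that estimation chain (all coefficients below are hypothetical placeholders; in practice they come from laboratory analysis or indicative literature values for the specific substrate):

```python
def annual_biogas_yield(fresh_mass_t, dry_share, organic_dry_share, yield_m3_per_t_odm):
    # fresh_mass_t: annually available fresh substrate mass, tonnes
    # dry_share: share of dry matter in the fresh mass
    # organic_dry_share: share of organic dry matter within the dry matter
    # yield_m3_per_t_odm: specific biogas yield per tonne of organic dry matter
    organic_dry_mass = fresh_mass_t * dry_share * organic_dry_share
    return organic_dry_mass * yield_m3_per_t_odm

# Hypothetical example: 10,000 t/year of manure with 25% dry matter,
# 80% of which is organic, at 450 m3 of biogas per tonne of organic dry matter
print(f"{annual_biogas_yield(10_000, 0.25, 0.80, 450):,.0f} m3/year")  # 900,000
```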
CURRENT STATE OF DEVELOPMENT IN SERBIA
The rapid increase in the world population and the growth of the global economy have resulted in increasing energy consumption. It is estimated that total global energy consumption will have grown by 58% by the year 2040 as compared to 2010. The use of conventional technologies for the production of energy from fossil fuels has led to fossil fuel depletion, increased environmental pollution and increased greenhouse gas emissions causing climate change. Greenhouse gas emissions increased by 70% between 1970 and 2004, thus necessitating the introduction of new energy sources that would lead to a secure energy supply, reduce environmental pollution and mitigate climate change. One way of producing renewable energy and possibly replacing fossil sources is the application of anaerobic digestion and biogas production. The application of biogas recorded a significant increase during the second half of the twentieth century, especially in developing countries. In addition to energy production, anaerobic digestion also serves as a solution to sanitation problems through wastewater treatment.
By definition, SWOT analysis represents a suitable manner of observing the prospects or obstacles for project implementation in the form of a comparative overview of the main strengths, weaknesses, opportunities and threats. The Energy Development Strategy of the Republic of Serbia enables the identification of key positive and negative factors that could affect the achievement of objectives, an overview of items that may serve for stimulating the implementation of the Strategy, as well as what might lead to delays and issues, either due to internal weaknesses or external constraints.
Sources of financing for the construction of biogas plants play a significant role and thus strongly affect the financial assessment. In most countries of the European Union and beyond, such projects are encouraged via the award of grants, for all or part of the investment, or via granting particularly favorable loans. The reason for the above is quite simple, given that encouraging such projects contributes to the realization of national goals in terms of energy efficiency, use of national human and material resources and reduction of dependence on imports of energy generating products. This also applies in Serbia. There are numerous options available, albeit with many limitations. Below is a description of the major sources of financing, with particularly favorable conditions for the production and use of renewable energy sources, including biogas. Each option should be properly considered, and all the costs incurred in securing the funds should be taken into account, as the seemingly cheapest option may eventually turn out to be costly.
Pursuant to the analyses presented in the material, it may be concluded that, according to the current conditions of the Decree on the Conditions and feed-in tariffs, as well as raw material prices, INVESTING IN BIOGAS PLANTS IN SERBIA IS PROFITABLE.
Investments may be organized on a standalone basis, with the purchase or lease of the required land (400 ha per 1 MW) and self-organization of the entire operation: production, commissioning and maintenance of the biogas plant.
Another form of construction is possible through joint ventures with farmers, where the farmers would provide the land required for plant construction, secure the operation of the plant and uptake the post-fermentation mass and heat. The partner investor would provide the financing, documentation necessary for the commissioning of the plant and contract with the Electric Power Industry of Serbia (electricity supply).
For the full market report please contact [email protected] for further details.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9590181112289429,
"language": "en",
"url": "https://financeprofessorblog.blogspot.com/2009/02/",
"token_count": 275,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.08154296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:15b3ecd1-af14-4d84-a42b-8795a13ac16b>"
}
|
Economic View - Can Talk of a Depression Lead to One? - News Analysis - NYTimes.com:
"The attention paid to the Depression story may seem a logical consequence of our economic situation. But the retelling, in fact, is a cause of the current situation — because the Great Depression serves as a model for our expectations...reducing consumers’ willingness to spend and businesses’ willingness to hire and expand. The Depression narrative could easily end up as a self-fulfilling prophecy.
The popular response to vivid accounts of past depressions is partly psychological, but it has a rational base. We have to look at past episodes because economic theory, lacking the physical constants of the hard sciences, has never offered a complete account of the mechanics of depressions.
The Great Depression does appear genuinely relevant. The bursting of twin bubbles in the stock and real estate markets, accompanied by huge failures of financial institutions and a drop in confidence, has no more recent example than that of the 1930s....To understand the story’s significance in driving our thinking, it is important to recognize that the Great Depression itself was partly driven by the retelling of earlier depression stories. In the 1930s, there was incessant talk about the depressions of the 1870s and 1890s; each of those downturns lasted for the better part of a decade."
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9481781721115112,
"language": "en",
"url": "https://lawaspect.com/united-states-economic-situation/",
"token_count": 851,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a5a38101-6c2d-47ff-8637-b97165a83f44>"
}
|
The US has the largest and most technologically powerful economy in the world, with a per capita GDP of $49,800. In this market-oriented economy, private individuals and business firms make most of the decisions, and the federal and state governments buy needed goods and services predominantly in the private marketplace. US business firms enjoy greater flexibility than their counterparts in Western Europe and Japan in decisions to expand capital plant, to lay off surplus workers, and to develop new products.
At the same time, they face higher barriers to enter their rivals’ home markets than foreign firms face entering US markets. US firms are at or near the forefront in technological advances, especially in computers and in medical, aerospace, and military equipment; their advantage has narrowed since the end of World War II. The onrush of technology largely explains the gradual development of a “two-tier labor market” in which those at the bottom lack the education and the professional/technical skills of those at the top and, more and more, fail to get comparable pay raises, health insurance coverage, and other benefits.
Since 1975, practically all the gains in household income have gone to the top 20% of households. Since 1996, dividends and capital gains have grown faster than wages or any other category of after-tax income. Imported oil accounts for nearly 55% of US consumption. Crude oil prices doubled between 2001 and 2006, the year home prices peaked; higher gasoline prices ate into consumers’ budgets and many individuals fell behind in their mortgage payments.
Oil prices climbed another 50% between 2006 and 2008, and bank foreclosures more than doubled in the same period. In addition to dampening the housing market, soaring oil prices caused a drop in the value of the dollar and a deterioration in the US merchandise trade deficit, which peaked at $840 billion in 2008. The sub-prime mortgage crisis, falling home prices, investment bank failures, tight credit, and the global economic downturn pushed the United States into a recession by mid-2008.
GDP contracted until the third quarter of 2009, making this the deepest and longest downturn since the Great Depression. To help stabilize financial markets, in October 2008 the US Congress established a $700 billion Troubled Asset Relief Program (TARP). The government used some of these funds to purchase equity in US banks and industrial corporations, much of which had been returned to the government by early 2011.
In January 2009 the US Congress passed and President Barack OBAMA signed a bill providing an additional $787 billion fiscal stimulus to be used over 10 years – two-thirds on additional spending and one-third on tax cuts – to create jobs and to help the economy recover. In 2010 and 2011, the federal budget deficit reached nearly 9% of GDP. Wars in Iraq and Afghanistan required major shifts in national resources from civilian to military purposes and contributed to the growth of the budget deficit and public debt.
Through 2011, direct costs of the wars totaled nearly $900 billion, according to US government figures. US revenues from taxes and other sources are lower, as a percentage of GDP, than those of most other countries. In March 2010, President OBAMA signed into law the Patient Protection and Affordable Care Act, a health insurance reform that will extend coverage to an additional 32 million American citizens by 2016, through private health insurance for the general population and Medicaid for the impoverished. Total spending on health care – public plus private – rose from 9.
0% of GDP in 1980 to 17. 9% in 2010. In July 2010, the president signed the DODD-FRANK Wall Street Reform and Consumer Protection Act, a law designed to promote financial stability by protecting consumers from financial abuses, ending taxpayer bailouts of financial firms, dealing with troubled banks that are “too big to fail,” and improving accountability and transparency in the financial system – in particular, by requiring certain financial derivatives to be traded in markets that are subject to government regulation and oversight.
Long-term problems include stagnation of wages for lower-income families, inadequate investment in deteriorating infrastructure, rapidly rising medical and pension costs of an aging population, energy shortages, and sizable current account and budget deficits – including significant budget shortages for state governments.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9595266580581665,
"language": "en",
"url": "https://smallbusiness.chron.com/sales-maximization-vs-profit-maximization-22079.html",
"token_count": 700,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.09423828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:effe87dd-455b-442d-8912-b155f45fa672>"
}
|
Sales Maximization Vs. Profit Maximization
Sales are the first step toward profits. Without sales, there are no profits. Profit is also called income, net profits and net income. Sales are also called revenues and result from products and services. In a perfect business world, sales maximization would work hand-in-hand with profit maximization to create the ideal scenario for company owners and shareholders.
Sales maximization is an activity, while profits are a byproduct. It may seem like an odd way to think about sales and profits. Sales require manufacturing, or purchasing, a product for resale. It also requires promotional efforts, sales staff, customer service and shipping. These are all activities. Sales minus expenditures for inventory, marketing, sales staff, shipping, general and administrative costs equals profit. Profit is what's left over.
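As a rough illustration with made-up numbers (none of these figures come from the article), profit falls out as the residual after all the activity costs:

```python
# Hypothetical figures: profit is what's left after the activity costs.
sales = 500_000            # revenue from products and services
cost_of_goods = 300_000    # inventory manufactured or purchased for resale
marketing = 60_000         # promotional efforts
sales_staff = 50_000       # salaries, commissions, customer service
shipping = 15_000
general_admin = 40_000

profit = sales - (cost_of_goods + marketing + sales_staff + shipping + general_admin)
print(f"Net income: ${profit:,}")  # Net income: $35,000
```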
Profit maximization has a lower limit of risk. Sales maximization leaves the company at risk. There is no guarantee that the higher sales level will generate income. In fact, many firms will sell a product at or below cost to establish a new customer base. There is no guarantee those customers will remain at a higher price level.
Groupon is an example of a company that is maximizing sales at the cost of profit. Their advertising cost to obtain new members is higher than the sales generated by those members. Their logic is that once a critical mass of members has been reached, sales will increase to the point where the company makes a profit. In the meantime, they're spending advertising dollars at an accelerating rate.
If a company is selling a product in a relatively stable market with controllable costs, an increase in sales will generally result in an increase in profits.
In an effort to increase profit, companies decrease expenses, cut back on promotional efforts and stretch staffing levels. While this works in the short term, it can backfire in the long run. Cutting back purchases for inventory means orders take longer to fill, resulting in dissatisfied customers. Decreasing advertising results in customers not knowing about the product. If they don't know about the product, they won't buy it. A better solution than across-the-board cuts, or eliminating promotional efforts, is to analyze what works and what doesn't.
Evergreen Versus Temporary
The goal of any for-profit business is to make a profit. Public companies have shareholders who expect, and demand, the company be profitable. Financial industry pundits judge a company on whether it has reached its projected profit levels. That goal does not change over time. Sales maximization may be a temporary goal to enter a new market, introduce a product or generate cash.
Neither sales maximization nor profit maximization is cash maximization. Sales that aren't collected immediately may leave the company in a cash deficit position. That position may be compounded by a fast growth rate that requires purchases of inventory for sales that won't be paid for if the company offers payment terms of 60 to 90 days. The company is owed the money and the sales are booked to the income statement resulting in a profit. However, the asset shows up as an accounts receivable, not in cash.
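A short sketch of that divergence, with hypothetical numbers, shows how a profitable month can still drain cash when customers pay on 60- to 90-day terms:

```python
# Hypothetical month: all sales are booked as revenue (and receivables),
# but customers on 90-day terms have not paid yet.
sales_booked = 200_000         # recognized on the income statement
cash_collected = 80_000        # only earlier invoices actually paid
inventory_purchases = 150_000  # paid in cash to support fast growth

profit = sales_booked - inventory_purchases       # looks healthy on paper
cash_flow = cash_collected - inventory_purchases  # actually negative

print(f"Booked profit: ${profit:,}")     # Booked profit: $50,000
print(f"Cash position: ${cash_flow:,}")  # Cash position: $-70,000
```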
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9085792303085327,
"language": "en",
"url": "https://www.econlowdown.org/in_plain_english?module_uid=92&p=yes",
"token_count": 180,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.02392578125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1c541cb6-8e1d-4c56-a25e-f2b8463e12c4>"
}
|
In this module, you will learn about the establishment of the Federal Reserve System, its history, its structure and its functions.
When you have finished this module, you should be able to:
- Explain that the Federal Reserve System is the central bank of the United States.
- Explain how members of the Board of Governors are chosen.
- Explain the structure of the Board of Governors.
- Describe the three components in the structure of the Federal Reserve System.
- Explain the three responsibilities of the Federal Reserve: conducting monetary policy, supervising banks and providing financial services.
- Explain why the Fed is called the bankers’ bank.
- Define monetary policy.
- Explain the difference between regulation and supervision.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9463146924972534,
"language": "en",
"url": "https://www.educba.com/asset-retirement-obligation/?source=leftnav",
"token_count": 1316,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.15625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:84b256bd-8acb-43af-91d9-56336754e249>"
}
|
Introduction to Asset Retirement Obligation
Asset Retirement Obligation is a legal and accounting requirement in which a company needs to make provisions for the retirement of a tangible long-lived asset to bring the asset back to its original condition after the business is done using the asset.
Explanation: Companies in several industries have to bring an asset back to its original state after the asset is taken out of service. This applies to oil drilling, power plants, mining, and many other operations. It also applies to properties taken on lease, which have to be brought back to their original shape. After use, the asset may have to be detoxified, as with nuclear plants, or machinery may have to be removed, as with oil rigs. The expected expenses to be incurred on such restoration are taken care of by asset retirement obligations.
How does Asset Retirement Obligation Work?
The restorations expenses are incurred at the end of the useful life of the asset, but a discounted liability is created on the balance sheet along with a corresponding asset right after the construction or initiation of the project or when the fair value of the restoration can be determined. This liability is then increased at a fixed rate gradually to match the expected obligation at the end of the life of the asset.
The rules for recognizing and accounting for asset retirement obligations are published by the Financial Accounting Standards Board (FASB) in the United States and set out in the International Financial Reporting Standards (IFRS) in much of the rest of the world. These frameworks provide detailed guidelines on the treatment of asset retirement obligations.
To measure the liability correctly, the company must determine its fair value when the liability is incurred; if the fair value cannot be determined at that point, the liability should be recognized at a later date, when the fair value becomes available. Prompt recognition is very useful for stakeholders, as these are high-value liabilities, and recognizing them gives a truer picture of the company's obligations.
Accounting for Asset Retirement Obligation
Accounting for Asset Retirement obligation requires to recognize of the present value of the expected retirement expenses to be recognized as a liability and fixed asset. The interest rate used for discounting is the risk-free rate adjusted for the effect of the entity’s credit standing. The liability is then increased every year at the risk-free rate and measured at subsequent periods for the change in expected cost. The increase in liability is recognized as an accretion expense on the income statement and is calculated by multiplying the liability amount by the risk-free rate. Any change in the expected expense is adjusted to the liability balance after every revision. The asset recognized on the balance sheet is depreciated, and the expense is recorded on the income statement.
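A minimal sketch of this flow (the cost, rate, and life below are invented for illustration; in practice the rate is the credit-adjusted risk-free rate described above):

```python
def aro_schedule(expected_cost, rate, years):
    """Recognize the ARO at present value, then accrete it each year
    until the liability grows back to the expected retirement cost."""
    liability = expected_cost / (1 + rate) ** years  # initial recognition
    schedule = []
    for year in range(1, years + 1):
        accretion_expense = liability * rate  # hits the income statement
        liability += accretion_expense        # liability balance grows
        schedule.append((year, accretion_expense, liability))
    return schedule

# Example: $100,000 expected restoration cost, 5% rate, 10-year asset life.
for year, expense, balance in aro_schedule(100_000, 0.05, 10)[:3]:
    print(f"Year {year}: accretion ${expense:,.0f}, liability ${balance:,.0f}")
```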
Differences between US GAAP and IFRS

| Basis of Comparison | US GAAP | IFRS |
| --- | --- | --- |
| Initial measurement of Asset Retirement Obligation (ARO) liability | The fair value is recognized as a liability as and when it becomes available. The discount rate used is the risk-free rate. | The liability is measured as the best estimate of the expenditure required to settle the obligation, discounted at the pre-tax rate. |
| Asset recognition from ARO | The ARO amount is added to fixed assets at the time of the estimate. | Generally included in property, plant and equipment. Recognized in inventory if incurred during a period when the property was used to produce inventory. |
| Subsequent measurements | Revisions are made from time to time to either the amount or timing of cash flows. Upward and downward revisions are discounted using the current and original risk-free rates, respectively. | Checked for change on every balance sheet date. Both the expected cash flows and the discount rate can be changed, with the adjusted liability shown on the basis of the new assumptions. |
Example of Asset Retirement Obligation (With Excel Template)
Let's take an example to better understand how an asset retirement obligation is calculated.
Assume a power company builds a power plant at a site with a 50-year lease. The asset takes 3 years to build and has to be retired at the end of the 47 years remaining after construction. The cost of dismantling the equipment, detoxifying the site, and cleaning it up is $50,000 in today's dollars. Because retirement happens 47 years from now, this cost will certainly be higher at that time, so the retirement cost is grown at the rate of inflation. Assuming an inflation rate of 3%, the cost of retirement at the end of 47 years will be $200,595. Assuming a risk-free rate of 7%, the present value of this obligation comes out to $8,342. See the illustration below for details.
Cost after 47 years is calculated as:
- = $50,000 × (1 + 3%)^47
- = $200,595
For the detailed calculation, refer to the template above.
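A quick sketch reproducing the numbers in this example (the 3% inflation and 7% risk-free rate come straight from the assumptions above):

```python
initial_cost = 50_000  # restoration cost in today's dollars
inflation = 0.03
risk_free = 0.07
years = 47

future_cost = initial_cost * (1 + inflation) ** years
present_value = future_cost / (1 + risk_free) ** years

print(f"Cost after {years} years: ${future_cost:,.0f}")          # $200,595
print(f"ARO liability recognized today: ${present_value:,.0f}")  # $8,342
```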
Advantages and Disadvantages
Advantages and Disadvantages of Asset Retirement Obligation are:
- As asset retirement obligation will be a real and big expense, it makes sense to provide for the expense as soon as the liability’s fair value can be determined.
- Asset retirement obligations help pre-plan the restoration of the property to its original state.
- Asset retirement obligations show fairness and accuracy of the financial statements.
- Asset retirement obligations are based on estimates and are prone to errors of judgment.
- The liability changes frequently.
- The rates used while recognizing the liability may change going forward and may change the liability.
- Asset retirement obligations do not cover restoration work required after other events that affect the asset, such as natural calamities (earthquakes, floods, etc.).
Asset retirement obligations are important from an accounting point of view. Without the regulatory requirement, businesses would use their own discretion in disclosing these costs, which could hurt stakeholders badly, as these costs can cause a severe drain on a company's cash balances and adversely impact the business. Accounting for the obligation well in advance gives the business time to plan and set aside resources for the event.
This is a guide to asset retirement obligations: what they are, how they work, and their advantages and disadvantages.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9468711614608765,
"language": "en",
"url": "https://www.paperadvance.com/blogs/john-mullinder/where%E2%80%99s-the-garbage-coming-from-more-and-more-from-homes.html",
"token_count": 409,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.326171875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2b61985d-f009-4fa8-8232-9e1c649c65c8>"
}
|
Municipal politicians love to point to “industry” as the main contributor to Canada’s waste stream. And while it’s true that most garbage today does come from industrial sources, there are clear signs that more and more garbage is being dumped by householders.
The gap between the two sources is narrowing.
A PPEC analysis of Statistics Canada data from 2008 to 2016 shows residential sources of waste tonnages climbing by 9% over the period while at the same time non-residential (industrial) sources of waste fell by 11 per cent. The waste we’re talking about here is paper, plastic, glass, metals, textiles, organics (food scraps), electronics, white goods such as fridges and appliances, and construction, renovation and demolition materials like wood, drywall, doors, windows and wiring.
The demographics and urban/rural split in each province and the strength of its industrial infrastructure obviously play a role in each province’s waste disposal history and performance. But by 2016, the residential share of the overall Canadian waste stream defined by Statistics Canada had increased in all but two provinces. In Quebec it jumped from 46.4% to 56.2% or 9.8 percentage points. Alberta registered a 7% increase in residential share of the waste disposed over the period.
At the same time as the residential share of the overall garbage stream climbed in most provinces, “industry’s” share obviously fell, in six of the eight provinces where data are supplied. Data for Newfoundland and Labrador, Prince Edward Island, Yukon, Northwest Territories and Nunavut are suppressed to meet the confidentiality requirements of the Statistics Act. The biggest falls in industrial share of the waste stream occurred in Quebec, Alberta, Saskatchewan and Ontario.
Food for thought as we design strategies to reduce Canada’s waste pile. Next: what materials are being diverted from Canada’s waste stream? And just how well or poorly are we doing?
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9007144570350647,
"language": "en",
"url": "http://accelerateeducation.com/course_descriptions/AE_ConsumerMath.html",
"token_count": 431,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.055908203125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:cebfd082-e952-48b9-9d62-75fd2057481d>"
}
|
This course focuses on the mathematics involved in making wise consumer decisions. Students explore the many ways in which mathematics affects their daily lives. The first semester will cover paychecks and wages, taxes, insurance, budgets, bank accounts, credit cards, interest calculations, and comparison-shopping. Second semester topics include vehicle and home purchasing, investing, and business and employee management.
- Solve basic arithmetic problems that require addition, subtraction, multiplication, and division of whole numbers, fractions and decimals.
- Estimate and round numbers.
- Calculate your earned income along with deductions and fringe benefits.
- Compute percentages, ratios, and proportions.
- Keep accurate banking and checking account records.
- Formulate a personal budget which includes expenses (utilities, insurance, taxes) incurred with home ownership.
- Identify the cost of buying on credit.
- Point out the importance of wise consumer buying, saving and investing.
- Use customary and metric units of length, volume, and weight to estimate measures and to convert from one system to another.
- Construct and read bar, line, circle, and pictographs as well as interpret information on a map.
- Compute the cost of remodeling a room such as area, number and cost of tile, amount and cost of carpeting, and amount and cost of painting.
- Compute net pay, deductions, federal and state income taxes.
- Compute premiums for life insurance and health insurance and understand Social Security benefits.
- Compute sticker price, financing, insurance, depreciation, and maintenance for an automobile.
- Read and interpret bus and airline schedules.
- Determine the cost of a trip including gasoline, meals, and hotels and use a mileage chart to calculate travel distances.
- Use unit prices, calorie charts, and cost of preparing a meal when grocery shopping.
- Compute the retail price of an item as well as the cost of renting an item.
- Explore methods of dividing profits/losses in a business partnership.
- Compute profit and loss on a stock transaction.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9249877333641052,
"language": "en",
"url": "http://finanalys.com/blog/wacc-guide",
"token_count": 555,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.005523681640625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:9a420298-4e50-4f6b-9ba6-6d14c11a787a>"
}
|
Much has been written on the meaning, application, and calculation methods of the weighted average cost of capital (WACC). The purpose of this article is to summarize the basic concept of WACC and to set out a standard method for calculating it as simply as possible.
In general, the weighted average cost of capital is an indicator used in assessing the need to invest in various securities and projects, discounting the expected returns from investments, and measuring the cost of capital. The weighted average cost of capital shows the minimum return the enterprise must earn on the capital invested, or its profitability; i.e., it is the total cost of capital, calculated as the sum of the returns on equity and borrowed capital, weighted by their respective shares in the capital structure. The economic meaning of the weighted average cost of capital is that the organization can make decisions (including investment decisions) as long as their level of profitability is not lower than the current value of the weighted average cost of capital.
In this context, WACC is used as the discount rate to calculate the net present value (NPV). If the NPV of the project is positive, the project is not only self-sustaining but also earns a profit above the company's average. You can also calculate the internal rate of return (IRR), the threshold cost of financing above which the project is not effective, and compare it with the company's WACC. Ideally, the IRR should be much higher than the WACC.
The WACC formula is:
WACC = (E/V x Re) + ((D/V x Rd) x (1 – T))

where:
E = market value of the firm’s equity
D = market value of the firm’s debt
V = total value of capital (equity plus debt)
E/V = percentage of capital that is equity
D/V = percentage of capital that is debt
Re = cost of equity (required rate of return)
Rd = cost of debt (yield to maturity on existing debt)
T = tax rate
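A minimal sketch of the calculation, and of using the result as a discount rate, follows (all inputs here are hypothetical):

```python
def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    """WACC = E/V x Re + D/V x Rd x (1 - T)."""
    total = equity + debt  # V = E + D
    return ((equity / total) * cost_of_equity
            + (debt / total) * cost_of_debt * (1 - tax_rate))

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the upfront outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical capital structure: $60M equity requiring a 12% return,
# $40M debt yielding 6%, and a 25% tax rate.
rate = wacc(60_000_000, 40_000_000, 0.12, 0.06, 0.25)
print(f"WACC = {rate:.2%}")  # WACC = 9.00%

# Using WACC as the NPV discount rate: invest 100 now, receive 30 for 5 years.
project = [-100, 30, 30, 30, 30, 30]
print(f"NPV = {npv(rate, project):.1f}")  # positive => clears the WACC hurdle
```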
Example of WACC calculation
For example, suppose you are considering investing in an agricultural project in China. To decide whether it is profitable and worth your investment, you need to calculate the WACC of this project and then use it in the NPV and IRR computations.
CPI of USA and China for 2019-2029 (Forecast of IMF)
Example of WACC calculation for agricultural project in China.
In future articles, we will review the next steps in identifying ways to apply the WACC indicator. If you have any further questions, please contact us.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9570190906524658,
"language": "en",
"url": "https://kateparham.com/qa/how-do-you-prove-a-payment.html",
"token_count": 977,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.038330078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:10fd3aa7-9d5d-4fa7-9940-e3caf81be1ad>"
}
|
- How do I prove a wire transfer?
- What is a valid receipt?
- What makes a receipt official?
- What is a bank transfer slip?
- How long does it take for wire transfer?
- What is payment proof?
- Is a bank transfer proof of payment?
- What a receipt should include?
- What is a payment receipt?
- What is a bank transfer receipt?
- What is a wire transfer vs direct deposit?
- How do I write a proof of payment?
- Is a receipt proof of payment?
- How much does a wire transfer cost?
How do I prove a wire transfer?
Wire transactions are safe and can be processed same-day.
Proof of payment using a wire transfer is usually a transaction register from your bank showing your account balance prior to the transaction, the transaction itself, and the account balance after the transaction is finalized.
Some government agencies use a warrant system.
What is a valid receipt?
A receipt is a written acknowledgement that the vendor has been paid for providing goods or services. To be valid, it must show:
- The name of the company providing the goods or services.
- When the specific services were rendered or articles purchased.
What makes a receipt official?
A receipt is official when it shows:
- your company's details including name, address, phone number and/or email address
- the date of transaction showing date, month and year
- a list of products or services showing a brief description of the product and quantity sold
What is a bank transfer slip?
Transfer Slip – a transactional document that records the movement of merchandise from one store to another.
How long does it take for a wire transfer?
Within 24 hours, in most cases. Wire transfers are a fast way to send or receive money electronically. While the speed of a transfer depends on several factors, most wire transfers between domestic U.S. bank accounts are completed within 24 hours. Transfers between U.S. and international accounts are completed in 1–5 days.
What is payment proof?
Generally, payment proof can be defined as a payment document that serves as evidence that a transaction occurred between a buyer and a seller. … In this case, transaction proof will make it easier to check and crosscheck the match between the recorded financial transaction and the transaction proof held by the company.
Is a bank transfer proof of payment?
Uploading proof of payment – evidence of a completed bank transfer – will allow us to credit your account before we receive the funds. This credit can be used to cover margin requirements and for other trading purposes.
What a receipt should include?
This is the information that should be included on a receipt:
- Your company's details including name, address, telephone number, and/or e-mail address.
- The date the transaction took place.
- A list of products/services with a brief description of each along with the quantity delivered.
What is a payment receipt?
A payment receipt, also referred to as a receipt for payment, is an accounting document that a business provides its customer as proof of full or partial payment toward a product or service. Payment receipts typically include the following information about the transaction: Business name.
What is a bank transfer receipt?
A bank receipt is a document that contains a summary of the transaction details that were used to send a payment to Flywire's account. The details include:
- Account number
- …
- Date the funds were released to Flywire
What is a wire transfer vs direct deposit?
Direct deposit is often the most convenient way for you to receive regular payments from your employer or the government because money is deposited directly into your bank account. Wire transfers offer a reliable way to immediately get money to another person no matter where they are.
How do I write a proof of payment?
Starting to write:
- Make sure you state explicitly what the payment is for, or what payment/transaction the letter is in regard to.
- Include all relevant information, such as the parties involved, dates of payments and amounts due or guaranteed.
- Be straightforward and polite.
Is a receipt proof of payment?
While an invoice is a request for payment, a receipt is the proof of payment. It is a document confirming that a customer received the goods or services they paid a business for — or, conversely, that the business was appropriately compensated for the goods or services they sold to a customer.
How much does a wire transfer cost?
Wire transfer fees are generally between $25 and $30 for outgoing transfers to a bank account within the US, and between $45 and $50 for transfers going out of the US. There can also be fees to receive the money, generally around $15.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9441320300102234,
"language": "en",
"url": "https://money.com/meet-litecoin-a-faster-bitcoin-that-gamers-love/",
"token_count": 1404,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.10009765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d6eb8538-8631-4982-82ad-c01b0308f0fd>"
}
|
Bitcoin was the first cryptocurrency, but as the first iteration of a revolutionary idea, it was born with notable drawbacks.
For instance, one feature of Bitcoin’s blockchain-based currency — security guaranteed by a decentralized method of tracking transactions — is also a flaw when it comes to the speed of transactions. Bitcoin can only process seven transactions per second at the upper limit, whereas Visa transacts tens of thousands in the same period.
That’s why alternative cryptocurrencies have been in development since the beginning of this decade. Their aim is to solve the challenges presented by Bitcoin.
One of the earliest competing visions was presented by engineer Charlie Lee, the thought-leader behind Litecoin.
With scalability and transactability in mind, Lee built a system that focused on delivering a viable cross-border payment platform that would surmount the substantial barriers currently in place.
The resulting coin stands out for embracing many of the values that make the cryptocurrency space so unique from a security perspective while effectively reducing transaction costs.
How Litecoin Works
Like many other blockchain-based cryptocurrencies, Litecoin borrowed from many of the concepts first pioneered by Bitcoin.
Numerous associated activities like mining, encryption, and proof of work are rooted in the same principles as the precursor cryptocurrency, though with some twists and different protocols underpinning its design.
The first and most obvious difference is the total amount of coins available for circulation. Instead of capping the total amount of issuance at 21 million like Bitcoin, the total number of Litecoin available for mining is 84 million.
However, unlike other attributes related to Litecoin, this one distinction is of lesser importance given that just like Bitcoin, fractional amounts of coins can be sent and received.
Where Litecoin really stands out is how long it takes to generate new blocks.
One of the main complaints facing Bitcoin is the sheer amount time it takes for a transaction to be confirmed, which currently stands at 10 minutes. By comparison, Litecoin takes just 2.5 minutes to record these transactions to prevent double-spending, improving the number of transactions it can handle relative to Bitcoin by a factor of four.
In his efforts to ensure Litecoin was a more viable currency platform without being accompanied by secondary solutions, Lee wanted to match the rates of other cross-border transaction processing groups like PayPal.
Apart from giving Litecoin greater transactability, it also improved upon the idea of scalability if the idea were to be adopted by a growing group of users.
How Litecoin Mining Works
The algorithm that Litecoin deploys for confirming transactions on the blockchain, known as Scrypt, is viewed as slightly less complex than the SHA-256 hashing algorithm used by Bitcoin.
Unlike Bitcoin, which largely depends on more specialty mining hardware to accomplish parallel processing power more efficiently, Scrypt’s more simplified design means that ordinary individuals can mine with PC components more common in retail or enthusiast-level machines.
While in the case of Bitcoin, it means that transaction confirmation rests in the hands of the most powerful miners that dominate the platform due to their advanced hardware, Litecoin in some ways can avoid this concentration of influence thanks to its design.
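As a rough illustration of the two hash functions involved, here is a sketch using Python's standard library (the input and salt are purely illustrative, not real block data; the n/r/p values match the scrypt parameters commonly cited for Litecoin):

```python
import hashlib

block_header = b"example block header bytes"

# SHA-256 (Bitcoin's proof-of-work hash): pure computation, which is
# easy to parallelize on specialized ASIC hardware.
sha_digest = hashlib.sha256(block_header).hexdigest()

# scrypt (Litecoin's proof-of-work hash): deliberately memory-hard,
# which originally kept commodity PCs competitive with specialty rigs.
scrypt_digest = hashlib.scrypt(block_header, salt=b"example",
                               n=1024, r=1, p=1).hex()

print("sha256:", sha_digest[:16], "...")
print("scrypt:", scrypt_digest[:16], "...")
```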
Ultimately, Litecoin is busily positioning itself to usurp market share from some of the more common cross-border transaction hubs like PayPal. One reason Litecoin can accomplish this feat is thanks to the smaller fees associated with each transaction.
Unlike the percentage amount associated with most e-payment methods, confirming a Litecoin transaction only costs 1/1000 of the amount being transferred.
Who Uses Litecoin?
Unlike Bitcoin, which has found itself championed more as an asset than a currency, Litecoin is rapidly rising as a popular tool for transactions, largely due to its notable advantages in terms of speed and scalability.
Although not necessarily a universal solution quite yet, it has long been a frontrunner in the field thanks to the common analogy of Litecoin being silver to Bitcoin’s gold for the cryptocurrency space.
Litecoin is being increasingly adopted by e-commerce as a means of transaction, but it really shines in the e-gaming space where it has won numerous fans. Especially in Asia, where online gaming is extraordinarily popular, Litecoin has built a tremendous userbase that utilize it to transact across blockchain-based gaming platforms that are rapidly growing in acceptance.
The Competition Litecoin Faces
A notable decision that characterized the growing divergence in opinions within the Bitcoin community was realized last summer with the hard fork decision that led to the emergence of Bitcoin Cash.
The idea behind this solution was to expand the block size to 8 MB from Bitcoin’s current 1 MB, enabling greater scalability and a currency solution that was more geared towards transactability than an asset for speculation. Just like Litecoin has focused on speed to help it garner greater adoption and use cases, Bitcoin Cash in many ways is seeking to emulate Litecoin’s scalability.
However, apart from the decision to change block size for competition, second layer solutions are being advanced to fix some of Litecoin’s perceived inefficiencies.
SegWit, or Segregated Witness, was designed to speed the record-keeping process and enable better transactability alongside lower fees for Litecoin and Bitcoin.
This idea was formulated to allow the incorporation of second-layer solutions, or frameworks designed to be built on top of the existing chains to supply greater functionality, mainly geared toward the purposes listed above.
One of the biggest developments to come from SegWit is the Lightning Network, an off-chain protocol designed to help offset several problems associated with the original blockchain designs.
Apart from improving scalability by enabling the confirmation of billions of transactions, it allows instantaneous transfers that can be conducted off-chain without the need for third party oversight for transactions and record-keeping. So though Litecoin might have the edge in terms of speed and costs, this solution may see Bitcoin overcome many of its own associated difficulties.
The Key to Unlocking Litecoin’s True Potential
Just like any other emerging platform, Litecoin’s longevity will be largely dependent on how quickly it is adopted as a solution for the problems it seeks to solve.
Apart from its original design, which lends itself to better scalability and transactability thanks to reduced confirmation times and low costs, second layer solutions have the potential to revolutionize micropayments, optimize speed, and make Litecoin more attractive for numerous reasons.
In the end, the cryptocurrency’s value as a solution will be demonstrated by how it is embraced by community participants and the velocity of transactions within the ecosystem.
If transaction numbers climb and more online brokers offer Litecoin trading, the ecosystem will also likely ascend in value and appreciate in tandem with rising interest in its application.
However, absent more widespread adoption, Litecoin will face an uphill battle to accomplish its goal of revolutionizing value transfers.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9541674256324768,
"language": "en",
"url": "https://rushabh.substack.com/p/3-frameworks-for-making-complex-decisions",
"token_count": 4080,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.00360107421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:29e0e81a-a505-49f3-88b7-c4b5cffbe63d>"
}
|
Life is full of complex decisions: capital purchases, such as a car or a house, planning a vacation, choosing a new job, picking a product strategy, prioritizing roadmaps, hiring someone. Complex decisions have several shared traits: the list of options is often extensive, evaluation criteria are ill-defined, the outcomes are hard to predict, input data is unavailable or incomplete. Humans understand large systems by building mental models, which are more straightforward than the reality they represent. Mental models are a great thing: they allow us to make progress without getting bogged down in every little detail. But they also have their flaws. Most notably, human cognitive biases, our failures to think and communicate clearly, lead us to sub-optimal decisions.
Psychologists and behavioral economists have spent a considerable amount of time and mental energy understanding the human decision-making process: how we think, how we fail, and how to make decisions that lead to better results. By systematically understanding our cognitive biases and flaws, smart people have come up with frameworks to counteract their ill effects. Using these frameworks can lead to decisions with better eventual outcomes. Amos Tversky, Daniel Kahneman, Richard Thaler, Dan Ariely, and Chip Heath have done seminal research in this field — and distilled their ideas into highly readable books. Thinking, Fast and Slow, Nudge, Predictably Irrational, and Decisive are amongst my favorites. I strongly encourage you to read these to gain a deeper understanding of the field.
In this post, I present three practical frameworks to improve decision- making in different contexts. Frameworks are hard to understand in the abstract. Just reading theory leads to a shallow understanding of how to apply them in practice. To make things more concrete, I use two practical problems that I have solved using some combination of these three frameworks.
How to buy a car: Large capital purchases, such as buying a car or a house, can make for challenging decisions. Some input data for our car-buying determination: I have a growing family with small kids. I have a short commute to work. I care about style and comfort. I don't intend to race my car on a track. I care about the environment and would like something efficient. I have a nominal budget of $50k in mind. A cursory examination of the car market should quickly reveal a broad spectrum of options. For the sake of this post, let's narrow that down to 1) a minivan (Honda Odyssey), 2) a hybrid sedan (Prius), and 3) a pure electric (Tesla Model 3).
How to pick the next feature: We make many complicated decisions at work. Product Managers and organizational leaders often need to decide what part of their product they should focus on given their goals. This strategic choice is perhaps the most impactful, on par with perfect execution. Input data: our app is in the market. It is growing slowly. Churn is higher than we'd like. Research shows that the current set of customers like the app, but don't love it. Should we focus on acquiring new users, increasing lifetime value, or churning fewer users?
How Not To Decide
1. Gut Feeling
Listening to your gut is probably the most common approach to decision making. It's the way we make most decisions - if we did an exhaustive process to decide what to eat for lunch, we'd never get anything done or be able to make any progress. Instinct is your subconscious brain pattern matching inputs with what it has seen in the past and making a quick, shortcut decision. Our brains are fantastic at taking in vast amounts of data and making gestalt decisions; don't fight your instincts.
However, when it comes to highly complex decisions, the very brain that helps us make rapid decisions and move forward with life, deludes us into making bad decisions. Our fast decision-making process is often known as the "reptilian brain" or "System 1 thinking". It is the reason we survived on the savannah: when we thought we saw a lion, our brain didn't take its time working through whether it was a bird, or a blade of grass, or a zebra. It told us to climb the tree first. Our deep-thinking, thoughtful cousins were pruned out of the family tree by the lion. These instant reactions, fight-or-flight instincts, all the shortcuts our brains use, can show up as cognitive biases in decision-making.
Let's use the car buying example. Imagine we walked into the local Toyota dealership. It's a boiling hot day in the middle of summer. Salespeople are extremely busy, overworked, and slightly rude. They give us the keys to a car that's been baking in the sun. We test drive it, hate it, and pass summary judgment: it's a pile of rubbish. The car is way too hot, takes forever to cool down, drives like a sloth. Additionally, we're unhappy about not being treated like royalty and don't want to buy from that dealer anyway — hard pass.
We have attributed the rudeness of a particular salesperson to not just the entire dealership, but all the dealers of this specific car manufacturer. This mistake is called a fundamental attribution error. We have attributed the car's inability to cool down instantly to a manufacturing flaw. We have ignored the base rate: all vehicles sitting in the sun on that day were hot and would take time to cool down. As a result of these biases, we may have discarded a perfectly reasonable option thanks to our instant decision making brain.
2. The Giant Spreadsheet
I love spreadsheets. They allow me to organize my life (and I love organization) and view things at various levels of detail. It is very tempting to distill every decision to some formula and take the flawed human out of the loop. The formula can be straightforward: weighted sums seem to do the trick. Every decision now becomes so precise, so mathematically elegant. Don't like the outcome? You must have gotten the inputs or weights wrong.
Let's visit our product feature prioritization decision. We could build features that target acquisition, LTV, or churn. Each row has a cost and an impact estimate. Any Product Manager worth their salt will come up with a table of priorities, with each of these features as rows, and drop in columns to show potential impact. Some complex mathematical jiu-jitsu comes next, and the potential impact column has numbers and color coding from red to green. We must pick the greenest feature because our matrix just told us so!
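A minimal sketch of such a weighted-sum matrix (every score and weight below is invented for illustration):

```python
# Hypothetical impact/cost scores (1-5) for each candidate feature.
features = {
    "acquisition funnel": {"impact": 4, "cost": 4},
    "notifications":      {"impact": 4, "cost": 2},
    "churn fixes":        {"impact": 3, "cost": 3},
}
weights = {"impact": 0.7, "cost": -0.3}  # cost counts against a feature

scores = {
    name: sum(weights[key] * value for key, value in attrs.items())
    for name, attrs in features.items()
}
print(scores)                       # one tidy number per feature
print(max(scores, key=scores.get))  # 'notifications' -- the matrix said so
```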
This approach is problematic because it reduces humans to automatons and throws out all intuition. Moreover, it oversimplifies by reducing highly complex information to a single number. In the made-up example above, working on notifications seems to win over everything else, using the scheme I've put in. But the model itself is biased - cost and impact estimates might be completely bogus, our gut might tell us that focusing on acquiring new users is more important, or that the cost of doing notifications is probably higher.
Intuition is essential — our brains are pattern-matching against past experiences and predicting the future. Numerical models create a false sense of precision and delude us into trusting the models. Our minds are excellent at translating vast amounts of information into decisions, and we should trust them while finding ways to correct their shortcomings.
The next section outlines the three decision frameworks that I have used in some shape or form. None of these frameworks are mine - I have merely adapted them for my purposes and found them to be applicable and relevant.
Framework 1: Reducing Dimensionality
The credit for this idea goes to my friend and colleague Josh Williams. The principles are easy to understand and apply on the fly, require little formal work, and help break through a decision making logjam.
Complex decisions are often challenging because they contain an overwhelming number of dimensions. Decomposing the problem results in a large number of smaller choices along each dimension. However, dimensions are not orthogonal — changes in one affect another. Trying to optimize all dimensions at the same time quickly gets overwhelming.
Take the car-buying example: we need to make individual decisions about passenger capacity, gas mileage, styling, manufacturer, safety features, cargo hauling, maintenance, buy vs. lease, and so on. In the example above, we need a car that can carry five humans, is efficient, stylish, safe, easy to maintain, and costs less than 50k. A Porsche 911 is stylish and safe, but doesn't cost less than 50k or carry five humans. A minivan fits most of the requirements but is on the lowest end of the style spectrum. A Prius is in the "meh" range on most things but does excellent on efficiency. The perfect car simply doesn't exist. What do we do?
A good approach in such circumstances is to reduce dimensionality. If you magically cared only about your budget and passenger capacity, the answer would become much more apparent. We can reduce dimensionality in 3 ways:
1. Aggressively ignore dimensions that you don't care about. In the car example, we could stop caring about maintenance. Maintenance plans are straightforward. Almost every major manufacturer has a good policy. Let's get rid of that completely.
2. Create "threshold" dimensions that you care about up to a certain point, but not beyond. For example, safety matters to my family, with our small children. But beyond a specific safety rating, any car is sufficiently safe, and we don't need to optimize any further.
3. Establish trade budgets. This is not dimension reduction per se, but helpful in understanding the relationships between different things. For example, if we care more about efficiency than style, getting high gas mileage is worth twice as much as having a sexier car. This approach gives us a rough calculator to prioritize the dimensions we genuinely care about.
The beauty of this framework is that you can quickly sort through the dimensions that matter and devalue or completely discard the ones that don't. Moreover, when we end up with a few real choices at the end of the process, we are assured that all of them satisfy our constraints and would make us happy. Beyond this point, all decisions are good decisions.
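A quick sketch of thresholds plus a trade budget in action (prices and 1-5 scores are invented for illustration):

```python
# Candidates scored only on the dimensions we kept; the rest were discarded.
cars = {
    "minivan":  {"price": 38_000, "seats": 7, "safety": 5, "efficiency": 2, "style": 1},
    "hybrid":   {"price": 28_000, "seats": 5, "safety": 5, "efficiency": 4, "style": 2},
    "electric": {"price": 48_000, "seats": 5, "safety": 5, "efficiency": 5, "style": 4},
}

# Threshold dimensions: we care only up to a point, never beyond it.
viable = {name: c for name, c in cars.items()
          if c["price"] <= 50_000 and c["seats"] >= 5 and c["safety"] >= 4}

# Trade budget: a point of efficiency is worth twice a point of style.
best = max(viable, key=lambda n: 2 * viable[n]["efficiency"] + viable[n]["style"])

print(sorted(viable))  # every survivor already satisfies our constraints
print(best)            # beyond this point, all remaining choices are good ones
```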
Framework 2: Mediating Assessments Protocol (MAP)
This approach is from an excellent article by Daniel Kahneman, Dan Lovallo, and Olivier Sibony. If this summary piques your interest, I encourage you to read the article in full. It is clear, easily understandable, and practical. If you're at a tech company, such as Google or Facebook, and are using a structured interviewing process, you're already using MAP without knowing about it.
Remember the "giant spreadsheet" approach to making decisions? The problem with that approach was that it threw out all human intuition. What if we kept an element of intuition in the mix, but had a way to neutralize a variety of cognitive biases? This is the central idea behind the MAP framework proposed by Kahneman and team.
Let's revisit the feature prioritization problem. We need to make a decision on which feature to build next. There is a trap here — we can easily mislead ourselves into believing that we are following a structured process by sitting through presentations about each option, evaluated in its entirety, with pros and cons, followed by a decision making or voting meeting. This method is subject to precisely the same biases - confirmation bias for things you like, and recency bias for the last option presented. It is essentially the equivalent of a holistic gut call.
Here is the MAP alternative:
1. Agree upfront on what the goals are. To continue with our example, let's say the objectives are 1) increase the number of daily users, 2) improve the performance of the app, and 3) reduce our operational costs.
2. One presentation per goal. This method allows us to compare all proposals on a particular dimension, instead of looking at all the aspects of one proposal. If we have a scoring rubric, we can score proposals per goal at this stage. These assessments are called mediating assessments.
3. A final evaluation of all proposals, while looking at the mediating assessments. Note: we are not merely taking a weighted average of the intermediate scores. Instead, we are using our judgment at this juncture while keeping all the data in front of our eyes.
The changes seem subtle, but the impact can be profound. The best proof of this approach is in the use of a structured interviewing process to evaluate candidates. If you have interviewed at modern tech companies, like Facebook, Google, or most modern startups, you have experienced this. Instead of having each interviewer simply provide an overall score, the interview process involves a series of mediating assessments.
In structured interviewing processes, each person interviews and makes a judgment about one area of competency - coding, system design, communication, people management, etc. Interviewers score candidates per dimension. The hiring committee looks at all the intermediate scores and then determines an overall rating. This approach is different from each interviewer judging the candidate in all of the different areas and giving one overall score. Structured interviewing is the norm in almost all tech companies. Studies on personnel selection have conclusively shown that using such approaches to interviewing leads to more accurate long term outcomes.
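A minimal sketch of mediating assessments in an interview loop (scores invented; note that the final call is a judgment made while looking at all of them, not a formula):

```python
# Each interviewer scores exactly one competency (a mediating assessment)
# instead of giving a holistic gut score on everything at once.
assessments = {
    "coding":        4,  # interviewer A
    "system design": 3,  # interviewer B
    "communication": 5,  # interviewer C
}

# The hiring committee reviews all mediating assessments side by side.
for dimension, score in assessments.items():
    print(f"{dimension:>13}: {score}/5")

# Deliberately no weighted average here: the committee applies judgment
# with all the intermediate data in front of it.
```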
Framework 3: WRAP
This framework is a summary of the WRAP process outlined in the Heath Brothers' fabulous book Decisive. I strongly encourage you to read the book, as well as use the summaryresources on their website (free registration required.)
The WRAP framework focuses on avoiding or overcoming cognitive biases that creep into all human thinking. It is easy to understand and practical to apply. Each maxim can be used independently toward decision making; apply a few or all.
1. Widen the frame
Let's go back to the car-buying example. We are trying to choose between a minivan or a Prius. This problem statement implies a particular frame: we have to decide between A and B.
However, the car is a means to an end - commuting to work, transporting children to school, picking up groceries, or traveling for leisure. In our choice, did we consider solving the more significant problem using some other means? Do we need a car at all? Could we use an electric bike to commute? Or Instacart for all groceries? How would that change our set of options?
A narrow frame is a common decision-making trap. It focuses our thinking on available options, instead of opening our minds to all possibilities, some of which may solve the problem in unique or non-traditional ways.
A classic sign of this trap is the "whether or not" question. When you hear your friend ask you "whether or not they should quit their job" or "whether or not they should build a feature" or "whether or not they should buy an iPad," you should smell a trap. One way out of this trap is removing the option you are leaning toward and making that a non-option. What if you absolutely could not quit your job or buy the iPad? What would you do then?
2. Reality Test Your Assumptions
When we survey the set of available options, we build models in our head of how those options are going to work out. These models get tested when they meet reality, and usually don't survive. We try to improve our models by finding evidence that supports or disproves the model. However, because of confirmation bias, we are much more likely to seek validating proof, rather than the contrary, or disconfirming evidence.
One way to get around this pernicious problem is to look for opposing or disconfirming evidence. Imagine we are in love with a particular feature. Instead of looking for reasons to support our instinct, look for the holes in our reasoning. Why could this feature fail or underperform?
How do other similar features perform? This line of questioning helps us determine the base rate. If most similar features underperform (low base rate), it is unlikely that this particular one is going to be the breakout.
Looking for disconfirming evidence can be difficult, especially when we're already heavily biased toward pursuing a particular path. One trick is to do a joint "premortem" exercise. Get together in a room, and imagine that you're six months into the future. The feature has been built and launched and isn't doing well. What went wrong?
Another approach to reality testing assumptions is to dip a toe in without diving in all the way. In the car buying example, we could rent a minivan for a week, followed by renting another car for a week, to test out what it would feel like living with that car. The cost of a mistake (perhaps you hate the way the minivan drives or turns out that the sedan is entirely too small for your family) is tiny compared to buying the car and discovering you made a mistake.
3. Attain Distance
One of the most striking passages in Andy Grove's book "Only The Paranoid Survive" is about Intel's decision to pivot from making computer memory to making microprocessors. Intel started as a memory company - and they were the world leader in manufacturing memory chips in the late 70s through the early 80s. The microprocessor business was niche, dwarfed by the massive memory business. However, the memory business was seeing enormous pressure from Japanese manufacturers and steadily losing margin. Pivoting the company away from its roots, to go all-in on microprocessors, was an incredibly difficult decision. Grove described how they finally did it: he asked co-founder Gordon Moore what a new CEO would do if the board replaced them both. Moore's answer, that a new CEO would get Intel out of the memory business, made the path forward obvious; Grove suggested they walk out the door, come back in, and do it themselves.
For complicated psychological reasons, we seem to make clearer decisions when we are deciding for others instead of ourselves. One of the most effective techniques for attaining distance is to ask:
"What would you tell your best friend to do in the same situation?" — Personal Context
"If you were let go and we hired someone else, what would they do in the same situation?" — Professional Context
4. Prepare To Be Wrong
We typically overestimate the impact of any particular decision. In reality, most decisions are reversible, or at least have escape hatches that are less catastrophic than we initially believe them to be. Jeff Bezos summarizes the concept of reversibility:
Some decisions are consequential and irreversible or nearly irreversible – one-way doors – and these decisions must be made methodically, carefully, slowly, with great deliberation and consultation. If you walk through and don't like what you see on the other side, you can't get back to where you were before. We can call these Type 1 decisions. But most decisions aren't like that – they are changeable, reversible – they're two-way doors. If you've made a suboptimal Type 2 decision, you don't have to live with the consequences for that long. You can reopen the door and go back through. Type 2 decisions can and should be made quickly by high judgment individuals or small groups.
— Jeff Bezos, Amazon Annual Shareholder Letter
What if we made a wrong car-buying decision? We own a car that we don't like, which we need to sell subsequently, and buy another car. There is a quantifiable dollar cost and some hassle in selling and buying cars and filing paperwork. But that's it. With that understanding, we no longer fear making a decision, knowing that the cost of reversing that decision is not life-altering.
Life is full of decisions. In the majority of cases, our instinct is a great decision-maker. However, when faced with highly complex decisions, the evolutionary processes that helped us survive the lions on the savannah can mislead us into making poor, often irrational choices. Using frameworks to make such complex decisions allows us to counter some of those cognitive biases and make good long term choices.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9370052814483643,
"language": "en",
"url": "https://spinsafe.com/why-digital-india-is-vulnerable-to-new-age-cyber-attacks/",
"token_count": 406,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.34765625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:8fd0002c-5025-4dc6-8883-3d88984e3d18>"
}
|
Even as governments and economies are coming to terms with the new normal and gearing up for long term impact of the crisis, technology innovators have been working overtime to design solutions which could help retain operations and growth. Supported by government policies and innovations in emerging technologies like AI, ML, IoT etc., India has been steadily inching towards a technology enabled economy.
However, with the rise of connected devices, efficient internet penetration, and widespread digitisation of multiple sectors, including education, finance, healthcare, retail, and even agriculture and logistics, comes the threat of cyber-attacks which can cause not only monetary losses but compromise data privacy and put the economy and lives in danger. As of the first quarter of 2020, India already recorded a 37% rise in cyber-attacks. Risks like data leakage, connection to unsecured Wi-Fi networks, phishing attacks, ransomware, spyware, apps with weak encryption (also known as broken cryptography) are some of the common cyber threats plaguing us. IoT and connected devices have also reported increased cases of data breaches.
Being the second largest consumer for smart devices and a country with one of the largest base of internet consumers, India continues to remain a sitting duck, vulnerable to several national and international cyber-attacks. Some of the key reasons for this vulnerability can be listed as:
- Outdated systems and processes: While we do enjoy smart personal devices, a large part of corporate and business technology systems continue to depend on outdated or legacy infrastructure, with poor or inadequate cyber security protection.
- Accelerated digital adoption, over a short span of time: The widespread digital adoption across public and private sectors, has left little or no time for the proper development of a backend cyber security infrastructure, putting a large amount of data at risk.
- Limited understanding about cyber security: The understanding of cyber security and its prevention continues to be limited to installation of antivirus and malware protection software on individual computers/ devices. Even as cyber-crimes are getting more and…
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9438084959983826,
"language": "en",
"url": "https://thelawdictionary.org/article/legal-definition-of-tolling-agreement/",
"token_count": 476,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.050537109375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:314ecaf4-7cac-40c7-a7f6-16226c98a55b>"
}
|
The Tolling Agreement might be mistaken for a highway toll bridge where money is collected. But it also has another definition with regards to rights and contract law. Here is the legal definition of the Tolling Agreement.
“Asserting Rights after Statute of Limitations”
The Statute of Limitations (also Statute of Repose or Nonclaim Statute) allows for the court system to proceed in an orderly fashion. Collecting evidence, deposing witnesses and filing claims would be quite difficult if there were no time restraints on lawsuits. The Statute of Limitations sets a fixed time period for completing certain matters.
While the statute of limitations may be good in most cases, it may be wise to suspend the rules due to some unforeseen event. A contract can be written with something called a Tolling Agreement, which allows for pausing, delaying or suspending the time period that will automatically kick in. This provision extends rights past the normal statute of limitations time period. Parties who have agreed to tolling, waive any defense.
At times, an action cannot be adequately completed in due time; tolling allows parties and authorities more time to assess and determine the legitimacy and viability of claims. Common circumstances where tolling may be involved include underage juvenile status, insanity, bankruptcy, natural disaster or good-faith negotiations. In each of these cases, a “special condition” exists that could lead to a sensible extension of right beyond the time frame limits. Liability insurance and other agreements may be invalidated by tolling agreements.
“Sports Use Tolling Agreements”
Another place where tolling agreements are used quite often are sports. Most modern professional sports – basketball, baseball, football, hockey and soccer – have collective bargaining agreements, which carefully stipulate the rights of both players and owners based on strict fixed time frames.
In college, when an athlete transfers or is injured, he can apply for another year of eligibility. This is a form of tolling.
Another example is a minor league deal in baseball or hockey. Young athletes want a chance to make the major league team. Many contracts require the major league club to make a decision within a certain period of time; if it does not, the athlete may try out with another team. A Tolling Agreement may suspend this time period due to injury.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9614855647087097,
"language": "en",
"url": "https://www.atb.com/company/insights/the-owl/the-future-of-alberta-oil/",
"token_count": 731,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.298828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:94815d64-e57f-4ae1-a270-0871bbf0912a>"
}
|
The future of Alberta oil
Because we are talking about billions of dollars of annual investment and output, even small setbacks create big holes to fill
By ATB Economics 9 March 2021 2 min read
Today’s Owl steps back from the near-term trends affecting the Alberta economy to examine the longer-term challenges facing our oil patch.
Not that long ago, it seemed inevitable that Alberta’s vast oil resources—the province is home to the third largest proven oil reserves in the world after Venezuela and Saudi Arabia—would continue to be developed until production reached six, seven or even eight million barrels per day. (Production averaged 3.7 million barrels per day in January.)
After all, the puzzle of how to extract bitumen economically from the oil sands of northern Alberta had been solved and the world was hungry for oil. The added production would be shipped to Asia, the U.S. and eastern Canada by new and improved pipelines and Canada would become a true “energy superpower.” There was talk of Alberta eliminating its provincial income tax, building up its oil and gas savings fund to rival Norway’s and creating economic opportunities that would improve the lives of millions of people.
But things changed. The U.S. fracking revolution increased supply, the price crashed, pipelines became environmental lightning rods and the war against carbon heated up.
On the one hand, the forces arrayed against oil use haven’t had much success. With the exception of 2008 and 2009 during the Great Recession and 2020 during the pandemic, global oil consumption has increased every year since the Kyoto Protocol of 1997 and there is a loose consensus that oil will be a major part of the global energy mix for many years to come.
On the other hand, the effort to reduce oil consumption continues. Several major oil pipeline projects have been cancelled. The large-scale adoption of electric vehicles—while still just a dot on the horizon—is becoming more realistic. It will take time, but the clock is ticking and global oil consumption could start to plateau or even come down. (Although a fossil fuel like oil, natural gas is seen by many as an important “transition fuel” and is, therefore, on a different track than oil.)
And because we are talking about billions of dollars of annual investment and output, even small setbacks create big holes to fill. If Alberta’s oil and gas output (using 2019 as an example) dropped by 15 per cent, the province would need to find new economic activity equivalent to Ontario’s auto sector to fill the hole.
Alberta’s oil industry is not disappearing. But we have to adapt to a world in which carbon is under siege. Growth, in turn, is going to have to come from different industries.
Some will be related to oil and gas extraction such as clean energy technology and petrochemicals. Some, like renewable energy, will complement oil and gas in the province’s energy portfolio. Some will build on other sectors such as agriculture and agri-food, tourism and health services. And some will be in areas such as artificial intelligence, entertainment and anything existing businesses and new entrepreneurs set their sights on.
It’s a bright future for Alberta, but it’s going to take a lot of hard work to keep it that way.
Answer to the previous trivia question: Louise McKinney was the first woman to be elected to the Legislative Assembly of Alberta in 1917.
Today’s trivia question: How many litres are there in a standard barrel of oil?
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9622908234596252,
"language": "en",
"url": "https://www.jagranjosh.com/general-knowledge/expenditure-limit-in-the-lok-sabha-and-assembly-elections-1552375290-1?ref=list_gk",
"token_count": 1078,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.279296875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:dce04b42-b5ea-4ebf-8d6f-a75817e76c09>"
}
|
What is the Expenditure limit in the Lok Sabha and Assembly Elections in 2019?
India is called the largest democracy in the world, and its elections are celebrated like a national festival. Yet there is a saying that Indian elections run on the 3Ms: Money, Mind and Muscle.
Total election expenditure in India is increasing by leaps and bounds. The combined US presidential and congressional elections of 2016 cost around $6.5 billion, and the upcoming Lok Sabha election of India is set to be the world's most expensive.
In 1952, the cost was 60 paisa per elector; it rose to Rs. 17 per elector in 2004 and declined to Rs. 12 in 2009.
It is worth mentioning that total government expenditure for each of the first three general elections was around Rs. 10 crore.
Government expenditure rose to Rs. 100 crore by the eighth general election in 1984-85. It crossed Rs. 500 crore for the first time during the 11th general election in 1996 and went beyond Rs. 1,000 crore during the 14th general election in 2004.
Total government expenditure for the last Lok Sabha polls in 2014 was around Rs. 3,870 crore, about three times the expenditure incurred for the 15th general election in 2009. This does not include the expenditure of political parties.
In the 2009 Lok Sabha polls, the cost to the exchequer was Rs. 1,483 crore. This does not include the expenses incurred for security or the amounts political parties spent.
Centre for Media Studies reports that for the general elections of 2014, BJP spent more than Rs. 700 crore for campaigning and publicity.
Details submitted to the Election Commission reveal that the BJP spent Rs. 17.60 billion (Rs. 1,760 crore) on fighting elections in the last five years, in which it won 22 state elections.
A candidate is not free to spend as much as he or she likes in an election. The law prescribes that total election expenditure shall not exceed the maximum limit prescribed under Rule 90 of the Conduct of Election Rules, 1961. Exceeding the limit also amounts to a corrupt practice under Section 123(6) of the Representation of the People Act, 1951.
Let’s have a look at the expenditure limits in elections:
Limit for Lok Sabha Elections;
The maximum limits on election expenditure vary from state to state. Candidates in the bigger states of India are allowed to spend more than those in the smaller states.
A candidate can spend up to Rs. 70 lakh, depending on the state from which they are contesting the Lok Sabha election. The expenditure limit in bigger states like Andhra Pradesh, Maharashtra, Madhya Pradesh, Uttar Pradesh, West Bengal and Karnataka is Rs. 70 lakh.
The expenditure limit in smaller states and UTs like Arunachal Pradesh, Goa, Sikkim, Andaman and Nicobar Islands, Chandigarh, Dadra and Nagar Haveli, Daman and Diu, Lakshadweep and Puducherry is Rs. 54 lakh.
It is worth mentioning that the limit for the Delhi Lok Sabha elections is also Rs. 70 lakh.
Limit for Assembly Elections;
The expenditure limit in the Assembly elections of bigger Indian states like UP, Maharashtra, Bihar, West Bengal and Andhra Pradesh is set at Rs. 28 lakh, while the limit is Rs. 20 lakh for smaller states like Arunachal Pradesh, Goa, Manipur, Meghalaya, Mizoram, Nagaland, Sikkim, Tripura and Puducherry. A sketch encoding these caps follows below.
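To make the caps concrete, here is a minimal sketch, assuming a simplified two-way classification of states; the figures come from the limits quoted above, while the function and classifications are illustrative only, not an official EC tool:

```python
# Minimal sketch encoding the expenditure caps quoted above
# (1 lakh = Rs. 100,000). The two-way state classification is illustrative.

LAKH = 100_000

LIMITS = {
    ("lok_sabha", "big_state"): 70 * LAKH,
    ("lok_sabha", "small_state_or_ut"): 54 * LAKH,
    ("assembly", "big_state"): 28 * LAKH,
    ("assembly", "small_state_or_ut"): 20 * LAKH,
}

def within_limit(declared_spend: int, election: str, state_class: str) -> bool:
    """Check a candidate's declared expenditure against the applicable cap."""
    return declared_spend <= LIMITS[(election, state_class)]

# A Lok Sabha candidate in Uttar Pradesh declaring Rs. 65 lakh:
print(within_limit(65 * LAKH, "lok_sabha", "big_state"))  # True
```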
The main components of election expenditure are:
a. Expenditure on vehicles during the election campaign: 34%
b. Expenditure on campaigning equipment: 23%
c. Expenditure on election rallies: 13%
d. Expenditure on electronic and print media: 7%
e. Expenditure on banners, hoardings and pamphlets: 4%
f. Expenditure on field visits: 3%
Candidates have to keep a separate account and file the election expenses with the Election Commission of India under the law.
All registered political parties have to submit a statement of their election expenditure to the Election Commission within 90 days of the completion of the Lok Sabha elections.
While all candidates are required to submit their expenditure statement to the poll panel within 30 days of the completion of the elections.
An incorrect account or expenditure beyond the cap can lead to disqualification for up to three years under Section 10A of the Representation of the People Act, 1951.
Due to persistent inflation in the economy, election expenditure is increasing by leaps and bounds every year, and political parties now have to campaign heavily to woo voters.
Some candidates have even admitted that while the EC allows a candidate to spend only Rs. 70 lakh to fight the Lok Sabha elections, in reality a campaign can cost up to Rs. 2 crore per candidate.
In conclusion, rising election expenditure risks making democracy a puppet of a few rich politicians: honest candidates of modest means are unable to fight elections, which is not a healthy practice for a democracy.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9555014371871948,
"language": "en",
"url": "https://www.pipelinelaw.com/2016/06/24/advisory-bulletin-warns-about-corrosion-under-insulation/",
"token_count": 911,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2265625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:50881bff-4b3a-4ad3-a873-1417052da72c>"
}
|
A recent PHMSA Advisory Bulletin warns the pipeline industry about Corrosion Under Insulation (CUI). Thermal insulation is frequently used on pipe transporting heavy crude oil: such products are often heated for more efficient transport, so the pipe is wrapped with foam insulation over the coating and then further covered with a tape wrap over the insulation. The crude oil release from a Plains All American pipeline near Santa Barbara in May of 2015 involved such thermal insulation, and the government’s investigation following that release prompted this Advisory from PHMSA.
In the Plains incident, PHMSA determined that water infiltrated the foam insulation, which allowed corrosion to develop. The cathodic protection (CP) system became ineffective, although CP readings continued to show sufficient current. Unfortunately, inline inspection (ILI) runs also did not reveal the extent of the corrosion. The most recent ILI of the line, run just before the incident, predicted a 47% wall loss at the point of rupture, which would not by itself trigger immediate repair under PHMSA rules. After the rupture, the actual extent of corrosion was confirmed to be 89%, which would have triggered corrective action. Plains had noted water infiltration after prior tool runs, and had noted an increasing number of corrosion anomalies, but the available data did not accurately predict the severity of corrosion. PHMSA’s Failure Investigation Report on the Plains incident is publicly available.
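To see how a reading like 47% can slip through screening, consider a minimal sketch of how reported wall loss is checked against a repair threshold. The 80% trigger and the 10% tool tolerance here are illustrative assumptions in the spirit of the immediate-repair criteria; the actual criteria in the regulations are more detailed:

```python
# Minimal screening sketch: comparing ILI-reported wall loss against a
# repair threshold, with an allowance for tool sizing uncertainty.
# The 80% trigger and 10% tolerance are illustrative assumptions.

IMMEDIATE_REPAIR_THRESHOLD = 0.80   # fraction of nominal wall lost
TOOL_TOLERANCE = 0.10               # assumed ILI sizing uncertainty

def assess_anomaly(reported_wall_loss: float) -> str:
    """Classify a single metal-loss anomaly from an ILI run."""
    if reported_wall_loss >= IMMEDIATE_REPAIR_THRESHOLD:
        return "immediate repair"
    if reported_wall_loss + TOOL_TOLERANCE >= IMMEDIATE_REPAIR_THRESHOLD:
        return "dig and verify"  # tool error alone could hide a critical flaw
    return "monitor"

# The Plains run reported 47% wall loss; actual CUI was later measured at 89%.
print(assess_anomaly(0.47))  # "monitor" -- which is why the flaw slipped through
```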
The risk of corrosion occurring due to water infiltration of pipe coating is certainly not unknown. Pipeline safety law requires that buried steel pipe be coated, with an electric current applied, both actions intended to prevent external corrosion. Different types of coating have been used over the decades, and the risk of corrosion may vary over time, by type of coating. Coating may also become ‘disbonded,’ meaning that airspace develops between the coating material and the pipe. That may also allow corrosion to develop. Electric CP is applied to pipe, in addition to coating, to further prevent the occurrence of corrosion. Disbonded coating or other forms of shielding between the pipe and the electric current can allow corrosion to develop. Current pipeline safety law also requires pipeline integrity assessments for lines located in environmentally sensitive or highly populated areas (called HCAs). Assessment methods include ILI of steel pipelines with technology that can detect corrosion wall loss, dents, cracks and other anomalies. Pipeline operators must inspect the sufficiency of CP on a monthly basis (for all pipelines, not just those in HCAs), and conduct ILI at least once every five years, to monitor and maintain pipeline integrity.
CUI is not a new phenomenon. The National Association of Corrosion Engineers (NACE) issued a report in 2006 that described the risk of corrosion under insulation, noting that while thermal insulation over coating is typically effective, corrosion under insulation can occur and presents a threat to pipeline integrity. Pipeline operators are required by law to take action to address known threats to pipeline integrity (usually more frequent ILIs and confirmation digs at identified locations to evaluate the existence or extent of that threat). The fact that the Plains ILI runs did not detect the CUI in this instance is the subject of continuing evaluation. It may have been due to inadequate ILI equipment or interpretation, unclear communication between the ILI vendor and the operator, or a failure of ILI technology generally to make accurate detection of wall loss associated with CUI.
The PHMSA Advisory Bulletin advises operators of all liquid pipelines to be aware of the risk of CUI and the associated risk of inadequate detection of wall loss by either CP or ILI. The Advisory is expressly linked to the Plains incident, but directed to the entire industry. The Advisory suggests that an operator of pipe with insulated coating consider one of several integrity management activities: (1) replace all pipe using thermal coating; (2) repair or recoat those sections of such pipe identified as having inadequate CP; (3) conduct more frequent ILI; (4) use more sophisticated ILI tools, capable of detecting stress corrosion cracking; (5) adopt more stringent repair criteria than required by law in an attempt to capture undetected corrosion before it becomes a serious risk; or (6) adopt a more advanced method of leak detection.
Advisory Bulletins issued by PHMSA do not have the force of law and cannot by themselves form the basis for enforcement. They often find their way into enforcement actions, however, in the guise of compliance or corrective action orders appended to allegations of violation of other regulations.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9528475999832153,
"language": "en",
"url": "https://www.tutorialspoint.com/marketing_management/marketing_management_overview.htm",
"token_count": 1317,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.21484375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:0853c84c-8ffe-4d60-b801-89827b48c46a>"
}
|
A market can be defined as the summation of all the buyers and sellers in an area or region under consideration. The area may be a country, a region, a state, a village or a city.
A market is a place where goods, commodities or services provided by sellers are swapped with buyers or purchasers for some value, shaped by need, demand and supply.
We can say that it is a place, which satisfies the potential needs of the buyers as well as the sellers. The market may have a physical existence or a virtual one. It may be local or global one.
A market has its own characteristic features. It involves the exchange and trade of commodities, and that activity has distinctive features of its own.
Let us take a look at the characteristics of a market.
A place for swapping goods and services for some value. The goods can be swapped for money, land or some other commodity.
This is a place where you can negotiate commodities
Coverage of all customer requirements is possible here
This is a place for innovation and creation
There is potential or capacity for buying and selling.
There is share of consumption as well as total part of demand.
Let us now take a look at the key elements of the market.
The key elements that make a market, without which a market is not complete, or the elements on which a market depends are as follows −
Place − The area where the swapping of goods, commodities or services takes place between the seller and the buyer. The place should be convenient to both the parties.
Demand − Market runs on supply and demand. A seller provides the products or services and a buyer wants to fulfill his/her requirements. A product with high demand is supplied more.
Seller − A seller is the person or the party who offers a variety of or even a single product or service to others in return of some valuable item.
Buyer − A buyer is the person or party who needs a product or service and in return is ready to pay some valuable item as demanded by the seller for the product.
Price − This is the cost or the amount that is to be paid for a product or service. It should be fixed; else, it may lead to conflict as well as an imbalance in the seller-buyer relationship.
Government Regulation − The government makes regulations that both the buyer and seller have to abide by. Everyone is treated equally before the law. For example, the seller is not allowed to sell illegal products, while the buyer is prohibited from buying them.
Product Specification − It is very important to specify the quantity required, ingredients used and all other details of the product as everybody has different tastes and requirements. It is also not necessary that what suits one person should suit another.
These are the key elements that can make or deteriorate a market. A market runs with all these elements together; if one of them is removed, there is no market. For example, if we remove the buyer from the market, the question of who will purchase the commodities arises. In the same way, each element has its own role in the market.
There are numerous reasons why a market grows or reduces its profitability. There are different factors affect the growth of a market in many ways.
Let us understand the importance and effect of each factor given below on a market with the help of relevant examples.
Flipkart runs a special sale offer where the customer needs to register for an item in order to purchase it. In this way, the site gets an idea of the product’s demand and tries to maintain the quantity of the item accordingly. If the number of buyers is high, the product needs to be restocked. However, if buyers are fewer, the product’s promotion needs to be stepped up to increase sales.
If a person wants to buy a car, following things need to be considered: what type of a car does he /she need, which brand, what are the brands available, what is the budget, etc. Most importantly, with this factor, one gets a variety of choices in a limited budget.
Lakme launches a new product, which gives the customer three-in-one service. It works as a face wash, face scrub as well as face pack. But the question is what was the need.
The simple answer is competition; this product is a technique to attract more customers and cope with the growing competition.
We buy a product only if it stands up to our expectations. Yardley claims that it moisturizes and nourishes the skin for six hours, so a person with dry skin will buy it expecting that claim to be true.
Cultural factors like the culture and tradition we follow also affect the market. For example, an Oriya woman would prefer a Sambalpuri saree for some special event over silk or any other type.
An individual will prefer buying gold only when rates are down. When the rate is Rs 20,000 for 10g, the number of customers increases, while when the rate is Rs 26,300 for 10g, the number of customers decreases.
What the market provides is very much dependent on social factors. Analysis shows that social factors impact the business of beverage companies. For example, Pepsi projects itself as a non-alcoholic beverage because it has to maintain the strict differences in cultures around the world.
Political factors are also important. Something that is banned by the government cannot be sold in the market, for example, the recent meat ban.
Marketing management is the process of planning & implementing the conception, pricing, promotion and distribution of products or services. It is a target-oriented process and an operational area of management.
Marketing management is basically an organizational discipline, which focuses on the practical usage of marketing orientation, techniques and methodologies in companies and organizations and on the management of a firm's marketing resources and activities.
The following are the main objectives of marketing management −
To satisfy the clients’ requirements and their objectives.
To leverage the gain for the growth of business.
To develop customer base for the business.
To create an appropriate marketing mix.
To raise the quality of life of people.
To build a good image of the organization.
To maintain a long-term perspective.
Now, we are clear about the need and objective of marketing management. Moving forward, let us discuss the broad marketing concepts in detail.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9613832831382751,
"language": "en",
"url": "https://amctrst.org/2017/11/30/more-economics-risks-and-rewards/",
"token_count": 1144,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2255859375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1e54bf34-573c-408c-b885-4ffefae083c3>"
}
|
Improvement in societal standard of living comes from advances in economies. Most advances in economies derive from technological innovation creating new products and services or enhancing productivity, at least in today’s environment. And technological innovation is either incremental, building on prior technology or fundamental, building on scientific discovery.
Scientific discovery often comes from research universities, funded both by the government (in wartime for defense, in peacetime mostly for health) and by private industry. Most incremental innovation is funded by industry and by private investors. Regardless of the source of funding, designing, developing, and producing the product that consumers or business want to purchase requires one or more entrepreneurs.
Who are these entrepreneurs, either independent or corporate employees (sometimes called “intrapreneurs”)? They are people who obviously believe in the idea or concept, but who are also motivated by the lure of potential rewards despite the inherent risks.
What are the risks? They may involve loss of money, loss of other opportunities to make money, loss of reputation, fear of failure, waste of a substantial portion of one’s productive years. For a corporate employee, it may risk his career and potential pension, perhaps his current employer.
As an interesting aside, entrepreneurship in Europe has lagged that in the U.S. significantly. One of the cultural reasons, I believe, is that Europeans are far harsher and more judgmental of failures. Someone who starts a company in Germany (or elsewhere) that fails, will be referred to, often for the rest of his life, as “Frederick, the one who started the software company that went bankrupt”.
What are the kinds of rewards being sought? They include: making a lot of money, being one’s own boss, favorable reputation, earned respect, satisfaction of accomplishment, economic and organizational independence. For most entrepreneurs, the chance to make a lot of money looms large, although it is certainly not the only motivating factor.
One of the enabling factors that has fueled the growth of innovation has been the availability of risk capital, or venture capital. Although reputation and bragging rights motivate the venture capitalist, his overwhelming motivating potential reward is monetary.
Consideration of the history of venture capital in the U.S. is very interesting. Prior to World War II, wealthy families backed occasional ventures, but there was no formal and professional organization. After the war, four VC firms came into existence: J H Whitney & Co, Rockefeller Brothers (later changed to VenRock), American Research and Development, and Bessemer Securities. By the mid 1970’s, there were a handful more. Then, in 1978, President Jimmy Carter tried to eliminate any difference in taxation rates between ordinary income and capital gains. Congress rebelled, and passed the Steiger Amendment, which enshrined a 50% reduction in tax rate for gains on investment held longer than 12 months. The outpouring of money into new and existing venture capital investment firms since that date has been nothing short of incredible. And, with all that money available now, entrepreneurs started emerging from the woodwork, and we have had the boom in new companies and new products that has changed the way we live. Motivation matters.
So what, if anything, can the government do to encourage entrepreneurship? I believe there are four areas: protection of intellectual property, funding of basic research, licensing requirements, and taxation policy.
Our patent laws are good. Our young and still small companies are vulnerable to foreign nations and companies stealing their intellectual property. The government could take a stronger position with foreign governments, most notably China, to put an end to the practice, perhaps even by joining the U.S. startup company as a joint plaintiff.
It is difficult for our corporations to fund basic research, when they are under pressure from investors to produce quarterly earnings gains. The downstream benefits of basic research are long in coming, and risky to begin with. We should have a national policy of funding the great research universities and qualified researchers, through the ups and downs of the economy, but of carefully and fairly vetting the various proposals. We should encourage universities to develop well-thought out technology transfer templates and agreements.
Licensing for many small businesses, especially service-based ones, is done mainly by the states, rather than by the federal government. There are many cases of restrictive licensing, lobbied for by existing competitors that make entrepreneurship in smaller sized businesses far more difficult than it should be. State governments should set up commissions to review all licensing regulations, with an eye toward fostering new business formation and competition with entrenched firms.
We should encourage the long-term investment nature of venture capital with tax policy. Without changing tax policy for investment in public companies, we could tax investment gains in private companies (private at the date of investment) on a sliding scale, favoring length of ownership. For example, an investment in a startup held at least two years could receive a tax break of 20% off the public shares capital gains rate, one held three years 40% off, etc., down to 90% off from 5 years and on. This would definitely incentivize both entrepreneurs and investors in startups to take a long range view.
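As a rough illustration of how the sliding scale would work, here is a minimal sketch. The base 20% capital gains rate and the 60% discount at the four-year mark are assumptions (the proposal above says "etc." between the three- and five-year steps):

```python
# Minimal sketch of the sliding-scale proposal described above.
# The base rate and the 4-year step are interpolated assumptions.

BASE_CAP_GAINS_RATE = 0.20                 # assumed public-shares rate
DISCOUNTS = {2: 0.20, 3: 0.40, 4: 0.60}    # years held -> discount off base

def effective_rate(years_held: int) -> float:
    """Effective capital-gains rate on a private-company investment."""
    if years_held < 2:
        return BASE_CAP_GAINS_RATE              # no break under two years
    discount = DISCOUNTS.get(years_held, 0.90)  # 90% off from five years on
    return BASE_CAP_GAINS_RATE * (1 - discount)

for years in range(1, 7):
    print(f"{years} year(s): {effective_rate(years):.1%}")
# 1: 20.0%, 2: 16.0%, 3: 12.0%, 4: 8.0%, 5 and on: 2.0%
```

The steepening discount makes the after-tax payoff of holding a startup investment past five years dramatically better than flipping it early, which is precisely the long-range behavior the proposal aims to reward.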
I think we get reasonably good marks in the arena of encouraging innovation. The suggestions above will help strengthen our position, but the most important thing we can do is to avoid doing damage to what we have today. Any prospective law or regulation should be strangled in its cradle, if it makes it more difficult to start a business, if it fails to protect intellectual property, if it reduces funding for scientific discovery or if it makes financial gain less appealing for high risk investments in startups.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9312455654144287,
"language": "en",
"url": "https://www.ag.ndsu.edu/archive/streeter/2004report/tourism_intro.htm",
"token_count": 570,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.020263671875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2a0980f5-b9b5-4382-9771-62b71755e5b9>"
}
|
Nancy M. Hodur, F. Larry Leistritz, and Kara L. Wolfe2
Table of Contents
• Participant Expenditures
• Direct and Total Economic Impacts
• Potential for Future Growth
Rural communities around the country are increasingly looking to the tourism sector as a source of economic growth, and North Dakota’s unique resources support the potential for rural tourism development. Our 62 National Wildlife Refuges, more than any other state, showcase the potential for wildlife-oriented recreation. Over the past decade, hunting and fishing by nonresident sportsmen has increased substantially (Bangsund et al. 2004), which in turn has stimulated the development of outdoor recreation-oriented businesses (Hodur et al. 2004). Many business operators and other community leaders would like to broaden the region’s nature-based tourism sector to include birding and other wildlife viewing, hiking, biking, and similar soft adventure activities. However, little is known about the region’s nature tourists, their backgrounds, interests, and satisfaction with available opportunities.
Previous research has identified nature tourists in general, and particularly birders, as a substantial source of economic activity. However, past studies have produced widely varying estimates, with an Arizona study reporting that visiting nature tourists spent an average of $84 per person while a Nebraska study reported expenditures of $1,875 per visitor. Given the wide range of findings from previous research, a study of participants in a local birding festival offered timely insights regarding this group of nature tourists.
One group trying to capture the economic development potential of North Dakota’s natural resources is Birding Drives Dakota, a non-profit corporation representing several communities formed to promote the Coteau region of central North Dakota as a birding destination. Birding Drives Dakota (BDD) has published a brochure describing area birding opportunities, mapping self guided tours, and offering tips for sighting birds unique to North Dakota, such as the Baird’s Sparrow. The group sponsors an annual event called the Potholes and Prairie Birding Festival. The first festival, held in 2003, was very successful with over 300 participants. This study focused on participants attending the 2004 festival, held in Jamestown, ND June 11-14.
The purpose of the study was to examine the present and potential economic impact of nature tourism in nonmetro areas of North Dakota. Specific objectives included:
1. Determine the residence, demographic characteristics, and expenditures of participants attending the 2004 Potholes and Prairie Birding Festival (the Festival).
2. Estimate the secondary and total economic impacts associated with the Festival, including visiting participants’ expenditures (see the illustrative sketch after this list).
3. Examine the potential for further growth in birding and related activities in the region.
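As a hypothetical illustration of objective 2, direct spending by visiting participants is typically scaled by a regional multiplier to estimate secondary and total impacts. Every figure below is a placeholder, not a result from the study:

```python
# Hypothetical illustration of an economic-impact calculation:
# scale direct visitor spending by a regional output multiplier.
# All figures are placeholders, not Festival results.

visiting_participants = 250      # hypothetical count of non-local visitors
avg_spend_per_visitor = 400.0    # hypothetical spending per visitor, USD
output_multiplier = 1.8          # hypothetical regional output multiplier

direct_impact = visiting_participants * avg_spend_per_visitor
total_impact = direct_impact * output_multiplier
secondary_impact = total_impact - direct_impact

print(f"Direct impact:    ${direct_impact:,.0f}")     # $100,000
print(f"Secondary impact: ${secondary_impact:,.0f}")  # $80,000
print(f"Total impact:     ${total_impact:,.0f}")      # $180,000
```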
Next section: Study Methods
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9756898880004883,
"language": "en",
"url": "https://www.elikarealestate.com/blog/millennials-set-impact-real-estate-market/",
"token_count": 851,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.12451171875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:0ec73f01-19a1-4e51-8cbc-f0e4fa18676d>"
}
|
Many are familiar with the Baby Boomer Generation and its impact on society. However, the Millennial Generation is garnering more attention from researchers. Although there are no precise birth years that define the Millennial Generation (also known as Millennials or Generation Y), it is generally defined as those born from the early-1980s to the early-2000s. There are more than 80 million Millennials (born between 1980 and 1999), according to U.S. Census Bureau statistics.
This will have an impact on a broad swath of society. However, we focus our attention on the real estate market. We will examine trends with an eye on how it will impact New York City’s housing market going forward.
Millennials take on student debt
The generation is highly educated. The high school graduation rate is currently 72%, and 68% of those individuals are entering college, according to the U.S. Bureau of Labor. Although this should generate higher earnings over the long term, Millennials have taken on an average of $25,000 in student loans. According to TransUnion, the average student debt climbed to $29,575 in 2014, from $17,442. Approximately 40% of the group have some amount of student debt.
This trend is likely to continue, and with students leaving college with more debt. Tuition is rising faster than inflation. In spite of this, enrollment has continued to increase. A few years ago, this was partly explained by the challenging employment market as people went to school to attain new skills. However, demand will likely continue to increase in the face of a changing employment market that emphasizes knowledge, in our opinion.
Although graduates are saddled with higher debt in the near-term, ultimately this should result in higher wages.
Difficult employment market
Recent studies show the unemployment rate among Millennials is about twice that of the entire workforce. Moreover, a study in 2011 showed the underemployment rate in the 18 – 24 age group was 28.6%, the highest in the various age brackets.
Aside from that, the economy changed as they were growing up. Their parents and neighbors likely were laid off at some point, and the Great Recession dealt a severe blow to many households.
Moving out
Poor employment prospects have led many recent college graduates to move back home with their parents. However, the recent improvement in the economy, which has brightened the employment picture, may provide the impetus for Millennials to seek their place. This should give a boost to the NYC housing market. Overall, the group is expected to form 24 million new households by 2025, according to Harvard’s Joint Center for Housing Studies.
There are indications that the reasons above only delayed moving out. Other recent surveys show three-quarters still believe homeownership is an important long-term goal. Over the next five years, 8.2 million new households will form, and 74% plan on moving out over the next five years.
New York City offers many of the amenities this generation craves. A recent study by Brookings showed that Millennials would rather have interesting experiences than possessions. NYC, with its access to a myriad of events, Broadway shows, and entertainment, offers what this generation is seeking. Niche recently named NYC one of the best places to live for Millennials.
Baby boomers are moving out of the city. In the years 2007 – 2013, there was a 13.9% decline in that generation’s population. However, we expect this to be replaced by Millennials. Manhattan is in the top 10 markets where they are migrating. Metro Washington D.C. is high on the list. However, as employment prospects improve, we expect this situation to change. The unemployment rate in Manhattan was 6% recently, compared to 3.2% in Arlington County in Virginia.
High unemployment and slow wage growth, along with higher student debt, have delayed many Millennials’ entry into the housing market. However, as the economy improves, the situation should change. New York offers many of the amenities this generation seeks, which should increase demand in the rental market and, ultimately, the home-buying market.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9660377502441406,
"language": "en",
"url": "https://www.firsttechwc.co.za/post/directors-view-2",
"token_count": 1308,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.10546875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e2a5cc44-028b-4481-9512-fe92bf74ac9d>"
}
|
Last week, Microsoft released the results of its two-year experiment hosting a datacenter on the seabed just off the Scottish coastline. This was an effort to extract greater efficiencies from a thermal cooling perspective while using less electricity and reducing the associated carbon cost to the planet.
On a daily basis, companies are talking about the concept of “going green” and being more environmentally friendly, but few are actually planning for a change in our planet’s climate. Executives should visualize their companies in 30 years’ time, as climate change impacts their profitability and corporate survival.
Unfortunately, most of them do not, with the focus very much short term on the next quarter’s earnings to please shareholders. The thinking that permeates is always that the next CEO or his successor can fret about the long-term climate strategy…
First, some background: At an elevation of 3300 feet in the Spanish Pyrenees, a wine farmer by the name of Miguel Torres invested in his company’s future by planting 104 hectares of a pinot noir cultivar 10 years ago. At the time, common knowledge dictated that grapes won’t do well that high above sea level, owing to the cold temperature. However, Torres, being the head of one of the largest wine families in Spain, had access to scientific data that showed that the current Rioja wine region would be nonviable for growing grapes within 40 – 70 years, owing to climate change.
The actual wine belt of Europe would move north by as much as 40 kilometres per decade, so much so that some farmers are considering planting grapes in the south of England and even as far north as Scandinavia. Should this happen, Miguel will have effectively mitigated his company’s risk by planting this vineyard in the Pyrenees as a hedge against future climate change.
The question, however, is what companies are currently doing to prepare or plan for downside scenarios when it comes to climate change.
Planning for more hurricanes or rising sea levels owing to climate change, might be an important issue for a coastal property developer, but it is very difficult to justify spending a lot of shareholders money on mitigating risk X if the company could actually be blindsided by risk Y instead.
The evidence thus suggests that the corporate sector is doing very little other than reducing carbon emission and perhaps improving on their environmental sustainability. Some have taken the more pessimistic approach of adaptation.
The problem with an adaptation strategy is of course that it has very little PR value for a company. Nobody is trying here to save the planet for future generations….it is all about trying to remain profitable when the earth starts falling apart. No streetwise company is going to publicize that kind of thinking.
New regulations in the USA from the Securities and Exchange Commission will force companies to reveal any material risk they may face from climate change. This will allow investors to adapt their strategies according to the way companies view climate risk.
Climate change can however be viewed by companies as more than an ominous reality but perhaps even as a business opportunity.
In places like Brazil, state-owned banks such as BNDES and Banco do Brazil are evaluating whether investing in projects makes sense if their sustainability does not exceed 20 or 30 years. Agricultural giants such as Monsanto are developing genetically engineered crops that can better withstand drought, and some global shipping firms are using satellite imagery to plot more fuel-efficient transportation lines through a partially ice-free Arctic passage. In the American West, power companies such as TransAlta have put future power plants on hold, seeing that water rights for the project’s lifespan could not be ensured.
Dan Ariely, the behavioral economist who penned the book “Predictably Irrational”, is quoted as saying that climate change “is a problem that is perfectly designed to make people do nothing: It happens far in the future; its effects will be felt most greatly by other people; and the efforts of any one individual are minuscule.”
The problem with climate change is of course that the time horizon spans over decades and most corporate companies work on a business plan of typical 5-7 years at a time. Investing massive amounts of money in a project where the potential return could only reflect in 20 or 30 years might be difficult to swallow for most shareholders and executives, especially if the future probability is difficult to predict.
Take the example of an electronic manufacturer that relies on silicon supplies from Bolivia for its semi-conductors. If your climate change model predicts massive upheaval in that country’s economy, leading to possible political unrest and a massive increase in the price of silicon, do you start stockpiling silicon now?
This is a serious potential risk to the company, but there is very little that you could do about it from a hedging point of view, as silicon is not a futures traded commodity. The company might as well disclose the risk in its SEC filings and continue doing business as before.
Another hedging strategy would be to buy insurance against potential climate change.
Already the majority of reinsurance companies are pricing risk according to their climate change models. Companies are however experiencing a false sense of security thinking that their risks are hedged. Remember that insurance policies are only valid for 12 or 24 months at a time, thereafter you are actually renewing on a yearly basis. Should the insurance company’s climate change risk model predict a catastrophe in 20 years’ time, there is no need to increase premiums now. Nothing prevents them from refusing cover in 15 years’ time as the effects of climate change slowly unfolds or increasing the premiums at an unaffordable rate when it suits them.
Think back to the Bhopal chemical disaster in India in 1984. After the event, pollution liability insurance went off the market completely for a time. When the reinsurance companies offered it again, the cost was 10 times higher than before the event!
Thus, insurance does not work very well as an adaptation strategy to climate change either, as rising insurance cost is inherently difficult to protect a company against.
In conclusion, even if the effects of climate change are foreseeable, they can be impossible to hedge. The lesson here is that identifying a risk is not the same thing as being able to negate it.
Until next time, thank you for your continual support of First Technology and stay safe!
Johan de Villiers
First Technology Western Cape
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9201228618621826,
"language": "en",
"url": "https://www.foursolar.com/renewable-energy-production-and-management-by-four-solar/",
"token_count": 903,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1298828125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:4e9df30f-625f-4838-87b4-6ebabc016f47>"
}
|
Renewable Energy Production and Management
Human beings have been abusing the once-abundant natural resources of planet Earth, thereby contributing to global warming. According to physicists, the emission of toxic gases and the rise in carbon dioxide levels have significantly depleted the ozone layer. The time has come for us to act by reducing our dependency on fossil fuels and increasing renewable energy generation to fulfil the energy needs of this and future generations. A recent study estimates that emissions from the electricity sector account for almost 30% of the total driving global warming.
The emissions come from the consumption of natural resources in the form of coal and other fossil fuels. Relying on solar energy is the most convenient and feasible option which we must consider to mitigate the issues related to global warming.
This is precisely where the services of Four Solar can be effectively utilized.
ABOUT FOUR SOLAR
Four Solar aims at providing superior solar energy solutions by integrating latest technologies from the best manufacturers – of panels and inverters – in the country and the world. The firm excels at design engineering of rooftop solar systems by using common sense and not compromising on simple points thereby maintaining energy production over the next 25 years.
Established in 2013, Four Solar has completed more than 500 installations, with a total of 35 MW of project experience, under the able leadership of Mr. Indrasen Bollampally, Managing Director.
Four Solar’s goal is to privatise power by making solar energy accessible to all the citizens of India.
ROOFTOP SOLAR SOLUTIONS:
The Rooftop Solar Solution allows us to install solar panels and inverters on the roof as per the design document. The experts at Four Solar visit the site for a detailed survey, design the panel and structure layouts and suggest a suitable solution. After the project is approved, the solar panels are installed on the roof, facing south at an inclination equal to the site’s latitude for maximum generation. The inverter converts the Direct Current to Alternating Current [AC power], which is used by the electrical appliances in a building – residential, commercial or industrial.
Benefits of Rooftop Solar:
- Own Production: This is the first time ever a household can produce and consume power from rooftop. Anyone who produces and consumes power is called a Prosumer. A prosumer depends less on the government and more on self.
- ROI: The financial plan formulated by Four Solar under the CAPEX model gives financial returns in 3.5 to 5 years from the date of commissioning (see the payback sketch after this list). The lifespan of solar panels is generally estimated to be between 25-27 years.
- Revenue Asset: The solar system, after ROI, will continue to save money up to 25 years making this a long-term revenue making asset.
- Reduction of CO2: Be a proud citizen by reducing 1.46 Tonnes of CO2 per KW per Year.
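A rough, hypothetical payback calculation consistent with the 3.5 to 5 year claim above; all figures (system size, cost, generation and tariff) are assumptions for illustration, not Four Solar quotations:

```python
# Rough, hypothetical payback calculation; every figure is an assumption.

system_kw = 5.0                  # assumed rooftop system size
cost_per_kw = 50_000.0           # assumed installed cost, Rs. per kW
units_per_kw_per_year = 1_500    # assumed generation, kWh per kW per year
tariff = 8.0                     # assumed grid tariff, Rs. per kWh

capex = system_kw * cost_per_kw
annual_savings = system_kw * units_per_kw_per_year * tariff
payback_years = capex / annual_savings

print(f"Capex:          Rs. {capex:,.0f}")           # Rs. 250,000
print(f"Annual savings: Rs. {annual_savings:,.0f}")  # Rs. 60,000
print(f"Simple payback: {payback_years:.1f} years")  # ~4.2 years
```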
The package of Solar in a Box is a unique strategy developed by the Four Solar team that allows individuals to purchase a solar set that is easy to install within a residential environment. This box is equipped with all the necessary tools that can help you to harness solar energy efficiently. The product is directly delivered at your doorstep which makes it even more convenient for the consumer to avail and use.
The box contains solar panels, inverter, structure and cables along with an instruction manual that will guide you through the installation process. All you need to do is visit the official website of Four Solar to purchase the Orange Box.
SOLAR DG SYNC
The integration of Solar and DG is required to avoid failure of a DG Set and/or Solar Inverter. An inverter works on a reference voltage; that is, it requires voltage from the connected source of power (Grid, DG or Battery). However, when solar produces surplus power during a power shutdown with the DG on, precautions must be taken for safety and financial reasons. This integration can therefore be critical in certain situations.
A DG Set works optimally at 30% loading, that is, the DG burns 30% fuel while solar power generates 70%, translating to 70% savings. A DG is designed to burn 30% fuel even on less load. What happens when there is surplus power while the DG is on? Surplus power from rooftop solar would flow back to the source that provides the reference voltage, that is, the DG, and this must be prevented; a simplified controller sketch follows below.
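One common safeguard is a controller that curtails solar output so the DG never runs below its minimum loading. A minimal sketch, assuming a hypothetical 100 kW DG and the 30% figure quoted above; real installations use certified export-limiting controllers:

```python
# Simplified solar-DG sync controller sketch: curtail solar so the DG
# never runs below its minimum loading. All ratings are illustrative.

DG_CAPACITY_KW = 100.0
DG_MIN_LOAD_FRACTION = 0.30                        # the 30% figure above
DG_MIN_KW = DG_MIN_LOAD_FRACTION * DG_CAPACITY_KW  # 30 kW minimum DG output

def allowed_solar_kw(load_kw: float, solar_available_kw: float) -> float:
    """Solar injection permitted while keeping the DG at its minimum load."""
    headroom = max(load_kw - DG_MIN_KW, 0.0)
    return min(solar_available_kw, headroom)

# Building load of 80 kW with 60 kW of solar available:
print(allowed_solar_kw(80.0, 60.0))  # 50.0 -> 10 kW of solar is curtailed
```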
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9654275178909302,
"language": "en",
"url": "https://www.ipl.org/essay/Boeing-Swot-Analysis-FJB26A22SG",
"token_count": 1002,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.212890625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:701ceebb-b64f-4729-9635-205215d346ee>"
}
|
Building the 747 was a remarkable feat for its time. Original development of the 747 came as a request from the military for a new cargo plane. Some 50,000 people, called "the Incredibles," were needed to design and build the 747. The 747 was made possible by new General Electric engines ("Boeing 747 Jumbo Jet - History"). This new engine used ⅓ the fuel and had twice the power.
Delta created its separate subsidiary in response to competitive threat of low-cost airlines. In addition, its subsidiary used pilots of its parent airline with independent decision-making authority. Does song have an effective strategy? Evaluate strategies by using three tests of effectiveness? Low-cost airline: Faster growth of low-cost aviation industry with homogenous service makes this industry fragmented across the United States.
Finn Lau Yong Huan D16125478 Mechanization & its Malcontents The object I have chosen as a design classic is an aircraft manufactured by the American Boeing Commercial Airplane Company – the Boeing 747, introduced in 1969. There are many first-ever design features used in this aircraft. This most recognizable aircraft in history also earned the nickname "Jumbo Jet" or "Queen of the Skies". The 747-8 version was the longest airliner in the world, and it remains one of the most recognizable aircraft anywhere. The plane was also designed for both passenger and cargo use.
Most of JetBlue’s primary competitors, including Southwest Airlines and Delta Airlines, are larger, financially stronger and have well-established brand names. Many of these competitors enhanced their services and dropped prices to compete aggressively. In addition, there has been a lot of merger and acquisition activity within the industry, which caused fares to fall further, putting pressure on JetBlue’s revenues and earnings. Geographic Risk: JetBlue is also exposed to high risk when expanding into Latin America. These countries are emerging markets and face risks from political and economic instability, underdeveloped legal systems, strikes by third-party service providers, and so on.
Therefore, in the short term, consumers become more careful with their discretionary spending, and this leads to a serious drop in passenger volume. Consumers are not ready to pay former prices for air travel. These changes shift the demand for air travel. Moreover, customers tend to choose cheaper airline operators when the economy is suffering. The airline industry is considered cyclical: airlines have to cope with high fuel prices, labor demands, operating and maintenance costs, and declining passenger numbers.
Supersonic flight means an aircraft can fly faster than the speed of sound. It was developed in the 20th century, usually for research and military purposes. Two aircraft, the Concorde and the Tupolev Tu-144, carried civilian passengers as airliners. In this research I will discuss how supersonic flight differs from subsonic flight, the definition of the transonic region and its effects on control, supersonic flight designs, power plant limitations when operating in the supersonic region, sonic booms and so on. On the morning of October 14, 1947, the unexpected double-crack of a sonic boom pierced the serenity of the Mojave Desert.
The ramjet engine has had a social impact on the world: it has been used in over twenty different aircraft to date and has improved aerial safety. A ramjet engine is at times also referred to as a flying stovepipe jet or athodyd. Ramjets are a form of air-breathing jet engine that uses the vehicle’s forward motion to compress the incoming air without an axial compressor. Ramjets can be used in small-scale, high-speed flight applications, such as weaponry, especially missiles. Ramjets have also been successfully used as tip jets on helicopter rotors.
The aviation industry will need to address these economic impacts to overcome pilot shortages. The effects of the emerging aviation crisis caused by pilot shortages extend beyond the airline industry into almost every conceivable sector.
Threats: FlyDubai, just like any other business, faces threats to its existence. For instance, with the global financial crisis and the later Eurozone crisis, the number of travellers fell significantly due to economic hardship. This affected the airline’s profit levels and slowed its growth prospects. The airline also faces intense competition from other low-cost airlines, forcing it to invest extensively in product differentiation to counter the competition.
Airlines were responsible for USD 17 billion of economic losses globally. “The returns generated by airports are weighed down by the US, where airports are owned by local governments and funded by tax-efficient municipal bonds. They are not run to generate a return in their own right, but to bring wider economic benefits. Outside the US, airports generally produce higher returns, often aided by price regulation.” (CAPA,
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.940955638885498,
"language": "en",
"url": "https://ceritypartners.com/thought-leadership/government-debt-in-the-modern-era/",
"token_count": 1526,
"fin_int_score": 5,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.28515625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:04b6ee63-77a7-4665-ad16-665516a6fe25>"
}
|
The level of government debt will rise as a result of the fiscal response to offset the economic damage of COVID-19.
Real interest rates, inflation and the dollar all point to minimal short-term market risk from the increased government debt.
With deflation a more likely threat in the next 12 months, government yields are unlikely to rise substantially.
Down the line, the size of government debt may matter again, but today is not that day.
In the 1980s, strategist Ed Yardeni referred to investors who sell government bonds to force tighter fiscal policy as “bond vigilantes.” These individuals were the gatekeepers of responsible government spending. But now, by all accounts, they seem to have disappeared. Over the past two decades, the vigilantes have remained on the sidelines despite soaring government debt.
Before the Global Financial Crisis of 2008-2009, economists and politicians worried that foreign investors, who accumulated Treasury debt, would eventually tire of the persistent trade deficits and dump the debt. After the European Debt Crisis in 2010-2011, where the vigilantes moved across the Atlantic to drive up rates on Greek, Italian and Spanish bonds, fears emerged that the U.S. would also face a government debt crisis. Yet, the vigilantes never came.
Growing U.S. Debt
The sustainability of government debt is once again in the spotlight thanks to the current COVID-19 financial crisis. At the beginning of the year, the Congressional Budget Office was already forecasting a $1 trillion federal budget deficit for 2020. The Committee for a Responsible Budget now estimates the deficit will reach $3.7 trillion due to the multiple stimulus packages needed to address this crisis. One way to effectively measure the fiscal burden is to look at outstanding government debt relative to Gross Domestic Product (GDP). GDP measures the entire income for the U.S. economy. In theory, the higher the ratio, the less sustainable the debt burden. With government debt to GDP already over 100%, Treasury issuance in 2020 will likely cause the ratio to surpass the peak level during World War II. A good portion of debt sits on the Federal Reserve’s balance sheet, and the expansion of Quantitative Easing means the Fed will continue to absorb a sizable portion of the new issuance.
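The arithmetic behind the ratio is simple; here is a back-of-the-envelope sketch using round, illustrative figures rather than official statistics (only the $3.7 trillion deficit estimate comes from the text above):

```python
# Back-of-the-envelope debt-to-GDP arithmetic. Figures other than the
# $3.7T deficit estimate are round, illustrative assumptions.

gdp_2019 = 21.4e12       # assumed US GDP, USD
debt_2019 = 23.2e12      # assumed federal debt, USD
deficit_2020 = 3.7e12    # CRFB estimate cited above
gdp_contraction = 0.05   # assumed 2020 drop in GDP

debt_2020 = debt_2019 + deficit_2020
gdp_2020 = gdp_2019 * (1 - gdp_contraction)

print(f"2019 ratio: {debt_2019 / gdp_2019:.0%}")  # ~108%
print(f"2020 ratio: {debt_2020 / gdp_2020:.0%}")  # ~132%
# The ratio rises both because debt grows and because GDP shrinks.
```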
Does This Growing Debt Matter?
With interest rates at historic lows and government debt levels close to record highs (as a % of GDP), should we worry about the budget deficit? In the short run, we believe the fiscal stimulus measures are necessary to support the economy. That said, this increased level of government debt has three potential implications:
Crowding-Out Effect. In classic economic theory, government borrowing can crowd out private investment. The crowding-out effect works by pushing real interest rates higher, raising the cost of borrowing for governments, private companies, and individuals. Current fiscal spending and monetary policy are targeted to fill the hole created by the economic stoppage in Q1 and Q2. It is a big hole to fill, and there are few indicators that the government will crowd out private investment in the near term. Real interest rates – proxied by using 10-year Treasury Inflation Protection yields – are now negative, signaling plenty of room for government spending.
The Return of Inflation. In countries that can issue their own currency, such as the U.S., inflation is the real constraint to government spending. While the U.S. can print an unlimited amount of currency to finance fiscal spending, there are consequences to increasing money supply. Spending can distort prices and increase inflation, which in more extreme cases, can run out of control. Right now, core inflation is running at 2.1% year over year, and current measures show little reason to be concerned. Any unexpected spike in inflation would negatively impact both the bond and equity markets.
Dethroning the Dollar. Government debt and the dollar play a vital role in the global financial system. Investors demand dollars and U.S. government debt for the safety and diversification benefits. Banks use government debt as a source of collateral and to satisfy regulatory capital and liquidity requirements. If the renminbi or a basket of currencies replaced the dollar system, the usefulness of U.S. government debt would diminish, leading to a massive repricing. We see no signs of this happening.
How Are the Markets Reacting to the Growing Debt?
With deflation more of a worry than inflation in the short term, rising government debt is unlikely to concern the markets. Consider the economic experience of Japan: despite government debt to GDP rising to well over 200%, interest rates remain tethered to zero. The Japanese experience over the past two and a half decades provides a window into the modern economic environment, where, absent inflation pressure, governments can borrow at minimal (short-term) cost. As the developed world experienced disinflation, central banks turned their attention to staving off deflation, not inflation. The Fed and global central banks are now more sensitive to volatility in asset prices, as the vigilantes have found a new cause. In a world more frightened by the prospects of deflation, the vigilantes have moved into risk assets. When sell-offs in equities and corporate credit occur, they now cry for monetary authorities to loosen financial conditions to alleviate the pressure. The long-term chart of the 10-year Treasury shows markets are much more worried about slowing growth than the increasing issuance of government debt.
Will Vigilantes Return to the Government Debt Market?
Absent any inflationary pressures or a large-scale movement to dethrone the dollar, fiscal and monetary authorities are likely to journey into new frontiers that today exist only in the pages of textbooks or the op-ed pages of newspapers. Quantitative easing in 2008 and 2009 was an unconventional monetary tool that is now a conventional policy in the central bank's toolkit. Fancy economic terms like debt monetization and "helicopter money" may become mainstream over the next decade. Eventually, pushing the boundaries will have consequences. That said, the forces that would cause the government debt burden to impact the market are currently broken. We will continue to watch for signs of change, but for now, the vigilantes have other matters with which to concern themselves.
Meet the Author
Tom is a Partner in the New York office and has nearly ten years of experience in various investment management roles. He is a member of the Investment Committee, Investment Manager Selection Sub-Committee, the Compliance Sub-Committee and the Performance Monitoring Sub-Committee.
Before joining Cerity Partners, Tom served as an Investment Analyst at Spero-Smith Investment Advisers where he was responsible for the due diligence and analysis of third-party managers and assisted in global market and asset allocation research. Prior to joining Spero-Smith, Tom worked as a Registered Investment Advisor in Syracuse, NY, where he researched a factor-based investment strategy. He started his career as an analyst in the Corporate Debt Products group at Bank of America in Boston, where he worked on a team that managed the bank’s exposure to a portfolio of middle market and multinational companies.
Tom earned a Masters of Business Administration from the S.C. Johnson Graduate School of Management at Cornell University. At Johnson, he served as a portfolio manager on the Cayuga Fund, a student-run, market-neutral hedge fund. He earned a Bachelor of Science degree in Business Administration from Boston University. Tom holds the Chartered Financial Analyst® designation.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9611384868621826,
"language": "en",
"url": "https://cryptocolumn.com/10-best-bitcoin-mining-pools-2018/",
"token_count": 5272,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.07958984375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:477e96f7-d429-4e90-84ec-505afbc117d7>"
}
|
- 1 Basic Introduction to Cryptocurrency Mining
- 2 The Difficulty of Mining by Oneself
- 3 What is a Mining Pool?
- 4 How Mining Pools Work: The Short Technical Summary
- 5 Should I Join a Mining Pool?
- 6 10 Best Bitcoin Mining Pools of 2018
- 7 Conclusion
Bitcoin mining is just another way to get involved in the Bitcoin ecosystem. While for most people simply buying and holding Bitcoin is likely to result in higher profits than mining it, with the right setup and amount of capital, Bitcoin mining can still be a lucrative venture.
While there are some that choose to do it solo and mine by themselves, many Bitcoin miners opt to join what is known as mining pools.
Basic Introduction to Cryptocurrency Mining
To understand what mining pools are, let’s start with a basic definition of what Bitcoin mining is. If you’re already familiar with the concept of Bitcoin mining, feel free to skip ahead to the next section.
As you can probably imagine, Bitcoin mining doesn’t actually involve going into some underground mine and somehow extracting a coin that only exists in cyberspace.
Mining is done with computers or specialized mining devices known as application-specific integrated circuits (ASICs), which run mining software and solve complex mathematical problems. This process of solving difficult math problems has the effect of validating Bitcoin transactions and adding them to the Bitcoin blockchain, the record of all Bitcoin transactions.

The math problem that miners are trying to solve with their mining hardware involves what's known as a "nonce". Every Bitcoin block, which contains multiple Bitcoin transactions, requires a nonce – a number miners vary until the block's hash meets the network's difficulty target – in order to be confirmed and added to the blockchain.
Miners try to be the first to guess the correct nonce using their mining device(s) because whoever finds the right nonce for a given Bitcoin block, is awarded Bitcoin.
The term "mining" is used because miners perform this difficult "Proof-of-Work" (the hard math problems) in the hope of receiving Bitcoin, which, like minerals such as gold or silver, can be worth a lot of money ($10,492.20 per Bitcoin as of February 28, 2018 – Coinmarketcap).
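To make the guessing game concrete, here is a deliberately simplified Python sketch of the nonce search. Real Bitcoin hashes an 80-byte block header against a 256-bit difficulty target; the payload, nonce encoding, and 20-bit difficulty below are illustrative assumptions only.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Increment the nonce until the double SHA-256 of (data + nonce)
    falls below a target with `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = block_data + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# At 20 difficulty bits this takes ~1 million guesses on average.
print(mine(b"example block of transactions", 20))
```

At 20 difficulty bits the loop needs about a million guesses on average; the real network performs many orders of magnitude more hashes per second, which is why specialized hardware is essential.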
The thing with Bitcoin mining is that it has grown tremendously difficult since Bitcoin’s inception in 2009. Indeed, mining Bitcoin was once possible using one’s personal computer or laptop.
However, these days, that is completely infeasible, as the computational work needed to solve the mining math has shot through the roof to the point where ASICs or other specialized mining hardware is necessary to turn any sort of profit.
The Difficulty of Mining by Oneself
As mentioned, it has become increasingly difficult to mine Bitcoin.
However, not only is it impossible to mine Bitcoin without a Bitcoin ASIC miner or other similar devices, but it is also very hard (basically impossible) to mine Bitcoin by oneself, especially if one only has a single miner to his or her name.
Given Bitcoin’s high price, Bitcoin mining is no longer a hobby. When lots of money is involved, people get serious. There are very large Bitcoin mining companies with significant amounts of capital invested in humongous mining operations.
Indeed, in places like China, where electricity is cheap (mining is very energy-intensive), there are even entire “mining farms”, where hundreds, if not thousands, of Bitcoin mining devices are lined up next to each other to mine the world’s foremost cryptocurrency.
A small number of these huge mining operations effectively control the majority of Bitcoin mining, with China in particular controlling over 80% of Bitcoin’s hash rate (explained below).
This matters because one’s probability of finding the correct nonce and subsequently receiving Bitcoin for one’s mining success is largely based on hash rate.
ASIC mining devices each have a hash rate that is listed in their specifications. This hash rate is the number of times that the ASIC device can guess at the correct nonce of a block per second.
The hash rate of the Bitcoin network, then, is the total hash rate or hashing power of all Bitcoin miners currently trying to mine Bitcoin.
With Bitcoin’s meteoric rise in price and popularity, the Bitcoin network’s hash rate has shot up tremendously, which means that there are more and more people trying to figure out the correct nonce for each block. This has the effect of making it harder and harder to mine Bitcoin for any given individual Bitcoin miner.
For example, the Antminer S9, which is largely considered the premier Bitcoin miner for consumers on the market today (February 28, 2018) has a hash rate of about 14 TH/s.
If you had the only Bitcoin miner on the planet (a single Antminer S9), so that the Bitcoin network's hash rate were 14 TH/s, you would be finding every block's nonce and mining all the Bitcoin available for mining.
However, that is not the case. As of February 27, 2018, the Bitcoin network’s hash rate is a staggering 23,172,169 TH/s. To put that in perspective, that is about 1,655,155 Antminer S9s.
At a price of $3,100 (via Amazon), which doesn’t include the power supply (about another $200), it would take about $5.5 billion to control Bitcoin’s hash rate and mine every Bitcoin.
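A rough back-of-the-envelope sketch, using the figures quoted above plus an assumed 12.5 BTC block subsidy and 144 blocks per day, shows why a lone device is hopeless:

```python
miner_hashrate = 14.0            # TH/s, a single Antminer S9 (quoted above)
network_hashrate = 23_172_169.0  # TH/s, network total quoted above
blocks_per_day = 144             # roughly one block every 10 minutes
block_reward = 12.5              # BTC block subsidy at the time (assumed)

share = miner_hashrate / network_hashrate
print(f"Share of network hash rate:  {share:.2e}")
print(f"Expected BTC per day (solo): {share * blocks_per_day * block_reward:.6f}")
print(f"Expected days per block:     {1 / (share * blocks_per_day):,.0f}")
```

On these assumptions, a single Antminer S9 mining solo would expect to find one block roughly every 31 years – hence the appeal of pooling.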
As you can probably imagine, trying to find the correct nonce by yourself, then, is basically impossible unless you invest a very hefty amount of money into mining hardware to increase your chances.
What is a Mining Pool?
Since the difficulty of Bitcoin mining has increased exponentially over the years, miners have started to pool their computing resources together into what are called mining pools, which share both computing power and mining profits in the form of Bitcoin.
Mining pools are largely good for miners: by combining their hashing power, members collectively have more opportunities to find the right nonces for blocks and so earn more Bitcoin (though it is split amongst themselves, resulting in more consistent but smaller individual profits). Unfortunately, the formation of mining pools has also led to a lot of centralization in Bitcoin mining, with a handful of mining pools controlling much of Bitcoin's hash rate.
This goes against the ethos of Bitcoin and cryptocurrency in general, both of which are supposed to be decentralized.
How Mining Pools Work: The Short Technical Summary
Miners are able to pool their hashing power because Bitcoin's mining algorithm, SHA-256, makes the nonce search trivially parallel: the guessing work can be split amongst any number of mining devices, each testing a different range of candidates. Pool members prove their contribution by submitting lower-difficulty partial solutions, commonly called "shares", which the pool uses to apportion rewards.
Mining Pools vs. Cloud Mining
Cloud mining is another way to get involved in Bitcoin mining but differs from joining a mining pool or trying to mine solo.
Cloud mining is similar to buying shares of a company. For instance, when you buy a company’s stock, you might not necessarily be doing anything in the company itself, such as working a job there, but by owning part of its stock, you might be entitled to some of the company’s profit in the form of dividends.
Cloud mining is similar in the sense that you “buy the mining power” (in essence, you are renting it) of cloud mining companies’ hardware. While you don’t do anything directly related to mining, such as dealing with installation, maintenance, and so on, you are still entitled to some of the cloud mining company’s profit in the form of Bitcoin.
Cloud mining can be a good way to get exposed to Bitcoin mining because you can benefit from the profits of Bitcoin mining with a lower initial investment (depending on how much you spend), smaller risks (e.g. if you lose a small sum of money vs. a lot for a Bitcoin ASIC miner), less maintenance (hardware maintenance and so on), and little to no expenses (vs. electricity and so on if you were to own physical mining hardware).
Nevertheless, cloud mining isn’t as great as it sounds and has somewhat of a bad reputation due to the various cloud mining scams out there. Furthermore, if you want to rent a lot of hash rate from a cloud mining company, you might have to pay more relative to the same amount you would have paid for the same amount of hash rate via buying mining hardware (after all, the company has to cover electricity, management, and other costs somehow).
Lastly, there is always the risk of the cloud mining company going bankrupt as Bitcoin mining is very competitive and mining companies have folded in the past.
Should I Join a Mining Pool?
As mentioned, it is very difficult to mine Bitcoin by oneself, due to the very high investment needed to purchase a lot of hash rate in the form of significant amounts of specialized Bitcoin mining hardware, which doesn’t come cheap.
While we won’t consider whether or not you should rent a cloud mining contract, if you’re looking to decide between mining by yourself and joining a mining pool, joining a mining pool would make more sense for the vast majority of people.
Mining by oneself is futile in 2018 because the chances of finding the correct nonce for any given block, and being rewarded in Bitcoin, is basically zero (unless you have tons and tons of hash rate).
On the other hand, by joining a mining pool, which has a collectively higher chance of finding the right nonce, you will be rewarded Bitcoin more consistently (though in smaller amounts – which could of course be a higher amount in the long-run than if you were to solo mine).
However, mining pools do have some cons, such as potential pool provider downtimes or outages because of events like DoS attacks or regular maintenance. Also, mining pools can charge fees, which can further eat into mining profits.
Lastly, depending on the mining pool, payouts can be slow, which can be less than ideal if you are looking to cash out your Bitcoin profits to fiat currencies like USD quickly, since Bitcoin prices change quickly.
10 Best Bitcoin Mining Pools of 2018
For those who want to get involved in a mining pool, here are the 10 best Bitcoin mining pools of 2018:
Antpool is the biggest Bitcoin mining pool in terms of hash rate. Based in China, Antpool is run by Bitmain, the world’s largest manufacturer of Bitcoin ASIC mining devices, such as the aforementioned Antminer S9. In fact, much of their mining pool runs on Antminer devices.
Antpool mined its first Bitcoin block in March 2014, which means that it was created more or less four years after the first mining pool, Slush Pool. As of February 28, 2018, Antpool controls about 12% of the Bitcoin network’s hash rate.
Creating an Antpool account is free. However, there can be fees for using Antpool’s services:
PPS, or Pay Per Share, means that you are paid in proportion to the amount of hash rate you contribute to the pool. PPS is considered a steady and predictable way of making money in a mining pool (but perhaps with lower profits).
Since miners are paid whether or not the mining pool is mining Bitcoin blocks (the pool pays you even if they aren’t earning new Bitcoin from finding blocks), PPS usually has higher fees. With Antpool, opting to get paid via PPS incurs a 5% fee.
PPLNS, or Pay Per Last N Shares, means that when the pool finds a block, the reward is split according to each miner's proportion of the last "N" shares submitted to the pool. With PPLNS, your earnings are tied to the success of the mining pool in finding nonces, adding blocks to the blockchain, and ultimately earning Bitcoin, as you are paid based on your average hash rate contribution over a given period of time. PPLNS usually has lower fees than PPS. PPLNS with Antpool incurs zero fees.
PPS+, or Pay Per Share Plus, was introduced around the end of 2016 by mining pools. PPS+, like PPS, pays miners consistently based on how much hashing power they contribute to the pool.
The difference between PPS+ and PPS, then, is that PPS+ also rewards miners with part of any Bitcoin transaction fees for blocks mined by the pool based on the PPLNS calculation method (paid part of block transaction fees based on your average hashing power contribution towards finding the right nonces and mining blocks over a given period of time). Like PPS, fees for PPS+ can be higher. Antpool’s PPS+ fees are 4%.
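The differences between these schemes are easiest to see with numbers. The hypothetical Python sketch below compares one day's earnings; the fee percentages follow the Antpool figures quoted above, while your share of the pool's hash rate, the expected subsidy, block count, and transaction fees are invented for illustration:

```python
def pps_payout(share, expected_subsidy_per_day, fee):
    # Paid on contributed hash rate alone, regardless of the pool's luck.
    return share * expected_subsidy_per_day * (1 - fee)

def pplns_payout(share, blocks_found, block_reward, tx_fees_per_block, fee):
    # Paid only on blocks actually found; transaction fees are included.
    return share * blocks_found * (block_reward + tx_fees_per_block) * (1 - fee)

share = 0.001              # assumed fraction of the pool's total hash rate
expected_subsidy = 25.0    # BTC/day the pool expects from subsidies (assumed)

print("PPS  :", pps_payout(share, expected_subsidy, fee=0.05))
print("PPLNS:", pplns_payout(share, blocks_found=2, block_reward=12.5,
                             tx_fees_per_block=0.8, fee=0.00))
# PPS+ = PPS-style pay on the subsidy plus a PPLNS-style cut of tx fees.
print("PPS+ :", pps_payout(share, expected_subsidy, fee=0.04)
             + share * 2 * 0.8 * (1 - 0.04))
```

The trade-off is visible in the arithmetic: PPS smooths out the pool's luck in exchange for a higher fee, while PPLNS and PPS+ expose the miner to variance in return for lower fees and a cut of transaction fees.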
Stratum is the mining protocol that Antpool supports. Antpool’s mining nodes are spread across the world in locations like China, Germany, and the US and pool members are automatically routed to the closest node location for the best performance.
Antpool makes payments to miners daily as long as their balance is over the minimum payment threshold of 0.001 BTC.
Antpool offers solid security options like two-factor authentication, email alerts, and wallet locks.
Also, Antpool has a relatively sleek interface, which can be easier to use for new miners. The mining pool even offers mobile apps for iOS and Android.
In addition to its Bitcoin mining pool, Antpool also has mining pools for Litecoin, Ethereum, Ethereum Classic, Dash, Bitcoin Cash, Siacoin, and Zcash.
While Antpool is the biggest Bitcoin mining pool and has frequent payments, good security, and a sleek user interface, it does not share Bitcoin transaction fees – which are paid on top of the block reward to whoever mines each Bitcoin block – with members of its mining pool unless they opt for the PPS+ payment method.
Moreover, since the mining pool is so big, individual payouts tend to be smaller (though smaller payments could add up in the long run since Antpool ends up adding a lot of blocks to the blockchain due to its impressive hashing power).
Furthermore, Antpool has recently been surrounded by controversy because of the pool's opposition to Bitcoin changes such as SegWit, or Segregated Witness. SegWit is a proposal to speed up Bitcoin transactions by splitting each transaction into two parts: the original transaction data and the "witness" data – the signature portion, which verifies that the sender has the funds necessary to make a Bitcoin payment.

The witness portion accounts for up to 65% of a given transaction's size. When it is separated from the original transaction data and moved towards the end, more transactions fit per block (the effective capacity rises from 1MB towards a theoretical maximum block weight of 4MB), which means the Bitcoin network can process more transactions per block, improving processing times and reducing the transaction backlog.
While Antpool’s (and parent company Bitmain’s) reasons for opposing SegWit are not entirely clear, this has become a huge point of contention in the Bitcoin community.
Joining Antpool’s mining pool means that you would have to agree with its direction as well, such as opposing Bitcoin proposals like SegWit, which are miner-activated.
F2Pool is another large Chinese mining pool that was launched in 2013. F2Pool is also known as Discus Fish by many Bitcoin miners. This name dates from when F2Pool did not have an English user interface and was known mainly for its coinbase signature, which contains "Discus Fish" – the nickname of one of F2Pool's operators – in Chinese characters.
The pool runs on the stratum mining protocol and offers PPS+ payments at a 4% fee. Payments are made daily as long as withdrawals are equal to at least 0.005 BTC.
Like Antpool, F2Pool’s interface is easy to use and good for beginners.
Along with Bitcoin mining, F2Pool offers mining pool services for Ethereum, Litecoin, Ethereum Classic, Siacoin, Dash, Monero, and Zcash as well.
As of February 28, 2018, F2Pool controls 6.2% of Bitcoin’s hash rate.
While Bitfury is yet another big player in the Bitcoin mining space, Bitfury is different from other mining pools because it is private and not open to the public. Bitfury, like Bitmain, produces Bitcoin mining hardware. However, like their mining pool, Bitfury’s products are not available to the general public.
As of February 28, 2018, Bitfury controls 1.3% of Bitcoin’s hash rate.
Launched in 2014, BTCC is another China-based mining pool.
Along with running a Bitcoin mining pool, BTCC also runs a Bitcoin exchange, wallet, and other Bitcoin-related services.
While BTCC is based in China, it has customers and servers worldwide in locations like the US, Europe, South America, China, and Africa.
The BTCC mining pool runs on stratum and charges a 1% fee based on the FPPS, or Full Pay Per Share, approach. FPPS is similar to PPS, but miners also receive a share of block transaction fees: a standard transaction-fee rate is calculated for the period and paid out in proportion to each miner's hash rate relative to the pool's total, rather than depending on the pool actually finding blocks as with PPLNS or PPS+.
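As a hypothetical sketch (with the same made-up figures as the earlier payout example), the FPPS idea – pay per share on the subsidy plus a standardised transaction-fee rate, independent of the pool's luck – can be expressed as:

```python
def fpps_payout(share, block_subsidy, avg_tx_fees, blocks_per_day, fee):
    # A standardised subsidy-plus-fee rate is paid regardless of luck.
    standard_reward_per_day = (block_subsidy + avg_tx_fees) * blocks_per_day
    return share * standard_reward_per_day * (1 - fee)

print(fpps_payout(share=0.001, block_subsidy=12.5,
                  avg_tx_fees=0.8, blocks_per_day=144, fee=0.01))
```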
Payments are made daily at 10 A.M. China Standard Time (UTC+8) as long as they meet the minimum payment threshold of 0.01 BTC.
In addition to its Bitcoin mining pool, BTCC has mining pools for Bitcoin Cash, Litecoin, and Super Bitcoin.
BTCC also offers mobile apps for iOS and Android so pool members can monitor things like hash rate, profits, and more.
As of February 28, 2018, BTCC controls about 4.1% of Bitcoin’s hash rate.
ViaBTC is a relatively new mining pool that has been around for a little over a year, as it was founded in May 2016.
The ViaBTC Bitcoin mining pool offers payouts in the forms of PPLNS and PPS+.
ViaBTC charges a 4% fee for PPS+ and a 2% fee for PPLNS; the PPLNS fee is lower because, under PPLNS, miners are incentivized to stick around longer in order to benefit from any transaction fees that the pool receives. Transaction fees are paid out under both methods.
Convenience is a major feature of ViaBTC: sign-ups for the pool can be done quickly with just an email, username, and password. Data is both detailed and real-time with monitoring available for blocks, hash rates, miners, users, and more, all in clear graphical fashion.
Moreover, ViaBTC payouts are distributed daily; however, ViaBTC automatically cancels accounts that don’t make enough to receive a payment for 10 days in a row. Users are also able to check mining activity via ViaBTC’s iOS and Android mobile apps, which also offer Bitcoin wallets for safeguarding one’s assets.
ViaBTC offers cloud mining and cryptocurrency exchange services on top of its mining pool service. It also has Bitcoin Cash, Litecoin, Ethereum, Ethereum Classic, Zcash, and Dash mining pools as well.
Despite how new it is, ViaBTC controls about 9.8% of the Bitcoin network’s hashing power.
BW Pool, based out of China, was created in August 2014 and co-founded by a major player in miner manufacturing, LK Group, Ltd and by a major player in cryptocurrency exchanges, CHBTC.com.
BW charges 0% for PPS, 4% for PPS+, and 1% for PPLNS. Minimum payouts start at 0.005 BTC and payouts are done daily at 9 A.M. China Standard Time (UTC+8).
Along with a mining pool, BW Pool also offers the following Bitcoin-related services: Bitcoin mining chip development, Bitcoin ASIC miner manufacturing and sale, an interest-bearing Bitcoin wallet, and Bitcoin cloud mining.
As of February 28, 2018, BW Pool’s share of the Bitcoin network hash rate is 1.5%.
BTC.Top is another mining pool based out of China. It was founded by Jiang Zhuoer, who worked at China Mobile in Shanghai, the world’s biggest mobile phone operator, and who led a 13-person Big Data and Data Warehouse team.
BTC.Top has only been around for a little over a year but already is the third largest mining pool by share of Bitcoin network hash rate as of February 28, 2018, with a formidable 11.2% of the total Bitcoin hash rate, putting it in third place behind BTC.com and Antpool.
Despite its size, the Chinese mining pool is private and not open to the general public.
Unfortunately, other details are sparse for non-Chinese speakers as their site is only available in Chinese.
Slush is often recommended to Bitcoin mining beginners. It was the first Bitcoin mining pool to ever be established and has a reputation for being reliable and trustworthy.
Created in 2010 – quite early in Bitcoin's history – Slush runs on stratum, was founded by Satoshi Labs, and is based in the Czech Republic. Satoshi Labs also makes the popular TREZOR hardware wallet and runs coinmap.org, which shows a map of physical locations that accept Bitcoin as payment.
Slush charges a 2% fee for miners that join its pool. To withdraw Bitcoin profits, miners can set withdrawal thresholds (minimum 0.001 BTC) and payment will only be sent out when one’s threshold is reached.
While some pools may offer daily payments, Slush sends out payments every hour. A 0.0001 BTC fee is levied if users set their payout thresholds below 0.01 BTC.
To discourage “pool hopping”, or switching between mining pools in the hopes of making a quick profit and then leaving the mining pool (leaving the pool with less hash power to find nonces and mine blocks), Slush employs a score-based system that assigns higher weight to miners that stick around longer in each period of time that payment amounts are determined. Moreover, Slush also shares transaction fees with its miners.
Slush Pool offers a lot of great features, such as notifications when mining devices are facing problems or go offline, and a democratic process involving submitting and upvoting ideas that one would like to see implemented in order to improve the pool. Slush Pool also plans to introduce new artificial intelligence (AI) features in the near future.
Slush’s interface is very user-friendly and the team offers regular updates and communication via outlets like social media to keep members of the mining pool in the loop with the latest news and events. Support is also available through their IRC channel and through email.
Along with its Bitcoin mining pool, Slush offers a Zcash mining pool as well.
Slush is the biggest non-Chinese mining pool, with 10.9% of the Bitcoin network hash rate as of February 28, 2018.
Bitclub.Network, based out of Reykjavik, Iceland, was founded in October 2014 and runs in the middle of the pack when it comes to the world’s biggest mining pools, with a 1.6% share of Bitcoin’s hash rate as of February 28, 2018.
Originally a private mining pool, BitClub’s mining pool opened its doors to the public when it reached the milestone of 1% of Bitcoin’s hash power.
BitClub’s mining pool has servers worldwide, allows users to be paid via debit card, has a 0% fee, offers live stats and reporting, and even has prize giveaways for its users. The pool also offers an affiliate program and pays commissions to miners who refer new members.
Staff is available for support by live chat in English and Chinese and by email.
While BitClub does control a significant portion of Bitcoin’s hash rate, reports have surfaced saying that BitClub is a Ponzi scheme.
GBMiners is the first Bitcoin mining pool to be based out of India and founded by Amaze Mining & Blockchain Research Ltd.
GBMiners payments are based on the PPS+ method.
Despite the fact that GBMiners controls a formidable 1.6% of Bitcoin’s hash rate, as with BitClub, there unfortunately has been some news saying that GBMiners is a Ponzi scheme as well.
While it can be tempting to jump right into the world of Bitcoin mining, there is a lot to consider before making the plunge.
First off, mining without an ASIC is a complete waste of time if one wants to make money. Moreover, mining by oneself with just one or a few ASICs is also probably a waste of time since one’s chances of finding nonces, confirming blocks, and gaining Bitcoin will be close to zero. As such, it’s in the interest of the vast majority of people to join mining pools, such as the ones mentioned.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9788538813591003,
"language": "en",
"url": "https://definitions.uslegal.com/p/pay-as-you-earn-taxation/",
"token_count": 89,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.275390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:1e1325a3-6be4-48ab-9d93-84d4865ea898>"
}
|
Pay-as-you-Earn Taxation Law and Legal Definition
"Pay-as-you-earn" taxation is also called withholding. It refers to the sum of money that an employer withholds from the employee’s payments. This amount will be deposited with the government and credited against the employees' tax liability when they file their returns. These amounts are withheld for federal and state taxes as well as for social security taxes.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9461640119552612,
"language": "en",
"url": "https://findanyanswer.com/what-is-competitive-advantage-in-management",
"token_count": 222,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.054443359375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:e4bf6fef-3c13-4533-93b8-40843a0e26be>"
}
|
What is competitive advantage in management?
Correspondingly, what is a competitive advantage in business?
A competitive advantage is an advantage over competitors gained by offering consumers greater value, either by means of lower prices or by providing greater benefits and service that justifies higher prices.
Subsequently, question is, what is competitive advantage with example? Examples of competitive advantage include: access to natural resources that are restricted from competitors; highly skilled labor; a unique geographic location; and access to new or proprietary technology. Like all assets, intangible assets are those that are expected to generate economic returns for the company in the future.
Also Know, what do you mean by competitive advantages?
Competitive advantages are conditions that allow a company or country to produce a good or service of equal value at a lower price or in a more desirable fashion. These conditions allow the productive entity to generate more sales or superior margins compared to its market rivals.
What are the three basic types of competitive advantage?
There are three different types of competitive advantages that companies can actually use. They are cost, product/service differentiation, and niche strategies.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9304757118225098,
"language": "en",
"url": "https://fount.aucegypt.edu/etds/858/",
"token_count": 826,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.1689453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:274a30f6-15ef-43f2-90a8-f30326e4a181>"
}
|
Egypt has excellent natural conditions for the generation of electricity from Renewable Energy (RE) sources. It has in particular an immense potential for solar and wind energy. At the same time, an Egyptian energy crisis is emerging. Conventional domestic energy sources are declining and thus, Egypt will have to rely increasingly on costly imported energy in the near future. In addition, energy-efficient (EE) consumption is an almost unknown term within the Egyptian society. Experts project for Egypt an energy saving potential of approximately 70%. Therefore, the promotion of RE & EE could be a successful policy instrument to mitigate the emerging Egyptian energy crisis. Furthermore, the development of a green Egyptian economy based on RE & EE could contribute to the country's economic growth by increasing foreign direct investment, creating employment and providing much-needed technology transfer. Besides these political considerations, Egypt could also fulfill its international obligation under the United Nations Framework Convention on Climate Change (UNFCCC) to mitigate the adverse effects of climate change by promoting RE & EE.

Key to the sustainable diffusion of RE & EE and the eventual creation of a green domestic economy is the implementation of a sound and consistent legal policy. The Egyptian regulator has already identified this need and has implemented a series of regulatory measures to promote RE & EE. In the field of RE, it is the declared objective of the Egyptian government to satisfy 20% of the country's primary energy demand through RE by 2020. In order to reach this ambitious goal, Egypt is implementing a tendering system to award RE projects to private developers. However, it is the long-term objective to regulate the Egyptian RE sector through a feed-in law. For the promotion of EE, the Egyptian government has not yet formulated an official policy. Nevertheless, it has developed EE Building Codes, EE standards and labels for electric appliances, and it has promoted solar water heating systems.

Unfortunately, these efforts have not yet led to the creation of a substantial RE & EE sector in Egypt. In general, the reason for this failure is that the Egyptian RE & EE strategy lacks comprehensiveness as well as consistent long-term dedication. Invitations to tender usually focus entirely on wind energy projects, while neglecting the huge potential for solar energy. Legal and price uncertainties further impede the promotion of RE. The Egyptian approach to promote EE is highly fragmented and incoherent, and suffers from lack of enforcement and acceptance in the market. However, existing subsidies for conventional energy sources and unfavorable electricity pricing structures for RE & EE remain the key barriers to the sustainable development of the sector. Although there are many regulatory instruments to minimize the aforementioned barriers, only consistent long-term commitment by the Egyptian government can lead to the establishment of a healthy domestic RE & EE industry.
LLM in International and Comparative Law
Library of Congress Subject Heading 1
Climatic changes -- Law and legislation -- Egypt.
Library of Congress Subject Heading 2
Renewable energy sources -- Law and legislation -- Egypt.
The author retains all rights with regard to copyright. The author certifies that written permission from the owner(s) of third-party copyrighted matter included in the thesis, dissertation, paper, or record of study has been obtained. The author further certifies that IRB approval has been obtained for this thesis, or that IRB approval is not necessary for this thesis. Insofar as this thesis, dissertation, paper, or record of study is an educational record as defined in the Family Educational Rights and Privacy Act (FERPA) (20 USC 1232g), the author has granted consent to disclosure of it to anyone who requests a copy.
Institutional Review Board (IRB) Approval
Not necessary for this item
(2010).A legal policy analysis: the current and prospective regulatory framework for renewable energy and energy efficiency in Egypt [Master's Thesis, the American University in Cairo]. AUC Knowledge Fountain.
Abulzahab, Karam J.. A legal policy analysis: the current and prospective regulatory framework for renewable energy and energy efficiency in Egypt. 2010. American University in Cairo, Master's Thesis. AUC Knowledge Fountain.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9120206236839294,
"language": "en",
"url": "https://sustainableearth.biomedcentral.com/articles/10.1186/s42055-018-0004-3",
"token_count": 18616,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.14453125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:a748ca17-e05f-4f58-9df1-d2a6c339a358>"
}
|
The Planetary Accounting Framework: a novel, quota-based approach to understanding the impacts of any scale of human activity in the context of the Planetary Boundaries
Sustainable Earth volume 1, Article number: 4 (2018)
Human impacts on the environment are so great that we are at risk of changing the state of the planet from one that is hospitable to one that is hostile to humanity. Scientists have proposed nine Planetary Boundaries, global environmental limits within which the risk of changing the state of the planet is low, but already, four have been exceeded.
Policy makers and scientists want to use the Planetary Boundaries as a tool for global environmental management. However, the Boundaries were intended as a gauge of the magnitude and urgency of the situation, not as a guide to resolving it. They are not easily applied to personal or policy action that is measurable or scalable. Here we show how the Planetary Boundaries can be translated into a framework for the management of the global environment, the Planetary Accounting Framework.
The Planetary Accounting Framework is a new approach to environmental accounting in which environmental impacts are compared to global limits, the Planetary Quotas. The Planetary Quotas are limits for human activity, derived from the Planetary Boundaries. Each Quota is a limit for an “environmental currency” such as carbon dioxide emissions, or reforestation that can be scaled and compared to human activity using existing environmental assessment frameworks.
The Quotas and Framework were developed by combining three key theories. Management theory shows that a multi-level, poly-scalar approach is needed to manage the global environment. Accounting theory highlights the importance of accounting against limits if a realistic approach to achieving change is sought. Environmental accounting theory demonstrates that there are different categories of indicators, and that only if indicators are uniformly in the pressure category can human activity be related to a limit and scaled accordingly.
The Planetary Accounting Framework shows how individual actions, strategies by firms, city level infrastructure, and national policies can be expressed in terms of the Planetary Boundaries. Decisions can now be made at different levels or sectors regarding policy, planning, technology, business operations, legislation, and behaviour in the context of global environmental limits. It enables the practical application and communication of the Planetary Boundaries to different scales of human activity.
The sum of the planet’s physical, chemical and biological processes is known as the Earth system. The Earth system comprises many interconnected processes (such as evaporation, transpiration, and photosynthesis) that store, transfer, and transform matter and energy according to the laws of physics and biogeochemistry .
When Earth system processes are in balance, the Earth system can operate in a particular state for many thousands of years. However, major disturbances to Earth-system processes can lead to an abrupt change of state. The transition from the most recent glacial period – the Younger Dryas – is an example of how rapid the change can be. Some regions are believed to have experienced more than 10 °C of warming in a single decade .
Homo sapiens evolved approximately 300,000 years ago . For more than 280,000 years, humans subsisted as hunter gatherers who moved to suitable areas where they could survive. The Holocene is the period of time which began approximately 11,650 years before present (taken as the year 2000) . The relatively warm and stable temperatures in the Holocene epoch saw the rapid development of humans from hunter gatherers to urban and agricultural settled societies [4, 5]. The state of the planet during the Holocene – henceforth referred to as a Holocene-like state – is the only environmental state of the planet in which we know settled societies can thrive [4, 5].
Scientists believe that the Holocene is over [4, 6, 7]. They believe we are in the transition to a new epoch, the Anthropocene, which roughly translates to “the human era” . The state of the planet during the Anthropocene is yet to be determined; it could be a Holocene-like state, or it could be a much warmer state. A warmer Anthropocene is unlikely to occur through gradual and linear change . Predictions are for non-linear, rapid, and potentially irreversible and sustained change to the climate and biosphere: substantial loss of species, devastating storms, significant sea level rise, and considerable displacement of communities .
There are external factors which could change the state of the planet that are beyond human control, for example, the output of the sun, or the shape of Earth’s orbit around it . However, without human influence, the stable Holocene period would be expected to continue for at least several thousand to as many as 50,000 more years . Human activity over the next 50–100 years will most likely determine the state of the planet during the Anthropocene. Human activity is the only factor affecting the state of the planet that is within our control, and the Holocene is the only state of the planet in which we know humans can thrive. It seems prudent to attempt to manage human activity such that we can retain a Holocene-like state of the planet during the Anthropocene.
In 2009, Rockström et al. proposed nine Planetary Boundaries, limits for Earth-system processes within which the risk of departure from a Holocene-like state is low (see Table 1). Together these Boundaries define a “safe-operating-space” for humanity .
We have already exceeded four of the Planetary Boundaries . The Planetary Boundaries show us that we are living outside the safe operating space, providing a gauge of the magnitude and urgency of the situation. The problem is, how to resolve this? As shown later in this paper, the PBs as stated do not translate into their significance for community, business, and policy.
This paper sets out how to do this using three key theories that are integrated into a way of managing the Planetary Boundaries and are expanded in the main text:
Management theory shows that the most effective approach to managing the Earth system is likely to be a poly-scalar approach, i.e., one that can be applied in different ways, across different areas of society, and at different scales, which is coordinated by a general system of rules.
Accounting theory highlights the importance of standards or limits in generating change.
Environmental accounting theory demonstrates that the type of indicator selected is critical to the applicability to policy and behaviour applications, in this case it highlights the need to convert the PB’s into pressures on the environment.
The purpose of this paper is to introduce a new paradigm – the Planetary Accounting Framework – based on the Planetary Quotas, that will help to make the Planetary Boundaries accessible and actionable. The Planetary Quotas are limits for human activity which are derived from the Planetary Boundaries. They show what is needed to return to and live within the safe operating space. The three theories set out above enable the overlap of three areas – LIMITS (Planetary Boundaries), CHANGE (poly-scalar management), and PRESSURES (environmental accounting) – to create the novel concept of the Planetary Quotas (Fig. 1). This follows the Sustainable Earth approach of relating the science – the Planetary Boundaries – to both policy and community.
The Planetary Quotas form the foundations of the Planetary Accounting Framework (PAF). The PAF is a framework that shows how to apply the PQs. As shown in Fig. 2, this framework provides the platform for behavioural, policy, technological, and organisational change.
The paper begins with an overview of the three theories described above which together provide an integrated approach to change with respect to the Planetary Boundaries. We then show how the Planetary Quotas can be derived from the Planetary Boundaries with a brief description of how each of the Planetary Quotas was determined. This is followed by an overview of the PAF and how this can be used to shape policy and personal action. The paper ends with a discussion on the potential opportunities and constraints of the PAF and an overview of proposed future work on how to demonstrate the use of the PAF at different levels of human activity.
Theory 1: Poly-scalar management: An approach to managing the Earth system
The task of managing the Earth system is not straightforward. In the past, most theories on how best to manage shared resources (such as forests, fisheries, or the atmosphere) led to the conclusion that top-down governance or private management were the only effective options [13,14,15]. These theories were based on simple game theory that used the underlying assumptions that people would always act to maximise personal gain, regardless of the greater good [13,14,15]. The "tragedy of the commons" is that logic will drive humans to continue to overuse resources for immediate personal gains until everyone loses. These theories do not do justice to how communities actually work and how social science now understands the way human activity can change [16,17,18,19,20]. Cultures and communities are formed to enable broader goals to be pursued than mere individualistic gain. The question then is, if top-down governance and private action do not address our understanding of social science and change, what sort of global environmental management structures would be more effective?
Managing human impacts on the environment means managing human behaviour. This might mean the day to day behaviour of an individual, or the decisions made by a CEO, or government official, or a member of the community. Studies based on observed behaviour show that there are many factors which influence decisions and that behaviour is very difficult to predict [21,22,23]. Lifestyle, position within a family, within society, or at work, culture, motivations, past behaviours, habits, social norms, context, and technology all play a role [21, 22, 24]. In the past, behaviour change efforts have typically been targeted at community and personal values, and social norms. The findings that technology and context are key elements that influence decision making highlight the importance of infrastructure and technology and therefore governance and industry in driving pro-environmental (or other) decisions. As an example of how this can work, social media has been found to be an unintended driver for younger generations to switch from private to public transport as public transport allows them to stay connected to their peers during commuting time.
Nobel Prize winner Eleanor Ostrom began a movement in 1990 which used observed behaviour to dispute the validity of the theory of the commons altogether [16,17,18,19,20]. She and others showed through empirical evidence that the theory that individuals and small groups will not change their behaviour without external enforceable rules is far from inevitable [16, 26,27,28,29,30,31,32,33]. Community can shape the future through mutually accepted regimes of behaviour. In some instances these self-organised regimes have proved more effective than would have been feasible in the case of private action or top-down governance [18, 34]. Ostrom's theory was that the most important factors which lead to cooperative behaviour by individuals towards the environment are the trust that the behaviour will lead to long-term benefits, and the belief that the majority of others are performing the same behaviour. She thus proposed a poly-centric approach for managing global environmental issues like climate change, as one which is coordinated by a general system of rules, but which enables different centres of activity to take different approaches towards the same end. The different approaches include private action, self-organisation or community-based activity, and government action at all scales. The general system of rules is included as a mechanism to impart trust in the long-term benefits of the actions, and that others are contributing to the same goal.
The science of change supports and extends the findings that different scales of activity are important and that not only community but also infrastructure and technology are key to driving change. The "magic of sustainability" is the idea that integrative solutions of community, business, and government can far exceed the sum of their parts (Fig. 3). Specifically, when long-term community values and ethics overlap with mid-term government regulations and infrastructure and short-term business innovations, highly innovative and effective solutions which help to drive change can occur. Another insight from change theory is the importance of agents of change – individuals who create change in society – for example Rosa Parks, the Black woman who refused to give up her seat in the whites-only section of a bus and became a major catalyst for the movement for Black rights.
Drawing from these theories, we propose that the most effective approach to change is likely to be one that can be used even more broadly than Ostrom’s poly-centric approach. We propose a poly-scalar approach and define it as one which is:
integrative across different scales, sectors, and timeframes; that is not controlled by a single body, but which could be implemented through government, private action, or self-organised management; and that is coordinated by a general system of rules which have different mechanisms at different centres of activity.
Just like all management, environmental management works better when it engages people in the required activities [37, 38]. Global environmental problems are typically caused by a multitude of actions which take place at a small scale [35, 39]. Household environmental impacts (including impacts of transport and upstream impacts of goods and services acquired by households) can account for as much as 70–80% of the economy's environmental loads. Given the diverse nature of the causes of global issues, global or even national policies can miss local opportunities for change [39, 41]. People also tend to be more open to change implemented by local communities, businesses, organisations and authorities where plans have been developed with the specific community in mind, than to national level schemes [39, 41]. On the other hand, small-scale or local initiatives alone would be insufficient to manage a global problem such as climate change as many opportunities to reduce impacts rely on decisions which can only be made at a larger scale. Although the literature on behaviour change shows mixed results, there is powerful evidence that when design and technology are changed to focus on the appropriate scale, then the results can be positive [24, 42,43,44].
Benefits of a poly-scalar approach to managing the Earth system include:
the possibility for immediate action at different scales – rather than a need to wait for global accord,
the facilitation of widespread experimentation and learning at multiple scales – rather than the need to determine an effective approach prior to rolling out global initiatives,
the flexibility to encompass different centres of decision-making which are formally separate – creating a bridge that is necessary to achieve change, and most of all
the ability to engage people in whatever scale of activity they can focus on.
One might argue that there is already a poly-scalar approach to managing the Earth system underway. There are efforts to reduce impacts at different scales and sectors and using different approaches. What is missing from the current approach is the “general system of rules” – the common goal for this multitude of activities. Without a common goal, efforts are piecemeal. Targets for environmental initiatives range enormously, from those aiming for a very loosely defined state of “sustainability”, to those working towards a circular economy, or others directing their efforts towards reducing their ecological footprint [46,47,48,49]. This can lead to a sense that environmental initiatives will make little difference to the final outcome. Moreover, many people lack confidence that others are working towards the same end.
Thus, global management theory needs to be applied to global environmental management and, in particular, the scale at which most people are engaged must be clarified. It is these poly-scalar approaches that lead to changes in design, technology, regulations, and hence behaviour [24, 44]. Such an approach would likely help to resolve the many issues inherent in managing globally shared resources and create opportunities for meaningful change.
Theory 2: Accounting theory – Creating a shared empirical basis for different environmental issues
Accounting theory highlights the importance of measuring and monitoring assets and flows in order to make informed decisions. These decisions are strongly influenced by the limits or standards that have been set on the products or services that firms and organisations are trying to bring to market. Governments, private organisations, and households alike make informed decisions and choices based on their knowledge of the state of their assets, of incoming and outgoing cashflow and the limits or standards that are guiding their behaviour. Environmental accounting translates these insights from accounting theory to the management of environmental impacts.
Environmental Impact Assessment (EIA) is the quantification of environmental damage from human activity. Environmental accounting is the measurement and monitoring of environmental impacts over time, and often against targets that can be standards or limits required to be met. Environmental accounting is a critical element in managing the impacts of human activity on the environment. It is now possible to estimate the environmental impacts of not only past and present but also future activities with increasing levels of accuracy. Thus, decision making, planning, policy and legislation can all be made with some understanding of the corresponding environmental implications. For this reason, environmental accounting is common practice for many businesses, cities, and nations and can also be done for individuals, groups of people, or products and services.
Environmental limits or standards are not new; for example, the use of environmental footprints and/or life cycle assessments to help manage the global environment is commonplace. An example is the Ecological Footprint, a measure of human use of natural capital compared to the corresponding biological capacity – or available natural capital. This framework is used to assess the impacts of most nations, and has been used in other smaller scale applications such as the development of an online personal impact calculator. The Ecological Footprint is just one of many footprint tools. In one study of environmental footprints, 32 different footprint indicators were identified. In acknowledgement that environmental footprints do not give a holistic picture of sustainability, some authors have proposed "footprint families" which are typically comprised of carbon, ecological, and water footprints [53,54,55,56].
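As a simplified sketch of the Ecological Footprint comparison (the per-capita figures below are illustrative placeholders, not published accounts):

```python
footprint_per_capita = 2.8    # global hectares per person, assumed demand
biocapacity_per_capita = 1.7  # global hectares per person, assumed supply

overshoot = footprint_per_capita / biocapacity_per_capita
status = "within" if overshoot <= 1 else "beyond"
print(f"Overshoot ratio: {overshoot:.2f} ({status} available biocapacity)")
```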
The primary shortcoming of using environmental footprints, footprint families, and environmental accounting in general to manage impacts is that the results are rarely given in the context of science-based targets [57, 58]. Targets are often self-selected. They are typically based on a percentage improvement from a previous reporting period, sectoral commitments (for example national commitments to meet carbon targets), or sectoral or industrial benchmarks.
There are several reasons that the lack of science-based targets is important. To begin with, incremental targets typically lead to incremental improvements (as opposed to systemic changes – i.e., change to the entire system). Incremental improvements are unlikely to be sufficient to return us to the safe operating space. Moreover, incremental improvements are criticised for their rebound effect [59,60,61]. The rebound effect is the phenomenon that as one area of a system improves, people feel more able to relax in other areas of the system offsetting the initial improvements, or even resulting in a worse outcome. For example, a person looking to lose weight might go for a 20-min run. At the end of the run, she may feel that she deserves a treat to reward her efforts and eat a chocolate bar. The calories in an average chocolate bar are higher than most people would burn during a 20-min run. The net result of the run would thus be an increase in net calorie intake.
Incremental targets are less conducive to ongoing behaviour change than absolute targets. They indicate that the status quo is bad, and that we must continually reduce and improve. In contrast, science-based targets present a vision of the end goal. This allows a fundamental switch in conversation from negative conversations about the status quo, to hopeful conversations about a positive future. Studies of behaviour have shown that visions of a hopeful future are more useful to generate change than scare tactics about the status quo. Absolute targets do not negate the importance of incremental improvements. These are the basis of most personal and policy change and can be used to implement systemic change towards an end goal.
Carbon accounting is a strand of environmental accounting where global limits are often considered. There are debates as to a “safe” level of global warming and therefore maximum allowable CO2 emissions. Nonetheless it is possible to link CO2 emissions for an activity with a global budget based on scientific knowledge.
Carbon accounting has led to widespread understanding of what is a relatively complicated scientific problem. It is used across different sectors and at different scales of activity. Individuals and communities can calculate their “carbon footprint” – the amount of CO2 released due to the activities of the individual or community. Formal greenhouse gas accounting protocols have been developed for nations, cities, and products and services, e.g., [62, 63]. CO2 emissions have been translated into dollar values. Studies have been completed to assess the relative benefits of a carbon tax versus carbon trading. Different approaches for managing emissions and different technologies for reducing emissions or absorbing carbon from the atmosphere have been trialled in different locations and at different scales, allowing for a very rapid uptake of knowledge and development.
Carbon accounting is a remarkable example of the importance of limits. These efforts at every scale have already led to some success. Economic growth has been decoupling from greenhouse gas emissions since 2000. From 2014 to 2016 there was almost no increase in greenhouse gas emissions. In 2017 emissions rose to a new peak. It is disappointing that peak emissions have not yet occurred. However, trends over the last decade still appear promising. Implementing a poly-scalar approach with clearly defined global targets could help to increase the trust that efforts at every scale will make a difference to the end goal, and that others are working towards the same end.
In summary, to better manage the global environment, results of environmental impact assessments should be compared to absolute limits rather than incremental targets. We can use such an approach to drive systemic change. The PBs are absolute global limits. However, they cannot easily be connected to environmental impact assessments.
Theory 3: The DPSIR environmental accounting framework and accessible indicators
Several authors have highlighted the opportunity for the Planetary Boundaries to reform environmental governance at multiple scales, e.g., [45, 58, 67]. Several efforts have already been made to use the PBs for environmental accounting at different scales. For example, there have been several attempts to link the PBs to existing environmental assessment frameworks including footprint tools and life-cycle assessments [68, 69]. National targets have been developed based on the PBs for Switzerland, Sweden, and South Africa, and regional targets for the European Union, and environmental accounting against these targets has begun [70,71,72]. However, the work is disjointed and incomplete.
The PBs as designed by the planetary scientists who first proposed them were not intended to be disaggregated or scaled. The purpose of the PBs was to provide a clear snapshot of the status quo of critical Earth-system processes based on how these systems are measured globally. They do not define limits for human activity.
Each of the works adapting or scaling the PBs and using these for environmental accounting has severe limitations. To begin with, none of them correspond to the PB for climate change. There is a wide variation in the indicators selected for biosphere integrity. So much so that it would be very difficult to contrast and compare any of the limits with one another or with the original PB. More importantly perhaps, none of the adaptations are suitable for use beyond the application for which they were intended. The national indicators developed would be difficult to apply to city or regional levels or to translate into business targets. This means that even within that nation, different levels of activity would be working towards different targets. The level of effort that has gone into each of the adaptations is high. It would not be practical to repeat such an involved process for every intended use. None of the adaptations are suitable for a poly-scalar approach.
The Driver-Pressure-State-Impact-Response (DPSIR) framework is used below to show why the Boundaries cannot easily be scaled or used in environmental accounting as they are. In response to the vast number of environmental indicators developed for environmental impact assessments, a system to categorise these was adopted by the European Environment Agency – the DPSIR framework, detailed in Fig. 4 [70, 72, 73]. The DPSIR framework not only enables the classification and therefore better understanding of indicators, it can also be used to translate indicators from one category to another as there is a causal relationship between each category:
Driver indicators describe human needs. Some examples of Driver indicators include kilowatt hours of electricity, kilometres travelled, or litres of fuel for transport.
Pressures which result from drivers are flows to the environment. One Pressure indicator resulting from the Driver indicators listed is CO2 emissions.
State indicators describe the environment. State indicators provide a snapshot of the status quo. Comparing the current State of a given ecosystem to a previous State allows us to understand the influence of human activity on the environment. For example, the change of the State indicator which corresponds to CO2 emissions – the concentration of CO2 in the atmosphere – has allowed us to understand the ramifications of emitting CO2. It is this sort of indicator that is commonly used in State of the Environment Reporting.
Impact indicators describe the results of changing environmental States. For example, one of the Impacts of the increased concentration of CO2 in the atmosphere is an increase in average global temperature. Another Impact is species extinctions.
Response is not a category of indicator. Rather, it is included in the framework to show that different types of responses can be linked to different categories of indicators (see Fig. 4).
Human activity directly influences Pressures and Drivers and only indirectly influences States and Impacts. This means that State and Impact indicators are useful to describe the status quo, and to monitor change over time. However, they cannot be easily related to human activity. There is no straightforward way to divide the responsibility for the concentration of CO2 in the atmosphere between different nations, cities, regions, or individuals unless a different indicator can be found that is easily scaled. Nor can one directly compare specific human activities to the global average temperature. An individual deciding whether to take the car or the train to work, or a local government deciding whether to proceed with certain infrastructure – neither could begin to estimate the impacts of these decisions on the atmospheric concentration of CO2. It is only when these indicators are translated to the Pressure indicator – CO2 emissions – that it becomes possible to begin to allocate this global budget between nations, cities, or any other level.
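To make the causal chain concrete, the following minimal sketch (in Python; the emission factors and household figures are invented for illustration, not values from the DPSIR literature) shows how Driver indicators translate into an allocatable Pressure indicator, while a State indicator cannot be divided among actors.

```python
# Illustrative sketch of the DPSIR Driver -> Pressure translation.
# The emission factors below are placeholders, not published values.

DRIVER_TO_PRESSURE = {
    "electricity_kwh": 0.7,   # kg CO2 per kWh (hypothetical grid factor)
    "car_travel_km": 0.2,     # kg CO2 per km (hypothetical fleet average)
}

def pressure_from_drivers(drivers: dict) -> float:
    """Translate Driver indicators (human needs) into a Pressure
    indicator (kg CO2 emitted) via emission factors."""
    return sum(DRIVER_TO_PRESSURE[name] * amount
               for name, amount in drivers.items())

# A household's annual Drivers...
household = {"electricity_kwh": 4000, "car_travel_km": 10000}

# ...become an allocatable Pressure (kg CO2/yr):
print(pressure_from_drivers(household))  # ~4800.0

# By contrast, a State indicator such as the atmospheric CO2
# concentration (~400 ppm) describes the whole Earth system and
# cannot be divided between households, cities, or nations.
```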
Table 2 shows how each of the Planetary Boundary control variables fits into different DPSIR framework categories. There are three Pressure indicators, five State indicators, and one Impact indicator – i.e., the indicators do not all belong to a single category. This explains the reason why the Planetary Boundaries have not easily been translated into action.
For a poly-scalar approach to be applied to the Planetary Boundaries, a new set of Pressure indicators, derived from the PBs, is needed. We have called these the Planetary Quotas (PQs). Each of these is set out below. Once derived, the PQs can then enable us to link human activity to key global limits through the Planetary Accounting Framework.
Developing the Planetary Quotas
Some authors have identified the opportunity to use the DPSIR framework to determine a causal relationship between human activity and the Planetary Boundaries [70, 72, 74]. Two of the national adaptations of the PBs use a methodology based on the DPSIR framework [70, 72]. However, neither study applied this approach across all of the PBs. Nor did either propose a set of indicators that were uniformly of the Pressure category.
The Planetary Boundaries are presented as distinct control variables with explicit limits. This is by design to make them easily communicable. In reality, there is a high level of interconnectivity between the PBs. For example, almost every PB affects biosphere integrity. Exceeding one PB affects our ability to remain within others.
For the PQs to be a robust translation of the PBs, this interconnectivity must be carried over to the PQs. It would not be suitable to translate each PB to a PQ without consideration of all of the other PBs and PQs. To manage this, the method used was to first translate each of the PBs into a list of critical pressures based on the scientific literature (see Additional file 1: Table S1), and then from this list, PQs were developed.
There are many pressures which make only minor contributions towards the PBs, so an exclusion protocol was applied for pressures which contribute less than 1% towards current global impacts. Excluding minor impacts is common practice in environmental assessment protocols as a means to simplify the process with minimal effect on the results. In total, thirty-two critical pressures were found. These were then analysed to determine which of the pressures could be grouped, and to find appropriate Pressure indicators to assess these with. The result was ten Pressure indicators which formed the basis of the PQ development.
Each of the PQ indicators found corresponds to one or more of the critical pressures and therefore one or more PB(s). The PQ limits were thus determined by assessing each of the corresponding PBs and selecting the most stringent limit.
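The selection logic can be sketched as follows (an illustrative sketch only; the pressures, contribution shares, and candidate limits are placeholders rather than the values in Additional file 1):

```python
# Sketch of the PQ derivation logic: exclude minor pressures, group
# the rest, and take the most stringent limit across the PBs each
# pressure affects. All numbers are placeholders for illustration.

EXCLUSION_THRESHOLD = 0.01  # pressures contributing <1% are excluded

# (pressure, share of current global impact)
pressures = [
    ("co2_emissions", 0.55),
    ("methane_emissions", 0.15),
    ("some_minor_pressure", 0.004),  # below 1% -> excluded
]

critical = [p for p, share in pressures if share >= EXCLUSION_THRESHOLD]

# Candidate limits for one grouped pressure, derived from each PB it
# affects. The PQ takes the most stringent (here: the smallest flow).
candidate_limits = {"PB_radiative_forcing": 8.2, "PB_ozone": 5.4}
pq_limit = min(candidate_limits.values())

print(critical, pq_limit)  # ['co2_emissions', 'methane_emissions'] 5.4
```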
The translation of the PBs to pressures and then to PQs is shown in Fig. 5. The direct relationship between the PBs and PQs is shown in Fig. 6. Two of the Planetary Boundaries have previously been identified as “core boundaries” for their high level of interconnectivity – Climate Change and Biosphere Integrity. Each of these corresponds to more than half of the Planetary Quotas (see Fig. 6).
The PQs are summarised in Table 3. The scientific basis for each PQ is described briefly below. More detailed descriptions are included in the Supplementary Information where needed.
A quota for carbon dioxide emissions
Carbon dioxide (CO2) emissions are a critical pressure affecting several of the PBs (see Additional file 1). The PB that translates to the most stringent PQ for CO2 is the PB for the concentration of CO2 in the atmosphere of ≤350 ppm. The concentration of CO2 in the atmosphere is currently ≥400 ppm, i.e., this PB has been exceeded. No other pressures were grouped with CO2 for this PQ because the only way to meet the PB for CO2 concentration is through the uptake of CO2 from the atmosphere. The indicator selected for this PQ is thus net carbon dioxide emissions; net because returning to 350 ppm will require uptake of CO2 from the atmosphere.
There are several pathways for rapid decarbonisation in the literature, e.g., [76,77,78,79]. However, only one of these shows the concentration of CO2 in the atmosphere returning to 350 ppm within this century. This pathway entails rapid reductions in CO2 emissions of 15% per annum starting no later than 2020, followed by net CO2 uptake from 2030 to 2080, and net zero emissions thereafter. The proposed uptake of CO2 is approximately constant at 7.3 Gt/yr from 2050 to 2080. Thus, the limit is set as net carbon dioxide emissions ≤ −7.3 Gt/yr (see Additional file 1 for further detail).
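The emissions arithmetic of this pathway can be sketched as follows (the 2020 starting level of ~40 GtCO2/yr is an assumed round figure and the transition between phases is simplified to a linear ramp; only the 15% per annum reduction rate and the 7.3 Gt/yr uptake level are taken from the pathway described above):

```python
# Sketch of the decarbonisation pathway behind the CO2 Quota.
# Assumed: gross global emissions of ~40 GtCO2/yr in 2020 (round
# figure). From the pathway: 15%/yr reductions from 2020, net uptake
# from 2030 reaching ~7.3 GtCO2/yr by 2050 and held until 2080, then
# net zero thereafter.

E_2020 = 40.0  # GtCO2/yr, assumed starting point

def net_emissions(year: int) -> float:
    if year < 2030:
        return E_2020 * 0.85 ** max(0, year - 2020)  # 15%/yr cuts
    if year < 2050:
        return -7.3 * (year - 2030) / 20             # simplified ramp
    if year <= 2080:
        return -7.3                                  # the PQ for CO2
    return 0.0                                       # net zero after

for y in (2020, 2025, 2029, 2040, 2060, 2090):
    print(y, round(net_emissions(y), 1))
# 2020 40.0 | 2025 ~17.7 | 2029 ~9.3 | 2040 ~-3.6 | 2060 -7.3 | 2090 0.0
```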
All of the PQs should be reassessed over time. This is particularly so for the PQ for CO2 emissions. If 15% reductions do not start by 2020 this PQ will need to be amended at this time. Any delay will mean substantially higher reductions will be required.
A quota for methane and nitrous oxide emissions
Methane and nitrous oxide emissions (henceforth referred to as Me-NO) are the only two long-lived greenhouse gases in the list of critical pressures (Additional file 2) which can have positive limits whilst respecting the PBs. As such, these pressures have been grouped under the PQ for Me-NO. It is common practice to assess impacts of greenhouse gases (GHGs) in terms of the amount of CO2 emissions that would result in the same amount of global warming – equivalent CO2 (CO2e) – and this is the unit selected here.
The PB most affected by Me-NO emissions is the PB for radiative forcing, i.e., a change in radiative forcing relative to preindustrial levels of ≤ ±1 W/m2. However, there are too many different factors which influence radiative forcing (e.g., greenhouse gas emissions, albedo (Earth’s reflectivity), and aerosol emissions) to use this PB to derive specific limits for Me-NO.
The IPCC has identified several emissions pathways for the future. Even the most stringent of these, RCP2.6, does not meet the PB for radiative forcing this century. However, the 2100 targets for Me-NO under RCP2.6 have been derived on the basis of optimal food production with minimal emissions and minimal land use. It can be shown that these targets are sufficient to respect the PB for radiative forcing (see Additional file 1).
Nitrous oxide is also an ozone depleting substance, so the limits for nitrous oxide emissions must also be considered in the context of the PB for ozone depletion. It can be shown that the RCP2.6 2100 target is unlikely to cause the PB for ozone depletion to be exceeded (see Additional file 1).
Thus, the RCP2.6 2100 targets have been used as the basis of the PQ for Me-NO. The limit is gross Me-NO emissions ≤ 5.4GtCO2e/yr.
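As a minimal sketch of the CO2e conversion (the N2O factor of 298 is the GWP100 value implied by the worked example later in this paper, where 50 kg of N2O equates to 14.9 t of CO2e; the CH4 factor of 25 is the corresponding IPCC AR4 value):

```python
# Sketch: expressing Me-NO emissions in equivalent CO2 (CO2e).
# GWP100 factors: N2O = 298 (implied by the 50 kg -> 14.9 t example
# later in this paper, from IPCC AR4); CH4 = 25 (IPCC AR4).

GWP100 = {"CH4": 25, "N2O": 298}

def me_no_co2e(ch4_kg: float, n2o_kg: float) -> float:
    """Return combined Me-NO emissions in kg CO2e."""
    return ch4_kg * GWP100["CH4"] + n2o_kg * GWP100["N2O"]

print(me_no_co2e(ch4_kg=0, n2o_kg=50))    # 14900 kg = 14.9 t CO2e
print(me_no_co2e(ch4_kg=100, n2o_kg=10))  # 5480 kg CO2e

# The PQ caps the global total: gross Me-NO emissions <= 5.4 GtCO2e/yr.
```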
A quota for forestland
There are several critical pressures which relate to land use and land-use change (see Additional file 1). Forestland is of particular significance, however, because it plays an integral role in the carbon, water, and nitrogen cycles. Forests also provide habitat for over 80% of terrestrial species. Forestland function cannot be offset by other land types. As such, there are two PQs pertaining to land use: the PQ for forestland (discussed here) and the PQ for biodiversity, which addresses land use more broadly but with specific consideration for the impacts of land use on biodiversity (see section “A Quota for Biodiversity”).
The decarbonisation pathway used to determine the limit for CO2 emissions (see section “A Quota for Carbon Dioxide Emissions”) and the PB limit for land use of global forest land ≥75% of original forest area both suggest that approximately 0.9 Gha of reforestation will be needed by the end of this century (see Additional file 1). Applying this linearly over the remainder of the century gives a PQ for forestland of deforestation ≤ −11 Mha/yr.
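The arithmetic behind this rate is straightforward, assuming the 0.9 Gha target is spread linearly over the roughly 82 years remaining in the century from 2018:

```latex
% Net reforestation rate implied by the 0.9 Gha target,
% spread linearly over the remainder of the century (~82 yr):
\[
\frac{0.9~\mathrm{Gha}}{(2100-2018)~\mathrm{yr}}
  = \frac{900~\mathrm{Mha}}{82~\mathrm{yr}}
  \approx 11~\mathrm{Mha/yr},
\]
\[
\text{i.e., net deforestation} \leq -11~\mathrm{Mha/yr}.
\]
```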
A quota for ozone depleting substances
The hole in the ozone layer is an example of how significant damage from human activity can be, and of how effective global action can be in restoring planetary health. The Montreal Protocol is a universally ratified agreement to phase out the production and use of ozone depleting substances. It has been predicted that if the Protocol is respected, i.e., that Montreal gases are phased out, the Planetary Boundary for ozone depletion will be too. Not all ozone depleting substances are included under the Protocol, the notable exception being nitrous oxide. However, provided the PQ for Me-NO is respected, it is unlikely that nitrous oxide emissions would cause the PB for ozone depletion to be exceeded (see Additional file 1).
Montreal gases have different effects on ozone but can be collectively measured in the unit ozone depleting potential kilograms (ODPkg), a measure of the relative impact of different gases on the ozone layer compared to a benchmark substance. The PQ for ozone is set at Montreal gas emissions ≈ 0 ODPkg (see Additional file 1 for more detail).
A quota for aerosols
Aerosols are small particles suspended in the air. They can be released directly, or form as a result of emissions of precursor gases. There was not previously a Pressure indicator for the collective measurement of aerosols and precursors that could be related to the state of the atmosphere.
Aerosol optical depth (AOD) is an optical measure of the concentration of particles in the air. It is the ratio of incident light either scattered or absorbed by airborne particles in a vertical column of air. An AOD of 1 indicates that no light can pass. An AOD of 0 indicates perfectly clear skies.
Meyer and Ryberg have proposed a new unit, equivalent aerosol optical depth (AODe) (Footnote 1). Characterisation factors have previously been proposed to link annual mass of emissions of aerosols and precursors to globally averaged change in AOD. Building on this approach, Meyer and Ryberg used these factors to link emissions from an activity to global average AOD and thus determine the AOD equivalent (AODe). This should not be confused with an estimation of actual change in AOD. Such an estimation would be highly inaccurate because of variations in local conditions and the interactions between different aerosols and precursors. AODe provides a link between emissions, the Pressure indicator, and the resultant optical depth, the State indicator. It is thus an appropriate indicator for the PQ for aerosols.
The World Health Organisation suggests that no level of particulate concentration is safe for human health, which implies that an AODe of zero would be most appropriate. However, the impacts of aerosols on global warming must also be considered. Aerosols have a net cooling effect in the atmosphere and are believed to have substantially dampened the warming effects experienced so far because of greenhouse gas emissions. Eliminating them entirely could lead to accelerated warming which could be more harmful to humanity than a small amount of particulate concentration remaining in the atmosphere.
The PB for radiative forcing is linked to the PQs for CO2, Me-NO, forestland, Montreal gases, and aerosols. Using the previously discussed PQs, and the PB for radiative forcing, a range of acceptable AODe levels can be determined as 0.05 ≤ AODe ≤ 0.13. The WHO guidelines for maximum particles in the atmosphere can be translated to an upper limit of AODe ≤ 0.1, which is in line with the PB for aerosols. Thus, the PQ for aerosols is 0.05 ≤ AODe ≤ 0.1 (see Additional file 1 for additional details).
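A sketch of how a set of emissions could be expressed in AODe is given below (the species and characterisation factors are invented placeholders; the real factors are those of Meyer and Ryberg's approach, which links mass emitted to globally averaged change in AOD):

```python
# Sketch of the AODe calculation: annual emissions of aerosols and
# precursors are multiplied by characterisation factors linking mass
# emitted to globally averaged change in aerosol optical depth.
# Factor values below are placeholders, not published values.

AOD_FACTORS_PER_TG = {      # change in global mean AOD per Tg emitted
    "black_carbon": 2e-4,   # hypothetical
    "SO2":          5e-5,   # hypothetical (sulphate aerosol precursor)
}

def aode(emissions_tg: dict) -> float:
    """Equivalent aerosol optical depth of a set of annual emissions."""
    return sum(AOD_FACTORS_PER_TG[s] * m for s, m in emissions_tg.items())

global_emissions = {"black_carbon": 8.0, "SO2": 100.0}  # Tg/yr, hypothetical
print(aode(global_emissions))  # ~0.0066

# The PQ requires the global total to satisfy 0.05 <= AODe <= 0.1.
```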
A quota for nitrogen
Reactive nitrogen is necessary to grow food. However, the overuse of nitrogen can cause algal blooms and create anaerobic dead zones in rivers, lakes, and oceans. The PB for nitrogen is 62 TgN/yr of intentionally fixed nitrogen. This is a Pressure indicator, yet it cannot easily be compared to human activity. Further, it is not the fixation of nitrogen that causes algal blooms; rather, it is the loss of nitrogen to the environment. Thus, the indicator for the PQ for nitrogen is net nitrogen lost to the environment. This includes virtual nitrogen that has been lost to the environment during the production of food and products, and the nitrogen released in excreta, less any nitrogen recovered, for example through the denitrification of wastewater.
The PB for nitrogen was set on the basis of estimates of critical environmental limits for nitrogen in surface runoff. This basis is also suitable for the PQ indicator. As such, the PQ for nitrogen is net nitrogen lost to the environment ≤ 62 TgN/yr (see Additional file 1 for additional details).
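The bookkeeping implied by this indicator can be sketched as follows (the categories follow the definition above; the figures are illustrative placeholders):

```python
# Sketch: net nitrogen lost to the environment for an activity
# = virtual N embodied in food/products + N in excreta - N recovered.
# All figures are illustrative placeholders (kg N/yr).

def net_nitrogen(virtual_n: float, excreta_n: float,
                 recovered_n: float) -> float:
    return virtual_n + excreta_n - recovered_n

# e.g., a person's accounts: N lost producing their food, N excreted,
# less N removed by denitrification at the wastewater plant.
print(net_nitrogen(virtual_n=20.0, excreta_n=4.5, recovered_n=3.0))  # 21.5

# The PQ caps the global sum: net N lost to the environment <= 62 TgN/yr.
```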
A quota for phosphorus
Like nitrogen, phosphorus is also necessary to grow food but can cause algal blooms if used excessively. The PB for phosphorus is a flow of no more than 11 TgP/yr from freshwater systems to the ocean. The limit is set at a point where the risk of a global anoxic ocean event is considered low.
This indicator is a Pressure indicator but not one that is easily comparable to human activity. A more accessible indicator has been selected for the PQ for phosphorus – net phosphorus released to the environment. It can be assumed that most phosphorus released to the environment will eventually make its way to the oceans. As such, the PQ for phosphorus is set at the same level as the PB for phosphorus, i.e., net phosphorus released to the environment ≤ 11 TgP/yr.
A quota for water
Water availability varies significantly across the globe. In some areas it is plentiful. In others, it is very scarce. It is not feasible to transport water over long distances. For this reason, the concept of a global limit for water is debated. However, only a small fraction of total water consumption is direct consumption of local water. The far larger percentage of water consumed is “virtual water”, i.e., water used in the production of goods. Unlike water in its useable form, virtual water is traded globally. Approximately 40% of Europe’s water footprint is imported. We argue that the global distribution of water through trade justifies a global limit for water.
The PB for water consumption is for gross blue water consumption ≤ 4000 km3/yr. Blue water refers to fresh surface water and groundwater, i.e., the water found in freshwater lakes, rivers and aquifers. Precipitation on land is classified as green water. The authors of the PBs acknowledge that green water is a scarce resource and should be considered within the PBs. However, because of the difficulty in defining a green water boundary they used blue water as a preliminary proxy indicator [5, 12].
Blue water consumption is not a suitable proxy for use in environmental accounting as this would imply that the use of green water, for example rain-fed crops, has no impact. On the contrary, human appropriation of green water can result in loss of soil moisture and a decline in moisture feedback of vapour flows. Further, 74% of the global average water footprint of production between 1996 and 2005 was from green water.
Gross water consumption is also a poor proxy indicator for environmental accounting purposes as it ignores the end state of the water. Net water consumption, together with the inclusion of grey water, i.e., the amount of water required to assimilate pollutants, gives a more holistic indicator of human appropriation of the water cycle.
The indicator for the PQ for water is therefore net blue, green, and grey water. There is no clearly defined global limit for this indicator. However, on the basis that more than 30% of major groundwater sources are currently being depleted, it is argued by some that we are already at, if not beyond, the limit. The PQ for water is thus ≤ 8500 km3/yr based on the current global water footprint (which includes blue, green, and grey water consumption) (see Additional file 1 for further detail).
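A minimal sketch of the water bookkeeping (the volumes are illustrative placeholders):

```python
# Sketch: the PQ for water counts net blue, green, and grey water.
# Blue: surface/groundwater consumed; green: rainwater appropriated;
# grey: water needed to assimilate pollutants. Figures are illustrative.

def water_footprint(blue: float, green: float, grey: float,
                    returned: float = 0.0) -> float:
    """Net water footprint in m3; 'returned' is water returned to the
    source at adequate quality (hence 'net' rather than gross)."""
    return blue + green + grey - returned

# e.g., a crop's footprint per tonne (hypothetical numbers, m3):
print(water_footprint(blue=300, green=900, grey=150))  # 1350

# The PQ caps the global total at <= 8500 km3/yr (the current footprint).
```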
Some water accounting experts believe that a weighted water footprint would better account for regionality (see section “The Planetary Quotas in Context” for a discussion on regionality and the weighted water footprint).
A quota for biodiversity
The literature identifies five main drivers of biodiversity loss:
climate change – shifting habitat to an extent that it is no longer suitable for the threatened species;
pollution that affects the health of species;
overexploitation of species, especially due to fishing and hunting but also overuse of ecosystem services leading to aforementioned habitat loss;
spread of invasive species or genes outcompeting endogenous species; and
habitat loss, fragmentation or change, especially due to agriculture, large-scale forestry, and human infrastructure.
Climate change is considered under the PQs for CO2, Me-NO, forestland, Montreal gases, and aerosols. Pollution is considered under the PQs for aerosols, water, nitrogen, phosphorus, and novel entities. The remaining three drivers have complex and diverse pathways. A study by the Convention on Biological Diversity (CBD) summarised the primary drivers for over 500 invasive species and found over 40 drivers, ranging from purposeful release for measures such as erosion control and hunting, to escaped pets, contamination of internationally traded goods, and stowaways on container ships. At this time, no Pressure indicator exists to account for all three of these drivers.
Land-use change is considered by many to be the greatest threat to biodiversity [88, 90,91,92,93,94,95]. For this reason, the use of land-based indicators as a proxy for biodiversity is common practice. The Ecological Footprint is often used as a proxy indicator for biodiversity health on the basis that it is a measure of how much biologically productive land is used by humans. Some level of overexploitation of marine and terrestrial species is taken into account in this metric. The problem with using this indicator is that there is little consensus as to an appropriate limit [46, 86, 91, 96,97,98].
In a UNEP report on life cycle indicators, the need for a scalable indicator to assess the land-use related impacts on biodiversity was identified and a new indicator proposed. The indicator is called the percentage disappeared fraction (PDF) of species. This indicator is similar to the Ecological Footprint in that different types of land are weighted in terms of relative impacts. However, it has been specifically developed as a proxy indicator for biodiversity loss. Moreover, the unit can easily be equated to the Planetary Boundary for biosphere integrity – extinction rate – as both are expressed in terms of the percentage of extinct (or disappeared) species. The difference between the PDF and extinction rate is in their determination. Extinction rate is determined through observation – it is an Impact indicator. In contrast, PDF is estimated using land use data – thus a Pressure indicator. The PQ for biodiversity is thus PDF ≤ 1E-4/yr.
The purpose of the UNEP report was to propose indicators that allow better consistency in the development and communication of green products. This differs from the purpose of the Quotas, which are intended to be the basis of a global Planetary Accounting Framework that can be used for any scale of human activity. In the case of the UNEP report, there is little need to account for positive land transformation. As such, all of the “correction factors” – numbers used to convert land transformation to percentage disappeared fraction – are positive (i.e., they lead to biodiversity loss). Further work will be required to determine correction factors for positive transformation which results in biodiversity gains.
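A sketch of how land-use data could translate into PDF is given below (the land types and correction factors are invented placeholders, not those of the UNEP report):

```python
# Sketch: percentage disappeared fraction (PDF) estimated from land
# occupation, weighting each land type by a correction factor for its
# relative impact on species. Factors are invented placeholders.

PDF_PER_MHA_YR = {                   # PDF per Mha occupied per year
    "intensive_agriculture": 3e-8,   # hypothetical
    "managed_forest":        1e-8,   # hypothetical
    "urban":                 5e-8,   # hypothetical
}

def pdf(land_use_mha: dict) -> float:
    return sum(PDF_PER_MHA_YR[t] * a for t, a in land_use_mha.items())

footprint = {"intensive_agriculture": 2.0, "urban": 0.1}  # Mha, hypothetical
print(pdf(footprint))  # ~6.5e-08 per yr

# Globally, the PQ requires total PDF <= 1E-4/yr. Negative factors for
# restorative land transformation are flagged above as future work.
```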
A quota for novel entities
There is no indicator or limit proposed for novel entities in the PB framework. However, they are included as a PB to give an indication of their importance to planetary health. The authors of the PB framework define novel entities as new substances, new forms of existing substances and modified life-forms that have the potential for unwanted geophysical and/or biological effects.
The environmental impacts from novel entities most often occur because of their disposal: the release of toxins into waterways, the disposal of waste to landfill, and the disposal of plastic into the oceans. As such, we propose the indicator net imperishable waste, measured in kilograms, to account for the wide variety of novel entities.
There is no specific limit proposed in the literature for this metric. However, there is evidence that we are beyond the limit. For example, 83% of tap water samples from 12 nations have been found to be contaminated with plastic; methane from landfills and wastes contributes approximately 23% of global methane emissions; most fish which are high in the food chain now contain high levels of heavy metals such as mercury. The PQ for novel entities is therefore net imperishable waste ≈ 0 kg/yr.
The choice of net rather than gross waste is to allow environmental impact assessment results to show negative imperishable waste disposal. In this way, activities such as landfill mining which result in a net removal could be encouraged. Value could be assigned to such activities to allow for trading of impacts within a global cap. Further work should be undertaken to determine whether a zero limit is sufficient.
The Planetary Accounting Framework
The Planetary Quotas form the foundations for the new Planetary Accounting Framework (PAF). The PAF shows how the PQs can be used in a poly-scalar approach to manage global impacts. It can be used to assess the impacts of different scales of human activity against planetary limits. Figure 7 shows how the Framework can work for different scales and purposes.
The left-hand side shows the inputs and the right-hand side shows the outputs. The inputs are both top-down – scaling the Planetary Quotas to the scale of assessment – and bottom-up – using environmental impact assessment methods to estimate impacts in each environmental currency.
Prior to completing an environmental impact assessment, the scope, i.e., the inclusions and exclusions, must be determined. The scope of assessment will depend on the purpose. For example, if a city government was looking to compare impacts per capita of their population with another city’s population it would likely be most appropriate to assess the final consumption of its inhabitants. In contrast, if the same city was trying to prioritise infrastructure and development it might be more appropriate to assess the impacts that occur within the city itself.
The inclusions and exclusions can make a very big difference to the results. For example, the emissions produced within Sweden's borders decreased from 72.7 MtCO2e in 1990 to 66.2 MtCO2 in 2010 (Swedish EPA, 2012a). However, when the emissions corresponding to the consumption of Sweden's inhabitants were calculated, the results were 76 MtCO2 in 1990 and 95 MtCO2 in 2010 (Swedish EPA, 2010). One set of accounts showed a decrease in emissions while the other showed an increase. Both sets of accounts provide information that is useful, but for different purposes.
Once the scope is defined, an environmental impact assessment can then be done to determine the impacts in each of the PQ currencies using standard environmental assessment methods.
To translate the global PQs to the scale of the planetary accounts (e.g., national, city, business), an allocation procedure will need to be determined. The Planetary Quotas help to resolve the mathematics of apportioning shares of the operating space to different scales of human activity. However, distributing Earth’s finite resources among past, present, and future generations is not simply a question of mathematics. It is a question of ethics, morals, and beliefs.
For the PAF to form the general system of rules for a poly-scalar approach to managing global impacts it should have different mechanisms at different scales and for different purposes. For such a flexible approach, allocation procedures also need to have a high degree of flexibility. An allocation procedure for PQs for the basis of self-organised initiatives is likely to be self-selected. Global negotiations for national commitments to PQs are likely to be heavily influenced by politics. Private organisations may agree sectoral approaches to allocating Quotas, may self-select allocation procedures for Quotas as part of an internal sustainability strategy, or may be allocated Quotas by local authorities or company managers. The PAF does not attempt to resolve the question of which allocation procedure(s) is most suitable. Rather, it has the flexibility to allow different types of allocation procedure to be applied as needed.
The Quotas as shown in Table 3 are global limits. Each Quota has been designed to be scalable; however, not every Quota is divisible. The Carbon Quota is an example of a Quota that is divisible – i.e., the global Quota of −7.3 GtCO2/yr could be divided by the global population (say 7.5 billion) to get an equal per capita share of ≈ −1 tCO2/yr per person.
In contrast, the PQ for aerosols is not divisible. The unit (aerosol optical depth equivalent) applies directly at any scale. Thus, the global Planetary Quota is the same as (for example) any individual’s Planetary Quota. Table 4 shows which Quotas are divisible and which are not.
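The distinction can be sketched as follows (using the population figure assumed in the text above):

```python
# Sketch: scaling divisible vs non-divisible Quotas.
GLOBAL_POPULATION = 7.5e9  # as assumed in the text

# Divisible: the CO2 Quota can be shared out, e.g., equal per capita.
CO2_QUOTA_GT = -7.3                        # GtCO2/yr, global
per_capita_t = CO2_QUOTA_GT * 1e9 / GLOBAL_POPULATION
print(round(per_capita_t, 2))              # -0.97, i.e. ~ -1 tCO2/yr each

# Non-divisible: the aerosol Quota applies directly at every scale.
AEROSOL_QUOTA = (0.05, 0.1)                # AODe bounds, any scale
```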
The impact balance sheet
The results of the environmental impact assessment can then be compared to the scaled PQs in the planetary accounts. An “impact balance statement” can be used to show the impact and limit for each PQ currency, and thus the credit or deficit.
The idea of “credits” in each currency could be seen as encouragement to “optimise on the edge” – to push our impacts to the limits. However, we include this terminology intentionally. Currently, language around environmental impacts is often very negative. There is a focus on reducing bad as opposed to improving good. The concept of environmental credits could help to shift the conversation to a focus on improving environmental maintenance. It could also provide a mechanism to financially incentivise those who are remaining below their targets or creating a net positive outcome.
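A minimal sketch of such a statement (the impacts and scaled limits are illustrative placeholders):

```python
# Sketch of an "impact balance statement": each PQ currency's assessed
# impact compared to the scaled Quota, yielding a credit or deficit.
# All figures are illustrative placeholders.

accounts = {
    # currency: (assessed impact, scaled quota limit)
    "CO2 (tCO2e/yr)":    (12.0, -1.0),
    "water (m3/yr)":     (900.0, 1100.0),
    "nitrogen (kgN/yr)": (25.0, 8.0),
}

for currency, (impact, limit) in accounts.items():
    balance = limit - impact  # positive = credit, negative = deficit
    status = "credit" if balance >= 0 else "deficit"
    print(f"{currency}: impact {impact}, limit {limit}, "
          f"{status} of {abs(balance)}")
```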
The right-hand side of Fig. 7 shows some of the ways planetary accounts could be used. The PAF could allow meaningful decisions to be made at different levels or sectors regarding policy, planning, technology, business operations, legislation, and behaviour. It could enable the incorporation of the economic value of environmental impacts and management into existing global economic structures. The Framework enables the practical application and communication of the Planetary Boundaries to different scales of human activity.
The planetary quotas in context
In the latest update of the Planetary Boundaries the authors show that four of the nine Boundaries have been exceeded, and that two have not been measured. Global estimates (shown in Table 5) indicate that five of the Quotas are currently exceeded, one is on the threshold, and the remaining two are uncertain. The current global estimates are shown against each Quota in Table 4.
Planetary quotas versus planetary boundaries
The Planetary Quotas complement rather than replace the Planetary Boundaries. The relationship between the two sets of indicators can be compared to human health. If a person visits a doctor, the doctor might measure State indicators such as blood pressure, heart rate, weight, and liver function to assess his health. If he is not healthy, his doctor is likely to prescribe a course of action. This might include a maximum calorific intake, a minimum level of exercise, and the avoidance of some activities, such as smoking. The PBs provide an indication of planetary health. The PQs are the prescription for a healthy planet.
This means that meeting or exceeding PQs gives a different message to meeting or exceeding PBs. For example, we have exceeded the PBs for climate change. It would take decades of living within the PQs associated with climate change to return to within the PB for climate change. For PBs that have not yet been exceeded, the PQs give an indication of whether we are heading towards the limits, or are likely to remain within them.
Unlike the Boundaries, no “zone of uncertainty” has been included for the Quotas. The zone of uncertainty is included in the Boundary framework to account for the fact that the science is uncertain. The Quotas are intended for use in policy, the design of technology, regulations, and behaviour within communities, households and businesses. In keeping with the precautionary principle, we have thus set the Quotas according to the lower limits in the Planetary Boundary framework. Future work should include estimations of uncertainty around the Quota values.
Global vs regional limits and impacts - an issue of scale
Greenhouse gases have a long atmospheric lifetime and become well mixed in the atmosphere. This means it is of little importance where the gas is emitted. 1 kg of CO2 will have the same contribution to global warming wherever it is released.
This is not the case for all of the impacts that are considered within the PQs, for example water consumption or the release of nitrogen into the environment. It is not the case that 1 kg consumed or released in one location will have the same impacts as 1 kg consumed or released elsewhere. If we take a few thousand litres of water from a water source with abundant supply, the local impacts are likely negligible. Taking just a few litres from another, water poor source, may have disastrous local effects. The release of a kilogram of nitrogen in a sparse agricultural area will have less impact on the Earth system than in an intense agricultural zone with risks of ground water contamination.
In the example of water, some authors advocate for a weighted water footprint to incorporate the different impacts of consuming water from different locations. The premise is that water from water-rich water bodies should be given less environmental weighting than water from water-scarce water bodies. A unit of equivalent water has been proposed based on a water stress index derived from the availability and withdrawals of a given water body.
The problem with this proposal is that the impacts assigned to a given withdrawal of water depend on withdrawals by others from the same water body. Whilst this may give a realistic measure of impacts, it does not offer a useful indicator for decision making and planning. A company which has put substantial effort into reducing water consumption could have its weighted water footprint doubled because an independent company starts to use the same water body. In the same way, if a large company set up operations and put local companies out of business, thus eliminating their use of a water body, its weighted water footprint would go down, even if it had taken no steps at all to improve its water use efficiency. This is not consistent with other impacts that are assessed in terms of equivalency. For example, greenhouse gases are often measured in terms of equivalent CO2 emissions. The proposed new metric for aerosols discussed in the previous section uses the unit equivalent AOD. These equivalencies are not dependent on other actors. If a company emits 50 kg of nitrous oxide, this is equivalent to 14.9 t of CO2e no matter what other companies are doing.
There is no question that the scarcity of a source of water should be considered when environmental impacts of an activity are assessed. However, we disagree with the water weighting approach as it is inconsistent with environmental accounting practices. There are other ways that regionality could be included in planetary accounts. For example, a binary water scarcity indicator (yes/no) could be reported alongside the net water footprint to convey the suitability of the water source. Regional issues for other environmental currencies could be included in a similar way. Further work should be undertaken to explore this.
Planetary Accounting is not intended as the one super-system to resolve all environmental problems. The purpose of Planetary Accounting is to allow humanity to manage human activity such that it does not push the Earth system into a new geological state. There are many local environmental problems that do not translate simply into planetary limits – for example, land instability and polluted waterways due to poor farming practices, light pollution, and the urban heat island effect. Planetary Accounting does not replace local environmental management: problems created locally and solvable locally must be dealt with at a local level. The Planetary Boundaries and the PQs derived from them are a context of limits that can be translated into action on these global constraints at various levels of activity.
The PQs show what is needed to return to and remain within the safe operating space of the PBs. They define an end goal rather than a pathway of reductions. There is no timeframe associated with any of the PQs except the PQ for CO2 emissions. This is because the CO2 budget is based on cumulative emissions, so the longer we delay in achieving this PQ, the more stringent it would need to be. However, at any time that any of the Quotas or Boundaries are not respected, humanity is at risk of departure from a Holocene-like state. We should work to live within the PQs as soon as possible, and where, as with the climate change boundary, we have already exceeded them, we must rapidly get to work reclaiming a safe operating space.
Comparing planetary quotas
We have not proposed a mechanism to compare one Quota to another or to amalgamate the results of environmental assessments into a single indicator of sustainability. This is intentional. The Earth cannot amalgamate these environmental currencies or trade one for another. If we consume too much water, this cannot be resolved by emitting less carbon, though it is appreciated that there is a nexus between water and carbon. At a global scale, each of the Quotas must be respected if we are to operate within the Planetary Boundaries.
The planetary quotas are a moving target, not a static value
The Earth system is dynamic and the rate of increase in scientific understanding of its processes and limits is high. There is not time to wait until we have a perfect understanding of the system or its limits before we take action to operate within these – this may never eventuate. The indicators and limits presented in this paper are intended to be preliminary. It is our intention that, like the Planetary Boundaries, these are subjected to scrutiny, discussion, and analysis, and are regularly reviewed and updated over time as we advance in our collective knowledge and understanding.
Opportunities for planetary accounting in practice
The PAF has been designed with a high degree of flexibility with the intention that this could enable a wide range of applications that go beyond those envisioned by the authors. Some of the applications it could be used for are discussed here.
We have not included a mechanism to enable the trading of one PQ currency for another; however, this does not preclude the opportunity to trade in each of the Quota currencies at lower scales. On the contrary, Planetary Accounting provides an opportunity for a global trading system for key global environmental “currencies”, and in the process firms can see how these parameters interact and are synergistic. Moreover, the real costs to humanity of exceeding planetary limits – i.e., the costs of adaptation and mitigation – or the value of undershoot – i.e., the money not spent because nature provides a service – could be used to assign a monetary value to each environmental currency, for example $X/kg of nitrogen. The true cost to society might only be known in hindsight. However, if values were assigned to each unit of environmental currency, companies could make money from the restoration and maintenance of Earth-system processes. Such an exercise could facilitate the incorporation of environmental impacts into existing global economic frameworks, thus enabling a further decoupling of wealth creation and environmental footprint.
Behaviour change programmes such as a smart phone application could be based on the Planetary Quotas. In a live game, individuals could compete with friends and strangers across the globe to live within their share of the planet’s limits. The same could be used by firms wanting to create a market for new design and technology products and services.
To facilitate better producer and consumer responsibility, a product labelling system similar to the nutritional facts labelling system for food could be developed based on the Quotas (see Fig. 8). Whether this was displayed on products as part of a labelling system, or simply made available online, companies could use such a system to communicate the impacts of goods and services in different environmental currencies. A global labelling scheme could also provide an opportunity to address the regional variation of some Quotas (such as the water Quota), discussed earlier.
Further work would be required to determine the appropriate format, inclusions, and exclusions for a labelling system such that it could be both accessible to a wide audience, and implementable for producers.
One of the major limitations of the PAF is a lack of available data. It would not currently be possible for a person to determine the impacts of her consumption to compare against her PQ, as it would be difficult to estimate the impacts of most of the products and services that she uses. A company seeking to understand the impacts of its products may not be able to obtain data on the impacts of the extraction of raw materials. The time and cost associated with obtaining the data needed for a detailed environmental impact assessment can often be prohibitive. The availability of data and simplification of environmental impact assessments will be an important area of future work to make planetary accounting feasible.
In many applications double counting of impacts is to be expected – for example the impacts of a person’s consumption of a litre of milk would be counted in her accounts, the milk producer’s accounts, the city’s consumption-based accounts and the production accounts for the region where the milk was made. This sort of double counting is not a problem. However, if financial transactions are based on overshoot and undershoot, further work will be required to develop a system to manage double counting.
Another limitation of the PAF is that it has not yet been applied and evaluated as an instrument to guide policy, business, or behavioural decisions. In the development of the concept, and particularly of the framework, much effort was taken to envision the different applications to determine and address potential weaknesses of the system. However, there is no substitute for real world applications.
Humankind has the scientific knowledge needed to manage the Anthropocene and ensure a Holocene-like state of the environment is retained; but we will need to change as the limits expressed by the Planetary Boundaries are being approached or exceeded. There is evidence that a poly-scalar approach is the most effective change mechanism to manage the global commons through engaging different levels of human activity. Environmental accounting has advanced to the point that we can estimate what the environmental impacts of an activity are or will be. These three theories are advanced in the literature but are disconnected from one another. The Planetary Accounting Framework based on the new Planetary Quotas brings these three theories together.
Planetary Accounting is a novel framework that could facilitate an unprecedented, global, multi-scaled approach to managing the Earth system. There will undoubtedly be many ways to improve the system suggested here but this paper has started a process that can allow scientists and policy makers to work in a more concerted way to help create a future where the planet remains in the safe operating space.
Footnote 1: Manuscript in preparation.
Abbreviations
AODe: Aerosol Optical Depth equivalent
CO2e: Carbon dioxide equivalent
E/MSY: Extinctions per million species per year
LCA: Life Cycle Assessment
Me-NO: Methane and Nitrous Oxide
PAF: Planetary Accounting Framework
ppm: Parts per million
Skinner BJ. The blue planet : an introduction to earth system science / Brian J. Skinner, Barbara Murck. 3rd ed. Hoboken: Wiley; 2011.
Severinghaus JP, Sowers T, Brook EJ, Alley RB, Bender ML. Timing of abrupt climate change at the end of the younger Dryas interval from thermally fractionated gases in polar ice. Nature. 1998;391:141.
Hublin J, et al. Nature. 2017;546:289–92.
Rockström J, Steffen W, Noone K, Persson Å, Chapin FS, Lambin EF, Lenton TM, Scheffer M, Folke C, Schellnhuber HJ, et al. A safe operating space for humanity. Nature. 2009;461:472–5.
Rockström J, Steffen W, Noone K, Persson A, Chapin FS, Lambin E, Lenton TM, Scheffer M, Folke C, Schellnhuber HJ, et al. Planetary boundaries: exploring the safe operating space for humanity. Ecol Soc. 2009;14:32.
Crutzen PJ. Geology of mankind. Nature. 2002;415:23.
Zalasiewicz J, Williams M, Haywood A, Ellis M. The Anthropocene: a new epoch of geological time? Philos Trans R Soc A Math Phys Eng Sci. 2011;369:835–41.
IPCC. Summary for policymakers. In: Stocker TF, Qin D, Plattner G-K, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, Midgley P, editors. Climate change 2013: the physical science basis contribution of working group I to the fifth assessment report of the intergovernmental panel on climate change. Cambridge and New York: Cambridge University Press; 2013.
IPCC. Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, New York: Cambridge University Press; 2013.
Trenberth K. Volume 1, the earth system: physical and chemical dimensions of global environmental change. In: MacCracken MaP J, editor. Encyclopedia of Global Environmental Change. Chichester: Wiley; 2002. p. 1.
Loutre MF, Berger A. Future climatic changes: are we entering an exceptionally long interglacial? Clim Chang. 2000;46:61–90.
Steffen W, Richardson K, Rockström J, Cornell SE, Fetzer I, Bennett EM, Biggs R, Carpenter SR, De Vries W, De Wit CA, et al. Planetary boundaries: guiding human development on a changing planet. Science. 2015;347:1259855.
Olson M. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge: Harvard University Press; 1965.
Hardin G. The tragedy of the commons. Science. 1968;162:1243–8.
Gordon HS. The economic theory of a common-property resource: the fishery. J Polit Econ. 1954;62:124–42.
Ostrom E. Governing the commons : the evolution of institutions for collective action / Elinor Ostrom. Cambridge, New York: Cambridge University Press; 1990.
Brondizio ES, Ostrom E, Young OR. Connectivity and the Governance of Multilevel Social-Ecological Systems: The Role of Social Capital. Annu Rev Environ Resour. 2009;34:253–78.
Ostrom E. Polycentric systems for coping with collective action and global environmental change. Glob Environ Chang. 2010;20:550–7.
Liu L. A New Perspective for Combating Global Climate Change. Transnational Corporations Review. 2010;2:78–81.
Osofsky HM. The geography of solving global environmental problems: reflections on polycentric efforts to address climate change. NYL Sch L Rev. 2013;58:777–931.
Eon C, Morrison G, Byrne J. The influence of design and everyday practices on individual heating and cooling behaviour in residential homes. Energy Efficiency. 2018;11:273–93.
Steg L, Vlek C. Encouraging pro-environmental behaviour: an integrative review and research agenda. J Environ Psychol. 2009;29:309–17.
Steg L. Values, norms, and intrinsic motivation to act Proenvironmentally. Annu Rev Environ Resour. 2016;41:277–92.
Eon C, Morrison GM, Byrne J. Unraveling everyday heating practices in residential homes. Energy Procedia. 2017;121:198–205.
Newman P, Kenworthy J. The end of automobile dependence : how cities are moving beyond car-based planning. Washington: Island Press; 2015.
Bernard T, Young J. The ecology of Hope: communities collaborate for sustainability. Gabriola Island: New Society Publishers; 1997.
Freeman DM. Local level organizations for local development: concepts and cases of irrigation organization. Boulder: Westview Press; 1989.
Korten D. Introduction: community-based resource management. In: Korten D, editor. Community Management: Asian Experience and Perspectives. Hartford: Kumarian Press; 1987.
Korten D, Klauss R. People Centred development: contributions toward theory and planning frameworks. Hartford: Kumarian Press; 1984.
McCay BJ, Acheson JM. The question of the commons. The culture and ecology of communal resources. Tucson: University of Arizona Press; 1987.
National Research Council. Proceedings of the Conference on Common Property Resource Management. Washington: National Academy Press; 1986.
Ostrom E. The rudiments of a theory of the origins, survival and performance of common property institutions. In: Korten D, editor. Making the Commons Work. Hartford: Kumarian Press; 1988.
Siy RY. Community resource management: lessons from the Zanjera. Manila: University of the Phillipines Press; 1982.
McKean M. Common Property: What Is It, What Is It Good For, and What Makes It Work? In: Gibson C, McKean M, Ostrom E, editors. Forest resources and institutions. Rome: The Food and Agriculture Organization of the United Nations; 1998.
Ostrom E. A polycentric approach for coping with climate change. Policy Research Working Paper 5095. Washington: World Bank; 2009.
Newman P. Can the magic of sustainability revive environmental professionalism? Greener Management International. 2005;49:11–23.
Griffin M. Assumptions for success: a manager’s use of McGregor’s Y-theory assumptions produces significant changes in staff attitudes and performance. Nurs Manag (Harrow). 1988;19:32U–X.
We acknowledge Alan Merry for his extensive feedback on early drafts of this manuscript.
Kate Meyer received an Australian Postgraduate Award and a Curtin University Postgraduate Scholarship as a PhD stipend for the duration of the research.
Australia Domain Administration (AuDA) provided a top-up scholarship for Kate Meyer for the duration of the research.
The Editor-in-Chief of the journal, Peter Newman, is an author of this article. The content was independently reviewed by peers in the field and the decision to accept this article for publication was made by a member of the Editorial Board. Peter’s position did not have any conscious influence on this decision. The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Peter Newman was not involved in any part of the review process associated with this paper.
Cite this article
Meyer, K., Newman, P. The Planetary Accounting Framework: a novel, quota-based approach to understanding the impacts of any scale of human activity in the context of the Planetary Boundaries. Sustain Earth 1, 4 (2018). https://doi.org/10.1186/s42055-018-0004-3
Keywords: Planetary boundaries; Environmental accounting; Poly-scalar management; Environmental impact assessment; Planetary accounting; Planetary quotas
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9290722608566284,
"language": "en",
"url": "https://thehub.press/2020/07/22/the-austrian-tradition-in-economics-free-thoughts/",
"token_count": 181,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.205078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:c8d7473f-792c-4149-a26c-b33d08aecc08>"
}
|
The Austrian Tradition in Economics
This week we are joined by Peter J. Boettke, who explains the history and tenets of the Austrian tradition in economics. Boettke traces the school’s history from Carl Menger through Eugen Böhm-Bawerk and Joseph Schumpeter, Ludwig von Mises, Friedrich Hayek, and Murray Rothbard to contemporary economists such as Israel Kirzner, Vernon Smith, and Mario Rizzo. He explains what Austrian economics does and does not do, and distinguishes between what he calls “mainline” economics and “mainstream” economics.
What distinguishes Austrian economics from other schools of thought in economics? How did the Austrian school come to be known as the free market school?
Show Notes and Further Reading
Peter J. Boettke, Living Economics: Yesterday, Today, and Tomorrow (book)
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.944455623626709,
"language": "en",
"url": "https://www.edocr.com/v/f62f620f/cgroner/p-classfr-tagfinancial-aid-wisdom-paying-for-colle",
"token_count": 488,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0224609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:be3cc169-4270-4848-b16c-b78483a40401>"
}
|
Financial Aid Wisdom – Practical Tips about Paying for College
Copyright © 2012 by Fastweb LLC. All rights reserved. Visit www.fastweb.com and www.finaid.org for help planning and paying for college.
Tips about Saving for College
College costs double every decade and triple in the 17 years from birth to college enrollment.
It is cheaper to save than to borrow. If you save $200 a month for 10 years at 6.8% interest, you will accumulate $34,433. If instead of saving, you borrow $34,433 at 6.8% interest with a 10-year repayment term, you will pay $396 a month, almost twice as much.
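A short Python sketch (not from the original guide) can verify these figures. It assumes monthly compounding at 6.8%/12 and deposits made at the start of each month (an annuity due), which is what reproduces the $34,433 figure; the loan payment uses the standard amortization formula.

```python
def future_value(monthly_deposit, annual_rate, years):
    """Future value of monthly deposits made at the start of each month."""
    r = annual_rate / 12
    n = years * 12
    return monthly_deposit * ((1 + r) ** n - 1) / r * (1 + r)

def loan_payment(principal, annual_rate, years):
    """Level monthly payment on a fully amortizing loan."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

saved = future_value(200, 0.068, 10)
payment = loan_payment(saved, 0.068, 10)
print(f"Saved after 10 years: ${saved:,.0f}")    # ~ $34,433
print(f"Monthly loan payment: ${payment:,.0f}")  # ~ $396
```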
Time is your greatest asset. Start saving for college as soon as possible. If you start saving at birth, about a third of the college savings goal will come from interest on the contributions. If you wait until your child enters high school, less than 10% will come from interest.
a dollar less you will have to borrow. Every dollar you borrow
will cost you about two dollars by the time you repay the debt.
The one-third rule: Plan on saving a third of projected college costs, or the full 4-year costs the year the baby was born. Like most life-cycle expenses, college costs will be spread out over time, with about one third coming from past income (savings), about one third from current income and financial aid, and about one third from future income (loans). Since college costs increase by about a factor of three over any 17-year period and 3 x 1/3 = 1, that suggests that your college savings goal should be the full 4-year cost of college the year the baby was born.
You might not be able to predict which college your child will attend, but you probably can predict the type of college, such as an in-state public 4-year college, out-of-state public 4-year college or a non-profit 4-year college. For a baby born in 2012, this means saving $250/month, $400/month and $500/month, respectively.
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9696769118309021,
"language": "en",
"url": "https://www.mbceconomy.com/could-you-do-with-a-cash-advance-check-this-out/",
"token_count": 1198,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": -0.0830078125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d8754c61-defb-4b65-b69d-d16d4d3d94ae>"
}
|
A Business Cash Advance is a short-term loan advance made by a financial institution, such as a bank or an alternative lender, to a business. The term also refers to a service provided by credit card issuers that allows cardholders to withdraw a certain amount of cash against their available balance, up to a certain limit. It typically incurs a charge of between 3 and 5% of the amount borrowed.
The interest on credit card cash advances is often higher than on other transactions. This is because purchases of items viewed as cash equivalents made with credit cards are treated as cash advances under the guidelines of the credit card network, and therefore incur a much higher interest rate with no grace period.
These purchases include prepaid debit cards, gaming chips and money orders, as well as some taxes and fees paid to the government. However, unless stated otherwise, these purchases are processed as regular credit transactions.
Although they carry high interest rates and fees, cash advances remain highly attractive to borrowers because they offer fast approval and funding.
Types of Cash Advances
Although there are different types of cash advances, the common denominator is that they are all advances on an available balance that carry steep interest rates and fees. The types of cash advances available include:
Credit Card Advances
This is the most popular form of cash advance. Depending on the credit card company, the money can be withdrawn from an ATM or via a cheque deposited or cashed at the bank.
These advances carry an interest rate higher than that on regular purchases: on average about 24%, roughly 9 percentage points higher than the average Annual Percentage Rate (APR) for purchases. Interest starts to accrue immediately, with no grace period, and the advance must be repaid by a stipulated date.
This form of cash advance also includes a fee, which might be a flat rate or a percentage of the amount advanced. If an ATM is used to withdraw the funds, a small usage fee is charged as well. The cash advance balance is tracked separately from the purchase balance, though both can be paid monthly. However, if you pay only the minimum amount due, the credit card company is allowed to apply it to the balance with the lower interest rate. Cash advances can be obtained quickly and easily.
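To make the cost structure concrete, here is a rough, illustrative Python sketch. The 5% fee and 24% APR are assumed values consistent with the ranges cited above, and simple daily interest from day one (no grace period) is assumed.

```python
def advance_cost(amount, fee_pct=0.05, flat_fee=10.0, apr=0.24, days=30):
    """Approximate out-of-pocket cost of a credit card cash advance."""
    fee = max(flat_fee, amount * fee_pct)  # fee is a flat rate or a percentage
    interest = amount * apr / 365 * days   # interest accrues immediately
    return fee + interest

print(f"Cost of a $500 advance held 30 days: ${advance_cost(500):.2f}")
```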
Merchant Cash Advances
These are loans made to companies and businesses by banks or alternative lenders. Businesses whose credit is less than perfect use merchant cash advances to finance their activities, repaying them with a share of future credit card receipts or of the funds the business receives from sales in its online account. Rather than relying on the business’s credit score, these alternative lenders assess creditworthiness by looking at multiple pay sources.
Payday Cash Advance
This type of cash advance is usually issued by specialized payday lenders, with fees and interest rates sometimes exceeding 100%. The amount of cash to be loaned is often determined by the applicant’s paycheque and government regulations. These loans are short term and must be repaid by the next payday unless the loan is extended, in which case additional interest is added. Such loans may also be offered by employers to their employees.
How A Merchant Cash Advance Can Help Your Business
Oftentimes, businesses require a sizeable amount of capital to fuel growth and success. If that cash is not at hand, a merchant cash advance can provide the funds needed. Here are five ways taking out a merchant cash advance can help your business:
Purchasing equipment
Every business has its unique tools and materials which make carrying out its functions easier, and yours is no exception. This equipment boosts your business’s efficiency and increases production rates, in turn increasing your profit margin. However, without funds it would be almost impossible to acquire such equipment, so many businesses turn to merchant cash advances for the funds used to acquire it. The merchant cash advance works for all businesses, regardless of industry or field.
Hiring employees
A merchant cash advance provides funding that can be used to pay existing employees and also to hire new ones. Hiring more hands speeds up production and increases productivity; rather than running a one-man show, you can take on extra staff. If you lack the funds to do this, then taking a cash advance may be the best option for you.
Stocking inventory
One source of funds for purchasing inventory is a cash advance. To make sales, goods must be readily available. The funding required to stock inventory can come from a cash advance, which allows you to pay your suppliers immediately and have goods available for sale to your customers. Cash advances are also well suited to stocking high-demand or seasonal products.
Advertisement and marketing
Without advertisement, your business might remain unknown even if it is the most exceptional in its field. It is therefore essential to market and advertise your business to improve its visibility. The expenses of advertising and marketing can be covered using small cash advances.
Business expansion
However successful a business is, there comes a time when expansion is needed to reach a larger market and a bigger customer base. While you might argue that you cannot afford such expansion, cash advances provide an advance on these much-needed funds, allowing you to carry out your business’s expansion.
It is no news that small businesses often run out of the funds required to carry out essential functions. In these situations cash advances come into play, giving your business another source of cash flow besides its earnings. They can be obtained more quickly than other loans, meaning they are available at any time, which gives them a significant advantage over more traditional forms of lending.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9592359066009521,
"language": "en",
"url": "https://www.mic.com/articles/23846/obama-deficit-2013-we-might-never-be-able-to-reduce-our-deficit-here-s-why",
"token_count": 1058,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.390625,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:dbe2cb0d-7501-4862-9840-2ad2d6c94b41>"
}
|
Let's review some of the cold hard facts about the current financial condition of the United States. Deficits will not become surpluses in the foreseeable future. Discretionary spending for social programs is going to get squeezed as more and more money flows into big entitlements. Surpluses from promised improved economic conditions will be woefully small compared to the growing national debt. The president's plan to address financial problems does not yet include any meaningful spending cuts. (Note: in a recent interview on Meet the Press, the president said he reduced spending by $1 trillion in 2011. Actually, he increased federal spending by $147 billion and increased the national debt by more than $1 trillion.)
If the country moves forward and does not address spending, several issues will eventually drag our nation down. They include:
• The U.S. will not be able to borrow and/or print money indefinitely without serious repercussions. These could include reluctance on the part of global creditors to lend, which in turn would increase borrowing rates for the Treasury. Similarly, printing money will eventually increase inflation, despite assurances to the contrary. Hyperinflation would have a devastating impact on our nation’s purchasing power and the cost of living for our citizens.
• Keynesians, such as The New York Times' Paul Krugman, continuously assure us that spending is the key to financial recovery. This concept is axiomatic, to a degree. There is a level of spending by the government that surely will prop up the economy thereby avoiding significant long-term damage. However, it has limits, and the longer Congress waits to address spending and the resultant national debt, the more overbearing the problem will become. There seems to be no middle ground for Keynesians when it comes to spending.
• An economic recovery in the U.S. strong enough to make a meaningful dent in the deficit has never happened, and nothing of the sort is likely to reduce the projected levels of debt during the next decade. If projections of $25 trillion or more are even close to accurate, the U.S. will be in hock for a very long time, even assuming it posts $1-3 trillion surpluses indefinitely, a virtual impossibility based upon history (see below).
• The burden of greater debt is going to fall on the shoulders of millennials if entitlements are not reformed. The sh*t will hit the fan in approximately 2033. So, if you are about 42 years old, you may have a problem collecting Social Security when you turn 62. These payments along with Medicare, which also becomes available at about the same age, are surely in jeopardy based upon forecasts. An effort to slow the drain on Social Security caused by the vast number of baby boomers reaching the eligible age will only serve to benefit young people. The choices are to do nothing and receive much less when you need it, or reform the system now and assure it will be solvent when the next generation needs money in retirement. Ironically, this issue has no direct impact on the rich, if you assume the highest tax rates remain the same indefinitely.
It is shocking that the president is refusing to even consider entitlement reform, and more shocking that millennials, the generation that will be most affected by inordinately high levels of debt, are not clamoring for action. Frankly, many older people already receive Social Security payments and will likely be gone before payments are in serious jeopardy. My personal interest in this problem is not for my own sake but for my children and grandchildren.
To make progress on the debt issue, several myths should be debunked. The economy cannot realistically grow fast enough to bring the national debt under control. Technically, we are not currently in a recession, and yet our annual deficits are still $1 trillion or more. During the Clinton administration, surpluses (if you believe the numbers are accurate) only reached a high point of $230 billion, the highest in history. Should Americans believe anyone who projects an economic recovery significant enough to make a dent in the current $16 trillion national debt? I do not.
Excessively increasing the money supply will increase inflation at some point. The last thing our citizens need is higher costs without accompanying higher wages. I am dumbfounded that some economists have been lured into such a false sense of security about inflationary risk; maybe they were born after the last round of devastating inflation in the 1980s that affected our country.
Spending cuts do not directly affect rich people, the money not spent on pork barrel projects, outdated social programs, and the like does not go into the pockets of the 1%. It is true that taxes are less likely to increase if the country's financial condition is brought under control. But, there is a limit to the amount of tax hikes the system can withstand. It should also be noted that Obama reaped $600 billion over ten years from the wealthy on January 1, 2013. This amount is a rounding error compared to $16 trillion of debt.
Millennials are on deck. They will experience the brunt of future cuts in services and a likely tax hike for the middle class. Twenty years will pass quickly, and the next generation will be hurt badly if Congress continues to kick the can down the road.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9384369254112244,
"language": "en",
"url": "https://www.motilaloswal.com/blog-details/One-important-thing-that-you-should-be-assured-of---the-future/1050",
"token_count": 780,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.32421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:d273d460-5757-4f30-a33c-864e068a6797>"
}
|
Not everyone likes living in the present, especially traders. They prefer planning their finances for the future in the present, for one universal reason – to be ready for contingencies in the time to come. Having said that, traders are only human and don't always have the clarity to make informed decisions about the time yet to pass. But the severity of future emergencies can be kept somewhat under control if they use the right investment instrument. For example, if a trader sells chocolates, he must ensure that an increase in the future price of cocoa doesn't hurt his future sales. For this he can enter into a contract that safeguards his interests against inflation. This contract is called a futures contract.
Futures are financial contracts that obligate buyers to purchase, or sellers to sell, commodities or assets such as a financial instrument at a predetermined future date and price (also called the futures price). The futures contract comes with complete details on the quality and quantity of the commodity or asset, and is standardised for trading on a futures exchange.
However, don't confuse futures with options. The difference between the two is that the latter gives the holder the right to buy or sell the commodity or asset at expiration, while the holder of a futures contract is obligated to fulfill the terms of the contract. A futures contract can, however, come with options that give the initial holder the right to enter the long side of the contract and buy the commodity or asset at the futures price. The short side of the contract obligates the seller to sell the commodity or asset at the futures price. There can be either of two players in the futures market – the hedger and the speculator.
A hedger buys or sells in the futures market to secure the future price of a commodity or asset to be sold at a later date in the cash market, protecting himself against price risks. A hedger buying a commodity always tries to secure the lowest possible futures price, whereas a hedger selling a commodity always tries to secure the highest futures price. The futures contract provides price certainty for both buyer and seller, thus reducing the risks associated with price volatility.
A speculator, however, looks to profit from the futures price changes that hedgers protect themselves against. Hedgers minimise risk, while speculators take on risk in pursuit of maximum profit.
| Futures contract type | Hedger | Speculator |
| --- | --- | --- |
| Short | Secure futures price now as security against future declining prices | Secure futures price now in anticipation of declining prices |
| Long | Secure futures price now as security against future rising prices | Secure futures price now in anticipation of rising prices |
Oh, there's one more – spot price
Unlike the futures price, the spot price is the current rate at which goods can be bought or sold at a specified time and place. A good’s spot price is its unambiguous value at any given time in the marketplace. The spot price of a commodity can, however, be affected by supply and demand. For example, the prices of precious metals go up in difficult times in anticipation of a possible increase in demand for the metals.
Spot prices are used in the pricing of futures contracts for commodities, derivatives or assets. The futures price is calculated using the commodity’s spot price, the risk-free rate of return and the time the contract takes to mature (including costs associated with storage or convenience). Similarly, the spot price can be determined from the futures price.
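The relationship just described is usually formalized as the cost-of-carry model. The sketch below is illustrative: continuous compounding is assumed, and all the numbers are made up for the example.

```python
import math

def futures_price(spot, risk_free, storage_cost, convenience_yield, years):
    """Theoretical futures price under the cost-of-carry model."""
    carry = risk_free + storage_cost - convenience_yield
    return spot * math.exp(carry * years)

def implied_spot(futures, risk_free, storage_cost, convenience_yield, years):
    """Back out the spot price from a quoted futures price."""
    carry = risk_free + storage_cost - convenience_yield
    return futures * math.exp(-carry * years)

f = futures_price(spot=100.0, risk_free=0.05, storage_cost=0.02,
                  convenience_yield=0.01, years=0.5)
print(f"Futures price: {f:.2f}")  # ~103.05
print(f"Implied spot:  {implied_spot(f, 0.05, 0.02, 0.01, 0.5):.2f}")
```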
So if you are a trader reading this article, you have learnt what it means to keep your eyes on the futures prices of commodities. After all, a business once started should only keep growing, unaffected by unforeseen events. And if you are an investor, you are now better informed to decide whether you want to be the hedger or the speculator.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9126805067062378,
"language": "en",
"url": "https://www.nature.com/articles/ncomms10244",
"token_count": 13271,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.24609375,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:475b305c-04d1-45c0-b46a-abb404f4c758>"
}
|
Fisheries data assembled by the Food and Agriculture Organization (FAO) suggest that global marine fisheries catches increased to 86 million tonnes in 1996, then slightly declined. Here, using a decade-long multinational ‘catch reconstruction’ project covering the Exclusive Economic Zones of the world’s maritime countries and the High Seas from 1950 to 2010, and accounting for all fisheries, we identify catch trajectories differing considerably from the national data submitted to the FAO. We suggest that catch actually peaked at 130 million tonnes, and has been declining much more strongly since. This decline in reconstructed catches reflects declines in industrial catches and to a smaller extent declining discards, despite industrial fishing having expanded from industrialized countries to the waters of developing countries. The differing trajectories documented here suggest a need for improved monitoring of all fisheries, including often neglected small-scale fisheries, and illegal and other problematic fisheries, as well as discarded bycatch.
Marine fisheries are the chief contributors of wholesome seafood (finfish and marine invertebrates; here ‘fish’). In many developing countries (and likely also in many ‘transition‘ countries), fish is the major animal protein source that rural people can access or afford1; and they are also an important source of micronutrients essential to people with otherwise deficient nutrition2. However, the growing popularity of fish in countries with developed or rapidly developing economies creates a demand that cannot be met by fish stocks in their own waters (for example, the EU, the USA, China and Japan). These markets are increasingly supplied by fish imported from developing countries, or caught in the waters of developing countries by various distant-water fleets3,4,5, with the consequences that:
Industrially caught fish has become a globalized commodity that is mostly traded between continents rather than consumed in the countries where it was caught7, and
The small-scale fisheries that traditionally supplied seafood to coastal rural communities and the interior of developing countries (notably in Africa)8 are forced to compete with the export-oriented industrial fleets without much support from their governments.
The lack of attention that small-scale fisheries suffer in most parts of the world9 manifests itself in potentially misleading statistics that are submitted annually by many member countries of the Food and Agriculture Organization of the United Nations (FAO), which may omit or substantially underreport small-scale fisheries data10. FAO harmonizes the data submitted by its members, which then becomes the only global data set of fisheries statistics in the world, widely used by policy makers and scholars11.
This data set, however, may not only underestimate artisanal (that is, small scale, commercial) and subsistence fisheries10, but also generally omit the catch of recreational fisheries, discarded bycatch12 and illegal and otherwise unreported catch, even when some estimates are available13. Thus, except for a few obvious cases of over-reporting14, the landings data updated and disseminated annually by the FAO on behalf of member countries may considerably underestimate actual fisheries catch. While this underestimation is widely known among many fisheries scientists working with FAO catch data, and is freely acknowledged by FAO, its global magnitude has not been explicitly presented until now.
Here we present the results of an approach called ‘catch reconstruction’15,16 that utilizes a wide variety of data and information sources to derive estimates for all fisheries components missing from the official reported data. We find that reconstructed global catches between 1950 and 2010 were 50% higher than data reported to FAO suggest, and are declining more strongly since catches peaked in the 1990s. These findings and the country-specific technical work underlying these results will hopefully contribute to member countries submitting more accurate fisheries statistics to FAO. Such improved and more comprehensive data contribute a foundation that can facilitate the implementation of ecosystem-based fisheries management17, which is a component of the ‘FAO Code of Conduct for Responsible Fisheries’18.
The sum of the reconstructed catches of all sectors in all Exclusive Economic Zones (EEZs) of the world, plus the catch of tuna and other large pelagic fishes in the High Seas leads to two major observations (Fig. 1; Supplementary Table 1). First, the trajectory of reconstructed catches differs substantially from those reported by FAO on behalf of its member countries. The FAO statistics suggest that, starting in 1950, the world catch (actually ‘landings’, as discarded catches are explicitly excluded from the global FAO data set) increased fairly steadily to 86 million tonnes (mt) in 1996, stagnated and then slowly declined to around 77 mt by 2010 (Fig. 1). In contrast, the reconstructed catch peaked at 130 mt in 1996 and declined more strongly since. Thus, the reconstructed catches are overall 53% higher than the reported data.
Furthermore, since the year of peak catches in 1996, the reconstructed catch declined strongly at a mean rate of −1.22 mt·per year, whereas FAO, at least until 2010, described the reported catch cautiously as characterized by ‘stability’19,20, though it exhibited a gradual decline (−0.38 mt·per year). The reconstructed total catches therefore represent a decline of over three times that of the reported data as presented by FAO on behalf of countries. A segmented regression21 identifies two breakpoints in the catch time series (that is, change in trend) of the reconstructed total catches as well as the reported catches. These are in 1967 as a result of a changing slope of the catch time series from a stronger increase prior to 1967 (reconstructed catches=2.82 mt·per year; reported catches=1.88 mt·per year) to a slower increase after 1967 (reconstructed catches=1.86 mt·per year; reported catches=1.30 mt·per year). The second breakpoint is in 1996 (the year of peak catch), with a subsequently decreasing trend (that is, slope) of −1.22 mt·per year for reconstructed catches and −0.38 mt per·year for reported catches, as also presented for the simple regression above (Fig. 1; see also Supplementary Table 2).
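As an illustration of the method (not the authors' actual code), a continuous piecewise-linear regression with the two reported breakpoints treated as known can be fitted by ordinary least squares; the segmented regression in the paper also estimates the breakpoints themselves, which is omitted here for simplicity. The catch series below is synthetic, built from the slopes reported above.

```python
import numpy as np

years = np.arange(1950, 2011, dtype=float)
# Synthetic reconstructed-catch series (mt) with the reported slopes.
catch = np.where(years <= 1967, 20.0 + 2.82 * (years - 1950),
         np.where(years <= 1996, 67.9 + 1.86 * (years - 1967),
                  121.9 - 1.22 * (years - 1996)))
catch = catch + np.random.default_rng(1).normal(0, 2, years.size)

# Continuous piecewise-linear basis: intercept, base slope, and
# slope changes at each known breakpoint (hinge terms).
X = np.column_stack([np.ones_like(years), years - 1950,
                     np.maximum(years - 1967, 0),
                     np.maximum(years - 1996, 0)])
beta, *_ = np.linalg.lstsq(X, catch, rcond=None)
print(f"slope 1950-1967: {beta[1]:.2f} mt/yr")
print(f"slope 1967-1996: {beta[1] + beta[2]:.2f} mt/yr")
print(f"slope 1996-2010: {beta[1] + beta[2] + beta[3]:.2f} mt/yr")
```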
Note that the recent, stronger decline in reconstructed total catches is not due to some countries reducing catch quotas so that stocks can rebuild. For example, a similar decline (−1.01 mt·per year) in reconstructed catches is obtained when the catch from the United States, Northwestern Europe, Australia and New Zealand (that is, countries where quota management predominates) is excluded (Fig. 2; Supplementary Table 3).
Closer examination of the reconstructed versus reported catches in each of the 19 maritime FAO statistical areas suggests that some of the areas where industrial fishing originated, such as the Northwest Atlantic (FAO area 21), are the first regions of the world to demonstrate declining catches (Fig. 3). In contrast, lower-latitude areas demonstrate declines later, or still appear to have increasing catches, for example, the Indian and Western Central Pacific Oceans still showing generally increasing trends in reported catches (Fig. 3).
Catches by fishing sector
We present, for the first time, global reconstructed marine fisheries catches by fisheries sectors (Fig. 4; Supplementary Table 4). They are dominated by industrial fisheries, which contribute 73 mt of landings in 2010, down from 87 mt in 2000 (Fig. 4). At the global scale it is a declining industrial catch (combined with the smaller contribution of gradually reduced levels of discarding)12 that leads to declining global catches since 1996, while the artisanal sector, which generates a catch increasing from about 8 mt·per year in the early 1950s to 22 mt·per year in 2010, continues to show gradual growth in catches at the global scale (Fig. 4).
Also noticeable is that the inter-annual variations (small peaks and troughs) in both reconstructed catches and reported catches (Fig. 1) are mainly driven by industrial data, which are relatively well documented and reported in time series, while the small-scale sector data are smoother over time (Fig. 4), and more strongly influenced by continuity assumptions over time as part of the national reconstructions.
While some countries increasingly include subsets of artisanal catches in official catch statistics provided to FAO, subsistence fisheries catches (Fig. 4) rarely are10. Worldwide, subsistence fisheries caught an estimated 3.8 mt·per year between 2000 and 2010 (Fig. 4; Supplementary Table 4). The current global estimate of just under 1 mt·per year of recreational catches is rather imprecise, and recreational fishing is declining in developed, but increasing in developing countries.
Discarded bycatch, generated mainly by industrial fishing, notably shrimp trawling22, was estimated at 27 mt·per year (±10 mt) and 7 mt·per year (±0.7 mt) in global studies conducted for FAO in the early 1990s and 2000s, respectively23,24. However, these point estimates were not incorporated into FAO’s global ‘capture’ database, which thus consists only of landings. Here, these studies are used, along with numerous other sources, to generate time series of discards (Fig. 4). Discards, after peaking in the late 1980s, have declined, and during 2000–2010, an average of 10.3 mt·per year of fish were discarded.
Our reconstructed catch data, which combines the data reported to FAO with estimates of unreported catches (that is, reconstructed data are ‘reported FAO data+unreported catches’) include estimates of uncertainty (Fig. 1) associated with each national reconstruction. Note that many reconstructions are associated with high uncertainty, especially for earlier decades, for sectors such as subsistence which receive less data collection attention by governments, and for small countries or territories (Fig. 1; Supplementary Table 5)10. We include uncertainty estimates here, despite the fact that reconstructions address an inherent negative bias in global catch data (that is, address the ‘accuracy’ of data) and not the replicability of catch data collection (that is, the statistical ‘precision’ of such estimates), which is what ‘uncertainty’ estimates (for example, confidence limits) generally are used for. We do recognize that any estimates of unreported catches implies a certain degree of uncertainty, but so do officially reported data. Most countries in the world use sampling schemes, estimations and raising factors to derive their national catch data they officially report domestically and internationally, all without including estimates of the uncertainty inherent in the numbers being reported as official national catches.
Our comparison of the reconstructed versus reported catches in each of the 19 maritime FAO statistical areas suggests that some of the lower-latitude areas still appear to have increasing reported catches. This generally increasing trend is most pronounced in the Indian and Western Central Pacific Oceans (Fig. 3), where the reconstructed catches are most uncertain, as the statistics of various countries could only partially correct a regional tendency to exaggerate reported catches5. FAO’s Indian and Western Central Pacific Oceans areas are also the only ones with an increasing FAO reported catch, which, when added to that of other FAO areas, makes the FAO reported world catch appear more stable than it is based on our global reconstructions.
Our data and analyses show that, at the global scale, it is a declining industrial catch (plus a smaller contribution of gradually declining discards)12 that provide for the declining global catches, while artisanal fishing continues to show slight growth in catches (Fig. 4). Thus, the gradually increasing incorporation of artisanal and other small-scale catches in the officially reported data presented by FAO on behalf of countries is partly masking the decline in industrial catches at the global level. Since officially reported data are not (at the international level) separated into large-scale versus small-scale sectors25, this trend could not be easily documented until now. Obviously, these patterns may vary between countries. Furthermore, while parts of artisanal catches are increasingly included in official catch statistics by some countries, non-commercial subsistence fisheries catches, a substantial fraction of it through gleaning by women in coastal ecosystems such as coral reef flats and estuaries26 are generally neglected. The importance of subsistence fishing for the food security of developing countries, particularly in the tropical Indo-Pacific, cannot be overemphasized10,27.
Our preliminary and somewhat imprecise reconstruction of recreational catches indicates that this sector is largely missing from official reported data, despite FAO’s annual data requests explicitly allowing inclusion of recreational catch data. This activity, however, generates an estimated 40 billion USD·per year of global benefits, involves between 55 and 60 million persons, and generates about one million jobs worldwide28.
Finally, our country-by-country reconstructed data supports previous studies illustrating that global discards have decreased12,24. Discarded catches should therefore be included in catch databases, if only to allow for correct inferences on the state of the fisheries involved in this problematic practice.
The reconstructed catch data presented here for the first time for all countries in the world can contribute to formulating better policies for governing the world’s marine fisheries, with a first step being the recognition in national policies of the likely magnitude of fisheries not properly captured in the official national data collection systems. This recognition will hopefully contribute to improvements in national data collection systems, an aspiration that we share with FAO. For example, in Mauritania and Guinea Bissau, which, in large part as a result of the reconstructions29,30 and our ongoing direct engagement with these countries, are now initiating national data collection systems for recreational fisheries (a growth industry in both countries and missing from current data systems). It is hoped that this type of data, and other missing data (for example, subsistence catches)10, will be included in future national data reports to FAO, as is the case for some other countries such as Finland31. The taxonomic composition of this reconstructed catch (not presented here but available from the Sea Around Us and through the individual catch reconstruction reports, see Supplementary Table 5) can also contribute to the development of more useful first-order indicators of fisheries status32,33,34 than has been possible previously, especially in the absence of comprehensive stock assessments for all taxa targeted.
A policy change that would be straightforward for FAO to coordinate and implement with all countries around the world is to request countries to submit their annual catch statistics separately for large-scale and small-scale fisheries25, which would be an excellent contribution towards the implementation of the ‘Voluntary Guidelines for Securing Sustainable Small-scale Fisheries in the Context of Food Security and Poverty Eradication’ recently adopted and endorsed at the thirty-first Session of the FAO Committee on Fisheries and Aquaculture (COFI) in June 2014 (ref. 35). While we have found that many countries already have such data or data structure at hand, until all countries can implement such a data-change request, FAO could incorporate such a split into their internal data harmonization procedures, based, for example, on the same or similar information sources as used by the reconstructions.
The very high catches that were achieved globally in the 1990s were probably not sustainable. However, they do suggest that stock rebuilding, as successfully achieved in many Australian and US fisheries, and beginning to be applied in some European fisheries, is a policy that needs wider implementation, and which would generate even higher sustained benefits than previously estimated from reported catches36. On the other hand, the recent catch decline documented here is of considerable concern in its implication for food security, as evidenced by the decline in per capita seafood availability (Fig. 2). Note that the recent, strong decline in reconstructed total catches is also evident if catches in countries with well-established quota management systems (United States, Northwestern Europe, Australia and New Zealand) are excluded (Fig. 2). Low quotas are generally not imposed when a stock is abundant; rather low and reduced quotas in fully developed fisheries are generally a management intervention to reduce fishing pressure as a result of past overfishing. Similarly, it has been proposed that strongly declining catches in unmanaged, heavily exploited fisheries are likely a sign of overfishing32,33,34. The often raised suggestion that aquaculture production can replace or compensate for the shortfall in wild capture seafood availability, while being questionable for various reasons37, is not addressed here.
The last policy relevant point to be made here transcends fisheries in that it deals with the accuracy of the data used by the international community for its decision making, and the generation of factual knowledge that this requires. After the creation of the United Nations and its technical organizations, including the FAO, a major project of ‘quantifying the world’38 began to provide data for national and international agencies on which they could base their policies. As a result, large databases, for example on agricultural crops and forest cover, were created whose accuracy is becoming increasingly important given the expanding exploitation of our natural ecosystems39.
Periodic validation of these databases should therefore be a priority to ensure they avoid producing ‘poor numbers’40. For example, reports of member countries to FAO about their forest cover, when aggregated at the global level, suggest that the annual rate of forest loss between 1990/2000 and 2001/2005 was nearly halved, while the actual loss rate doubled when assessed by remote sensing and rigorous sampling41. Similarly, here we show that the main trend of the world marine fisheries catches is not one of ‘stability’ as cautiously suggested earlier by FAO42, but one of decline. Moreover, this decline, which began in the mid-1990s, started from a considerably higher peak catch than suggested by the aggregate statistics supplied by FAO members, implying that we have more to lose if this decline continues. Thankfully, this also means that there may be more to gain by rebuilding stocks.
For the global community, a solution could therefore be to provide the FAO the required funds to more intensively assist member countries in submitting better and more comprehensive fishery statistics, especially statistics that cover all fisheries components, and report data by sector25. Such improved statistics can then lead to better-informed policy changes for rebuilding stocks and maintaining (sea)food security. Alternatively, or in addition, FAO could team up with other groups (as was done for forestry statistics) to improve the fisheries statistics of member countries that often have fisheries departments with very limited human and financial resources.
Ultimately, the only database of international fisheries statistics that the world has (through FAO) can be improved. The more rapid decline of fisheries catches documented here is a good reason for this.
Catch reconstruction principles
The catch reconstruction approach rests on two basic principles16:
When ‘no data are available’ on a fishery that is known to exist, it is not appropriate to enter ‘NA’ or ‘no data’ into the database. Such entries will later be turned into a zero, which is a bad estimate of the catch of an existing fishery. This concern about the problematic ‘elegance of the number zero’ is also something that affects other scientific activities, such as climate modelling43;
Rather, a best estimate should be inserted in all such cases, based on the fact that fishing is a social activity that is bound to throw a ‘shadow’ on the society in which it is embedded, and from which an approximate and conservative (but better than zero) estimate of catch can be derived if fishing of this type is known to occur (for example, from the seafood or the fuel consumed locally, or the number of vessels engaged in fisheries and the average catch rate of vessels of this type and so on).
This approach addresses an inherent negative bias in national and, by extension, global catch data, although considerable uncertainty in catch data is likely to remain.
Notably, when doing reconstructions, it became apparent that the perception of ‘no data’ being available was not always correct: the ‘social shadow’ yields hundreds of articles in the peer-reviewed and report literature with catch data, or data from which catch rates could be inferred, even for remote islands10. Also, countries may sometimes send to FAO a stripped down version of the national catch data their fisheries research institutes actually possess, and may even publish on their websites.
What is covered here are both ‘coastal’ waters, defined as the waters within the EEZ (Supplementary Fig. 1) that countries have claimed since this was allowed under the United Nations Convention on the Law of the Sea (UNCLOS), or which they could claim under UNCLOS rules, but have not (such as many countries around the Mediterranean), and the open oceans, or High Seas, that is, the waters beyond national jurisdiction (that is, beyond the EEZs). The delineations provided by the Flanders’ Marine Institute (VLIZ, see www.vliz.be) are used for our definitions of EEZs. Countries that have not formally claimed an EEZ are assigned areas equivalent to EEZs based on the basic principles of EEZs as outlined in UNCLOS (that is, 200 nm and/or mid-line rules). Note that we (a) include territorial waters within our EEZs; and (b) treat disputed zones (that is, EEZ areas claimed by more than one country) as being ‘owned’ by each claimant with respect to their fisheries catches. We treat EEZ areas prior to each country’s year of EEZ declaration as ‘EEZ-equivalent waters’ (with open access to all fishing countries during that time). If the year of EEZ declaration could not be determined (and for ‘EEZs’ that were derived by us for non-claimant countries), we assign the year 1982 as declaration year, that is, the year of conclusion of UNCLOS.
We use different catch reconstruction approaches for EEZs (40% of the global ocean), and High Seas (60%), where the catches are mainly large pelagic fishes (notably tuna). Note that we also exclude the Caspian Sea from all considerations.
Domestic catch reconstruction method
Reconstructing time series of fisheries catch for all countries of the world from 1950 (the first year that FAO published its ‘Yearbook’ of global fisheries statistics) to 2010 was undertaken by fisheries ‘sectors’. However, because a standardized global definition of fishing sectors based on vessel size does not exist (for example, a vessel considered large-scale (industrial) in a developing country may be considered small-scale (artisanal) in developed countries), reconstructions utilize each country’s individual definitions for sectors, or a regional equivalent. These are described in each country reconstruction publication underlying this work. We consider four sectors:
Industrial: large-scale fisheries (using trawlers, purse-seiners, longliners) with high capital input into vessel construction, maintenance and operation, and which may move fishing gear across the seafloor or through the water column using engine power (for example, demersal and pelagic trawlers), irrespective of vessel size. This corresponds to the ‘commercial’ sectors of countries such as the USA;
Artisanal: small-scale fisheries whose catch is predominantly sold (hence they are also ‘commercial fisheries’), and which often use a large variety of generally static or stationary (passive) gears. Our definition of artisanal fisheries relies also on adjacency: they are assumed to operate only in domestic waters (that is, in their country’s EEZ). Within their EEZ, they are further limited to a coastal area to a maximum of 50 km from the coast or to 200 m depth, whichever comes first. This area is defined as the Inshore Fishing Area (IFA)44 (a minimal sketch of this rule follows this list). Note that the definition of an IFA assumes the existence of a small-scale fishery, and thus unpopulated islands, although they may have fisheries in their EEZ (which by our definition are industrial, whatever the gear used), have no IFA;
Subsistence: small-scale non-commercial fisheries whose catch is predominantly consumed by the persons fishing it, and their families (this may also include the ‘take-home’ fraction of the catch of commercial fishers, which usually by-passes reporting systems); and
Recreational: small-scale non-commercial fisheries whose major purpose is enjoyment.
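As referenced in the artisanal definition above, the IFA rule reduces to a simple test; the sketch below is our illustration, not code from the project.

```python
def within_ifa(distance_from_coast_km: float, depth_m: float) -> bool:
    """True if a location falls inside the Inshore Fishing Area (IFA):
    within 50 km of the coast AND shallower than 200 m, i.e. the
    boundary is whichever limit is reached first moving offshore."""
    return distance_from_coast_km <= 50 and depth_m <= 200

print(within_ifa(10, 80))   # True: close inshore and shallow
print(within_ifa(30, 800))  # False: the 200 m isobath is reached first
```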
In addition to the reconstructions by sector, we also assign catches to either ‘landings’ (that is, retained and landed catch) or ‘discards’ (that is, discarded catch), and label all catches as either ‘reported’ or ‘unreported’ with regards to national and FAO data. Thus, reconstructions present ‘catch’ as the sum of ‘landings’ plus ‘discards’.
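One possible way to encode this bookkeeping (an assumed schema for illustration, not the project's actual database structure) is a record type carrying the sector, the landings/discards split and the reported/unreported flag:

```python
from dataclasses import dataclass
from enum import Enum

class Sector(Enum):
    INDUSTRIAL = "industrial"
    ARTISANAL = "artisanal"
    SUBSISTENCE = "subsistence"
    RECREATIONAL = "recreational"

@dataclass
class CatchRecord:
    year: int
    taxon: str
    sector: Sector
    landings_t: float   # retained and landed catch (tonnes)
    discards_t: float   # discarded catch (tonnes)
    reported: bool      # appears in national/FAO statistics?

    @property
    def catch_t(self) -> float:
        """Catch is defined as landings plus discards."""
        return self.landings_t + self.discards_t

rec = CatchRecord(1996, "Gadus morhua", Sector.INDUSTRIAL,
                  12_500.0, 1_300.0, reported=False)
print(rec.catch_t)  # 13800.0
```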
Discarded fish and invertebrates are generally assumed to be dead, except for the US fisheries where the fraction of fish and invertebrates reported to survive is generally available on a per species basis45. Due to a distinct lack of global coverage of information, we do not account for so-called under-water discards, or net-mortality of fishing gears46. We also do not address mortality caused by ghost-fishing of abandoned or lost fishing gear47.
For commercially caught jellyfishes (particularly Rhizostomeae, but also other taxa), it has been shown that over 2.5 times more are caught than reported to FAO (mostly as ‘Rhizostoma spp.’)48. This factor is used to estimate missing catches of unidentified jellyfish. However, this additional catch is, pending further study, not allocated to any specific country or FAO area, and is thus counted only in the world’s total catch.
We exclude from consideration all catches of marine mammals, reptiles, corals, sponges and marine plants (the bulk of the plant material is not primarily used for human consumption, but for cosmetic or pharmaceutical use). In addition, we do not estimate catches made for the aquarium trade, which can be substantial in some areas in terms of number of individuals, but relatively small in overall tonnage, as most aquarium fish are small or juvenile specimens49.
Most catch reconstructions consist of six steps15:
(1) Identification, sourcing and comparison of baseline catch times series, that is, (a) FAO reported landings data by FAO statistical areas, taxon and year; and (b) national or regional data series by area, taxon and year. Implicit in this first step is that the spatial entity be identified and named that is to be reported on (for example, EEZ of Germany in the Baltic Sea), something that is not always obvious, and which poses problems to some of our external collaborators, notably those in countries with a claimed EEZ overlapping with that of their neighbour.
For most countries, the baseline data are the statistics reported by member countries to FAO. We treat all countries recognized in 2010 (or acting like independent countries with regard to fisheries) by the international community as having existed from 1950 to 2010. This is necessary, given our emphasis on ‘places’, that is, on time-series of catches taken from specific ecosystems. This also applies to islands and other territories, many of which were colonies, and which have changed status and borders since 1950.
For several countries, the baseline data are provided by international bodies. In the case of EU countries, the baseline data originate from the International Council for the Exploration of the Sea (ICES), which maintains fisheries statistics by smaller statistical areas, as required given the Common Fisheries Policy of the EU. A similar area is the Antarctic waters and surrounding islands, whose fisheries are managed by the Commission for the Conservation of Antarctic Marine Living Resources (CCAMLR), where catch data are available by relatively small statistical areas (ref. 50).
When FAO data are used, care is taken to maintain their assignment to different FAO statistical areas for each country (Supplementary Fig. 1), as they often distinguish between strongly different ecosystems, for example, the Caribbean Sea versus the coast of the Eastern Central Pacific in the case of Panama, Costa Rica, Nicaragua, Honduras and Guatemala. For each maritime country, the area covered extends from the coastline to the edge of the EEZ, including any major coastal lagoons connected to the sea, and the mouths of rivers, that is, estuaries. However, freshwaters are excluded.
(2) Identification of sectors (for example, subsistence, recreational), time periods, species, gears and so on, not covered by (1), that is, missing data components. This is conducted via literature searches and consultations with local experts. This step is one where the contribution of local co-authors and experts is crucial. Potentially, all four sectors defined by us can occur in the marine fisheries of a given coastal country, with the distinction between large-scale and small-scale being the most important (ref. 25). For any entity, we check whether catches originating from the four sectors were included in the reported baseline of catch data, notably by examining their taxonomic composition, and any metadata, which were particularly detailed in the early decades of the FAO ‘Yearbooks’ (ref. 51).
The absence of a taxon known to be caught in a country or territory from the baseline data (for example, cockles gleaned by women on the shore of an estuary) (ref. 26) can also be used to identify a fishery that has been overlooked in the official data collection scheme, as can the absence of reef fishes in the coastal data of a Pacific Island state (ref. 10). To avoid double counting, tuna and other large pelagic fishes, unless known to be caught by a local small-scale fishery (and thus in the past not likely reported to a Regional Fisheries Management Organization or RFMO), are not included in this reconstruction step (see below under ‘High Seas and other catches of large pelagic fishes’).
Finally, if gears are identified in national data, but a gear known to exist in a given country is not included, then it can be assumed that its catch has been missed, as documented for weirs (hadrah) in the Persian Gulf (ref. 52).
(3) Sourcing of alternative information sources on missing sectors identified in (2), via literature searches (peer-reviewed and grey) and consultations with local experts. Information sources include social science studies (anthropology, economics and so on), reports, data sets and expert knowledge. The major initial source of information for catch reconstructions is governments’ websites and publications (specifically their Department of Fisheries or equivalent agency), both online and in hard copies. Contrary to what could be expected, it is often not the agency responsible for fisheries research and initial data collection that supplies the catch statistics to FAO, but other agencies, for example, a national statistical office. As a result, much of the granularity of the original data (that is, catch by sector, by species or by gear) may be lost even before data are prepared for submission to FAO. Furthermore, the data request form sent by FAO each year to each country does not encourage improvements or changes in taxonomic composition, as the form that requests the most recent year’s data contains the country’s previous years’ data in the same composition as submitted in earlier years. This encourages the pooling of detailed data at the national level into the taxonomic categories inherited through earlier (often decades old) FAO reporting schemes, as was discovered, for example, for Bermuda in the early 2000s (ref. 53). Thus, by getting back to the original data, much of the original granularity can be regained during reconstructions.
Additional sources of information on national catches are international organizations such as FAO, ICES or SPC (Secretariat of the Pacific Community), or a Regional Fisheries Management Organization (RFMO) such as NAFO (Northwest Atlantic Fisheries Organization), or CCAMLR (ref. 54), or current or past regional fisheries development and/or management projects (many of them launched and supported by FAO), such as the Bay of Bengal Large Marine Ecosystem project (BOBLME). All these organizations and projects issue reports and publications describing—sometimes in considerable detail—the fisheries of their member countries. Another source of information is the academic literature, now widely accessible through Google Scholar.
Good sources of information for the earlier decades (especially the 1950s and 1960s) for countries that were part of former colonial empires (especially British or French) are the colonial archives in London (British Colonial Office) and the ‘Archives Nationales d'Outre-Mer’ in Aix-en-Provence, as well as the publications of ORSTOM (Office de la recherche scientifique et technique d’outre-mer) for former French colonies. Further sources of information and data are non-fisheries sources, including household and/or nutritional surveys, which are occasionally used for estimating unreported subsistence catches. Our global network of local collaborators is also crucial in this respect, as they have access to key data sets, publications and local knowledge not available elsewhere, often in languages other than English.
Supplementary Figure 2 shows a plot of the publications used for slightly over 110 reconstructions against their date of publication. Although recent publications predominate, older publications firmly anchor the 1950s catch estimates of many reconstructions. On average, around 35 unique publications were used per reconstruction (not counting online sources and personal communications).
Potential language bias is taken seriously in the Sea Around Us, to ensure that data are collated in languages other than English. Besides team members who read Chinese, others speak Arabic, Danish, Filipino/Tagalog, French, German, Hindi, Japanese, Portuguese, Russian, Spanish, Swedish and Turkish. To deal with other languages, research assistants are hired who speak, for example, Korean or Malay/Indonesian. We also rely on our multilingual network of colleagues and friends throughout the world, for example, for Greek or Thai. While it is true that English has now become the undisputed language of science (ref. 55), other languages are used by billions of people, and assembling knowledge about the fisheries of the world is not possible without the capacity to explore the literature in languages other than English.
(4) Development of data ‘anchor points’ in time for each missing data item, and expansion of anchor point data to country-wide catch estimates. ‘Anchor points’ are catch estimates usually pertaining to a single year and sector, and often to an area not exactly matching the limits of the EEZ or IFA in question. Thus, an anchor point pertaining to a fraction of the coastline of a given country may need to be expanded to the country as a whole. For expansion, we use fisher or population density, or relative IFA or shelf area, as raising factors, as appropriate given the local conditions. In all cases, we consider that the case studies underlying or providing the anchor point data may have had a case-selection bias (for example, representing an exceptionally good area or community for study, compared with other areas in the same country), and thus use raising factors very conservatively.
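A hedged sketch of such an expansion is given below. The damping factor and all numbers are invented for illustration; the text does not prescribe a specific formula for ‘conservative’ raising.

```python
# Sketch: expanding a local anchor-point estimate to the national level
# using a raising factor (here, relative area). All values are invented.
def raise_anchor(anchor_tonnes, studied_area_km2, total_area_km2,
                 conservatism=0.8):
    # conservatism < 1 deliberately damps the raising factor to guard
    # against case-selection bias in the underlying study (assumed value).
    raising_factor = total_area_km2 / studied_area_km2
    return anchor_tonnes * raising_factor * conservatism

# e.g. a survey covering 1,200 km2 of a 6,000 km2 IFA estimated 450 t:
national_estimate = raise_anchor(450.0, 1_200.0, 6_000.0)  # 1,800 t
```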
(5) Interpolation for time periods between data anchor points, either linearly or assumption-based for commercial fisheries, and generally via per capita (or per fisher) catch rates for non-commercial sectors. Fisheries are often difficult to govern, as they are social activities involving multiple actors. In particular, fishing effort is often difficult to reduce, at least in the short term. Thus, if anchor points are available for years separated by multi-year intervals, it usually will be more reasonable to assume that the underlying fishing activity continues in the intervening years with no data. We treat this ‘continuity’ assumption as a default proposition. Exceptions to such continuity assumptions are major environmental impacts such as hurricanes or tsunamis (ref. 56), or major socio-political disturbances, such as military conflicts or civil wars (ref. 57), which we explicitly consider with regard to the use of raising factors and the structure of time series estimates. In such cases, our reconstructions mark the event through a temporary change (for example, decline) in the catch time series, which is documented in the text of each catch reconstruction. At the very least, this provides pointers for future research on the relationship between fishery catches and natural catastrophes or conflicts. We note that the absence of such signals (such as a reduction in catch for a year or two) in the officially reported catch statistics for countries having experienced a major natural or socio-political disturbance can be a sign that their official catch data may not accurately reflect what occurs on the ground. This contributes to the emergence of ‘poor numbers’ (ref. 40). Overall, our reconstructions assume, when no information to the contrary is available, that commercial catches (that is, industrial and artisanal) can be linearly interpolated between anchor points, while non-commercial catches (that is, subsistence and recreational) can generally be interpolated between anchor points using non-linear trends in human population numbers or number of fishers over time (via per capita rates).
Radical and rapid effort reductions as a result of an intentional policy decision and implementation do not occur widely. One example we are aware of is the trawl ban of 1980 in Western Indonesia (ref. 58). The ban had little or no impact on official Indonesian fisheries statistics for Western Indonesia, another indication that these statistics may have little to do with the realities on the ground. FAO hints at this being widespread in the Western Central Pacific and the Eastern Indian Ocean (the only FAO areas where reported catches appear to be increasing) when they note that ‘while some countries (i.e., the Russian Federation, India and Malaysia) have reported decreases in some years, marine catches submitted to FAO by Myanmar, Vietnam, Indonesia and China show continuous growth, i.e., in some cases resulting in an astonishing decadal increase (e.g., Myanmar up 121 percent, and Vietnam up 47 percent)’ (ref. 42).
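The two default interpolation rules of step 5 can be sketched as follows. This is a schematic reading of the method with invented data structures (a population dict keyed by year), not code from the paper.

```python
import numpy as np

def interpolate_commercial(years, anchor_years, anchor_catches):
    # Industrial/artisanal: linear interpolation between anchor points.
    return np.interp(years, anchor_years, anchor_catches)

def interpolate_non_commercial(years, anchor_years, anchor_catches, population):
    # Subsistence/recreational: interpolate the per capita rate, then
    # rescale by the (generally non-linear) population trajectory.
    rates = [c / population[y] for y, c in zip(anchor_years, anchor_catches)]
    rate_series = np.interp(years, anchor_years, rates)
    return rate_series * np.array([population[y] for y in years])
```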
(6) Estimation of total catch time series. A reconstruction is completed when the estimated catch time series derived through steps 2–5 are combined and harmonized with the reported catch of step 1. Generally, this results in an increase of the overall catch, but several cases exist where the reconstructed total catch is lower than the reported catch. The best documented case of this is that of mainland China (ref. 14), whose over-reported catches for local waters in the Northwest Pacific are compensated for by under-reported catches taken by Chinese distant water fleets fishing elsewhere. In the 2000s, Chinese distant water fleets operated in the EEZs of over 90 countries, that is, in most parts of the world’s oceans (ref. 5). Harmonizing reconstructed catches with the reported baselines goes hand-in-hand with documenting the entire reconstruction procedure. Thus, every reconstruction is documented and published, either in the peer-reviewed scientific literature, or as detailed technical reports in the publicly accessible and indexed Fisheries Centre Research Reports series or the Fisheries Centre Working Paper series, or other regional organization reports (Supplementary Table 5).
Several reconstructions were conducted in the mid- to late 2000s, when official reported data (that is, FAO statistics or national data) were not available to 2010 (refs 15, 59). All these cases are updated to 2010, in line with each country’s individual reconstruction approach to estimating missing catch data. Thus, all reconstructions are brought to 2010 to ensure identical time coverage (Supplementary Table 5).
Since these six points were originally proposed, a seventh point has come to the fore that cannot be ignored (ref. 10):
(7) Quantifying the uncertainty associated with each reconstruction. In fisheries research, catch data are rarely associated with a measure of uncertainty, at least not in a form resembling confidence intervals. This may reflect the fact that the issue with catch data is not a lack of precision (that is, whether we could expect to produce similar results upon re-estimation), but about accuracy, that is, attempting to eliminate a systematic bias, a type of error that statistical theory does not really address.
We deal with this issue through a procedure related to ‘pedigrees’ (ref. 60) and the approach used by the Intergovernmental Panel on Climate Change to quantify the uncertainty in its assessments (ref. 61). The authors of the reconstructions are asked to attribute a ‘score’ expressing their evaluation of the quality of the time series data to each fisheries sector (industrial, artisanal and so on) for each of the three time periods (1950–1969, 1970–1989 and 1990–2010). These ‘scores’ are (1) ‘very low’, (2) ‘low’, (3) ‘high’ and (4) ‘very high’ (Table 1). There is a deliberate absence of an uninformative ‘medium’ score, to avoid the effective ‘non-choice’ that this option would represent. Each of these scores is assigned a percentage uncertainty range (Table 1). Thereafter, the overall mean weighted percentage uncertainty (over all countries and sectors) is computed (Fig. 1).
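The mechanics of this scoring can be sketched as follows. The percentage values in the mapping below are placeholders standing in for the ranges of Table 1 (which we do not reproduce here), and the function assumes one (score, catch) entry per sector and period.

```python
# Sketch of the uncertainty scoring; percentage values are placeholders
# standing in for the ranges of Table 1 (assumed, not the published values).
SCORE_UNCERTAINTY = {1: 0.50,  # 'very low'
                     2: 0.30,  # 'low'
                     3: 0.20,  # 'high'
                     4: 0.10}  # 'very high'

def weighted_mean_uncertainty(entries):
    # entries: iterable of (score, catch_tonnes) tuples; the mean is
    # weighted by the catch associated with each entry.
    total = sum(t for _, t in entries)
    return sum(SCORE_UNCERTAINTY[s] * t for s, t in entries) / total
```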
We define foreign catches as those taken by vessels of a maritime state in the EEZ, or EEZ-equivalent waters, of another coastal state. Based on our definition of sectors, all foreign fishing in the waters of another country is deemed to be industrial in nature. As the High Seas legally belong to no one (or to everyone), there can be no ‘foreign’ catches in the High Seas. Prior to UNCLOS, and the declaration of EEZs by maritime countries, foreign catches were illegal only if conducted without explicit permission within the territorial waters of such countries (generally 12 nautical miles). Since the declarations of EEZs by the overwhelming majority of maritime countries, foreign catches are considered illegal if conducted within the EEZ but without access being granted by the coastal state. A distinct exception is the EU, whose waters are managed by a ‘Common Fisheries Policy’, which implies a multilateral ‘access agreement’.
Access permission can be tacit and based on historic rights (‘observed’ access), or more commonly in the form of explicit access agreements and involving compensatory payment for the coastal state. The Sea Around Us, building on previous work by FAO (ref. 62), has created a database of such access and agreements, which is used to allocate the catches of distant-water fleets to the waters where they were taken.
This information is then harmonized with the catches reported by FAO for fleets fishing outside their country’s ‘home’ FAO areas, which always identifies this catch as distant-water industrial catch (see below for tuna catches reported to RFMOs).
In line with INTERPOL and others (ref. 63), we define illegal fishing as foreign fishing within the EEZ waters of another country without permission to access these resources. We do not treat domestic fisheries’ violations of ‘fishing regulations’ as ‘illegal’. In general, our reconstruction method cannot readily distinguish between legal and illegal foreign fishing, as we do not necessarily know about all access agreements (refs 5, 6). Thus, our data only pertain to ‘reported’ versus ‘unreported’ status, irrespective of the legality of foreign fleets in a host country (ref. 5). However, for around two dozen countries (mainly in West Africa) where the number of illegally operating vessels could be inferred, the fleet size can be multiplied by appropriate catch per unit of effort rates, leading to an estimate of illegal catch in these EEZs.
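That last inference amounts to a simple effort-times-CPUE calculation, sketched below with invented numbers (the text does not publish a per-country formula).

```python
# Sketch: inferring illegal catch from fleet size and catch rates.
def illegal_catch(n_vessels, fishing_days_per_vessel, cpue_tonnes_per_day):
    return n_vessels * fishing_days_per_vessel * cpue_tonnes_per_day

# e.g. 25 vessels, each fishing 180 days at 1.2 t per day (invented values):
estimate = illegal_catch(25, 180, 1.2)  # = 5,400 t
```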
Industrial catches of large pelagic fishes
Nominal landings data. To date, there is no single, publicly available data set presenting industrial landings of tuna and large pelagic fishes for the entire world that is separate from the amalgamated FAO statistics, despite these fisheries being among the most valuable in the world (ref. 64). Here, we first compile nominal industrial landings of tuna and other large pelagic fish caught either in the High Seas or within EEZs by fishing gear, taxon, countries and statistical reporting areas from data published by Regional Fisheries Management Organizations. Second, we use partially spatialized landings data provided by staff of the French ‘Institut de recherche pour le développement’ to spatially pre-assign the nominal landings data derived from RFMOs (Supplementary Table 6).
For each ocean, the nominal landings data are spatialized according to reported proportions in the previously spatialized data (Supplementary Table 6). For example, if the nominal data reports France catching 100 tonnes of yellowfin tuna (Thunnus albacares) in 1983 using longlines, but the spatial data only present 85 tonnes of yellowfin tuna reported in 1983 by France using longlines in four separate statistical cells, the nominal 100 tonnes for France are split into these four spatial cells according to their reported proportion of catch in the spatial dataset. This matching of the nominal and spatial records is done over a series of successive refinements, with the first being the best-case scenario, in which there are matching records for year, country, gear and taxon. The last refinement is the worst-case scenario, in which there are no matching records except for the year of catch. For example, if Sri Lanka reports 100 tonnes of yellowfin tuna caught in 1983 using longlines, but there are no spatial records for any country catching yellowfin tuna in 1983, the nominal 100 tonnes for Sri Lanka are split into spatial cells according to their reported proportions of total catch of any species and gear in 1983. The end result is a baseline landings database containing all matched and spatialized catch records, which sum to the original nominal catch tonnages.
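The proportional allocation described above can be sketched as follows. The data structures are illustrative, and in practice the matching stratum is selected through the successive refinements described in the text (year/country/gear/taxon first, progressively relaxed); the example numbers mirror the France/yellowfin case.

```python
# Sketch: splitting a nominal catch total across spatial cells in proportion
# to the catches reported in the best-matching spatialized stratum.
def allocate(nominal_tonnes, cell_catches):
    # cell_catches: dict mapping cell id -> tonnes in the spatialized data.
    total = sum(cell_catches.values())
    return {cell: nominal_tonnes * t / total
            for cell, t in cell_catches.items()}

# France, yellowfin tuna, longline, 1983: 100 t nominal vs 85 t spatialized
cells = {'c1': 40.0, 'c2': 25.0, 'c3': 15.0, 'c4': 5.0}  # invented split
print(allocate(100.0, cells))  # proportions preserved; sums to 100 t
```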
Discards. A review of the literature for each ocean provided limited country- and fleet-specific discard data. Therefore, we average the discard rates across the entire time period and apply these to the region of origin of the fleet (for example, East Asia or Western Europe), rather than the actual country of origin of the fleet. Discards were spatialized in conjunction with nominal landings data.
Assembly of total catches
Ultimately, the total catch extracted from a given area, such as a given EEZ or EEZ-equivalent waters, or high seas waters within a given FAO area is computed as the sum of three data layers: (1) the reconstructed domestic catches within home EEZs (‘Layer 1’ data); (2) the derived catch by foreign fleets (‘Layer 2’ data); and (3) the tuna and other large pelagic fishes caught in the High Seas and in EEZs (‘Layer 3’ data).
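In pseudocode terms, this assembly step is a straightforward sum over the three layers, as in the minimal sketch below (layer names are illustrative):

```python
# Sketch: total extraction from an area as the sum of the three data layers.
def total_extraction(domestic, foreign, large_pelagics):
    # Each argument: dict mapping year -> tonnes for one layer.
    years = set(domestic) | set(foreign) | set(large_pelagics)
    return {y: domestic.get(y, 0.0)
               + foreign.get(y, 0.0)
               + large_pelagics.get(y, 0.0)
            for y in sorted(years)}
```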
Documentation of the catch reconstructions
The references and web-links of the contributions documenting the catch reconstructions that went into the re-estimation of the global catch of marine fisheries are documented in Supplementary Table 5. Altogether, 273 EEZs (or EEZ ‘components’) were covered in 247 catch reconstructions, which had 103 unique first authors and 279 unique co-authors in over 50 countries.
All data presented here are also deposited in the Dryad Digital Repository (DOI: 10.5061/dryad.4s4t1).
To examine whether significant breakpoints exist in the catch data time series of both reconstructed total catches and reported catches that may illustrate a change in trends of catches over time (that is, a change in the slope), we analyse the time series trajectories using segmented regression (ref. 21). For both the reconstructed and the reported time series, we identify two breakpoints, at 1967 and 1996, respectively (Supplementary Table 2). These breakpoints suggest a change in regression slope, with the second breakpoint suggesting a trend reversal. This was validated by testing for a significant difference-in-slope parameter using the Davies test (ref. 65), which tests for a non-zero difference-in-slope of a segmented regression relationship.
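For readers who want to reproduce the gist of this analysis, the sketch below fits a two-breakpoint piecewise-linear model by brute-force grid search. It is only an illustration of the idea; the analysis above uses segmented regression with a Davies test (available, for example, in R's 'segmented' package), which this sketch does not implement.

```python
import numpy as np

# Sketch: two-breakpoint piecewise-linear fit via grid search over candidate
# breakpoint pairs. Not the Davies test; purely illustrative.
def fit_two_breakpoints(years, catches):
    years = np.asarray(years, float)
    catches = np.asarray(catches, float)
    best = None
    for b1 in years[2:-4]:
        for b2 in years[years > b1][2:-2]:
            # Basis: intercept, slope, and a hinge term at each breakpoint.
            X = np.column_stack([np.ones_like(years), years,
                                 np.clip(years - b1, 0.0, None),
                                 np.clip(years - b2, 0.0, None)])
            beta, *_ = np.linalg.lstsq(X, catches, rcond=None)
            rss = float(np.sum((catches - X @ beta) ** 2))
            if best is None or rss < best[0]:
                best = (rss, float(b1), float(b2), beta)
    return best  # (RSS, breakpoint 1, breakpoint 2, coefficients)
```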
How to cite this article: Pauly, D. & Zeller, D. Catch reconstructions reveal that global marine fisheries catches are higher than reported and declining. Nat. Commun. 7:10244 doi: 10.1038/ncomms10244 (2016).
Mohan Dey, M. et al. Fish consumption and food security: a disaggregated analysis by types of fish and classes of consumers in selected Asian countries. Aquacult. Econ. Manage 9, 89–111 (2005).
Kawarazuka, N. & Béné, C. The potential role of small fish species in improving micronutrient deficiencies in developing countries: building evidence. Public Health Nutr 14, 1927–1938 (2011).
Swartz, W., Sala, E., Tracey, S., Watson, R. & Pauly, D. The spatial expansion and ecological footprint of fisheries (1950 to present). PLoS ONE 5, e15143 (2010).
Swartz, W., Sumaila, U. R., Watson, R. & Pauly, D. Sourcing seafood for the three major markets: the EU, Japan and the USA. Mar. Policy 34, 1366–1373 (2010).
Pauly, D. et al. China’s distant water fisheries in the 21st century. Fish Fish. 15, 474–488 (2014).
Le Manach, F. et al. European Union’s public fishing access agreements in developing countries. PLoS ONE 8, e79899 (2013).
Alder, J. & Sumaila, U. R. Western Africa: the fish basket of Europe past and present. J. Environ. Dev. 13, 156–178 (2004).
Belhabib, D., Sumaila, U. R. & Pauly, D. Feeding the poor: contribution of West African fisheries to employment and food security. Ocean Coast. Manage. 111, 72–81 (2015).
Pauly, D. Major trends in small-scale marine fisheries, with emphasis on developing countries, and some implications for the social sciences. Marit. Studies 4, 7–22 (2006).
Zeller, D., Harper, S., Zylich, K. & Pauly, D. Synthesis of under-reported small-scale fisheries catch in Pacific-island waters. Coral Reefs 34, 25–39 (2015).
Garibaldi, L. The FAO global capture production database: a six-decade effort to catch the trend. Mar. Policy 36, 760–768 (2012).
Zeller, D. & Pauly, D. Good news, bad news: global fisheries discards are declining, but so are total catches. Fish Fish. 6, 156–159 (2005).
Zeller, D., Booth, S., Pakhomov, E., Swartz, W. & Pauly, D. Arctic fisheries catches in Russia, USA and Canada: Baselines for neglected ecosystems. Polar Biol. 34, 955–973 (2011).
Watson, R. & Pauly, D. Systematic distortions in world fisheries catch trends. Nature 414, 534–536 (2001).
Zeller, D., Booth, S., Davis, G. & Pauly, D. Re-estimation of small-scale fishery catches for U.S. flag-associated island areas in the western Pacific: the last 50 years. Fish. Bull. 105, 266–277 (2007).
Pauly, D. Rationale for reconstructing catch time series. EC Fish. Coop. Bull. 11, 4–10 (1998).
Pikitch, E. K. et al. Ecosystem-based Fishery Management. Science 305, 346–347 (2004).
FAO. Code of Conduct for Responsible Fisheries (Food and Agriculture Organization of the United Nations (FAO), 1995).
FAO. The State of World Fisheries and Aquaculture (SOFIA) 2010 (Food and Agriculture Organization, 2011).
Pauly, D. & Froese, R. Comments on FAO's State of Fisheries and Aquaculture, or 'Sofia 2010'. Mar. Policy 36, 746–752 (2012).
Oosterbaan, R. J. in Drainage Principles and Applications, Publication 16 (ed. Ritzema, H. P.) 175–224 (International Institute for Land Reclamation and Improvement (ILRI), 1994).
Andrew, N. L. & Pepperell, J. G. The by-catch of shrimp trawl fisheries. Oceanogr. Mar. Biol. Annu. Rev 30, 527–565 (1992).
Alverson, D. L., Freeberg, M. H., Pope, J. G. & Murawski, S. A. A Global Assessment of Fisheries By-catch and Discards (FAO Fisheries Technical Paper T339, 1994).
Kelleher, K. Discards in the World's Marine Fisheries: An Update (FAO Fisheries Technical Paper 470, Food and Agriculture Organization, 2005).
Pauly, D. & Charles, T. Counting on small-scale fisheries. Science 347, 242–243 (2015).
Harper, S., Zeller, D., Hauzer, M., Sumaila, U. R. & Pauly, D. Women and fisheries: contribution to food security and local economies. Mar. Policy 39, 56–63 (2013).
Chapman, M. D. Women's fishing in Oceania. Hum. Ecol. 15, 267–288 (1987).
Cisneros-Montemayor, A. M. & Sumaila, U. R. A global estimate of benefits from ecosystem-based marine recreation: potential impacts and implications for management. J. Bioecon 12, 245–268 (2010).
Belhabib, D. et al. in Marine Fisheries Catches in West Africa, 1950–2010, Part I (eds Belhabib, D., Zeller, D., Harper, S. & Pauly, D.) 61–78 (Fisheries Centre Research Reports 20(3), Fisheries Centre, University of British Columbia, 2012).
Belhabib, D., Nahada, V. A., Blade, D. & Pauly, D. Fisheries in Troubled Waters: A Catch Reconstruction for Guinea-Bissau, 1950–2010 (Working Paper #2015-72, Fisheries Centre, University of British Columbia, 2015).
Zeller, D. et al. The Baltic Sea: estimates of total fisheries removals 1950-2007. Fish. Res. 108, 356–363 (2011).
Kleisner, K., Zeller, D., Froese, R. & Pauly, D. Using global catch data for inferences on the world’s marine fisheries. Fish Fish. 14, 293–311 (2013).
Froese, R., Zeller, D., Kleisner, K. & Pauly, D. Worrisome trends in global stock status continue unabated: a response to a comment by R.M. Cook on ‘What catch data can tell us about the status of global fisheries’. Mar. Biol. (Berlin) 160, 2531–2533 (2013).
Froese, R., Zeller, D., Kleisner, K. & Pauly, D. What catch data can tell us about the status of global fisheries. Mar. Biol. (Berlin) 159, 1283–1292 (2012).
FAO. Voluntary Guidelines for Securing Sustainable Small-Scale Fisheries in the Context of Food Security and Poverty Eradication (Food and Agriculture Organization of the United Nations, 2015).
Sumaila, U. R. et al. Benefits of rebuilding global marine fisheries outweigh costs. PLoS ONE 7, e40542 (2012).
Cao, L. et al. China’s aquaculture and the world’s wild fisheries. Science 347, 133–135 (2015).
Ward, M. Quantifying the World: UN Ideas and Statistics (Indiana University Press, 2004).
Rockström, J. et al. Planetary boundaries: exploring the safe operating space for humanity. Nature 461, 472–475 (2009).
Jerven, M. Poor Numbers: How We Are Misled by African Development Statistics and What to Do About It (Cornell University Press, 2013).
Lindquist, E. J. et al. FAO/JRC Global Forest Land-Use Change from 1990 to 2005 (FAO Forestry Paper 169, Food and Agriculture Organization of the United Nations and European Commission Joint Research Centre, 2012).
FAO. The State of World Fisheries and Aquaculture (SOFIA) (Food and Agriculture Organization, 2014).
Covey, C. Beware the elegance of the number zero. Clim. Change 44, 409–411 (2000).
Chuenpagdee, R., Liguori, L., Palomares, M. D. & Pauly, D. Bottom-Up, Global Estimates of Small-Scale Marine Fisheries Catches (Fisheries Centre Research Reports 14(8), University of British Columbia, 2006).
McCrea-Strub, A. Reconstruction of Total Catch by U.S. Fisheries in the Atlantic and Gulf of Mexico: 1950–2010 (Working Paper #2015-79, Fisheries Centre, University of British Columbia, 2015).
Rahikainen, M., Peltonen, H. & Poenni, J. Unaccounted mortality in northern Baltic Sea herring fishery—magnitude and effects on estimates of stock dynamics. Fish. Res. 67, 111–127 (2004).
Bullimore, B. A., Newman, P. B., Kaiser, M. J., Gilbert, S. E. & Lock, K. M. A study of catches in a fleet of ‘ghost-fishing’ pots. Fish. Bull. 99, 247–253 (2001).
Brotz, L. in So Long, and Thanks for All the Fish: The Sea Around Us, 1999–2014, A Fifteen-Year Retrospective (eds Pauly, D. & Zeller, D.) 81–85 (A Sea Around Us Report to The Pew Charitable Trusts, University of British Columbia, 2014).
Rhyne, A. L. et al. Revealing the appetite of the marine aquarium fish trade: the volume and biodiversity of fish imported into the United States. PloS ONE 7, e35808 (2012).
Ainley, D. & Pauly, D. Fishing down the food web of the Antarctic continental shelf and slope. Polar Rec. 50, 92–107 (2013).
FAO. Catches and Landings (1977). Yearbook of Fishery Statistics Vol. 44 (Food and Agriculture Organization, 1978).
Al-Abdulrazzak, D. & Pauly, D. Managing fisheries from space: Google Earth improves estimates of distant fish catches. ICES J. Mar. Sci. 71, 450–455 (2014).
Luckhurst, B., Booth, S. & Zeller, D. in From Mexico to Brazil: Central Atlantic Fisheries Catch Trends and Ecosystem Models (eds Zeller, D., Booth, S., Mohammed, E. & Pauly, D.) 163–169 (Fisheries Centre Research Reports 11(6), University of British Columbia, 2003).
Cullis-Suzuki, S. & Pauly, D. Failing the high seas: a global evaluation of regional fisheries management organizations. Mar. Policy 34, 1036–1042 (2010).
Ammon, U. The Dominance of English as a Language of Science: Effects on Other Languages and Language Communities (Walter de Gruyter, 2001).
Ramdeen, R., Ponteen, A., Harper, S. & Zeller, D. in Fisheries Catch Reconstructions: Islands, Part III (eds Harper, S. et al.) 69–76 (Fisheries Centre Research Reports 20(5), University of British Columbia, 2012).
Belhabib, D. et al. When ‘Reality Leaves A Lot To the Imagination’: Liberian Fisheries from 1950 to 2010 (Fisheries Centre Working Paper #2013-06, University of British Columbia, 2013).
Pauly, D. & Budimartono, V. Marine Fisheries Catches of Western, Central and Eastern Indonesia, 1950–2010 (Fisheries Centre Working Paper #2015-61, University of British Columbia, 2015).
Zeller, D., Booth, S., Craig, P. & Pauly, D. Reconstruction of coral reef fisheries catches in American Samoa, 1950-2002. Coral Reefs 25, 144–152 (2006).
Funtowicz, S. O. & Ravetz, J. R. Uncertainty and Quality of Science for Policy (Springer, 1990).
Mastrandrea, M. D. et al. Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties (Intergovernmental Panel on Climate Change (IPCC), 2010).
FAO. FAO’s Fisheries Agreement Register (FARISIS). Committee on Fisheries, 23rd Session, 15–19 February 1999, COFI/99/Inf9E (Food and Agriculture Organization, 1998).
UNODC. Transnational Organized Crime in the Fishing Industry (United Nations Office on Drugs and Crime, 2011).
FAO. The State of World Fisheries and Aquaculture (Food and Agriculture Organization of the United Nations (FAO), 2012).
Davies, R. B. Hypothesis testing when a nuisance parameter is present only under the alternative. Biometrika 74, 33–43 (1987).
Ainsworth, C. H. & Pitcher, T. J. Estimating illegal, unreported and unregulated catch in British Columbia's marine fisheries. Fish. Res. 75, 40–55 (2005).
Tesfamichael, D. & Pitcher, T. J. Estimating the unreported catch of Eritrean Red Sea fisheries. Afr. J. Mar. Sci. 29, 55–63 (2007).
The Pew Charitable Trusts, Philadelphia funded the Sea Around Us from 1999 to 2014, during which the bulk of the catch reconstruction work was performed. Since mid-2014, the Sea Around Us has been funded mainly by The Paul G. Allen Family Foundation and assisted by the staff of Vulcan, Inc., with additional funding from the Rockefeller, MAVA, and Prince Albert II Foundations. We thank our many collaborators, as listed in the Supplementary Acknowledgements, and also numerous additional colleagues who assisted in various aspects of this work over the last 15 years.
The authors declare no competing financial interests.
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9310050010681152,
"language": "en",
"url": "https://www.prosure.cz/data1/1601942356-mineral-mining-in-brazil-/4543/",
"token_count": 692,
"fin_int_score": 3,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.2470703125,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:7cdd3f7b-2b21-4829-82c6-4721864af740>"
}
|
Brazil Minerals Mineral Rights in Brazil
Brazil Minerals, Inc. (OTC: BMIX), along with its subsidiaries, has a business model focused on: 1) mining specific areas for gold and diamonds, and 2) generating projects from its portfolio of high-quality mineral rights for stand-alone mines.
What’s next for Brazil’s mineral exploration push?
Brazil’s ambition for mining growth. Brazil is already host to a strong mining industry: the country is second only to Australia in iron ore production and also holds rich reserves of nickel, bauxite, and other metals. In 2018, the mining and metallurgy sector accounted for more than 2.4% of Brazil’s GDP.
Mineral Mining in Brazil
Mining in Brazil (Wikipedia). Mining in Brazil is centered on the extraction of gold, copper, tin, iron and bauxite. History: the first gold rush took place in the 1690s, with gold discoveries made in streams not far from the present-day city of Belo Horizonte. In 1729 diamonds were discovered in the same area, which started a diamond rush.
Mining in Brazil Wikipedia
In 2016, metallic ores totaled close to 77% of the total value of Brazilian mineral production that was sold. Eight elements totaled 98.6% of the value: aluminum, copper, tin, iron, manganese, niobium, nickel and gold. The biggest Brazilian highlight is iron, which accounts for the majority share, and whose production is mostly carried out in the states of Minas Gerais and Pará. According to the National Department of Mineral Production (DNPM), in 2011 there were 8,870
Brazil seeks potassium mining boost
Brazil’s Ministry of Mines and Energy (MME), through the Geological Survey of Brazil, has released a report into the potential of growing its domestic potassium capability, identifying 3.2 billion tonnes of ore in the northern part of the country.
Brazil Minerals to expand lithium project MINING.COM
The company’s lithium project is located in the northeast part of the state of Minas Gerais, Brazil. Brazil Minerals (OTC PINK: BMIX) has received exploration...
Brazil Minerals Britannica
Brazil also has deposits of several other minerals. There are significant amounts of granite, manganese, asbestos, gold, gemstones, quartz, tantalum, and kaolin (china clay). Most industrial minerals are concentrated in Minas Gerais and Pará, including iron ore, bauxite, and gold.
BRAZILIAN MINING CODE William Freire
exploitation, including mineral exploration and mining, relations with landowners, sanctions and nullities, among other aspects. It points out that in Brazil the mineral resources contained in the soil and subsoil belong to the Federal Union, and that the regulatory agency for the mineral
Mining in Brazil Lexology
The most active mineral regions in Brazil are in the states of Minas Gerais (reserves of gems, iron ore, gold, manganese, aluminium, graphite, bauxite, rare earths and niobium), Mato Grosso...
|
{
"dump": "CC-MAIN-2021-17",
"language_score": 0.9519045352935791,
"language": "en",
"url": "https://www.traveldocs.com/world-atlas/Bulgaria-atlas35",
"token_count": 4839,
"fin_int_score": 4,
"fin_score_model": "en_fin_v0.1",
"risk_score": 0.32421875,
"risk_score_model": "en_risk_v0.1",
"id": "<urn:uuid:2336994b-bfa7-49e8-b229-b342144b1117>"
}
|
Bulgaria's economy contracted dramatically after 1989 with the collapse of the COMECON system and the loss of the Soviet market, to which the Bulgarian economy had been closely tied. The standard of living fell by about 40%. In addition, UN sanctions against Yugoslavia and Iraq took a heavy toll on the Bulgarian economy. The first signs of recovery emerged when GDP grew in 1994 for the first time since 1988, by 1.4% and then by 2.5% in 1995. Inflation, which surged in 1994 to 122%, fell to 32.9% in 1995. During 1996, however, the economy collapsed due to shortsighted economic reforms and an unstable and de-capitalized banking system.
Under the leadership of former Prime Minister Ivan Kostov (UDF), who came to power in 1997, an ambitious set of reforms was launched, including introduction of a currency board regime, bringing growth and stability to the Bulgarian economy. The currency board contained inflationary pressures and the three-digit inflation in 1997 was cut to only 1% in 1998. Following declines in GDP in both 1996 and 1997, the Bulgarian Government delivered strong, steady GDP growth in real terms in recent years. Prime Minister Simeon Saxe-Coburg's economic team of young, Western-educated financiers continued to implement measures that helped sustain stable economic growth and curb unemployment. Measures introduced by the government were targeted at reducing corporate and individual taxes, curtailing corruption, and attracting foreign investment. The government also restructured the country's foreign debt, revived the local stock market, and moved ahead with long-delayed privatization of some major state monopolies. As a result of this progress, in October 2002 the European Commission declared Bulgaria had a "Functioning Market Economy."
Successive governments continued these reforms, and in 2007 the country joined the European Union. According to the World Bank, in 2006 Bulgaria attracted the highest levels of foreign direct investment, as a share of GDP, among Eastern European countries. In early 2007, to attract additional foreign investment, the Bulgarian Government lowered corporate tax rates to 10%, reportedly the lowest rate in Europe. A flat-tax rate of 10% for personal income, in place as of January 1, 2008, has helped to bring down domestic labor costs and reduce the share of the "gray" economy. In response to local governments' demand for financial independence in 2006, parliament passed fiscal decentralization of municipalities, granting them authority over collection and administration of some taxes, thus further enhancing local economic stability. The 2007-2009 global financial and economic crisis erased many of the gains attributed to conservative fiscal policies and tax reforms. After 10 years of steady growth, Bulgaria's economy fell into recession in the fourth quarter of 2008, causing an increase in both unemployment and household debt. The new government responded with an 82-point "anti-crisis" plan to maintain fiscal stability and promote economic recovery. The government also committed itself to strengthening control over EU funds and fighting organized crime and corruption.
GDP (2009, est.): $48.7 billion.
Real GDP growth: -5.0% (2009 est.); 6% (2008); 6.2% (2007); 6.3% (2006); 6.2% (2005); 6.6% (2004); 4.3% (2003).
Per capita GDP (2009, est.): $6,423.
Inflation rate: 1.6% (2009); 7.2% (2008); 11.6% (2007), 6.1% (2006); 7.4% (2005); 4.0% (2004); 5.6% (2003).
Unemployment rate: 9.1% (2009); 6.3% (2008); 6.9% (2007); 9.1% (2006); 10.7% (2005); 12.2% (2004); 14.3% (2003).
Natural resources: Bauxite, copper, lead, zinc, coal, and timber.
Official exchange rate: Lev per $1 U.S. = 1.36 (2009); 1.39 (2008); 1.33 (2007); 1.49 (2006); 1.66 (2005); 1.44 (2004).
Located on the Balkan Peninsula, Bulgaria extends from the western shore of the Black Sea to Yugoslavia in the west. In the north, the Danube River forms the greater part of Bulgaria's common boundary with Romania. Greece and European Turkey lie to the south and southeast of Bulgaria.
The country is divided roughly into three parallel east-west zones: the Danubian tableland in the north, the Stara Planina (or Balkan) Mountains in the center, and the Thracian Plain and the Rhodope and Pirin Mountains in the south and southwest. About one- third of the country lies at an altitude of 500 meters (1,640 ft.) above sea level. The average elevation is 480 meters (1,575 ft.) above sea level.
On the fringe of the humid continental climate zone, Bulgaria has a climate similar to the U.S. Midwest. The weather varies considerably from year to year, as do the several climatic subzones within the country. Summer temperatures average about 24 C (75 F); winter temperatures average around 0 C (32 F). Annual precipitation averages 63 centimeters (25 in.).
Official Name: Republic of Bulgaria
Area: 110,994 sq. km. (slightly larger than Tennessee).
Major cities: Capital--Sofia (1.2 million). Others--Plovdiv (350,000), Varna (300,000).
Terrain: Bulgaria is located in South Central Europe. The terrain is varied, containing large mountainous areas, fertile valleys, plains and a coastline along the Black Sea.
Climate: Continental--mild summers and cold, snowy winters.
Bulgaria is a parliamentary republic. The unicameral National Assembly, or Narodno Subranie, consists of 240 deputies who are elected for 4-year terms through a mixed electoral system: 209 members of parliament (MPs) elected according to the classic proportional representation system (voters vote for fixed, rank-ordered party lists for each of the 31 electoral districts, with a different list for each district), and 31 majority MPs elected individually under the majority representation system in each and every district (the winning candidate receives a plurality of the votes in the region). Parliament selects and dismisses government ministers, including the prime minister, exercises control over the government, and sanctions deployment of troops abroad. It is responsible for enactment of laws, approval of the budget, scheduling of presidential elections, declaration of war, and ratification of international treaties and agreements.
A 1-month official campaign period precedes general elections. The voting age is 18. Preliminary results are available within hours of poll closings. Parties and coalitions must win a minimum 4% of the national vote to enter parliament. Seats are then allocated to the parties in proportion to the distribution of votes in their respective electoral districts. Votes belonging to parties not passing the 4% threshold are distributed to other parties using the method of the smallest remainder. The lists of newly elected members of parliament are announced 7 days after the elections. The president must convene the new parliament within 1 month after the elections, and calls upon parties, coalitions, or political groups to nominate a prime minister and form a government. If the three largest parties, coalitions, or political groups fail to nominate a prime minister, the president can dissolve parliament and schedule new elections. In recent years, it has taken approximately a month for the new government to form.
A general election was held in Bulgaria on July 5, 2009; turnout was 60.20%. Results were as follows: GERB 39.7%, BSP 17.7%, MRF 14.4%, ATAKA 9.4%, Blue Coalition 6.8%, RZS 4.1%, other 7.9%; seats by party were GERB 116, BSP 40, MRF 38, ATAKA 21, Blue Coalition 15, RZS 10.
Results of the June 7, 2009 European Parliament elections were GERB 24.36%, 5 seats; BSP 18.5%, 4 seats; DPS 14.14%, 3 seats; ATAKA 11.96%, 2 seats; NDSV 7.96%, 2 seats; Blue Coalition (SDS-DSB and other right-wing parties) 7.95%, 1 seat (turnout: 37.49%).
The president of Bulgaria is directly elected for a 5-year term with the right to one re-election. The president serves as the head of state and commander in chief of the armed forces. The president is the head of the Consultative Council for National Security and while unable to initiate legislation, the president can return a bill for further debate. Parliament can overturn the president's veto with a simple majority vote. Bulgarian Socialist Party candidate Georgi Parvanov won the November 2001 presidential election and was re-elected in October 2006 as an independent candidate in a run-off against Volen Siderov, the leader of extreme nationalist Ataka Party. The next presidential election will be held in 2011.
The prime minister is head of the Council of Ministers, which is the primary component of the executive branch. In addition to the prime minister and deputy prime ministers, the Council is composed of ministers who head the various agencies within the government and usually come from the majority/ruling party or from a member party of the ruling coalition in parliament. The Council is responsible for carrying out state policy, managing the state budget and maintaining law and order. The Council must resign if the National Assembly passes a vote of no confidence in the Council or prime minister.
The Bulgarian judicial system became an independent branch of the government following passage of the 1991 constitution. Reform within this branch has been slow, with political influence, widespread corruption, and long delays continuously plaguing the system. In 1994, the National Assembly passed the Judicial System Act to further delineate the role of the judiciary. In 2003, Bulgaria adopted amendments to the constitution, which aimed to improve the effectiveness of the judicial system by limiting magistrates' irremovability and immunity against criminal prosecution. Additional amendments to the constitution in 2006 and 2007 further increased oversight of the judicial system by the legislative branch. They introduced the Supreme Judicial Council as a permanently operating supervisory body, as well as an Inspectorate responsible for overseeing the performance of the judicial system as a whole and its individual members. The prosecution service was given absolute authority over all investigations, and the police received a mandate to investigate 95% of all crimes, which reduced the role of the investigative service.
The trial, appellate, and cassation (highest appellate) courts comprise the three tiers of the judicial system. Military courts (at trial and appeal level) handle cases involving military and Ministry of Interior personnel. Administrative courts, effective since March 2007, specialize in reviewing appeals of government acts.
The Supreme Administrative Court and the Supreme Court of Cassation are the highest courts of appeal and determine the application of all laws.
The Supreme Judicial Council (SJC) is composed of 25 members serving 5-year terms. Those who serve on the council are experienced legal professionals and are either appointed by the National Assembly, selected by the judicial system, or serve on the SJC as a result of their position in government. The SJC manages the judiciary and is responsible for appointing judges. In 2007 parliament revised the Judicial System Act to make it compliant with the latest constitutional amendments, which provided for the establishment of the Inspectorate with the Supreme Judicial Council: a standing body with 11 members who investigate complaints of magistrates' misconduct, with no right to rule on the substance of judicial acts.
The Constitutional Court, which is separate from the rest of the judiciary, interprets the constitution and constitutionality of laws and treaties. Its 12 justices serve 9-year terms and are selected by the president, the National Assembly, and the Supreme Courts.
Principal Government Officials
Prime Minister--Boyko Borissov
Deputy Prime Minister/Minister of Finance--Simeon Dyankov
Deputy Prime Minister/Minister of Interior--Tsvetan Tsvetanov
Minister of Foreign Affairs--Nickolay Mladenov
Minister of Defense--Anyu Angelov
Minister of Economy, Energy, and Tourism--Traicho Traikov
Bulgaria's Commissioner to the EU--Kristalina Georgieva, Commissioner for International Cooperation, Humanitarian Aid, and Crisis Response
Bulgaria maintains an embassy in the United States at 1621 22nd Street, NW, Washington DC 20008 (tel. 202-387-0174; fax: 202-234-7973).
Type: Parliamentary democracy.
Constitution: Adopted July 12, 1991.
Independence: 1908 (from the Ottoman Empire).
Branches: Executive--president (chief of state), prime minister (head of government), Council of Ministers (cabinet). Legislative--unicameral National Assembly or Narodno Subranie--240 seats. Members are elected by popular vote of party/coalition lists of candidates for 4-year terms. As of January 2008, seat allocation is as follows: CfB--82, NMSS--36, MRF--34, UDF--16, DSB--16, BND--16, BPU--13, Ataka--11, and independents--16. Judicial--three-tiered system.
Administrative divisions: 28 provinces plus the capital region of Sofia.
Suffrage: Universal at 18 years of age.
Main political parties: Coalition of Bulgaria or CfB (coalition of parties dominated by BSP); Bulgarian Socialist Party (BSP); National Movement Simeon II (NMSS); Movement for Rights and Freedoms (MRF); United Democratic Forces (UDF); Democrats for Strong Bulgaria (DSB); Bulgarian Peoples Union (BPU); Bulgarian New Democracy or BND (a parliamentary group formed by NMSS defectors); Attack Coalition (ATAKA); and Citizens for the European Development of Bulgaria (GERB). Results from the June 25, 2005 general election are as follows: CfB 31.1%, NMSS 19.9%, MRF 12.7%, ATAKA 8.2%, UDF 7.7%, DSB 6.5%, BPU 5.2%.
Back to Top
Ancient Thrace was partially located on the territory of modern Bulgaria, and Thracian culture provides a wealth of archeological sites within Bulgaria. In the second century A.D., the Bulgars came to Europe from their old homeland, the Kingdom of Balhara situated in the Mount Imeon area (present Hindu Kush in northern Afghanistan).
The first Bulgarian state was established in 635 A.D., located along the north coast of the Black Sea. In 681 A.D. the first Bulgarian state on the territory of modern Bulgaria was founded. This state consisted of a mixture of Slav and Bulgar peoples. In 864, Bulgaria adopted Orthodox Christianity. The First Bulgarian Kingdom, considered to be Bulgaria's "Golden Age," emerged under Tsar Simeon I in 893-927. During this time, Bulgarian art and literature flourished. Followers of Saints Cyril and Methodius are believed to have developed the Cyrillic alphabet in Bulgaria in the early 10th century.
In 1018, the Byzantine Empire conquered Bulgaria. In 1185 the Bulgarians broke free of Byzantine rule and established the Second Bulgarian Kingdom. A number of Bulgaria's famous monasteries were founded during this period. Following the 1242 Mongol invasion, this kingdom began losing territory to its neighbors. Ottoman expansion into the Balkan Peninsula eventually reached Bulgaria, and in 1396 Bulgaria became part of the Ottoman Empire. During the five centuries of Ottoman rule, most of Bulgaria's indigenous cultural centers were destroyed. Several Bulgarian uprisings were brutally suppressed and a great many people fled abroad. The April uprising of 1876, the Russo-Turkish War (1877-78), and the Treaty of San Stefano (March 3, 1878, the date of Bulgaria's national holiday), began Bulgaria's liberation from the Ottoman Empire, but complete independence was not recognized until 1908.
During the first half of the 20th century, Bulgaria was marred by social and political unrest. Bulgaria participated in the First and Second Balkan Wars (1912 and 1913) and sided with the Central Powers, and later the Axis Powers, during the two World Wars. Although allied with Germany during World War II, Bulgaria never declared war on the Soviet Union and never sent troops abroad to fight under Nazi command. Near the end of World War II, Bulgaria changed sides to fight the German army all the way to Austria; 30,000 Bulgarian troops were killed.
Bulgaria had a mixed record during World War II, when it was allied with Nazi Germany under a March 1941 agreement. The Law for the Protection of the Nation, enacted in January 1941, divested Jews of property, livelihood, civil rights, and personal security. Despite a February 1943 agreement requiring Bulgaria to transfer Bulgaria's Jews to Nazi extermination camps in Poland, Bulgaria did not actually deport any Bulgarian Jews or Roma to Nazi concentration camps. Under that agreement, however, Bulgarian forces transferred approximately 11,000 Jews from Bulgarian-occupied territory (Thrace and Macedonia) to Nazi concentration camps. In June 1943 the government "re-settled" Sofia's 25,000 Jews to rural areas. Tsar Boris--supported by the parliament (especially its prominent Deputy Speaker, Dimitar Peshev), the Orthodox Church, and the general public--aided the Jewish community and helped its 50,000 members survive the war, despite harsh conditions. The Bulgarian Jews remained safe, and when they were permitted to emigrate to Israel after the war, most of them did.
King Simeon II assumed control of the throne in 1943 at the age of six following the death of his father Boris III. With the entry of Soviet troops into Bulgaria in September 1944 and the defeat of the Axis Powers in World War II, communism emerged as the dominant political force within Bulgaria. Simeon, who later returned and served as Prime Minister, was forced into exile in 1946 and resided primarily in Madrid, Spain. By 1946, Bulgaria had become a satellite of the Soviet Union, remaining so throughout the Cold War period. Todor Zhivkov, the head of the Bulgarian Communist Party, ruled the country for much of this period. During his 27 years as leader of Bulgaria, democratic opposition was crushed; agriculture was collectivized and industry was nationalized; and the Bulgarian Orthodox Church fell under the control of the state.
In 1989, Zhivkov was removed from power, and democratic change began. The first multi-party elections since World War II were held in 1990. The ruling communist party changed its name to the Bulgarian Socialist Party and won the June 1990 elections. Following a period of social unrest and passage of a new constitution, the first fully democratic parliamentary elections were held in 1991 in which the Union of Democratic Forces won. The first direct presidential elections were held the next year.
As Bulgaria emerged from the throes of communism, it experienced a period of social and economic turmoil that culminated in a severe economic and financial crisis in late 1996-early 1997. With the help of the international community, former Prime Minister Ivan Kostov initiated a series of reforms in 1997 that helped stabilize the country's economy and put Bulgaria on the Euro-Atlantic path. Elections in 2001 ushered in a new government and president. In July 2001, Bulgaria's ex-king Simeon Saxe-Coburg-Gotha became the first former monarch in post-communist Eastern Europe to become Prime Minister. His government continued to pursue Euro-Atlantic integration, democratic reform, and development of a market economy. Bulgaria became a member of the North Atlantic Treaty Organization on March 29, 2004, and a member of the European Union (EU) on January 1, 2007.
Following June 2005 general elections, Sergei Stanishev of the Bulgarian Socialist Party became the new Prime Minister of a coalition government on August 16, 2005. In October 2006, Georgi Parvanov, the former leader of the Bulgarian Socialist Party, became the first Bulgarian president to win re-election. Despite his limited constitutional powers, President Parvanov has played an important role in helping to ensure a consistent, pro-Western foreign policy. The Stanishev government continued Bulgaria's integration with the Euro-Atlantic world and its close partnership with the United States. Bulgaria has attracted large amounts of American and European investment, and is an active partner in coalition operations in Afghanistan as well as in UN-led peacekeeping operations in the Balkans.
In the July 2009 general elections, Bulgarian voters punished the Socialist-led government for corruption scandals and frozen EU funds. The center-right GERB party (Citizens for European Development of Bulgaria) took 116 of 240 seats in parliament, and its leader (and former Sofia mayor) Boyko Borissov became the Prime Minister. Borissov formed a minority government supported by the Blue Coalition, Ataka, and RZS. The government's priorities include: promoting economic stability, unblocking the frozen EU funds, and fighting corruption. According to the latest opinion polls, Borissov's government is the most popular government since the beginning of the transition in 1989.
Partly due to its mountainous terrain, Bulgaria's population density is one of the lowest in Eastern Europe, about 81 persons per square kilometer (207/sq. mi.). About two-thirds of the people live in urban areas, compared to one-third in 1956. Sofia, the capital, is the largest city. Other major cities are Plovdiv (site of a major annual international trade fair), the Black Sea cities of Varna and Burgas, and Ruse on the Danube River. The principal religious organization is the Bulgarian Orthodox Church, to which most Bulgarians belong. Other religions include Islam, Roman Catholicism, Protestantism, and Judaism. Before 1989, religious activity was discouraged by the Bulgarian Communist Party, but its new leadership has pledged to support the rights of all citizens to worship freely.
Bulgarian is the primary language spoken in the country, although some secondary languages closely correspond to ethnic divisions. The most important of these is Turkish, which is widely spoken by the Turkish minority. From 1984-89, the government, in effect, banned the use of the Turkish language in public. The new leadership has repudiated that policy. Russian, which shares the Cyrillic alphabet and many words with Bulgarian, is widely understood.
Education is free and compulsory to age 15. Scientific, technical, and vocational training is stressed.
Population (July 2009 est.): 7,204,687.
Population growth rate (2009 est.): -0.79%.
Ethnic groups (2001): Bulgarian 83.94%, Turkish 9.42%, Roma 4.68%, and other 2% (including Macedonian, Armenian, Tatar).
Religions (2001): Bulgarian Orthodox 82.6%, Muslim 12.2%, Roman Catholic 0.6%, Protestant 0.5%, others.
Language: Bulgarian 84.5%, other 15.5%.
Health: Life expectancy (2009 est.)--male: 69.48 years; female: 76.91 years. Infant mortality rate (2009 est.)--17.87 deaths/1,000 live births.
Work force: 2.67 million (2008 est.). Agriculture--7.5%; industry--35.5%, services--57% (2007 est.).
The role of cement and concrete in the circular economy
Cement and concrete play a central role in the circular economy. But in order to fully unleash the potential of these two sectors, which are essential to society, we need to define, develop and implement the right policy framework. Perhaps I should start by highlighting why we are essential, as this point often seems to be overlooked when discussing policies, regulation and legislation. Cement and concrete ensure that we have homes and offices, schools and hospitals, as well as transport infrastructure. Not only that, we are a European industry – our entire life cycle is based in Europe and we hope to stay that way!
The question is: what is our role in the circular economy, and what do we realistically have to offer?
- Raw materials: Did you know that the raw materials that we use (limestone for cement and aggregates for concrete) are abundantly available in Europe? This is important, as we are not extracting and using scarce raw materials.
- The cement manufacturing process: It probably comes as no surprise to hear that we fire our kilns primarily with coal and petcoke. What you may be less aware of is that we can replace part of these traditional fuels with fuels and biomass derived from waste, recycling the resulting ash back into our process. This combined recovery of energy and recycling of material from waste is an operation we call “co-processing”, and the European cement industry plays a pioneering role in it. It reduces our dependence on fossil fuels.
- Concrete: Concrete is in fact made up of cement, water and aggregates (gravel, crushed stone, sand, recycled concrete). And guess what: concrete is 100% recyclable and can go back into concrete as a recycled aggregate or into other applications (e.g. road base). Another interesting fact is that concrete is such a durable material that structures can last for decades, or even centuries! We have all heard of the Channel Tunnel, but what you probably did not know is that the concrete used to build it is contractually guaranteed to last at least 120 years!
But which policies should be considered? In terms of the recycling of construction and demolition waste, I have outlined below a few of our thoughts:
- According to the Commission’s figures, approximately one third of all waste in Europe comes from construction and demolition. Only one third of that amount is recycled, and it’s not technical difficulties that prevent a higher recycling rate: it’s market realities. Proof of that is that recycling rates differ greatly between European Member States, with a 95% recovery rate in The Netherlands, for instance, against a European average varying between 30% and 60%. We are not alone in tackling this challenge. We will call upon the other material producers in the construction industry to work together to improve the collection and sorting of demolition waste and to create an economically viable system encouraging its use.
- Through The Concrete Initiative, we have tried to focus on each of the three pillars of sustainability and how concrete can contribute to each of them. When giving equal weight to each of these three pillars, we need to carefully assess requirements such as minimum recycled content, which has sometimes been suggested: in imposing such a requirement, it is crucial to look beyond the product and assess the other economic costs or environmental impacts that can be generated. By way of an example, it would not make sense to transport concrete over long distances in order for it to be reused in a building when there is the option of recycling it locally in a different application (e.g. road base).
But there is nevertheless one important point to be borne in mind: even if we were to recycle all of the concrete construction and demolition waste produced annually as a concrete aggregate, we would only meet between 10% and 30% of our aggregate needs. As a result, our industry will always be in need of virgin materials. As indicated, however, the durability and longevity of buildings are among the factors that reduce the need for virgin raw materials. A last plea is therefore for policymakers to reflect on how to better value and recognise the durability of products in regulation. It may be unexpected, but that is also a factor to be taken into account in the reflection on the circular economy.
Glossary of Human Resources Management and Employee Benefit Terms
Social Security Tax is a general term describing taxes paid for a variety of old-age and retirement benefits. There are two types of social security taxes: OASDI (Old-Age, Survivors, and Disability Insurance) and Medicare (HI, hospital insurance). In 1991 the IRS required employers to report the two taxes separately. The 1994 percentage for OASDI is 6.2% and for Medicare it is 1.45%. The total deduction for the two taxes is 7.65%. Employers must usually match this deduction.
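As a rough illustration of how these two rates combine on a paycheck, the sketch below applies them to a hypothetical gross wage. This is a simplified example only: the gross pay figure is invented, and real withholding also involves an annual OASDI wage-base cap, which is deliberately omitted here.

```python
# Minimal sketch of the two social security deductions described above.
# Rates are those cited in the entry; real payroll systems also apply an
# annual OASDI wage-base cap, omitted here for simplicity.

OASDI_RATE = 0.062      # Old-Age, Survivors, and Disability Insurance
MEDICARE_RATE = 0.0145  # Medicare hospital insurance (HI)

def social_security_withholding(gross_pay: float) -> dict:
    """Return the employee-side OASDI and Medicare deductions for one paycheck."""
    oasdi = gross_pay * OASDI_RATE
    medicare = gross_pay * MEDICARE_RATE
    return {
        "oasdi": round(oasdi, 2),
        "medicare": round(medicare, 2),
        "total_employee": round(oasdi + medicare, 2),  # 7.65% of gross
        "employer_match": round(oasdi + medicare, 2),  # employer usually matches
    }

# Hypothetical example: a $2,000 gross paycheck
print(social_security_withholding(2000.0))
# {'oasdi': 124.0, 'medicare': 29.0, 'total_employee': 153.0, 'employer_match': 153.0}
```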
1. Efficiency Gains? New Federal Energy Efficiency Regulations Come Into Force
The federal Energy Efficiency Regulations, 2016 ("2016 Regulations") came into force on June 28, 2017, replacing the previous Energy Efficiency Regulations ("Regulations"). The 2016 Regulations increase the minimum energy performance standards in 20 categories of residential and commercial products, requiring manufacturers to comply or face sanctions. The 2016 Regulations were also rewritten in a clearer, more structured manner, and do not contain references to "obsolete and out-of-date"1 standards, as the Regulations had done. Reporting requirements were also modified with respect to some product categories, recognizing that exporters to new markets and to the United States should not be burdened with too much additional compliance where the target market may not have the same stringent regulatory standards.
The 2016 Regulations can be accessed here.
2. Proper Assessment? Update on Federal Policy for Natural Resource Environmental Assessments
On June 29, 2017, the federal government released a discussion paper entitled "Environmental and Regulatory Reviews" ("Discussion Paper") outlining "potential reforms being considered to rebuild trust and modernize Canada's environmental and regulatory processes."2The paper was informed by extensive public consultations, expert panel reports and parliamentary studies over the preceding 12 months and is expected to bring broad changes to the federal environmental assessment and regulatory regime, including changes to the Canadian Environmental Assessment Act, 2012, National Energy Board Act, Fisheries Act, and Navigation Protection Act.3 Public comments on the Discussion Paper were invited until August 28, 2017. An electronic version of the Discussion Paper can be found here.
The Discussion Paper reduced the scope of changes to the project approval process recommended by previous reports, offering a more balanced approach to statutory and policy changes required to update the federal environmental and regulatory framework. Nonetheless, the proposal is expected to add substantial complication, time, and cost to regulatory review of projects.
As a part of the new process, the federal government will:
- establish a single government agency responsible for assessments of federally designated projects. The review would include social, health and economic aspects of a project in addition to environmental impacts;
- require an early planning phase to foster greater collaboration and engagements between interested parties;
- focus on consultation with Indigenous peoples based on recognition of Indigenous rights and interests from the outset;
- emphasize and ensure co-operation with jurisdictions including Indigenous governments.
These are important changes for all players in the energy sectors and we will continue to follow these developments in 2018 as the federal government drafts the proposed changes.
3. Mid-century modern? The National Energy Board Modernization Report Looks Ahead
The National Energy Board ("NEB") Modernization Report (the "Report") was released in May 2017 and was entitled "Forward, Together: Enabling Canada's Clean, Safe and Secure Energy Future."4 The Report was prepared by an expert panel that was tasked with "analysing the structure, role, and mandate of today's National Energy Board, and coming up with a set of recommendations to modernize the organization, and restore public trust in the institution."5 Modernizing the NEB has been said to be a part of the current government's review of Canada's environmental assessment and regulatory processes announced in June 2016.
The Report outlines what the expert panel heard from a vast array of individuals, organizations and agencies, making a set of recommendations that are driven by six key themes. Although the Report made numerous recommendations, the key changes proposed by the expert panel were as follows:6
- alignment of the role of national energy regulator with national policy on energy and climate;
- replacement of the NEB by a new agency called Canadian Energy Transmission Commission ("CETC");
- creation of a new Canadian Energy Information Agency;
- establishing a two-step decision-making project for new energy transmission projects where the first step would be to assess whether a proposed project is in the national interest, and the second step would provide for a detailed regulatory approval under the CETC and the Canadian Environment Assessment Agency;
- creation of an Indigenous Major Projects Office;
- creation of Public Intervenor Office;
- creation of Regional Multi-Stakeholder Committees;
- provision of an enhanced role for municipalities in proceedings;
- creation of a Landowners Ombudsman;
- establishment of stronger standards for land agents and review of compensation rules for infrastructure rights of way.
Further, the Report recommends that the office of the CETC's board of directors be located in Ottawa rather than in Calgary. It has been suggested, however, that the majority of the employees of the recommended organization would stay in Calgary.7 The Discussion Paper released by the Government of Canada on June 29, 2017 does not appear to endorse all of the recommendations of the Report and, as a result, it will be interesting to see how many of the recommendations made in the Report are actually implemented.8 We will closely monitor the progress in this respect in 2018.
4. Going Long? Ontario's Long Term Energy Plan Released
Ontario's 2017 Long-Term Energy Plan ("LTEP" or "Plan"), entitled Delivering Fairness and Choice, provides a roadmap for the province's energy system over the next 20 years and, according to the Ontario government, focuses on the affordability and reliability of a clean energy supply, giving consumers more choice in the way they use energy while at the same time offering ways to conserve it.9
The LTEP forecasts adequate electricity supply in the near future, but predicts a shortfall beginning in the early-to-mid 2020s as demand continues to rise due to electric vehicles and transit systems. It also contemplates market renewal, which aims to move away from long-term electricity contracts and towards more competitive mechanisms.10
Further, the Plan emphasizes consumer education, protection and choice in the energy sector. It proposes to redesign electricity bills to make them easier to read and understand, and expands the Green Button Initiative, which gives consumers the ability to access and manage their energy and water data for conservation and management purposes.11
Lastly, the LTEP contemplates unprecedented levels of First Nations and Metis involvement in the energy sector. It lays out plans for the potential connection of as many as 21 First Nation communities to Ontario's electricity grid and commits to consultations on how to improve the Independent Electricity System Operator's (IESO) Energy Partnership Program, which connects First Nations and Metis communities with partner organizations to build out renewable energy and transmission projects.12
The IESO and Ontario Energy Board have been directed by the Minister of Energy to execute the LTEP, starting with preparing and submitting implementation plans for review by January 31, 2018.13
5. Plein d'action? Electrifying Québec with the 2017-2020 Action Plan
As a part of the 2017-2020 Action Plan ("Action Plan"), the Québec Minister of Energy and Natural Resources announced the jurisdiction's first volumetric requirements for renewable fuels, such as ethanol and biodiesel. The blending requirement will start at 5% for gasoline and 2% for diesel, with these percentages to be escalated after 2020. The Action Plan forms the first of three documents seeking to implement Québec's 2030 Energy Policy, aimed at reducing the province's dependence on fossil fuels by 40% between now and 2030.14
The Action Plan sets out 42 measures, backed by $1.5 Billion in public investment, providing for concrete actions with the following objectives:
- increase the number of electric vehicles in Québec's fleet;
- address climate change and in reduction of greenhouse gas emissions;
- reduce oil dependence and therefore improve Québec's trade balance; and
- contribute to Québec's economic development by using the electric energy available in Québec.15
The Action Plan emphasizes Hydro-Québec's role as Québec's leading electricity producer: for instance, Hydro-Québec is instructed to develop a solar energy park as soon as practicable.16 Further, the Government of Québec aims to increase the number of plug-in electric and hybrid vehicles in the province's fleet to 100,000 by 2020.17
The Action Plan proposes incentives to several industry sectors to adopt the recommended measures. Trucking companies will receive grants if they reduce their fleet's fuel consumption, while transportation and mining companies will be eligible for funding to convert vehicles to electricity, natural gas or propane.18
6. Price, point? The Implementation of Carbon Pricing in Alberta
The Alberta government released its Climate Leadership Plan in November 2015 ("Climate Leadership Plan"), outlining the government's plan for combatting climate change in Alberta. Implementation of a new carbon price on greenhouse gas ("GHG") emissions was one of the strategies contemplated under the Climate Leadership Plan.19 The enabling legislation for the plan, The Climate Leadership Implementation Act, received royal assent in June 2016.
Alberta's carbon levy took effect on January 1, 2017 and is expected to generate $3.9 billion in gross revenue over the next three years, more than half of which will be recycled through the small business tax cut and household rebates. The remainder is to be invested in programs that reduce emissions and diversify the economy. The levy is paid by consumers of fuel in Alberta, with rates determined by the emissions released when each fuel is combusted. Some specific fuels and uses are exempt from the levy, and consumers do not pay the levy on electricity, though industrial consumers do.
Alberta will transition from the current Specified Gas Emitters Regulation in January 2018. This system uses an output-based emission allocations approach for emissions-intensive industries. Any facility that emits 100,000 tonnes or more of greenhouse gases will be included in the new greenhouse gas management system. Many types of facilities fall under this system, not just oil and gas production and processing; coal- and gas-fired electricity generation will also be affected. Under the output-based allocation system, facilities will be allowed to emit a certain amount of greenhouse gases, free of charge from the carbon levy. This approach in principle protects industries from competitiveness impacts that could shift production to other jurisdictions. These "free" emissions will be determined based on a product-specific emissions benchmark. Benchmarks will be set relative to high-performing industry peers or competitors who produce the same or similar products.
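To make the output-based allocation mechanics concrete, here is a minimal sketch of the arithmetic described above: a facility's free allocation is its production multiplied by the product benchmark, and the carbon price applies only to emissions above that allocation. All of the input figures below (production, benchmark, emissions, and the carbon price) are illustrative assumptions, not actual Alberta parameters.

```python
# Illustrative sketch of an output-based allocation (OBA) system.
# All inputs are hypothetical; the structure mirrors the description above:
# free allocation = production x benchmark, and the carbon price applies
# only to emissions in excess of the allocation.

def oba_compliance_cost(production_units: float,
                        benchmark_t_per_unit: float,
                        actual_emissions_t: float,
                        carbon_price_per_t: float) -> float:
    """Cost (or credit, if negative) of emissions above the free allocation."""
    free_allocation_t = production_units * benchmark_t_per_unit
    net_emissions_t = actual_emissions_t - free_allocation_t
    return net_emissions_t * carbon_price_per_t

# Hypothetical facility: 1,000,000 units of output, a benchmark of
# 0.10 t CO2e per unit, 120,000 t of actual emissions, $30/t carbon price.
cost = oba_compliance_cost(1_000_000, 0.10, 120_000, 30.0)
print(f"Net compliance cost: ${cost:,.0f}")  # $600,000 on the 20,000 t overage
```

A facility that beats its benchmark would see a negative value, i.e., it would be below its free allocation rather than paying into the system.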
According to the Government of Alberta, all revenue from the levy will be reinvested in efforts to reduce emissions; rebates to Albertans to offset costs increases; renewable energy projects and green infrastructure; and research and innovation. Although the cost of almost everything is expected to increase as a result of the levy, the government insists that the increase would be relatively small for consumers.20
A Climate Leadership Plan progress report, published in December 2017, provides an update on the actions taken, and the progress made, towards achieving the stipulated goals.
7. Well Done? Alberta Energy Regulator Orphan Wells Policy
As the Redwater decision by the Alberta Court of Appeal makes its way to the Supreme Court of Canada, the Alberta Energy Regulator ("AER") is making policy changes to deal with the implications of the decision. In Redwater, the Court of Appeal held that trustees in bankruptcy have the right to disclaim uneconomic assets of a bankrupt producer, with the uneconomic assets becoming the responsibility of the Orphan Well Association, rather than having those costs borne by the estate. Effectively, this allows the creditors to realize on their loans without having to make allowances for uneconomic assets.
Directive 067, released in December 2017, provides for greater discretion to the AER in allowing a licensee to acquire or maintain a licence. Changes brought about by the directive include a requirement for additional information at the time of application, increased discretion to reject an application where an applicant poses a risk, and requirements for keeping corporate information up to date.21
Licence eligibility types have been simplified to the following three: no eligibility, general eligibility, and limited eligibility. Further, all parties with current eligibility under Directive 067 are required to ensure that the AER has accurate information on file, and notice of material changes must be provided within 30 days.22 BLG has also confirmed that updated licensee information must be filed with the AER by January 31, 2018.
By making these changes, the AER is attempting to close what it considers a loophole in policy that allows directors and officers of oil and gas companies to use bankruptcy as an excuse to walk away from the wells that they are responsible for cleaning up. Further changes to policy regarding acquiring, maintaining, and abandoning oil and gas wells are expected in the coming months as the province continues its review of how orphan wells are regulated under the current system.23
In 2018, BLG will continue to be "watching the Directives" and how they may respond to the Supreme Court of Canada decision in Redwater.
8. Transfer, Payment? AER Changes Process to Transfer Application Decision
Pursuant to AER Bulletin 2017-13, the decision process for applications to transfer AER approvals has changed. Now, an application for transfer is subject to a standardized review period of 30 days before a decision is issued. The AER is encouraging applicants to submit all related applications and notifications for transfer at the same time. In accordance with section 30(2) of the Responsible Energy Development Act, the AER intends to combine all related transfer applications, publish them on its website, and review them concurrently regardless of whether they are received together or separately.24
The review period of 30 days ensures that the period for filing a statement of concern has lapsed before a decision is issued. However, all applications will continue to be published on public notice of application page on AER's website.25
In accordance with the Integrated Decision Approach advocated by the AER, the change to the decision process for transfer applications ensures that decisions on related applications are made concurrently, enabling the AER to manage approvals and issue a decision on related applications at the same time. This helps to make the decision-making process consistent and transparent, allowing stakeholder input on related applications at one time rather than in individual pieces.26
9. Clean and Clear? Alberta Clean Power Update
In 2017, the Government of Alberta started to implement the Alberta Electric System Operator's ("AESO") recommendation to transition from an energy market to a new framework that includes an energy market and a capacity market. In an energy-only market, generators are paid for the electricity they produce based solely on the wholesale price of electricity, which fluctuates. These companies decide on the type of generation they produce and on the location of facilities. In a capacity market, private power generators are paid through a mix of competitively auctioned contracts which pay their fixed capital costs and revenue from the spot market.27
The AESO recommended a capacity market for the following reasons: it ensures reliability, increases stability of prices, provides greater revenue certainty for generators, maintains competitive market forces and drives innovation and cost discipline, and supports policy discretion and is adaptable for the future. The AESO is responsible for designing and implementing the capacity market and the process is expected to take three years. A capacity market is anticipated to be in place by 2021.28
This transition is expected to support the Renewable Electricity Program's (REP) plan of phasing out emissions from coal-fired generation by 2030. The REP is intended to encourage the development of 5,000 MW of renewable electricity generation capacity connected to the Alberta grid between now and 2030. REP Round 1 started early in 2017: 12 proponents submitted bid prices for 26 projects, and four projects were selected, from proponents including Edmonton-based Capital Power and two large international companies, EDP Renewables Canada Ltd. and Enel Green Power Canada Inc. BLG will continue to closely monitor how the next stages under the REP unfold in 2018.
The capacity market is expected to be implemented by 2021, with the AESO estimating that it would take an additional two years to complete the design of the market and another year to finalize legal contracts and set up the procurement process. As a result, the first capacity contracts are expected to be formed at least three years after the design process begins. This means that capacity procured through the initial auction would likely be in service in 2024 at the earliest.29
10. Red, White and Cruise? Impact of U.S. Energy Policy on Canada
Early in 2017, President Trump issued an executive order ("Order") inviting TransCanada Keystone Pipeline LP to re-submit its application to the State Department for a Presidential Permit for the construction and operation of the Keystone XL Pipeline ("Keystone"). Also included in the Order was a direction to the State Department to expeditiously review the application and reach a final determination within 60 days of TransCanada's application.
Although the invitation to apply for Keystone is encouraging for Canada, it remains a fact that President Trump has criticized Canadian energy policies on multiple occasions. Further, with the current Canadian emphasis on carbon pricing and on phasing out energy policies that add to emissions of greenhouse gases, there is a valid concern that energy investment in Canada will become unattractive when compared to its counterpart south of the border, especially considering President Trump's position on climate change and clean energy. It has been estimated that carbon pricing would lead to as much as a $20 billion to $25 billion increase in energy costs for Canadians, while the Americans face no corresponding increase. Almost certainly, this will result in a competitive advantage to the United States as far as energy investment is concerned.30
With carbon pricing already in place in Alberta and a national policy on its way, Canada and the United States have taken completely different routes in their approach to climate change and commitment to clean energy. While clean technology remains a priority for Canada, President Trump is committed to deregulating the energy industry and reviving coal-based energy production.
In 2018, BLG will be closely monitoring energy policy changes in the U.S., including what, if any, changes to the sector arise through the potential renegotiation of, or U.S. withdrawal from, NAFTA. NAFTA gave the United States secure access to Canadian energy when it incorporated the provisions of the Canada-U.S. Free Trade Agreement. Changes to, or dissolution of, NAFTA could dramatically impact what has become an integrated North American energy market under the trade agreement.
1 Government of Canada, Energy Efficiency Regulations, 2016: Regulatory Impact Analysis Statement.
2 Government of Canada, Discussion Paper Released on Review of Environmental and Regulatory Processes.
3 Gilmour et al., Canadian Government's Proposal to Reform Canada's Environmental Assessment and Regulatory Regime, Bennett Jones (July 4, 2017); see also Olszynski et al., Sustainability must be at the core of the government's approach to assessing and approving economic projects in Canada, Policy Options (September 5, 2017).
4 Natural Resources Canada, Forward, Together - Enabling Canada's Clean, Safe and Secure Energy Future, Report of the Expert Panel on the Modernization of the National Energy Board, May 2017, and Volume II, Annexes [NEB Modernization Report].
5 Ibid at 1.
6 Nigel Bankes, The NEB Modernization Report, University of Calgary Faculty of Law - ABlawg (June 14, 2017).
7 CBC, Scrap NEB and replace it with 2 separate agencies, expert panel recommends (May 15, 2017).
8 Nigel Bankes, The Report of the Expert Panel on the Modernization of the National Energy Board and the Response of the Government of Canada, Energy Regulation Quarterly (September 2017).
9 Ministry of Energy, 2017 Long-Term Energy Plan (October 26, 2017).
10 Wong et al., The electrification of the economy: Ontario's 2017 Long-Term Energy Plan, Osler (November 15, 2017); see also Ontario Chamber of Commerce, Rapid Policy Update: Long-Term Energy Plan (October 26, 2017).
14 Renewable Industries Canada, Statement regarding the Québec Government's 2017-2020 Action Plan under the 2030 Energy Policy (June 26, 2017).
15 Karine Seguin, Government of Québec: New Action Plan for Electrification in Transport, PIT Group.
16 Statement regarding the Québec Government's 2017-2020 Action Plan under the 2030 Energy Policy, supra note 14.
17 Seguin, supra note 15.
18 Mathieu LeBlanc and Martin Thiboutot, Québec Releases Energy Policy's 2017-2020 Action Plan, Canadian Energy Perspectives (July 7, 2017).
19 Astrid Kalkbrenner, Climate Change Legal Roadmap: Carbon Pricing Recommendations for Alberta, Environmental Law Centre (August 23, 2016).
20 Government of Alberta, Carbon Levy and Rebates.
21 Alberta Energy Regulator, New Edition of Directive 067: Eligibility Requirements for Acquiring and Holding Energy Licences and Approvals (December 6, 2017).
23 Geoffrey Morgan, Alberta to crack down on oil executives that dumped orphan wells on taxpayers, Financial Post (December 6, 2017).
24 Alberta Energy Regulator, Bulletin 2017-13: Changes to Process for Transfer Application Decisions (July 24, 2017).
27 Government of Alberta, Electricity Capacity Market.
28 Alberta Electric System Operator (AESO), Capacity market transition.
29 Kimberly Howard and Gordon Nettleton, Alberta's Evolving Electricity Market - An Update on Recent Changes and Developments, 5:2 Energy Regulation Quarterly (June 2017).
30 Yukon News, Trump's Energy Policies May Threaten Canadian Business (December 30, 2016).
Who’s Driving the Financing of a ‘Green Future’?
The transition to a lower-carbon economy has already begun and will require a great deal of financing. Collectively known as “green finance,” these efforts are instrumental in creating carbon-reduction strategies, achieving sustainable development goals and building a climate-resilient future.
The question is: Who will drive green investment into the financial mainstream—investors or regulators?
The transition to a lower-carbon economy will involve various far-reaching changes, and no single definition for green finance holds across all countries and regions. Nonetheless, the common theme of green finance refers to investments that promote a sustainable, lower-carbon and climate-resilient economy.
Wide-Ranging Spectrum of Green Financing Tools
More measures related to green finance were introduced between June 2016 and June 2017 than in any one-year period since 2000. These included implementing strategic policy signals and frameworks, supporting the development of local green-bond markets, and promoting international collaboration to facilitate cross-border green-bonds investments. The result has been increased flows of green finance, most notably in the issuance of green bonds, which doubled to $81 billion in 2016.
Though green bonds are the most common instruments, green financing principles can be applied across various financing and de-risking instruments. This includes traditional debt and equity and other tools along that continuum, such as credit enhancements.
[Figure: Spectrum of selected green financing products available]
While green bonds are most commonly associated with green infrastructure financing, they may appear unattractive due to the common misconception that green infrastructure projects are less “bankable.” This is one of the factors leading to the so-called “green financing gap,” estimated at $2.5 trillion to $4.8 trillion. The gap is largely attributable to inadequate risk-adjusted returns, one of the key barriers facing private-sector financing of sustainable infrastructure, described in a recent report.
This gap can be bridged via credit enhancements from de-risking instruments such as insurance and derivatives, which remove some of the inherent risks that otherwise make an investment unbankable. With adequate credit wraps, green investments can be treated as de-risked products with higher returns and longer-term financial stability, along with eligibility for longer tenors.
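A stylized way to see why a credit wrap changes the picture: suppose a project bond carries some probability of default and some loss severity, and a guarantee absorbs part of any loss. The figures in the sketch below are invented purely for illustration; the point is only that a partial guarantee lowers expected loss, which in turn lowers the yield an investor needs to clear a risk-adjusted hurdle.

```python
# Stylized illustration of a credit enhancement on a project bond.
# All numbers are invented for illustration only.

def expected_loss(prob_default: float, loss_given_default: float,
                  guaranteed_share: float = 0.0) -> float:
    """Expected loss per $1 of principal, with a guarantee covering
    `guaranteed_share` of any loss."""
    return prob_default * loss_given_default * (1.0 - guaranteed_share)

p_default, lgd = 0.04, 0.60  # 4% default probability, 60% loss severity
unwrapped = expected_loss(p_default, lgd)                      # 0.024 -> 240 bps
wrapped = expected_loss(p_default, lgd, guaranteed_share=0.5)  # 0.012 -> 120 bps

print(f"Expected loss, unwrapped: {unwrapped:.1%}")          # 2.4%
print(f"Expected loss, with 50% guarantee: {wrapped:.1%}")   # 1.2%
```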
As such, green financing instruments should be sufficiently broad so as to capture all the objectives of the respective green finance provisions. At the same time, however, the designation of green finance needs to be defined more narrowly so as to make the emerging discipline credible and actionable. Unifying criteria and standards is necessary to specify the scope and degree of “green” for investors and regulators, given the various initiatives across regions and countries to define environmentally friendly financial instruments and investment principles.
Green Finance and Investors: Who’s Driving Whom?
As it concerns both the direct and indirect risks of the transition to a lower-carbon economy—as well as the various associated opportunities—green finance has lately become a popular topic. Investors are recognizing the increasing number of green investment opportunities, along with new markets to penetrate and consumer bases to attract. Indeed, global sustainable investment stood at $23 trillion in 2016, a 25 percent increase from 2014 with a compounded annualized growth rate of 12 percent.
Some argue that investors are spearheading green finance. Mandated climate disclosures— compulsory reporting of how companies manage climate-related risks—represent a major step toward mainstreaming green finance. This will promote transparency and help investors identify climate-related risks and opportunities.
For example, in March 2017, global investment institution BlackRock listed climate risk disclosure as one of its key engagement themes in its investment priorities. Specifically, the firm will be asking companies to demonstrate how climate risks might affect their business and what management's approach will be to adapting to and mitigating these risks.
Shareholders increasingly want to know what companies are doing to transform their operations and products to remain competitive during the transition to a lower-carbon economy. In 2017, a leading energy company was pressured by investors to report climate-related impacts on its business under a two-degree scenario. The move was a strong signal to the market that climate change is now considered a significant financial risk.
Getting Past the Tipping Point for Making Green Finance Mainstream
Investors may be driving the green finance initiative, but they cannot succeed without the support of other key stakeholders, according to a recent study by Marsh McLennan Companies’ Global Risk Center. Beyond institutional investors, there are markets in which regulators and policymakers appear to be more aggressive in leading the transition.
To better facilitate the development of green finance, the Luxembourg Green Exchange, in September 2016, opened a segment dedicated to sustainable and social (S&S) project bonds, a sector valued at over $23 trillion. It has increased the visibility of S&S projects and expedited their financing.
Meanwhile, in June 2017, the Securities and Exchange Board of India finalized the disclosure requirements for the issuance and listing of green debt securities, which will raise funds from capital markets for green investments in climate change adaptation, and more specifically for renewables and clean transportation.
The French Energy Transition for Green Growth Law was enacted in January 2016, mandating that institutional investors and fund managers disclose in their annual reports how climate change considerations have been incorporated into their investment and risk management policies.
China has also been ambitious in launching pilot zones to focus on different aspects of green financing in the provinces of Guangdong, Guizhou, Jiangxi, Zhejiang and Xinjiang. The program encourages banks to explore new financing mechanisms and incentivizes the financial sector to accelerate advancements of green insurance and credit enhancement instruments in these provinces.
Investors are undoubtedly the key driving forces, but to drive further demand at this nascent stage, government intervention is likely necessary. Regulators and/or policymakers might need to step in with subsidies, risk-mitigation mechanisms, and guarantee mechanisms to boost green investments.
This year’s G20 summit in Germany concluded that green finance will be key in addressing a host of global challenges. This echoes the call at the previous year’s summit to scale green financing for driving environmentally sustainable growth.
2017 has seen significant progress by world leaders, national initiatives and investors alike in fostering sustainable global growth through green finance. The G20, the UN Environment Program and the Monetary Authority of Singapore sought to build awareness for green finance and maintain momentum in this regard: the G20 Green Finance Conference was held in Singapore just last week.
Ensuring that industry views are heard, such events promote the development of a green financial system, workable from a capital markets perspective, and aligned with the national and international commitments of the Paris Agreement. This is not only for better protection of the planet, but also to provide businesses and corporations the opportunities that green finance has to offer in developing a more sustainable business environment and transitioning into a lower-carbon future economy.
Report reveals action on LNG pollution would unlock thousands of new jobs in WA
Clean State WA has released independent analysis from RepuTex Energy, revealing around 4,000 new jobs would be created in land management, renewable energy, and other industries if the state government reinstated and strengthened conditions requiring WA’s largest polluters to offset greenhouse gas emissions.
With a focus on offsetting growing pollution from liquefied natural gas (LNG) projects in WA, the report reveals far reaching environmental and economic benefits resulting from the development of a local carbon offsetting industry.
Conservation Council of WA (CCWA) Director Piers Verstegen said the report confirmed that state action to control pollution from the LNG industry was good for the economy, and could help kick start significant new industries in WA.
“Earlier this year, it was revealed that rapidly growing carbon pollution from LNG production in WA was putting Australia’s Paris Agreement targets at risk.
“This new research confirms that action to control that carbon pollution at a state level would result in significant new investment and employment opportunities in carbon farming, tree planting, renewable energy, land management, and other clean industries.
“We are particularly pleased to see the greatest benefits would be felt in regional WA, with significant opportunities for Indigenous employment in improved land management and carbon farming across the state’s vast rangelands.”
LNG projects are Western Australia's largest and fastest growing source of carbon pollution, with LNG-related carbon emissions rising to over 30 million tonnes per year as Chevron's giant Wheatstone and Gorgon projects have come online in the last 12 months.
Measures to control pollution from these facilities have either proven ineffective, or were removed under the Barnett Government. WA Environment Minister Stephen Dawson has ordered a review into carbon pollution controls on these projects, and it is expected that the EPA will provide advice to government in early 2019.
RepuTex Energy's analysis suggests that if LNG production facilities are required to offset their direct emissions, WA has abundant potential to meet modelled demand. Around 80 million tonnes of emissions reduction opportunities were identified across all possible activities.
Mr Verstegen said, “With LNG companies setting their own pollution limits under the Morrison Government’s climate policy, there is no indication that the rising pollution from LNG will be addressed by the Commonwealth any time soon.
“This report confirms that if the State Government acts now, we can capture substantial employment and economic benefits for WA, however these benefits will not be guaranteed if we wait for the Commonwealth Government to act.
“While it would deliver real benefits and jobs, the cost of offsetting carbon pollution from LNG developments represents only a few percent of the profits the Chevron and other LNG companies are generating from LNG production in WA.
“As LNG producers like Chevron pay little tax and no royalties, requiring them to offset their carbon pollution is a way to capture greater benefits from these projects for our economy, while at the same time kick starting new clean industries and helping drive the transition to renewable energy in our state.”
Even with the best curriculum, a teacher occasionally needs to shake things up. FTE Hot Topic lessons are timely discussion lessons that help students apply economics reasoning to stories from current events. FTE’s 5 Economics Reasoning Propositions provide the intellectual framework for each Hot Topic. By using this framework, teachers can give students the tools they need to build an understanding of their world and the current events happening around them.
Each Hot Topic lesson comes with a teacher guide and includes answer guides and supplement resources. Each lesson utilizes FTE resources, articles or video clips, and other media in an easy to use format.
Although some topics may no longer seem “hot”, the application of economics reasoning is timeless.
Should you have suggestions for topics, email FTE at [email protected].
The Contributions of Insulation to the U.S. Economy in 2016
Economics & Statistics Department American Chemistry Council February 2017
- The use of insulation in U.S. homes and businesses saves energy, putting more money in the pockets of households and business owners. In addition, by consuming less energy, use of insulation directly reduces greenhouse gas emissions.
- Beyond the many benefits of the use of insulation, the manufacture, distribution, and installation of insulation generates nearly 400,000 jobs in the U.S. and more than $20 billion in payrolls that support families and local communities around the country.
- An $11.7 billion business in 2016, insulation manufacturing in the U.S. directly employs more than 33,000 people across 42 states.
- Indirectly, through its purchases of supplies, raw materials, equipment, and services, insulation manufacturing supports an additional 42,500 jobs in supply-chain industries. Through the household spending of the wages and salaries paid to workers in insulation manufacturing and their suppliers an additional 49,000 payroll-induced jobs are supported.
- Thus, the economic activity from U.S. insulation manufacturing supports nearly 125,000 jobs. These jobs generate payrolls of $7.5 billion.
- In addition, the combined economic activity supported by insulation manufacturing contributes $1.1 billion to state and local governments and $1.9 billion in federal tax revenues.
CONTENTS
THE INSULATION INDUSTRY IN THE U.S.
ENVIRONMENTAL AND ECONOMIC BENEFITS OF INSULATION PRODUCTS
ECONOMIC SNAPSHOT OF THE INSULATION INDUSTRY
ECONOMIC CONTRIBUTIONS OF THE U.S. INSULATION INDUSTRY
Upstream Economic Impact
Downstream Economic Impact
APPENDIX – INSULATION JOBS IN THE STATES
NOTES ON METHODOLOGY AND SOURCES
ECONOMICS AND STATISTICS DEPARTMENT
Insulation is installed in homes and businesses around the country to keep hot things hot and cold things cold. Insulation comes in many forms and is made of several different materials, depending on what is being insulated, where it is located, and other factors.
Residential insulation – attics, walls, floors and crawl spaces, roofs, doors and windows are insulated to reduce air leaks and increase energy efficiency.
Nonresidential insulation – In commercial and industrial buildings, insulation of roofs and walls (building envelope) saves on heating and cooling costs.
Appliances – refrigerators, freezers, ovens, dishwashers, hot water heaters are constructed with insulation to reduce thermal transfer.
Equipment/Mechanical – insulating pipes, tanks, and other mechanical systems reduces energy consumption, and contributes to the competitiveness of U.S. industry by lowering production costs.
Insulation is made from a variety of materials, each with a unique set of properties (i.e., R-value (1), ability to create complex shapes, and ease of installation). The most commonly used materials in insulation products are listed below (in alphabetical order), with a short worked R-value example after the list:
- Cellulose – plant fibers often made from recycled newspapers, paperboard, and paper. The cellulose source is shredded and mixed with other ingredients to enhance product use and performance. It is installed as loose fill or mixed with a water to be applied in a spray.
- Fiberglass – a fluffy wool-like material made from spun fibers of molten glass. The intertwined fibers of fiberglass insulation can be installed as loose fill or rolled into blankets or batts. It can also be formed into shapes.
- Mineral wool – a wool-like material made from spun fibers of molten minerals (including rock and blast furnace slag). It can be installed as loose fill, or pressed into blankets or batts, or formed into shapes.
- Polyisocyanurate foam (polyiso) – a plastic foam made from the combination of several chemicals reacted to generate a closed-cell, rigid foam. It is often manufactured in boards with a variety of facing materials or encapsulated in panels.
- Polystyrene foam – a plastic foam made from an expanded polymer of styrene. It is generally formed into blocks which are cut into panels.
- Polyurethane foam – a plastic foam that is generated by a chemical reaction among several chemicals. For insulation, the chemicals are sprayed on site where the foaming process fills cavities and gaps. The foam can also be molded into shapes or poured into cavities to insulate appliances and other equipment.
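Footnote (1) in the notes section explains that the R-values of the individual layers of a multilayered installation are added together. The sketch below runs that calculation for a hypothetical wall assembly; the layer names and R-values are invented for illustration and are not drawn from this report.

```python
# Footnote (1): for a multilayered installation, add the R-values of the
# individual layers. Hypothetical wall assembly (illustrative values only):
layers = {
    "exterior sheathing": 0.6,
    "rigid foam board": 5.0,
    "fiberglass batt (3.5 in)": 11.0,
    "gypsum drywall": 0.5,
}

r_total = sum(layers.values())  # total thermal resistance of the assembly
u_value = 1.0 / r_total         # overall conductance; lower means less heat flow

print(f"Assembly R-value: R-{r_total:.1f}")  # R-17.1
print(f"U-value: {u_value:.3f}")             # 0.058
```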
ENVIRONMENTAL AND ECONOMIC BENEFITS OF INSULATION PRODUCTS
The insulation industry is essential to the quest for energy independence, reducing energy consumption and energy-related greenhouse gas emissions. By lowering energy consumption, and thus energy bills, insulation helps make businesses more competitive and gives households more spending power. In addition, insulation reduces outside noise, reduces entryways for pollen and insects, allows for better humidity control, and lowers the chance of ice dams in snowy climates. While these benefits are enormous, they are difficult to quantify. The savings from insulation accrue to individual projects and businesses and depend on climate and R-value (or resistance to conductive heat flow), which makes them difficult to aggregate across the economy.
In a 2009 analysis by McKinsey that examined multiple chemistry-enabled technologies to reduce emissions (3), the authors concluded “insulation alone accounted for 40% of the total identified CO2e savings.”
According to the Department of Energy, “Space heating and cooling account for almost half of a home’s energy use, while water heating accounts for 18%, making these some of the largest energy expenses in any home.” (4)
According to the Business Council for Sustainable Energy, U.S. energy productivity grew 16% between 2007 and 2016. (6) The use of insulation products across the economy is a key contributor to energy productivity growth.
ECONOMIC SNAPSHOT OF THE INSULATION INDUSTRY
In addition to creating economic and environmental benefits through its use, the manufacture, distribution, and installation of insulation also generates economic activity and supports jobs in the U.S.
[Table 1 – Economic Snapshot of the Insulation Industry (2016); surviving column heading: Payroll ($ billion)]
ECONOMIC CONTRIBUTIONS OF THE U.S. INSULATION INDUSTRY
The insulation manufacturing industry takes raw materials such as, glass, rock, slag, isocyanates, polyols, and recycled paper products and converts these materials into energy-saving insulation products. This analysis examines six basic types of insulation materials, including polystyrene, polyurethane, polyisocyanurate (polyiso), fiberglass, mineral wool and cellulose. In 42 states around the country, more than 33,000 workers are engaged in this essential economic activity. Table 2 presents the direct employment, payroll, and output associated with these main segments of insulation manufacturing.
[Table 2 – Insulation Products Manufacturing (2016); surviving column headings: Payroll ($ billions), Output ($ billions)]
The value and contributions of insulation manufacturing do not just accrue to the manufacturers. Economic activity is supported both upstream (through supply chain impacts) and downstream as manufactured insulation products move through distribution/wholesale channels to the contractors whose business includes installing insulation.
The economic contributions of the insulation manufacturing were analyzed using an economic input- output model, IMPLAN. (7) This method estimates the total contributions of an industry to the economy at the state and national levels for a given year. The economic contributions analyzed in this report are employment, payroll and output in the U.S. for the year 2016.
The manufacture of insulation products directly generates $11.7 billion in industry shipments and directly employs 33,000 workers across 42 states. Insulation manufacturers purchase goods and services from their suppliers and their suppliers do the same. The economic impact generated by the insulation supply chain supports an additional 42,500 indirect jobs. Finally the wages paid by insulation manufacturers and their suppliers support nearly 49,000 payroll-induced jobs, jobs supported by the household spending of workers in the direct and indirect (supply-chain) segments. Thus, the economic activity from U.S. insulation manufacturing supports nearly 125,000 jobs. These jobs generate payrolls of $7.5 billion.
In addition, the combined economic activity supported by insulation manufacturing contributes $1.1 billion to state and local governments and $1.9 billion in federal tax revenues.
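The headline totals above follow simple multiplier arithmetic. The sketch below reproduces it using the direct, indirect, and induced employment figures reported in this study; the implied multiplier is derived from those published totals, not taken from the underlying IMPLAN model.

```python
# Reproducing the headline contribution arithmetic from the figures above.
direct_jobs = 33_000    # insulation manufacturing employment
indirect_jobs = 42_500  # supply-chain (indirect) employment
induced_jobs = 49_000   # payroll-induced employment

total_jobs = direct_jobs + indirect_jobs + induced_jobs
employment_multiplier = total_jobs / direct_jobs

print(f"Total jobs supported: {total_jobs:,}")  # 124,500 (~125,000)
print(f"Implied employment multiplier: {employment_multiplier:.2f}")  # 3.77
```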
[Table 3 – Upstream Economic Impact of Insulation Manufacturing (2016); surviving column headings: Payroll ($ billions), Output ($ billions); surviving row labels: Direct Impact (Manufacturers), Indirect Impact (Supply Chain)]
Looking downstream toward the distribution and installation of insulation products, additional employment is supported in those sectors. More than 32,000 people work for wholesalers distributing insulation products to contractors and retailers around the country, and more than 300,000 workers are engaged in drywall and insulation installation and nonresidential roofing. Payrolls in those sectors amount to $2.1 billion and $16.3 billion, respectively. The paychecks from these workers help support families and local economies throughout the U.S.
[Table 4 – Downstream Employment and Payrolls (2016); surviving column heading: Payroll ($ billions); rows: Roofing, Siding, and Insulation Wholesalers; Drywall & Insulation – Residential; Drywall & Insulation – Nonresidential; Roofing – Nonresidential]
The insulation industry including manufacturers, distributors, and installers make vital contributions to the U.S. economy. The products that they make, distribute, and install conserve precious energy resources, saving money for households and businesses. The use of insulation also has large environmental benefits as reduced energy consumption translates directly into lower emissions of greenhouse gases. In addition, through supply chain and payroll-induced impacts, the economic activity generated by American insulation manufacturing is broad and helps support local economies across the U.S. Moving through the economy, there are huge contributions in terms of jobs and payrolls generated by those businesses that distribute insulation products from manufacturers to where they will be installed. Finally, hundreds of thousands of workers make a living installing insulation in homes and businesses around the U.S.
(1) An insulating material’s resistance to conductive heat flow is measured or rated in terms of its thermal resistance or R-value — the higher the R-value, the greater the insulating effectiveness. The R-value depends on the type of insulation, its thickness, and its density. When calculating the R-value of a multilayered installation, add the R-values of the individual layers. Installing more insulation in your home increases the R-value and the resistance to heat flow. (U.S. Department of Energy)
(3) McKinsey, “Innovations for Greenhouse Gas Reductions: A life cycle quantification of carbon abatement solutions enabled by the chemical industry.” July 2009.
(7) IMPLAN (IMpact analysis for PLANning) is a complete economic assessment package providing economic resolution from the National level down to the zip code level; MIG Inc. is the sole licensor of IMPLAN.
APPENDIX – INSULATION JOBS IN THE STATES
Insulation manufacturing occurs in 42 states while distribution/wholesale and installation activities occur across all 50 states. Appendix Table 1 presents the top 10 states in each of the three main segments. Appendix Table 2 presents employment by segment for all states.
Appendix Table 1 – Top 10 States for Insulation Employment by Industry Segment
[Table data not preserved. For each of the three industry segments, the table reported the top 10 states and the top-10 share of the total ("Top 10 as % of Total").]
Appendix Table 2 – Insulation Employment by Industry Segment
[Table data not preserved. The table listed employment by segment for all states, including the District of Columbia.]
Data on direct employment and payrolls are based on data from the Bureau of Labor Statistics (Covered Employment and Wages program). In addition, for insulation manufacturing, employment estimates were also based on results from a January 2017 survey of insulation manufacturers. Payrolls were estimated using average annual pay for industries and states multiplied by the employment estimates.
For insulation manufacturing, where data on shipments was estimated as a portion of a larger NAICS code, employment was estimated using output-to-employment ratios for that particular NAICS code, supplemented with data from the survey of insulation manufacturers. Payrolls for each segment were estimated by multiplying employment by the average annual wage for that industry.
With the exception of fiberglass/mineral wool insulation manufacturing, insulation made from other materials falls within broader NAICS codes and is not easily pulled out of existing government data. As a result, data on shipments of manufactured insulation products was derived from multiple sources, including the Census Bureau, IHS Chemical, the Center for the Polyurethanes Industry, and trade associations.
Data on employment and payroll for distributors/wholesalers is based on NAICS 42333 (Roofing, Siding, and Insulation Wholesalers).
Data on employment and payroll for installers and contractors is based on the following NAICS codes:
NAICS 238311 – Residential drywall & insulation contractors
NAICS 238312 – Nonresidential drywall & insulation contractors
NAICS 238162 – Nonresidential roofing contractors
It was determined that these NAICS classifications represent a large share of the insulation installation segment. Drywall installation is included in NAICS 238311 and 238312. While no data exist to separate insulation contractors from drywall contractors, it is likely that a majority of these contractors are engaged in both lines of business. In addition, it should be noted that insulation is also installed by self-employed handymen and by homeowners, who are not included in industry employment data. Because roofs are a significant source of energy losses in commercial buildings, most roofing contractors are also engaged in insulation installation as part of a commercial roofing project. Though likely significant, installers of insulation in appliances, industrial equipment, mechanical systems, transportation equipment, etc. are not included due to a lack of data.
Significant effort has been made in the preparation of this publication to provide the best available information. However, neither ACC, nor any of its employees, agents or other assigns, makes any warranty, expressed or implied, or assumes any liability or responsibility for any use, or the results of such use, of any information or data disclosed in this material.
©2017 American Chemistry Council, Inc.
The Economics & Statistics Department provides a full range of statistical and economic advice and services for ACC and its members and other partners. The group works to improve overall ACC advocacy impact by providing statistics on American Chemistry as well as preparing information about the economic value and contributions of American Chemistry to our economy and society. They function as an in-house consultant, providing survey, economic analysis and other statistical expertise, as well as monitoring business conditions and changing industry dynamics. The group also offers extensive industry knowledge, a network of leading academic organizations and think tanks, and a dedication to making analysis relevant and comprehensible to a wide audience. The primary author of this report is Martha Gilchrist Moore.
Dr. Thomas Kevin Swift
Chief Economist and Managing Director 202.249.6180
Martha Gilchrist Moore
Senior Director – Policy Analysis and Economics 202.249.6182
Heather R. Rose-Glowacki
Director, Chemical & Industry Dynamics 202.249.6184
Director, Surveys & Statistics 202.249.6183
Financial literacy is knowledge about the personal management of finances. It offers twin benefits: protection from financial fraud and the ability to plan for a financially secure future. Financial literacy gives consumers the knowledge and skills required to assess the suitability of the various financial products and investments available in the financial market.
In line with the Organization for Economic Cooperation and Development – International Network on Financial Education (OECD-INFE) framework for measuring financial literacy, we believe there are three significant areas that determine whether a person is financially literate: financial knowledge, financial behaviour and financial attitude. The financial literacy quiz is designed for all ages. It can be used as a learning tool, and we encourage people to share it with others.
Test your own financial literacy with these 10 questions. Then view results to see how you perform in the quiz.
Bangladesh Securities and Exchange Commission (BSEC) has taken the necessary steps to launch a Nationwide Financial Literacy Program. The time frame of the program is divided into three stages: short term, midterm and long term. Programs for different target groups have also been included in the different time frames.
This blog elaborates on the healthcare supply chain challenges amid COVID-19 and the use of emerging technologies to address them. Let's take a closer look at how Blockchain, AI (Artificial Intelligence), 3D Printing, and other technologies play crucial roles in tackling the challenges posed by the COVID-19 pandemic, and at how they are improving the fragmented and complex healthcare sector.
The impact of the COVID-19 crisis is so extreme that it has forced even developed countries like the USA, UK, Italy, and Spain to relook at various aspects of the structure and workings of their social, economic, and environmental development. The global outbreak has affected the healthcare sector the most: global healthcare infrastructure has proved insufficient to combat or prevent a disaster of this scale.
Considering the impact of this pandemic, public and private organizations have started to explore the tech space for answers. They are piloting tech-driven solutions built on emerging technologies like Blockchain, AI, and 3D Printing to improve the global healthcare experience and prepare for outbreaks like COVID-19.
There are several challenges that these technologies seem capable of addressing. The first use case is to prepare healthcare supply chain management to respond to such unpredictable events with effective measures.
For instance, there has been an unprecedented surge in demand for hand sanitizer, masks, and other personal protective equipment (PPE) since COVID-19 emerged. Consequently, perfume makers in France have been drafted in to supplement hospitals' supplies of hand gels and keep the supply sustainable.
Essentially, decision-makers face difficulties predicting the demand that may arise for a given amount of equipment from a hospital or a government. Introducing AI-based solutions alongside Blockchain can help organizations identify patterns and predict the next course of action in such events. One case in point is how South Korea used technology to track and prevent COVID-19's spread.
AI solutions enable decision-makers to capture, aggregate, and process the overload of information and data generated from diverse sources. They can use AI-powered analysis to make more accurate predictions of surges in demand for, and drops in the supply of, the healthcare equipment used to combat the outbreak. Blockchain healthcare solutions in such scenarios provide end-to-end data security by making it far harder to tamper with or alter information.
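As an illustration of the kind of demand-signal monitoring described above, here is a minimal, hypothetical sketch in Python. It is not any vendor's actual system; the smoothing constant, alert threshold, and order figures are all invented for illustration.

```python
# Minimal sketch: flagging an unusual surge in PPE demand with
# exponential smoothing. All numbers are invented for illustration.

def smooth(series, alpha=0.3):
    """Exponentially weighted moving average of a demand series."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def surge_alert(history, today, threshold=1.5):
    """Alert when today's orders exceed the smoothed baseline by 50%."""
    return today > threshold * smooth(history)

daily_mask_orders = [1200, 1150, 1300, 1250, 1400]  # hypothetical units/day
print(surge_alert(daily_mask_orders, today=2600))   # True -> reorder early
```

A real deployment would feed such a model with aggregated hospital and government order data and, as noted above, could anchor the underlying records on a blockchain so the inputs are tamper-evident.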
Another challenge that the combination of blockchain technology and additive manufacturing can address is maintaining the supply of equipment.
Today's supply chains rely on single, centralized authorities that have failed to prepare for such unexpected disruption. With blockchain solutions, authorities can establish decentralized systems and thus eliminate the need to rely on a single authority's decisions. Further, additive manufacturing can help cope with the shortage. Indeed, the FDA has permitted the use of a 3D-printed surgical mask design in response to the shortage of medical equipment stemming from the coronavirus crisis. You can find the approved mask design on the NIH 3D Print Exchange.
Additive manufacturing, or 3D printing, is the process of producing things layer by layer, directly from digital files. It enables us to make things wherever and whenever they are required, without the expensive tooling that mass production demands.
We have all heard about counterfeit products and medical equipment during the crisis. Blockchain solutions for traceable supply chains can track medical equipment from provenance to end consumers, with provisions for checking authenticity.
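To make the traceability idea concrete, the toy sketch below hash-chains custody records so that any later edit is detectable. It is a conceptual illustration only, not Hyperledger chaincode or any production ledger; the item ID and party names are invented.

```python
# Toy provenance log: each custody transfer is hash-chained to the
# previous record, so tampering with history is detectable.
import hashlib, json

def record(ledger, item_id, holder):
    """Append a custody-transfer record linked to the previous one."""
    prev = ledger[-1]["hash"] if ledger else "GENESIS"
    entry = {"item": item_id, "holder": holder, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)

def verify(ledger):
    """Recompute every hash; editing an earlier entry breaks the chain."""
    prev = "GENESIS"
    for e in ledger:
        body = {"item": e["item"], "holder": e["holder"], "prev": e["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != digest:
            return False
        prev = e["hash"]
    return True

ledger = []
for holder in ["ManufacturerA", "DistributorB", "HospitalC"]:  # hypothetical
    record(ledger, "N95-batch-042", holder)
print(verify(ledger))  # True; changing any earlier field makes this False
```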
Put simply, these technologies can play crucial roles in managing supply chain disruption. The maximum value, however, comes when they are used as integrated solutions.
Blockchain solutions for end-to-end surveillance and reporting of the outbreak
Communicable diseases are difficult to contain. They spread rapidly across political and geographical boundaries, infecting people in country after country.
Further, such communicable diseases carry a huge social stigma. The idea of being separated from fellow human beings is always scary, so there is a tendency to hide infection, and the fact that many people do not respect privacy further aggravates this tendency. Ensuring that sharing information about communicable diseases does not infringe on privacy should therefore be of utmost importance.
Blockchain enables the secure, immutable storage of information and its real-time sharing between parties on the chain. If the WHO, health ministries, and nodal hospitals had been connected in this way, they could have shared real-time information about communicable diseases like COVID-19 and taken strict preventive measures much earlier.
Enterprise blockchain solutions can assist with secure and efficient data aggregation and analytics for COVID-19. For instance, a map-style solution built with Hyperledger application development could encourage anyone infected to self-report privately. It would let people see whether they were near, or crossed paths with, infected persons. Authorities could then prioritize whom to test, while patients could share information without worrying about privacy.
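A minimal, hypothetical sketch of the privacy-preserving idea: only salted hashes of coarse (location, time) pairs are ever published, never identities, and each user checks overlap locally. A real system would need careful cryptographic review; the grid cells, salt handling, and time windows here are invented for illustration.

```python
# Sketch of privacy-preserving exposure checks via hashed trace tokens.
import hashlib

SALT = b"public-epoch-salt"  # hypothetical; would be rotated per epoch

def trace_token(cell_id, hour):
    """Hash of a coarse location cell and a one-hour time window."""
    return hashlib.sha256(SALT + f"{cell_id}|{hour}".encode()).hexdigest()

# An infected user self-reports by publishing tokens for visited cells.
published = {trace_token("grid-17-42", h) for h in (9, 10, 11)}

# Another user checks overlap locally, without revealing their own path.
my_path = [("grid-17-42", 10), ("grid-18-40", 12)]
exposed = any(trace_token(c, h) in published for c, h in my_path)
print(exposed)  # True -> prioritize this person for testing
```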
We have not experienced anything like the COVID-19 pandemic before. Individuals, societies, and private and public organizations need to take a hard look at the reporting infrastructure available for communicable diseases.
With technologies like Blockchain, AI, 3D printing (additive manufacturing), and more, we can develop solutions that prepare us to face another pandemic like COVID-19 in the future. While Blockchain and the other technologies might not provide a holistic solution on their own, they can play crucial roles as enablers, providing the security and efficiency needed to combat such impending disasters.
Cryptocurrency: from origins to nowadays
Today, information about Bitcoin, Ethereum and Litecoin can be found all over the Internet, on TV and in other mass media. What makes cryptocurrencies so popular? The main reason is that virtual coins arguably make one of the best forms of money yet: digital assets have clear advantages, including divisibility, portability, scarcity and verifiability. To get a better look at cryptocurrencies and their features, let's turn the pages of history and learn more about the origin of money.
Long before paper money and coins appeared, people used barter, which can be considered the first embodiment of modern transactions: ancient people exchanged products and services to get what they needed. Some time later, money in a more familiar sense appeared, bringing a new trading system in which people used stones as a means of payment for goods. Gradually, stones were replaced with gold, which became the most common currency around the globe. Finally, around 700 B.C., the earliest known coins were minted, and the first taxes were charged.
The first paper money was issued in the 13th century. Bill owners could keep their money in trade houses, the direct ancestors of all modern banks, and security and safety became central concerns of the financial industry at that time.
Time passed, and much has changed. Now we have traditional banks, global money transfers, online payments, credit cards and digital currencies.
So, what is cryptocurrency? It is a class of digital assets designed to be safe and secure. The cryptographic methods used to build virtual coins turn transaction information into tamper-evident records, which makes fund transfers traceable and verifiable. The invention of digital money is a considerable breakthrough in the global financial market.
The first fiat money was gold-backed, but today, in the era of rapid technological advances, it is no longer backed by gold. The US dollar keeps its leading position among paper currencies worldwide as the most common international currency. Blockchain, meanwhile, is a transparent system that provides an opportunity to keep funds safe: only the money's owner can access it, and there is no need to rely on intermediaries such as banks or electronic mints for transaction verification. Blockchain is a unique system that allows you to safeguard your money while making all transactions within the network transparent.
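To illustrate why on-chain records are so hard to rewrite, here is a minimal proof-of-work sketch. It is purely educational: real networks use far higher difficulty, richer block structures, and a peer-to-peer consensus process; the payload string below is invented.

```python
# Minimal proof-of-work sketch: find a nonce whose hash has leading zeros.
import hashlib

def mine(block_data, difficulty=4):
    """Search for a nonce so that sha256(data + nonce) starts with zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("alice->bob:0.5BTC|prev:9f2c")  # hypothetical payload
print(nonce, digest[:16])
# Changing even one character of the payload invalidates the nonce and
# forces the whole search to restart, which is what makes rewriting
# confirmed history prohibitively expensive.
```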
To sum it up, let's consider the main pros of cryptocurrency:
- Cryptocurrencies are durable. They don't decay over time and don't need to be constantly replaced by mints or central banks, as happens with paper money.
- Cryptocurrencies are divisible. You can split them into very small units, quickly and without paying fees, while paper money can be divided only via exchange.
- Cryptocurrencies are verifiable. You can confirm immediately that your cryptocurrency is genuine.
- Cryptocurrencies are portable. They can easily be stored in crypto wallets or sent across the world within seconds, whereas traditional transactions usually take longer to process, especially over long distances. In addition, rates of theft and fraud are higher in the traditional banking system.
Last but not least, cryptocurrencies are gaining momentum. Millions of people around the world already use digital money, and millions more discover cryptocurrencies every day.
A recent study between Imperial College London and the UK-based social trading platform, eToro, concludes that cryptocurrencies are the “natural next step” for 21st-century payment methods.
Their study, titled 'Cryptocurrencies: Overcoming Barriers to Trust and Adoption', argues that cryptocurrencies have already begun to fulfil one of the three core principles of fiat currencies, as they are already considered a store of value. However, the study notes that, as yet, cryptocurrencies don't meet the other two core principles: acting as a unit of account and a medium of exchange. The report believes cryptocurrencies must overcome half a dozen hurdles in order to tick the right boxes and become a mainstream payment alternative to fiat currencies. These challenges range from privacy, scalability and usability through to volatility, regulation and incentivisation.
Iqbal Gandham, UK Managing Director, eToro, compares the emergence of cryptocurrencies as a mainstream payment method to the time it took for email to become commonplace. Mr Gandham said that the first email was distributed way back in 1971, taking around 30 years to “become commonplace with a user-friendly interface”. Gandham is also the chairman of a British crypto organisation, CryptoUK, and said that much progress has been made since the first Bitcoin payment was made a “little over eight years” ago. Gandham said that “today, we are already seeing it [Bitcoin] begin to meet the requirements of everyday money”. Professor William Knottenbelt of Imperial College London is also of the opinion that cryptocurrencies may just “upend everything we thought we knew about the nature of financial systems and financial assets”, due largely to their decentralised nature.
In a recent speech, the head of the Bank for International Settlements (BIS), Agustin Carstens, insisted that cryptocurrency such as Bitcoin “cannot assume the functions of money” and warned against individuals attempting to “create money”. Nevertheless, some of the world’s biggest economies are beginning to acknowledge cryptocurrencies as payment, with India’s Law Commission considering cryptos such as Bitcoin as a legitimate mode of payment, particularly for transactions made in relation to iGaming.
A number of executives of well-known blockchain companies – blockchain being the technology which underpins the security and transparency of cryptocurrencies such as Bitcoin – believe 2019 could be the year that certain cryptos make their way into the mainstream. Brandon Synth, founder of Skycoin, believes countries will look to implement tighter regulations next year that root out low-value crypto assets and focus firmly on "a few selected digital coins" that consumers will be "able to trade using only a few major and controlled exchanges". Alexander Ivanov, CEO of Waves, a secure blockchain ecosystem for consumers and businesses, also believes 2019 will "definitely be the start of crypto's mass adoption" but warned that more needs to be done to enable traditional banking and cryptocurrencies "to coexist frictionlessly".
However, Evgeny Yurtaev, founder and CEO of another blockchain-based ecosystem, Zerion, has sought to temper the enthusiasm surrounding cryptocurrencies by suggesting “real adoption” of digital assets as a form of payment is still some way away. Yurtaev claims that “apart from speculation”, there is not yet a clear case for using cryptos in the mainstream. Yurtaev believes the development of consumer-orientated products will be essential for cryptocurrency’s long-term future.