
Running Head: CHAPTER 16-20

Cloud Computing

Ajay Masand

University of the Cumberlands

ITS-532-06

Dr. Steven Case

06/20/2020

Cloud Computing

Chapter 16

The total cost of ownership

This is an analysis that puts a single value on a capital purchase over its complete life cycle. The value includes every phase of ownership, such as the soft costs of management, acquisition, and operation (Kling, 2014). The total cost of ownership therefore goes beyond the purchase price of a given asset. The following ten items should be considered when determining the total cost:

· Installation manpower
· Electricity
· Maintenance and service
· Facility space
· Project management
· Server equipment and power supply
· HVAC equipment
· Networking cost and software
· System monitoring
· Rack and hardware
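As a rough illustration, the total cost of ownership can be treated as the sum of these categories over the asset's life cycle. The sketch below is a minimal example; the cost figures are assumptions chosen for illustration, not values from the text.

```python
# Minimal TCO sketch: sum annual cost categories over the asset's life cycle.
# All figures are illustrative assumptions, not real pricing data.

ANNUAL_COSTS = {                       # hypothetical yearly costs in dollars
    "installation_manpower": 5_000,    # amortized one-time install labor
    "electricity": 12_000,
    "maintenance_and_service": 8_000,
    "facility_space": 10_000,
    "project_management": 6_000,
    "server_equipment_and_power": 20_000,
    "hvac_equipment": 4_000,
    "networking_and_software": 7_000,
    "system_monitoring": 3_000,
    "rack_and_hardware": 2_000,
}

def total_cost_of_ownership(annual_costs: dict[str, float], years: int) -> float:
    """Total cost of ownership over the asset's life cycle, in dollars."""
    return sum(annual_costs.values()) * years

if __name__ == "__main__":
    print(f"5-year TCO: ${total_cost_of_ownership(ANNUAL_COSTS, 5):,.0f}")
```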


Capital Expense

This is a type of expense a business incurs when trying to create benefits for the future. A capital expenditure can also be incurred when a firm takes on debt to add to its asset base. Capital expenses include acquiring fixed assets such as buildings, acquiring intangible resources like patents, and upgrading facilities that are already in place (Safonov, 2017). Capital expenses also allow a firm to undertake new projects. Capital expenditures on fixed assets include repairing the roof of a building, building factories, and purchasing equipment (Kling, 2014).

Economies of Scale

In cloud computing, economies of scale refer to aspects of cloud architecture such as cooling equipment, network bandwidth, and power supply equipment. These aspects are scaled up and use larger components that are subject to lower unit costs. The vital building blocks of cloud computing are scaled out, since they grow by increasing in quantity (Kling, 2014). Economies of scale can also describe how the cost per unit falls as production capacity grows. For example, a company making 200 widgets may find the production cost per widget falling by about 10% as output scales (Kling, 2014). Economies of scale let an organization pay only for what it needs, so the organization saves money, and it saves further when it streamlines its workforce. Zero upfront costs are expected for organizations practicing economies of scale (Kling, 2014). Economies of scale are therefore very good for the cloud computing environment.
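To make the cost-per-unit idea concrete, the short sketch below spreads a fixed cost over increasing production volumes. The fixed and variable cost figures are hypothetical, used only to show the declining per-unit cost.

```python
# Illustrative economies-of-scale calculation: per-unit cost falls as volume
# grows because fixed costs are spread over more units. Figures are assumptions.

FIXED_COST = 10_000.0          # hypothetical fixed cost (equipment, facility)
VARIABLE_COST_PER_UNIT = 40.0  # hypothetical materials/labor cost per widget

def unit_cost(units: int) -> float:
    """Average cost per unit at a given production volume."""
    return (FIXED_COST + VARIABLE_COST_PER_UNIT * units) / units

for volume in (200, 400, 800, 1600):
    print(f"{volume:5d} widgets -> ${unit_cost(volume):6.2f} per widget")
```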

Right-sizing

This involves selecting the most cost-effective instance for a company's workload. Take, for example, a company doing a lift and shift whose application requires 16 GB of RAM (Kling, 2014). If the company needs 16 GB, it will require a large instance, which will cost a lot of money. In cloud computing, three steps can be applied to attain an effective outcome when right-sizing (Kling, 2014): termination, right-sizing, and leveraging Reserved Instances (RIs).
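As a sketch of the selection step, the snippet below picks the cheapest instance type that still satisfies a workload's memory and CPU needs. The instance names and hourly prices are made-up placeholders, not real provider pricing.

```python
# Right-sizing sketch: pick the cheapest instance that meets the workload's needs.
# Instance names and hourly prices below are hypothetical placeholders.

INSTANCE_CATALOG = [
    # (name, memory_gb, vcpus, hourly_price_usd)
    ("small",   8,  2, 0.10),
    ("medium", 16,  4, 0.20),
    ("large",  32,  8, 0.40),
    ("xlarge", 64, 16, 0.80),
]

def right_size(required_memory_gb: int, required_vcpus: int) -> tuple[str, float]:
    """Return the cheapest instance type satisfying the workload requirements."""
    candidates = [
        (price, name)
        for name, mem, cpus, price in INSTANCE_CATALOG
        if mem >= required_memory_gb and cpus >= required_vcpus
    ]
    if not candidates:
        raise ValueError("No instance type satisfies the workload requirements")
    price, name = min(candidates)
    return name, price

# A 16 GB workload fits the "medium" type; anything larger wastes money.
print(right_size(required_memory_gb=16, required_vcpus=4))  # ('medium', 0.2)
```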

Moore's Law

This is the law that processing power doubles every two years. Moore's Law still applies at the data-center level, specifically when considering the consumption of cloud to satisfy the future of cloud computing (Ruparelia, 2016). The law also states that the number of transistors on a single semiconductor doubles roughly every two years without added cost, allowing the computer industry to offer more processing power in lighter computing devices.
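A doubling every two years can be written as count(t) = count(0) × 2^(t/2), with t in years. The short sketch below projects transistor counts under that assumption; the starting count and time spans are arbitrary illustration values.

```python
# Moore's Law sketch: transistor count doubling every two years.
# count(t) = count(0) * 2 ** (t / 2); the starting count is illustrative.

def projected_transistors(initial_count: float, years: float) -> float:
    """Project transistor count after `years`, doubling every two years."""
    return initial_count * 2 ** (years / 2)

start = 1_000_000_000  # hypothetical starting count: 1 billion transistors
for years in (2, 4, 10):
    print(f"after {years:2d} years: {projected_transistors(start, years):.2e}")
```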

Company Profit

Profit = Revenues − Expenses = $2.5 − $2.1 = $0.4
Profit margin = (Net income / Revenue) × 100 = (0.4 / 2.5) × 100 = 16%
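The same calculation in a few lines of code, using the revenue and expense figures given above (units as in the text):

```python
# Profit and profit margin from the figures above (units as given in the text).
revenue = 2.5
expenses = 2.1

profit = revenue - expenses               # 0.4
profit_margin = profit / revenue * 100    # 16 percent

print(f"Profit: ${profit:.1f}, margin: {profit_margin:.0f}%")
```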

Chapter 17

Functional and Nonfunctional Requirements

A functional requirement indicates what the software is required to do, while a nonfunctional requirement describes the constraints under which the software will operate. For example, when sending emails, a functional requirement states that the system must send the email whenever a given condition is met, while a nonfunctional requirement states that the email must be sent within a given latency. Functional requirements are very important because they describe the functionality of a particular system, and nonfunctional requirements are important because they elaborate on the performance characteristics of the system.
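A rough illustration of the distinction, using hypothetical names: the function below captures the functional requirement (send an email when a condition is met), while the assertion captures a nonfunctional one (a latency bound). The 500 ms budget is an assumed figure, not one from the text.

```python
# Hypothetical illustration of a functional vs. a nonfunctional requirement.
import time

def send_email(recipient: str, body: str, condition_met: bool) -> bool:
    """Functional requirement: the system sends the email when the condition is met."""
    if not condition_met:
        return False
    # ... actual delivery would happen here ...
    return True

# Nonfunctional requirement: delivery must complete within an assumed 500 ms budget.
start = time.perf_counter()
sent = send_email("user@example.com", "hello", condition_met=True)
elapsed_ms = (time.perf_counter() - start) * 1000
assert sent and elapsed_ms < 500, "latency requirement violated"
```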

The designer should avoid selecting a platform

During system development, the design phase transforms the implementation requirements into a detailed and complete system design specification (Noghin, 2018). After a design is approved, the development team kicks in to start the development process. Selecting a platform is not simple, since the capabilities of technology are ever-increasing (Noghin, 2018). Evaluation, contracting, and implementation are becoming more and more complex, especially for companies with many departments looking to use the platform (Noghin, 2018).

Tradeoffs

The tradeoffs required of the designer revolve around choosing the cloud configuration and services, in most cases while building efficient, scalable, and secure systems and IoT solutions (Noghin, 2018).


When a designer makes a trade-off, the designer is making a compromise; hence every decision a designer makes is a trade-off. This means that achieving one thing must come at the expense of another, which requires the designer to be careful when setting priorities.

The system maintenance phase is very expensive

The system maintenance phase is the most expensive because it is the longest phase in the life cycle of a system. Once the software is developed, it remains in operation as long as it is not rendered obsolete. During operation, the system is constantly maintained due to changes in requirements. Conceptual methods are needed to support software developers in the maintenance process. Adding new features to an existing system is sometimes far more difficult than starting from scratch. Maintenance of software also requires training, which is expensive.

Chapter 19

Scalability

Scalability is the property that defines the ability of a given network, process, organization, or piece of software to manage increased growth and demand at the same time. A system, software product, or business that is scalable is therefore considered advantageous, since it can adapt to the demands of clients or users. Scalability is essential because it contributes immensely to reputation, quality, competitiveness, and efficiency. Small-scale businesses are also required to be thoughtful about scalability because they have the chance of growth. While several areas in an organization are considered scalable, some have proven impossible to scale. Scalability can be achieved either through scaling out or scaling up.


For example, some applications can be scaled up by adding more servers, CPUs, or storage capacity to the already existing systems. The problem associated with scaling up is establishing the right balance among the available resources, which can be very difficult.

Pareto Principle

The Pareto Principle is an effective way of optimizing, understanding, and assessing virtually any situation, especially one involving distribution and usage of some sort (Payne, 2012). The potential applications of the Pareto Principle cover aspects of work, organizational development, personal life, and business (Payne, 2012). The Pareto Principle is also known by different names, such as Pareto Theory, the 80-20 principle, the principle of imbalance, and the rule of the vital few. The Pareto Theory is extremely useful for project management, business development, and organizational planning. Furthermore, leadership skills can be applied more effectively when organizations put the Pareto Principle into practice; this holds for every leadership theory or approach. The Pareto Principle is also useful for bringing swift clarity to complex situations and problems, especially when directing resources to the correct projects (Payne, 2012).
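As a small illustration of the 80-20 idea, the sketch below finds the smallest set of items that accounts for roughly 80% of total usage. The item names and usage counts are invented for the example.

```python
# Pareto (80-20) sketch: find the few items that account for most of the usage.
# Item names and usage counts below are invented for illustration.

usage = {
    "service_a": 5200,
    "service_b": 2600,
    "service_c": 900,
    "service_d": 600,
    "service_e": 400,
    "service_f": 300,
}

def vital_few(usage: dict[str, int], threshold: float = 0.80) -> list[str]:
    """Return the smallest set of items covering `threshold` of total usage."""
    total = sum(usage.values())
    selected, covered = [], 0
    for name, count in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(name)
        covered += count
        if covered / total >= threshold:
            break
    return selected

print(vital_few(usage))  # the handful of services driving ~80% of the load
```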

Vertical and Horizontal Scaling

When analyzing databases, horizontal scaling is usually defined by the partitioning of data, whereby each node contains only part of the data. With vertical scaling, on the other hand, the data usually reside on a single node and scaling is done via multiple cores, with the load spread between the CPU and RAM of the machine. While vertical scaling is limited to one machine, horizontal scaling is dynamic because several machines can be added to the existing pool. Examples of vertically scaled databases include MySQL and Amazon RDS, while examples of horizontally scaled databases are MongoDB, Cassandra, and Google Cloud Spanner. Vertical scaling is easy to achieve because smaller machines can be switched for bigger ones.

Database Read/Write Ratio Importance

The database read/write ratio is essential because it can help standardize disk speed comparisons across different environments. Most applications read from and write to disk recurrently, and the read/write ratio appears in many performance-related measurements such as latency, disk throughput, and IOPS. For that reason, understanding this ratio is important for storage device and array design. The read/write ratio is more important than many cloud users realize. The usual practice is to look at the I/O profile of the application; although this step has proven to be critical, many results are misinterpreted. The objective of using the database read/write ratio is to help understand how the applications that rely on it work, including the life cycle of writing and reading the data. Some applications rely on assumptions, while others spend more time on measurement, especially when the writing or reading activity is less than 50% of the workload.
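As a rough sketch of how such a ratio can be derived from I/O counters, the snippet below computes the read share from hypothetical operation counts (the counter values are placeholders):

```python
# Read/write ratio sketch from hypothetical I/O counters (values are placeholders).

read_ops = 72_000    # reads observed over a sampling window
write_ops = 28_000   # writes observed over the same window

total_ops = read_ops + write_ops
read_pct = read_ops / total_ops * 100
write_pct = 100 - read_pct

print(f"read/write ratio: {read_ops / write_ops:.2f} "
      f"({read_pct:.0f}% reads / {write_pct:.0f}% writes)")
```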

Uptime Percentage Calculation

Uptime refers to the amount of time that a service is operational and available. In that case, an uptime of 99.99% is equal to about 4 minutes and 19 seconds of downtime over a 30-day month.
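The downtime figure follows directly from the allowed 0.01% of a 30-day month; a quick check:

```python
# Downtime allowed by a 99.99% uptime target over a 30-day month.

uptime = 0.9999
minutes_per_month = 30 * 24 * 60            # 43,200 minutes in a 30-day month

downtime_minutes = (1 - uptime) * minutes_per_month   # about 4.32 minutes
minutes, seconds = divmod(round(downtime_minutes * 60), 60)
print(f"allowed downtime: {minutes} min {seconds} s per month")  # 4 min 19 s
```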

Chapter 20

Cloud and TV Broadcasting

The advantages of cloud-based services have proven to be notable because they are software-based.


In that case, one does not need a physical location to achieve cloud operations. Broadcasting businesses and operations have been virtualized because of cloud-based services, as proven by the several companies that deliver their channels through cloud platforms. Channel broadcasting is currently delivered over the internet via the cloud and is operated virtually. The services provided by the cloud are software-based and do not require a physical location for operations; for that reason, the costs of real estate, manpower, and infrastructure have gone down drastically. The benefits brought by cloud-based broadcasting technology encourage quick turnaround times, making it easy to create and tear down channels. Remote management and transparency are also possible with cloud-based broadcasting, because a channel can be monitored through an internet browser.

Intelligent Fabric

Intelligent fabrics are materials used in networking to provide true flexibility and business agility. An intelligent fabric incorporated into the network can make a cloud business more agile, because the network becomes easier to deploy and maintain. Intelligent fabrics also make operations more affordable by reducing complexity, which is possible through central management and automated moves. These fabrics are also essential for comprehensive visibility because they can report performance in real time, especially when integrated with virtual network profiles. Intelligent fabrics can also be used for monitoring external stimuli, because they can respond accordingly when translating technological components into data. In addition, intelligent fabrics can be aesthetic, depending on the performance-enhancement, fashion, or design objective. Data can also be recorded and handled quickly when network systems incorporate an intelligent fabric.

Cloud Technology and the Mobile Application Market

Currently, smartphones and tablets have access to high-speed wireless networks, and this has allowed these devices to benefit from cloud-based technologies like any other traditional computer (Rountree & Castrillo, 2014). As cloud technology continues to expand, many mobile application developers wish to ensure success as they embrace the new movement. The landscape of mobile applications is still evolving, and developers are encouraged to reach levels of application functionality that were never witnessed before (Rountree & Castrillo, 2014). Another factor driving the market for mobile applications is mobile gaming. This is supported by mobile phones and tablets with high-end graphics technologies, graphics being the primary factor considered when installing a gaming app on a PC or tablet (Rountree & Castrillo, 2014). Mobile gaming is no longer limited to simple puzzles or basic card games but includes immersive games like car racing and sports games (Rountree & Castrillo, 2014). In that case, when connecting mobile phones to the cloud network, gamers have the advantage of experiencing the best gaming applications (Rountree & Castrillo, 2014).

Importance of HTML5

The importance of HTML5 starts with ending the use of browser plugins. It is because of HTML5 that rich media features which previously depended on plugins now use built-in browser support (Millard, 2014); as a result, new media tags like <audio> and <video> have appeared. HTML5 is important because it is supported by the major vendors, especially those engaged in the mobile space. The experience promoted by HTML5 is universal and cuts across a large spectrum of computing devices (Millard, 2014). Moreover, HTML5 is still evolving, and the differences between implementations are expected to narrow down. HTML5 has also promoted the possibility of device ubiquity (Millard, 2014). This implies that once a developer has built something, it can be used in a wide range of browsers (Millard, 2014).

Cloud and the Operating System Future

Possibly, memory, disk space, and related resources are shared by the cloud system. For that reason, it is easy to use many operating systems on one machine because of cloud technology (Catlett, 2013). The growing use of the web and the internet has also changed the traditional use of operating systems. Users have been moving the key concepts of the operating system to the cloud without relying on a specific platform, because cloud computing can be accessed anywhere (Millard, 2014).



Conceivably, cloud computing can impact the future use of operating systems, since most computer users prefer working with cloud-based applications such as Google, Gmail, and Google Spreadsheets (Catlett, 2013). For that reason, every computer will only need a basic operating system to boot into web mode. Personal computing will also no longer require a heavy-duty operating system.


Potential Location-Aware Applications

The technologies behind potential location-aware applications include wireless access points used to identify the physical location of an electronic gadget, GPS, and the cellular phone infrastructure (Catlett, 2013). Users of mobile devices are also free to choose whether to share information with location-aware applications. Location-aware applications can help users with information such as restaurant reviews, traffic congestion, or map location markers (Catlett, 2013). Location applications are also available as browser plug-ins installed in web-enabled gadgets.

The combination of wireless access points, phone towers, and GPS satellites can be essential in establishing the location of the user (Catlett, 2013). Nonetheless, the physical location of the user is determined by how the user is connected to these connection points, which are perceived to be independent (Catlett, 2013).

Intelligent Devices

The commonly known intelligent devices include sensors, phablets, smartphones, smart glasses, and tablets, to mention just a few. While many intelligent devices are portable, they are better defined by their ability to interact, share, and connect to the network remotely (Bhowmik, 2017). Intelligent devices are also related to sensors, which have been collected together to form the Internet of Things. However, collecting data using collections of sensors or the Internet of Things can be as complex as establishing a video feed (Bhowmik, 2017). Sensors are intelligent devices whose data can take the form of location, humidity, sound, and other measurements of machines or the human body (Bhowmik, 2017). Sensor devices also come with built-in wireless connectivity, which enables the exchange of data over an internet connection. This is the same principle that can result in the generation of Big Data.
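As a rough sketch of that pattern, the snippet below packages a single sensor reading and sends it to a collection endpoint. The endpoint URL, field names, and reading values are hypothetical placeholders, not part of any real service.

```python
# Hypothetical sketch: a sensor reading packaged and sent to a cloud collector.
# The endpoint URL, field names, and values are placeholders for illustration.
import json
import time
import urllib.request

def publish_reading(endpoint: str, device_id: str, humidity: float, temp_c: float) -> int:
    """POST one JSON-encoded sensor reading; returns the HTTP status code."""
    payload = {
        "device_id": device_id,
        "timestamp": time.time(),
        "humidity_pct": humidity,
        "temperature_c": temp_c,
    }
    request = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# Example call (assumes a collector is listening at this placeholder address):
# publish_reading("http://example.com/ingest", "sensor-42", humidity=48.2, temp_c=21.7)
```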

References

Bhowmik, S. (2017). Cloud computing. Cambridge, United Kingdom; New York, NY: Cambridge University Press.

Catlett, C. (2013). Cloud computing and big data. Amsterdam: IOS Press.

Kling, A. A. (2014). Cloud computing. Farmington Hills, Mich.: Lucent Books, an imprint of Gale Cengage Learning.

Millard, C. J. (2014). Cloud computing law. Oxford: Oxford University Press.

Noghin, V. D. (2018). Reduction of the Pareto set: An axiomatic approach. Cham, Switzerland: Springer.

Payne, M. (2012). Pareto principle. Place of publication not identified: PublishAmerica.

Rountree, D., & Castrillo, I. (2014). The basics of cloud computing: Understanding the fundamentals of cloud computing in theory and practice. Waltham, Mass: Syngress.

Ruparelia, N. B. (2016). Cloud computing. Cambridge, Massachusetts; London, England: The MIT Press.

Safonov, V. O. (2017). Trustworthy cloud computing. Hoboken, New Jersey: John Wiley & Sons, Inc.

