Thursday, January 30, 2020

Cloud Computing Essay Example for Free

Cloud Computing Essay

* Integrated development environment as a service (IDEaaS)

In the business model using software as a service, users are provided access to application software and databases. The cloud providers manage the infrastructure and platforms on which the applications run. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis; SaaS providers generally price applications using a subscription fee. Proponents claim that SaaS allows a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from hardware/software spending and personnel expenses, towards meeting other IT goals. In addition, with applications hosted centrally, updates can be released without the need for users to install new software. One drawback of SaaS is that users' data are stored on the cloud provider's server; as a result, there could be unauthorized access to the data.

End users access cloud-based applications through a web browser or a lightweight desktop or mobile app, while the business software and users' data are stored on servers at a remote location. Proponents claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand.[2][3] Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network.[4]
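The contrast between subscription and pay-per-use pricing can be made concrete with a minimal sketch. Everything below (the fee, the per-request rate, the usage figures) is a hypothetical illustration, not any real provider's pricing:

```python
# Hypothetical SaaS pricing sketch: compare a flat subscription fee
# against pay-per-use billing for the same month of usage.
# All rates and figures are invented for illustration.

def subscription_cost(monthly_fee: float, months: int) -> float:
    """Flat subscription: cost is independent of how much the service is used."""
    return monthly_fee * months

def pay_per_use_cost(rate_per_request: float, requests: int) -> float:
    """Pay-per-use: cost scales with metered usage."""
    return rate_per_request * requests

if __name__ == "__main__":
    sub = subscription_cost(monthly_fee=50.0, months=1)
    ppu = pay_per_use_cost(rate_per_request=0.002, requests=40_000)
    # Which model is cheaper depends entirely on usage volume;
    # at this (hypothetical) volume the flat fee wins.
    print(f"subscription: ${sub:.2f}, pay-per-use: ${ppu:.2f}")
```

The break-even point, where the two models cost the same, is simply the subscription fee divided by the per-use rate, which is why light users tend to prefer pay-per-use and heavy users prefer subscriptions.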
The origin of the term cloud computing is obscure, but it appears to derive from the practice of using drawings of stylized clouds to denote networks in diagrams of computing and communications systems. The word cloud is used as a metaphor for the Internet, based on the standardized use of a cloud-like shape to denote a network on telephony schematics and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents. The cloud symbol was used to represent the Internet as early as 1994.[5][6]

The underlying concept of cloud computing dates back to the 1950s, when large-scale mainframes became available in academia and corporations, accessible via thin clients / terminal computers. Because it was costly to buy a mainframe, it became important to find ways to get the greatest return on the investment in them, allowing multiple users to share both physical access to the computer from multiple terminals and the CPU time, eliminating periods of inactivity; this became known in the industry as time-sharing.[7]

In the 1990s, telecommunications companies, which had previously offered primarily dedicated point-to-point data circuits, began offering virtual private network (VPN) services with comparable quality of service but at a much lower cost. By switching traffic to balance utilization as they saw fit, they were able to use their overall network bandwidth more effectively. The cloud symbol was used to denote the demarcation point between what was the responsibility of the provider and what was the responsibility of the users.
Cloud computing extends this boundary to cover servers as well as the network infrastructure.[8] As computers became more prevalent, scientists and technologists explored ways to make large-scale computing power available to more users through time-sharing, experimenting with algorithms to provide the optimal use of the infrastructure, platform, and applications, with prioritized access to the CPU and efficiency for the end users.[9]

John McCarthy opined in the 1960s that computation may someday be organized as a public utility. Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry, and the use of public, private, government, and community forms were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer Utility. Other scholars have shown that cloud computing's roots go all the way back to the 1950s, when scientist Herb Grosch (the author of Grosch's law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers.[10] Due to the expense of these powerful computers, many corporations and other entities could avail themselves of computing capability through time-sharing, and several organizations, such as GE's GEISCO, IBM subsidiary The Service Bureau Corporation (SBC, founded in 1957), Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), and Bolt, Beranek and Newman (BBN), marketed time-sharing as a commercial venture.

The development of the Internet from being document-centric via semantic data towards more and more services was described as the Dynamic Web.[11] This contribution focused in particular on the need for better metadata able to describe not only implementation details but also conceptual details of model-based applications.
The ubiquitous availability of high-capacity networks, low-cost computers and storage devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, autonomic computing, and utility computing, have led to tremendous growth in cloud computing.[12][13][14] After the dot-com bubble, Amazon played a key role in the development of cloud computing by modernizing its data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements, whereby small, fast-moving "two-pizza teams" (teams small enough to be fed with two pizzas) could add new features faster and more easily, Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006.[15][16] In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds.
In early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds.[17] In the same year, efforts were focused on providing quality-of-service guarantees (as required by real-time interactive applications) to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project, resulting in a real-time cloud environment.[18] By mid-2008, Gartner saw an opportunity for cloud computing to shape the relationship among consumers of IT services, those who use IT services, and those who sell them,[19] and observed that organizations were switching from company-owned hardware and software assets to per-use service-based models, so that the projected shift to cloud computing would result in dramatic growth in IT products in some areas and significant reductions in other areas.[20] On March 1, 2011, IBM announced the Smarter Computing framework to support Smarter Planet.[21] Among the various components of the Smarter Computing foundation, cloud computing is a critical piece.

Similar systems and concepts

Cloud computing shares characteristics with:

* Autonomic computing: computer systems capable of self-management.[22]
* Client-server model: client-server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requesters (clients).[23]
* Grid computing: a form of distributed and parallel computing, whereby a "super and virtual computer" is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.
* Mainframe computer: powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, police and secret intelligence services, enterprise resource planning, and financial transaction processing.[24]
* Utility computing: the packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity.[25][26]
* Peer-to-peer: a distributed architecture without the need for central coordination, with participants being at the same time both suppliers and consumers of resources (in contrast to the traditional client-server model).
* Cloud gaming: also known as on-demand gaming, this is a way of delivering games to computers. The gaming data are stored on the provider's server, so that gaming is independent of the client computers used to play the game.

Characteristics

Cloud computing exhibits the following key characteristics:

* Agility improves with users' ability to re-provision technological infrastructure resources.
* Application programming interface (API) accessibility to software enables machines to interact with cloud software in the same way the user interface facilitates interaction between humans and computers. Cloud computing systems typically use REST-based APIs.
* Cost is claimed to be reduced, and in a public cloud delivery model capital expenditure is converted to operational expenditure.[27] This is purported to lower barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks.
Pricing on a utility computing basis is fine-grained, with usage-based options, and fewer IT skills are required for in-house implementation.[28] The e-FISCAL project's state-of-the-art repository[29] contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.
* Device and location independence[30] enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third party) and accessed via the Internet, users can connect from anywhere.[28]
* Virtualization technology allows servers and storage devices to be shared and utilization to be increased. Applications can be easily migrated from one physical server to another.
* Multitenancy enables sharing of resources and costs across a large pool of users, thus allowing for:
* Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
* Peak-load capacity increases (users need not engineer for the highest possible load levels)
* Utilisation and efficiency improvements for systems that are often only 10-20% utilised.[15]
* Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery.[31]
* Scalability and elasticity via dynamic (on-demand) provisioning of resources on a fine-grained, self-service basis in near real time,[32] without users having to engineer for peak loads.[33][34]
* Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.[28]
* Security could improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data and the lack of security for stored kernels.[35] Security is often as good as or better than under traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford.[36] However, the complexity of security is greatly increased when data are distributed over a wider area or a greater number of devices, and in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
* Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.

The National Institute of Standards and Technology's definition of cloud computing identifies five essential characteristics:

On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.

Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).

Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.

Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.

Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
—National Institute of Standards and Technology[4]

On-demand self-service

See also: Self-service provisioning for cloud computing services and Service catalogs for cloud computing services

On-demand self-service allows users to obtain, configure, and deploy cloud services themselves using cloud service catalogues, without requiring the assistance of IT.[37][38] This feature is listed by the National Institute of Standards and Technology (NIST) as a characteristic of cloud computing.[4] The self-service requirement of cloud computing prompts infrastructure vendors to create cloud computing templates, which are obtained from cloud service catalogues. Vendors of such templates or blueprints include Hewlett-Packard (HP), which calls its templates HP Cloud Maps,[39] RightScale,[40] and Red Hat, which calls its templates CloudForms.[41] The templates contain predefined configurations used by consumers to set up cloud services. The templates or blueprints provide the technical information necessary to build ready-to-use clouds.[40] Each template includes specific configuration details for different cloud infrastructures, with information about servers for specific tasks such as hosting applications, databases, websites, and so on.[40] The templates also include predefined Web services, the operating system, the database, security configurations, and load balancing.[41] Cloud consumers use cloud templates to move applications between clouds through a self-service portal. The predefined blueprints define all that an application requires to run in different environments.
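One way to picture such a template is as a provider-neutral blueprint from which provider-specific deployment settings are derived. The sketch below is purely illustrative: the field names, providers, and machine-size mappings are invented assumptions, not any vendor's real blueprint format.

```python
# Hypothetical cloud-template sketch: a single provider-neutral blueprint
# is rendered into per-provider deployment parameters. All field names,
# provider names, and size mappings are invented for illustration.

BLUEPRINT = {
    "app": "webshop",
    "web_servers": 2,
    "database": "postgres",
    "machine_size": "medium",   # neutral size, resolved per provider below
}

# Invented mapping from the neutral size name to provider-specific names.
SIZE_MAP = {
    "amazon": {"medium": "m.medium"},
    "vmware": {"medium": "2vcpu-4gb"},
}

def render(blueprint: dict, provider: str) -> dict:
    """Derive provider-specific settings from the neutral blueprint."""
    if provider not in SIZE_MAP:
        raise ValueError(f"unknown provider: {provider}")
    cfg = dict(blueprint)  # copy; the blueprint itself stays provider-neutral
    cfg["provider"] = provider
    cfg["machine_size"] = SIZE_MAP[provider][blueprint["machine_size"]]
    return cfg

if __name__ == "__main__":
    for p in ("amazon", "vmware"):
        print(render(BLUEPRINT, p))
```

The point of the design is that only the render step knows about providers; the blueprint captures what the application needs, which is what lets the same template move an application between clouds.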
For example, a template could define how the same application could be deployed on cloud platforms based on Amazon Web Services, VMware, or Red Hat.[42] The user organization benefits from cloud templates because the technical aspects of cloud configurations reside in the templates, letting users deploy cloud services with the push of a button.[43][44] Cloud templates can also be used by developers to create a catalog of cloud services.[45]

Service models

Cloud computing providers offer their services according to three fundamental models:[4][46] infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), where IaaS is the most basic and each higher model abstracts from the details of the lower models. In 2012, network as a service (NaaS) and communication as a service (CaaS) were officially included by the ITU (International Telecommunication Union) as part of the basic cloud computing models: recognized service categories of a telecommunication-centric cloud ecosystem.[47]

Infrastructure as a service (IaaS)

See also: Category:Cloud infrastructure

In the most basic cloud-service model, providers of IaaS offer computers, physical or (more often) virtual machines, and other resources. (A hypervisor, such as Xen or KVM, runs the virtual machines as guests.) Pools of hypervisors within the cloud operational support system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements. IaaS clouds often offer additional resources such as images in a virtual-machine image library, raw (block) and file-based storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.[48] IaaS-cloud providers supply these resources on demand from their large pools installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks).
To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed. Examples of IaaS providers include Amazon CloudFormation, Amazon EC2, Windows Azure Virtual Machines, DynDNS, Google Compute Engine, HP Cloud, iland, Joyent, Rackspace Cloud, ReadySpace Cloud Services, and Terremark.

Platform as a service (PaaS)

Wednesday, January 22, 2020

Cyberterrorism Essay -- Cyber Terrorism Internet

Cyberterrorism

Introduction

Cyberterrorism is the convergence of terrorism and cyberspace. It is generally understood to mean unlawful attacks and threats of attack against computers, networks, and the information stored therein, carried out to intimidate or coerce a government or its people in furtherance of political or social objectives. To qualify as cyberterrorism, an attack should result in violence against persons or property, or at least generate fear. Attacks that lead to death or bodily injury, explosions, plane crashes, water contamination, or severe economic loss would be examples; serious attacks against important infrastructures could also be acts of cyberterrorism, depending on their impact. This essay will illustrate and analyse the main issues and ideas behind cyberterrorism, including developments that have led to the Internet being used in a malicious way, ethical issues, paradigms that cyberterrorism follows, motivations, and incidents that have occurred in the past. One FBI spokesperson's definition is: "'Cyber terrorism' means intentional use or threat of use, without legally recognized authority, of violence, disruption, or interference against cyber systems, when it is likely that such use would result in death or injury of a person or persons, substantial damage to physical property, civil disorder, or significant economic harm."

Cyber attacks and effects

Cyberspace is constantly under assault. Cyber spies, thieves, saboteurs, and thrill seekers break into computer systems, steal personal data and trade secrets, vandalize Web sites, disrupt service, sabotage data and systems, launch computer viruses and worms, conduct fraudulent transactions, and harass individuals and companies. These attacks are facilitated by increasingly powerful and easy-to-use software tools, which are readily available for free from thousands of Web sites on the Internet. Many of the attacks are serious and costly.
The ILOVEYOU virus, for example, was estimated to have infected tens of millions of users and cost billions of dollars in damage. In light of these serious threats from cyberspace, it is worth noting that cyberterrorism, fortunately, has not yet been carried out in its most destructive capabilities. It is, therefore, d... ...ital world today. In addition to cyberattacks against digital data and systems, many people are being terrorized on the Internet today with threats of physical violence. Online stalking, death threats, and hate messages are abundant. These crimes are serious and must be addressed. In so doing, governments around the world will be in a better position to police and respond to cyberterrorism if and when the threat becomes imminent.

Sources

Author Unknown. "Cyber Terrorism: Understanding Cyber Threats." https://www.hamiltoncountyohio-tewg.org/cyber_terrorism/
Aldo Leon. "The New Age of Cyberterrorism." http://www.sabianet.com/Res_The%20New%20Age%20of%20Cyberterrorism.pdf
Dorothy E. Denning. "Cyberterrorism." http://palmer.wellesley.edu/~ivolic/pdf/USEM/Cyberterror-Denning.pdf
Mohamed Chawki. "A Critical Look at the Regulation of Cybercrime." http://www.crime-research.org/library/Critical.doc
Robert Malý. "Virtual communities and cyber terrorism." http://www.unob.cz/spi/2007/presentace/2007-May-03/06-Jirovsky_CyberTer.ppt
Peter Reilly. "How Real is the Threat of Cyber Terrorism?" http://www2.lhric.org/security/desk/letter8.html

Tuesday, January 14, 2020

Best Practices Guide for Multi-Disciplinary Teams Essay

As schools across the US begin to open up classrooms, teachers are finding that they are no longer working alone or exclusively with members of their own profession, but with parents, paraprofessionals, nurses, learning support staff, educational psychologists, social workers, and even community leaders and volunteers. This type of collaboration takes place in multidisciplinary teams; in its simplest terms, this means members of different professions working together. Each member of a multidisciplinary team has an essential function and a valuable contribution to make in identifying learning goals for the student, as well as in the delivery of these goals across all areas, from curriculum to learning opportunities and even the student's extracurricular activities. Members of this team are also able to support the child at home to ensure that there is continuity between home and school. The success of the student depends on a strong home/school relationship; therefore, parents are strongly encouraged to participate. Each member of the team has specific qualifications and duties:

Local education agency (LEA): A representative qualified to supervise the needs of the student, who is knowledgeable about the general curriculum and the availability of resources of the public agency, and who has the authority to commit agency resources.

Family: Not only is emphasis upon parental participation ethically proper and legally required, but "parental involvement has been associated with higher grades, positive behaviors and attitudes, reduced absenteeism, and increased study habits" (Lawrence & Heller, 2001).

Related staff and services: This group of people can vary depending on the student or issue being evaluated.
Most commonly you will have a school psychologist, who may be responsible for completing an assessment of the student, analyzing and interpreting assessment data, and conducting follow-up observations to determine the success of modifications put in place to aid the student. Other related personnel can include:

* Speech-language therapists
* Occupational therapists
* Physical therapists
* Vision specialists
* Medical personnel, such as nurses and dietitians
* Social workers
* Counselors and mental health personnel
* Adaptive physical education teachers
* Vocational specialists
* Others

Administrators: A school administrator, principal, or assistant principal is an essential member of the team, because the administrator should be aware of specific resources and expertise within the school. In addition, administrators are qualified to supervise the program and can commit necessary resources. The administrator usually works with LEAs.

Regular education teachers: The regular education teacher and the special education teacher more than likely have shared and equal responsibilities to all students in the classroom. Usually, the regular education teacher is ultimately in charge of instruction in the classroom. The teacher is also the line of communication between school and home, keeping parents informed about the student's achievement, grades, and educational programs.

Special education teachers: The special educator's role is that of individualizing, diagnosing, and modifying curriculum. In an inclusive classroom, the special education teacher would provide assessment and instructional planning in the mainstream setting, conduct remediation and tutoring, and team teach. Team teaching arrangements were used in the 1960s (Stainback, S. & Stainback, W., 1996) in an attempt to reach a wider range of children with diverse learning needs, particularly those at risk.
Identification and Placement Procedures

One of the most significant and complicated parts of a special education program is identifying eligible students, because the criteria for verifying a disability can be subjective and subject to change. Mistakenly identifying students as disabled, or failing to identify students who actually need services, can have a long-term impact. Detailed steps have been created to improve the process of identifying a disability and to ensure fairness.

Student Assistance Teams and Multidisciplinary Teams

Two procedures include the use of a student assistance team and a multidisciplinary evaluation team. The student assistance team searches for alternative solutions when a student is having problems. The student assistance team is usually comprised of regular teachers, counselors, and administrators; school psychologists and special education teachers can also be part of it. When an issue cannot be resolved by the student assistance team, a written referral is made for an evaluation by the multidisciplinary team. The multidisciplinary team includes, but is not limited to, psychologists, teachers (general and special education), administrators, and other specialists. This group of professionals follows federal and state regulations in order to determine whether a student is eligible for special education services. Before any student can be evaluated, however, written permission must be obtained from the parents. The team approach provides additional validity to the verification process. Medical, educational, psychological, and social characteristics are usually used in the verification process. In many cases the school psychologist interprets the assessment data and is responsible for translating this information to the team for implementation. Once it has been determined that special education services are needed, the team may meet as often as needed to discuss the implementation.
School personnel are required to provide documentation of the mastery of benchmarks and annual goals. It is not required that all goals be met in one school term, but they must provide evidence that they are working toward achieving the goals.

Monday, January 6, 2020

Was Raphael Married?

Raphael was a Renaissance celebrity, known not only for his superb artistic talent but for his personal charm. Though very publicly engaged to Maria Bibbiena, the niece of a powerful cardinal, he is believed by scholars to have had a mistress by the name of Margherita Luti, the daughter of a Sienese baker. Marriage to a woman of such lowly social status would hardly have helped his career; general public knowledge of such a liaison could have damaged his reputation. But recent research conducted by Italian art historian Maurizio Bernardelli Curuz suggests that Raphael Sanzio may have followed his heart and secretly married Margherita Luti.

Clues that Point to a Marriage

Important clues to the relationship can be found in the recently restored Fornarina, the portrait of a seductive beauty begun in 1516 and left unfinished by Raphael. Half-clothed and smiling suggestively, the subject wears a ribbon on her left arm bearing Raphael's name. Pinned to her turban is a pearl, and the meaning of Margherita is "pearl." X-rays taken during restoration reveal in the background quince and myrtle bushes, symbols of fertility and fidelity. And on her left hand was a ring, the existence of which was painted out, probably by Raphael's students after the master's death. All these symbols would have been extraordinarily meaningful to the average Renaissance viewer. To anyone who understood the symbolism, the portrait practically shouts "this is my beautiful wife Margherita and I love her." In addition to the portrait, Curuz has uncovered documentary evidence that Raphael and Margherita were married in a secret ceremony. Curuz also believes Margherita to be the subject of La Donna Velata (the Veiled Lady), which one contemporary noted was the painting of the woman Raphael loved until he died. It had been theorized that Raphael didn't paint the Fornarina at all, and that it is instead the work of one of his pupils.
Curuz and his associates now believe that Raphael's pupils deliberately obscured the nuptial symbolism to protect his reputation and to continue their own work at the Sala di Constantino in the Vatican, the loss of which would have bankrupted them. To reinforce the pretense, Raphael's students placed a plaque on his tomb in memory of his fiancée, Bibbiena. And Margherita Luti (Sanzio)? Four months after Raphael's death, the widow Margherita is recorded as arriving at the convent of Sant'Apollonia in Rome.