InfoTech Import in a Strategic Plan

Prompt 1: Data Warehouse

A data warehouse is an information system that holds historical and aggregated data from one or more sources. It is used to streamline an organization's reporting and analysis processes, and it provides a single version of the truth for decision-making and forecasting. Data warehouse architectures come in several forms, including single-tier, two-tier, and three-tier designs. A single-tier design aims to minimize the amount of data stored by removing redundant data. A two-tier design separates the physically available source systems from the warehouse itself; it is not very flexible and does not support many end users. The three-tier architecture is the most common and consists of a bottom, middle, and top tier (Inmon & Linstedt, 2014).
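
To make the three-tier idea concrete, the following is a minimal sketch, assuming SQLite stands in for the bottom-tier warehouse database, a small aggregation function stands in for the middle (analytics/OLAP) tier, and a simple report call plays the role of the top (client) tier. The table and column names are purely illustrative.

```python
import sqlite3

# Bottom tier: the warehouse database itself (SQLite stands in for a real RDBMS).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("East", "Laptop", 1200.0), ("East", "Phone", 650.0), ("West", "Laptop", 980.0)],
)

# Middle tier: an analytics layer that aggregates warehouse data for reporting.
def total_sales_by_region(connection):
    cursor = connection.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"
    )
    return dict(cursor.fetchall())

# Top tier: the front-end reporting layer that end users interact with.
if __name__ == "__main__":
    print(total_sales_by_region(conn))  # e.g. {'East': 1850.0, 'West': 980.0}
```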

Data Warehouse Elements

The data warehouse is built around an RDBMS (relational database management system) server, a central information repository surrounded by several key components that keep the whole environment useful, consistent, and accessible. These components include the central database, data sourcing and transformation (ETL) tools, metadata, query tools, data marts, and the data warehouse bus architecture. The central database underpins the warehousing environment and is typically implemented with RDBMS technology, adapted from conventional transaction-oriented databases to suit warehousing workloads. Index structures are used to avoid full relational table scans and thereby improve speed, and MDDBs (multidimensional databases) can overcome the constraints of purely relational data models. Data sourcing, transformation, and migration tools perform all of the conversions, derivations, and summarizations needed to move data into the warehouse. These tools are known as ETL (extract, transform, and load) tools, and their functionality includes anonymizing data to meet regulatory requirements. They can generate jobs, COBOL programs, background processes, and shell scripts, among other artifacts, that refresh the data in the warehouse. They also help keep the metadata up to date and must handle the constraints imposed by the database and the variety of data sources (Kimball & Ross, 2013).
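
The following is a minimal ETL sketch, assuming an in-memory list as the source system and a SQLite table as the warehouse; the hashing step illustrates the anonymization mentioned above. The table name fact_sales and the column layout are hypothetical.

```python
import hashlib
import sqlite3

def extract(rows):
    """Extract: pull raw records from a source system (here, an in-memory list)."""
    return list(rows)

def transform(raw_rows):
    """Transform: clean values and anonymize customer identifiers before loading."""
    cleaned = []
    for customer_id, amount in raw_rows:
        hashed_id = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
        cleaned.append((hashed_id, round(float(amount), 2)))
    return cleaned

def load(connection, rows):
    """Load: insert the transformed records into the warehouse fact table."""
    connection.executemany("INSERT INTO fact_sales VALUES (?, ?)", rows)
    connection.commit()

if __name__ == "__main__":
    warehouse = sqlite3.connect(":memory:")
    warehouse.execute("CREATE TABLE fact_sales (customer_hash TEXT, amount REAL)")
    source = [("cust-001", "19.99"), ("cust-002", "5.50")]
    load(warehouse, transform(extract(source)))
    print(warehouse.execute("SELECT COUNT(*) FROM fact_sales").fetchone()[0])  # 2
```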

Metadata is a somewhat abstract concept: it is data about the data held in the warehouse. It is used to build, maintain, and manage the data warehouse. Within a data warehouse architecture, metadata plays an essential role because it documents the origin, usage, and characteristics of warehouse data and describes how that data is processed and prepared. It is therefore tightly coupled to the data warehouse itself.
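
As a small sketch of the idea, metadata for a warehouse table might be recorded in a catalog like the one below. The fields shown (source system, load time, refresh frequency, column descriptions) are illustrative assumptions, not a standard schema.

```python
from datetime import datetime, timezone

# Illustrative metadata record: data about the data loaded into the warehouse.
table_metadata = {
    "table": "fact_sales",
    "source_system": "point_of_sale",        # where the data originated
    "loaded_at": datetime.now(timezone.utc).isoformat(),
    "refresh_frequency": "daily",             # how the data is prepared and refreshed
    "columns": {
        "customer_hash": "SHA-256 hash of the customer identifier",
        "amount": "Sale amount in USD",
    },
}

# A catalog of such records helps build, maintain, and manage the warehouse.
metadata_catalog = {table_metadata["table"]: table_metadata}
print(metadata_catalog["fact_sales"]["source_system"])  # point_of_sale
```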

Data warehousing ultimately exists to supply organizations with the data needed to make critical decisions, and query tools are what allow end users to interact with the warehouse. They fall into several categories, including query and reporting tools, application development tools, data mining tools, and OLAP tools. Because they are the interface through which the warehouse is actually used, they are essential components of a data warehouse architecture (Inmon & Linstedt, 2014).
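
The sketch below shows the kind of aggregation query a reporting or OLAP tool typically issues against the warehouse behind its graphical interface; the fact table and dimensions here are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_sales (region TEXT, quarter TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO fact_sales VALUES (?, ?, ?)",
    [("East", "Q1", 100.0), ("East", "Q2", 150.0), ("West", "Q1", 80.0)],
)

# A reporting/OLAP-style query: aggregate the fact table across two dimensions.
report = conn.execute(
    """
    SELECT region, quarter, SUM(amount) AS total
    FROM fact_sales
    GROUP BY region, quarter
    ORDER BY region, quarter
    """
).fetchall()

for region, quarter, total in report:
    print(f"{region} {quarter}: {total}")
```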

A data warehouse bus determines how data moves within the warehouse. Data flows are commonly categorized as inflow, upflow, downflow, outflow, and metaflow (Kimball & Ross, 2013). A data mart is the access layer used to deliver data to end users. It is often positioned as a smaller-scale alternative to a full data warehouse because it requires less money and effort to build. In essence, a data mart is a repository containing only the data intended for a specific group of users.
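
As a small sketch of the data mart idea, the snippet below carves a subject-specific mart out of a warehouse table for one group of users. The marketing filter and table names are purely illustrative assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE warehouse_sales (department TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO warehouse_sales VALUES (?, ?, ?)",
    [("marketing", "Ad slot", 300.0), ("finance", "License", 90.0),
     ("marketing", "Campaign", 120.0)],
)

# Data mart: a smaller repository holding only the data one user group needs.
conn.execute(
    """
    CREATE TABLE marketing_mart AS
    SELECT product, amount FROM warehouse_sales WHERE department = 'marketing'
    """
)
print(conn.execute("SELECT COUNT(*) FROM marketing_mart").fetchone()[0])  # 2
```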

Prompt 2: Big Data

Big data refers to vast volumes of data, both structured and unstructured, that an organization deals with in its daily operations. The amount of data itself is not what matters most; what matters is what the organization does with it. Big data can be analyzed for insights that lead to better decisions and strategic business moves. It involves data volumes so large, fast, and complex that they are difficult to process with traditional techniques, although storing and accessing large volumes of information for analytics has existed for some time. Big data is commonly described in terms of several V's: volume (its magnitude, since it originates from many sources), velocity (the speed at which it flows), variety (the different formats in which it exists), variability (the flexibility required in interpreting it), and veracity (its quality after processing and analysis) (Marz & Warren, 2015).

Organizations have used big data to achieve various benefits, such as cost and time reductions, smarter decision-making, and the development of new products and optimized offerings. A retailer, for example, can generate coupons at the point of sale based on a customer's buying habits. In my profession, I have seen big data used to improve organizational productivity and decision-making processes. Today's processing systems provide the speed, flexibility, and power needed to collect vast amounts and varieties of big data quickly. Beyond reliable access, companies need methods for integrating data, ensuring data quality, providing data governance and storage, and preparing data for analytics. With high-performance tools such as in-memory analytics and grid computing, companies can put big data to work in their assessments. To remain relevant, companies must focus on extracting the full value of big data and operate in a data-driven manner, basing decisions on the outcomes big data produces. The benefits of being a data-driven organization are clear: better performance, operational improvement, and increased revenue (Marz & Warren, 2015).
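
The coupon example above can be sketched in a few lines: count purchases per customer and category, and issue a coupon once a count crosses a threshold. The threshold, customers, and categories here are toy assumptions, not real retail logic.

```python
from collections import Counter

# Toy purchase history: (customer, category) pairs as a retailer might record them.
purchases = [
    ("alice", "coffee"), ("alice", "coffee"), ("alice", "coffee"),
    ("bob", "coffee"), ("alice", "bread"),
]

COUPON_THRESHOLD = 3  # illustrative: 3+ purchases in a category earns a coupon

def coupons_for(history, threshold=COUPON_THRESHOLD):
    """Return (customer, category) pairs that qualify for a targeted coupon."""
    counts = Counter(history)
    return [key for key, count in counts.items() if count >= threshold]

print(coupons_for(purchases))  # [('alice', 'coffee')]
```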

Prompt 3: Green Computing

Green computing involves using computers and related devices in an environmentally responsible manner. It includes deploying energy-efficient CPUs, servers, and other peripherals and ensuring the sensible disposal of electronic waste; in short, it is the eco-friendly and environmentally responsible use of computers and their resources. In the US, one of the earliest steps toward green computing was the voluntary labeling program known as Energy Star, introduced in 1992 by the EPA (Environmental Protection Agency) to promote energy efficiency in various types of equipment; green computing practices became widely recognized at that point (Saha, 2014). Many IT companies now devote resources to designing energy-efficient devices, minimizing the use of hazardous materials, and improving the recyclability of computing equipment.

Many organizations have made their data centers 'green' using various strategies. One strategy is green use: reducing the power consumed by computing devices and using them in an eco-friendly manner. The next is green disposal: repurposing, reusing, or appropriately disposing of unwanted electronic components. Green design is used to create energy-efficient computers, printers, projectors, servers, and other digital devices, while green manufacturing aims to minimize waste during the production of PCs and their components so as to reduce the environmental impact of these processes (Dolci et al., 2015). Government regulatory agencies have also worked continuously to promote green computing by introducing voluntary initiatives and standards for their implementation. One organization that has successfully implemented green computing is Whitelabel ITSolutions, which has ensured that all of its US data centers are substantially green-friendly.

References

Dolci, D. B., Lunardi, G. L., Salles, A. C., & Alves, A. P. F. (2015). Implementation of green IT in organizations. Revista de Administração de Empresas, 55(5), 486-497.

Whitelabel ITSolutions. https://whitelabelitsolutions.com/meaning-green-computing/

Inmon, W. H., & Linstedt, D. (2014). Data architecture: A primer for the data scientist: Big data, data warehouse, and data vault. Morgan Kaufmann.

Kimball, R., & Ross, M. (2013). The data warehouse toolkit: The definitive guide to dimensional modeling. John Wiley & Sons.

Marz, N., & Warren, J. (2015). Big Data: Principles and best practices of scalable realtime data systems. Manning Publications Co.

Saha, B. (2014). Green computing. International Journal of Computer Trends and Technology (IJCTT), 14(2), 46-50.
