Installing Python

 

Since many aspiring data scientists have never used Python before, we're going to walk through the installation process and familiarize you with the various packages and distributions that you will need later.

Before we begin, it's worth noting that there are two versions of Python, namely Python 2 and Python 3. You can use either of them; however, Python 3 is the future. Many data scientists still use Python 2, but the shift to version 3 has been building up gradually. The developers behind Python have also stopped focusing on Python 2, so version 3 is the one that is constantly being developed and improved. What's important to keep in mind is that there are various compatibility issues between the two versions: if you write a program using Python 2 and then run it inside a Python 3 interpreter, the code might not work.
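The classic example of such an incompatibility is the print statement; the snippet below is a minimal illustration (not from the original text) of how the same line behaves in each version:

# Python 2: print is a statement
print "Hello, world"       # works in Python 2, but is a SyntaxError in Python 3

# Python 3: print is a built-in function
print("Hello, world")      # works in Python 3 (and also in Python 2.7)

With that being said, let's go through the step by step installation process.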

Step by Step Setup

Start by going to Python's webpage at www.python.org and downloading Python. Next, we will go through the manual installation, which requires several steps and instructions. It is not obligatory to set up Python manually; however, this gives you greater control over the installation, and that control is important for future installations that you will perform independently depending on each project's specifications. The easier way of installing Python is to install a scientific data distribution, which sets you up with all the packages and tools you may need (including a lot that you won't need). If you wish to go through the simplified installation method, head down to the section about scientific distributions.

When you download Python from the developer’s website, make sure to choose the correct installer depending on your machine’s operating system. Afterwards, simply run the installer. Python is now installed, however, it is not quite ready for our purposes. We will now have to install various packages. The easiest way to do this is to open the command console and type “pip” to bring up the package manager. The “easy_install” package manager is an alternative, but pip is widely considered an improvement. If you run the commands and nothing happens, it means that you need to download and install any of these managers. Just head to their respective websites and go through a basic installation process to get them. But why bother with a package manager as a beginner?

A package manager like “pip” will make it a lot easier for you to install / uninstall packages, or roll them back if the package version causes some incompatibility issues or errors. Because of this advantage of streamlining the process, most new Python installations come with pip pre-installed. Now let’s learn how to install a package. If you chose “pip”, simply type the following line in the command console:

pip install <package_name>

If you chose “Easy Install”, the process remains the same. Just type:

easy_install <package_name>

Once the command is given, the specified package will be downloaded and installed together with any other dependencies it requires in order to run.
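For instance, a typical install, pin, and uninstall cycle with pip looks like the sketch below (pandas and the version number are purely illustrative):

pip install pandas            # install the latest version and its dependencies
pip install pandas==1.5.3     # pin or roll back to a specific version (version shown is illustrative)
pip uninstall pandas          # remove the package again

We will go over the most important packages that you will require in a later section. For now, it's enough to understand the basic setup process.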

Scientific Distributions

As you can see in the previous section, building your working environment can be somewhat time consuming. After installing Python, you need to choose the packages you need for your project and install them one at a time. Installing many different packages and tools can lead to failed installations and errors. This can often result in a massive loss of time for an aspiring data scientist who doesn’t fully understand the subtleties behind certain errors. Finding solutions to them isn’t always straightforward. This is why you have the option of directly downloading and installing a scientific distribution.

Automatically building and setting up your environment can save you from spending time and frustration on installations and allow you to jump straight in. A scientific distribution usually contains all the libraries you need, an Integrated Development Environment (IDE), and various tools. Let’s discuss the most popular distributions and their application.

Anaconda

This is probably the most complete scientific distribution offered by Continuum Analytics. It comes with close to 200 packages pre-installed, including Matplotlib, Scikit-learn, NumPy, pandas, and more (we’ll discuss these packages a bit later). Anaconda can be used on any machine, no matter the operating system, and can be installed next to any other distributions. The purpose is to offer the user everything they need for analytics, scientific computing, and mass-processing. It’s also worth mentioning that it comes with its own package manager pre-installed, ready for you to use in order to manage packages. This is a powerful distribution, and luckily it can be downloaded and installed for free, however there is an advanced version that requires purchase.

If you use Anaconda, you will be able to access “conda” in order to install, update, or remove various packages. This package manager can also be used to install virtual environments (more on that later). For now, let’s focus on the commands. First, you need to make sure you are running the latest version of conda. You can check and update by typing the following command in the command line:

conda update conda

Now, let’s say you know which package you want to install. Type the following command:

conda install <package_name>

If you want to install multiple packages, you can list them one after another in the same command line. Here’s an example:

conda install <package_name_1> <package_name_2> <package_name_3>

Next, you might need to update some existing packages. This can be done with the following command:

conda update <package_name>

You also have the ability to update all the packages at once. Simply type:

conda update --all

The last basic command you should be aware of for now is the one for package removal. Type the following command to uninstall a certain package:

conda remove <package_name>

This tool is similar to “pip” and “easy install”, and even though it’s usually included with Anaconda, it can also be installed separately because it works with other scientific distributions as well.

Canopy

This is another scientific distribution popular because it’s aimed towards data scientists and analysts. It also comes with around 200 pre-installed packages and includes the most popular ones you will use later, such as Matplotlib and pandas. If you choose to use this distribution instead of Anaconda, type the following command to install it:

canopy_cli

Keep in mind that you will only have access to the basic version of Canopy without paying. If you will ever require its advanced features, you will have to download and install the full version.

WinPython

If you are running on a Windows operating system, you might want to give WinPython a try. This distribution offers similar features as the ones we discussed earlier, however it is community driven. This means that it’s an open source tool that is entirely free.

You can also install multiple versions of it on the same machine, and it comes with an IDE pre-installed.

Virtual Environments

Virtual environments are often necessary because you are usually locked to the version of Python you installed. It doesn't matter whether you installed everything manually or chose to use a distribution; you can't have as many installations on the same machine as you might want. The only exception is the WinPython distribution, which is available only for Windows machines, because it allows you to prepare as many installations as you want. However, with the "virtualenv" package you can create a virtual environment, and thus as many separate setups as you need, without worrying about any of these limitations. Here are a few solid reasons why you should use a virtual environment:

  • Testing grounds: It allows you to create a special environment where you can experiment with different libraries, modules, Python versions and so on. This way, you can test anything you can think of without causing any irreversible damage.
  • Different versions: There are cases when you need multiple installations of Python on your computer. There are packages and tools, for instance, that only work with a certain version. For instance, if you are running Windows, there are a few useful packages that will only behave correctly if you are running Python 3.4, which isn't the most recent release. Through a virtual environment, you can run different versions of Python for separate goals.
  • Replicability: Use a virtual environment to make sure you can run your project on any other computer or version of Python aside from the one you were originally using. You might be required to run your prototype on a certain operating system or Python installation, instead of the one you are using on your own computer. With the help of a virtual environment, you can easily replicate your project and see if it runs under different circumstances.

With that being said, let’s start installing a virtual environment by typing the following command:

pip install virtualenv

This will install “virtualenv”, however you will first need to make several preparations before creating the virtual environment. Here are some of the decisions you have to make at the end of the installation process:

  • Python version: Decide which version you want "virtualenv" to use. By default, it will pick up the one it was installed from. Therefore, if you want to use another Python version, you have to specify it, for instance with the option -p python3.4.
  • Package installation: The virtual environment tool is set to always perform the full package installation process for each environment, even when you already have the package installed on your system. This can lead to a loss of time and resources. To avoid this issue, you can use the --system-site-packages option to instruct the tool to use the packages already available on your system.
  • Relocation: For some projects, you might need to move your virtual environment to a different Python setup or even to another computer. In that case, you will have to instruct the tool to make the environment scripts work on any path. This can be achieved with the --relocatable option.

Once you make all the above decisions, you can finally create a new environment. Type the following command:

virtualenv myenv

This instruction will create a new directory called "myenv" inside the directory where you currently are. Once the virtual environment is created, you need to activate it. On Windows, type:

cd myenv

Scripts\activate

On Linux or macOS, type:

source myenv/bin/activate

Now you can start installing various packages by using any package manager like we discussed earlier in the chapter.

Necessary Packages

 

We discussed earlier that the advantages of using Python for data science are its system compatibility and highly developed system of packages. An aspiring data scientist will require a diverse set of tools for their projects. The analytical packages we are going to talk about have been highly polished and thoroughly tested over the years, and therefore are used by the majority of data scientists, analysts, and engineers.

Here are the most important packages you will need to install for most of your work:

  • NumPy: This analytical library provides the user with support for multi-dimensional arrays, including the mathematical algorithms needed to operate on them. Arrays are used for storing data, as well as for fast matrix operations that are much needed to work out many data science problems. Python wasn’t meant for numerical computing, therefore every data scientist needs a package like NumPy to extend the programming language to include the use of many high level mathematical functions. Install this tool by typing the following command: pip install numpy.
  • SciPy: You can’t read about NumPy without hearing about SciPy. Why? Because the two complement each other. SciPy is needed to enable the use of algorithms for image processing, linear algebra, matrices and more. Install this tool by typing the following command: pip install scipy.
  • pandas: This library is needed mostly for handling diverse data tables. Install pandas to be able to load data from any source and manipulate it as needed. Install this tool by typing the following command: pip install pandas.
  • Scikit-learn: A much needed tool for data science and machine learning, Scikit is probably the most important package in your toolkit. It is required for data preprocessing, error metrics, supervised and unsupervised learning, and much more. Install this tool by typing the following command: pip install scikit-learn.
  • Matplotlib: This package contains everything you need to build plots from an array. You also have the ability to visualize them interactively. You don’t happen to know what a plot is? It is a graph used in statistics and data analysis to display the relation between variables. This makes Matplotlib an indispensable library for Python. Install this tool by typing the following command: pip install matplotlib.
  • Jupyter: No data scientist is complete without Jupyter. This package is essentially an IDE (though much more) used in data science and machine learning everywhere. Unlike IDEs such as Atom, or R Studio, Jupyter can be used with any programming language. It is both powerful and versatile because it provides the user with the ability to perform data visualization in the same environment, and allows customizable commands. Not only that, it also promotes collaboration due to its streamlined method of sharing documents. Install this tool by typing the following command: pip install jupyter.
  • Beautiful Soup: Extract information from HTML and XML files that you have access to online. Install this tool by typing the following command: pip install beautifulsoup4.

For now, these 7 packages should be enough to get you started and give you an idea on how to extend Python’s abilities. You don’t have to overwhelm yourself just yet by installing all of them, however feel free to explore and experiment on your own. We will mention and discuss more packages later in the book as needed to solve our data science problems. But for now, we need to focus more on Jupyter, because it will be used throughout the book. So let’s go through the installation, special commands, and learn how this tool can help you as an aspiring data scientist.

Using Jupyter

Throughout this book, we will use Jupyter to illustrate various operations we perform and their results. If you didn’t install it yet, let’s start by typing the following command:

pip install jupyter

The installation itself is straightforward. Simply follow the steps and instructions you receive during the setup process. Once the setup finishes, we can run the program by typing the next line:

jupyter notebook

This will open an instance of Jupyter inside your browser. Next, click on “New” and select the version of Python you are running. As mentioned earlier, we are going to focus on Python 3. Now you will see an empty window where you can type your commands.

You might notice that Jupyter uses code cell blocks instead of looking like a regular text editor. That’s because the program will execute code cell by cell. This allows you to test and experiment with parts of your code instead of your entire program. With that being said, let’s give it a test run and type the following line inside the cell:

In: print("I'm running a test!")

Now you can click on the play button that is located under the Cell tab. This will run your code and give you an output, and then a new input cell will appear. You can also create more cells by hitting the plus button in the menu. To make it clearer, a typical block looks something like this:

In: < This is where you type your code >

Out: < This is the output you will receive >

The idea is to type your code inside the "In" section and then run it. The "Out" section is generated automatically and displays the result produced by the cell. If you already have a result in mind, you can compare it against the "Out" value to check that the code behaves the way you expect.

Functions in Python

 

In Python programming, functions are groups of related statements that perform a given task. Functions are used to break programs down into smaller, modular pieces. In that sense, functions are what make programs easier to manage and organize as they grow bigger over time. Functions also help you avoid repetition while coding and make code reusable.

•       The Syntax of Functions:

The syntax of a function refers to the rules which govern the combination of characters that make up the function. This syntax includes the following:

  1. The keyword “def” highlights the beginning of every function header.
  2. A function name identifies it uniquely. The rules for naming a function are the same as the rules for writing identifiers in Python.
  3. Parameters or arguments, via which values are passed to a function, are optional in Python.
  4. A colon sign (:) marks the end of every function header.
  5. The optional documentation string, known as a docstring, describes the purpose of the function.
  6. The body of a function is comprised of one or more valid Python statements. The statements must all have the same indentation level (typically four spaces).
  7. An optional return statement is included for returning a value from a function.

Below is a representation of the essential components of a function as described in the syntax.

def function_name(parameters):
    '''docstring'''
    statement(s)

•       How functions are called in Python:

Once a function has been defined in Python, it can be called from another function, from a program, or even from the Python prompt. Calling a function is done by entering the function name with the proper parameters.

  1. Docstring:

The docstring is the first string that comes after the function header. Docstring is short for documentation string, and it is used to briefly explain what a function does. Although it is an optional part of a function, documentation is good programming practice. So, unless you have an excellent memory that can recall what you had for breakfast on your first birthday, you should document your code at all times. In the example shown below, the docstring is placed directly beneath the function header.
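The original listing for this example is not reproduced in the text; a minimal definition consistent with the calls and output shown below would be:

def greet(name):
    """
    This function greets to
    the person passed into the
    name parameter
    """
    print("Hello, " + name + ". Good morning!")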

>>> greet("Amos")

Hello, Amos. Good morning!

Triple quotation marks are typically used when writing docstrings so that they can extend over several lines. Such a string is stored as the __doc__ attribute of the function. Take the example below.

You can run the following lines of code in a Python shell and see what it outputs:

>>> print(greet.__doc__)

This function greets to
the person passed into the
name parameter

  2. The return statement:

The return statement exits a function and goes back to the place from which the function was called.

•       Syntax of return:

This statement can hold an expression, which gets evaluated and whose value is returned. A function will return the None object if the return statement has no expression, or if the return statement is absent from the function altogether. For instance:

>>> print(greet('Amos'))
Hello, Amos. Good morning!
None

In this case, the returned value is None.
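By contrast, a return statement that carries an expression hands that value back to the caller, where it can be stored or reused. The function below is a small illustrative example, not taken from the text:

def add(a, b):
    """Return the sum of a and b."""
    return a + b

result = add(2, 3)
print(result)    # prints 5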

The Basics of Working with Python

 

Before we start working with machine learning algorithms, you should first understand the basics of working with Python. However, if you are already familiar with Python, or you have experience programming in other languages such as C++ or C#, you can probably skip this chapter or simply use it to refresh your memory.

In this chapter we are going to briefly discuss the basic concepts of working with Python. Machine learning and Python go hand in hand due to the simple fact that Python is a simple, but powerful and versatile language. Furthermore, there are many modules, packages and tools designed to expand Python’s functionality to specifically work with machine learning algorithms, as well as data science.

Keep in mind that this is a brief introduction to Python, and therefore we will not be using any IDEs or fancy tools. All you need is the Python shell, in order to test and experiment with your code as you learn. You don't even need to install anything on your computer, because you can simply head to Python's official website and use their online shell. You can find it here: https://www.python.org/shell/.

Data Types

Knowing the basic data types and how they work is a must. Python has several data types and in this section we will go through a brief description of each one and then see them in practice. Don’t forget to also practice on your own, especially if you know nothing or very little about Python.

With that in mind, let’s explore strings, numbers, dictionaries, lists and more!

Numbers

In Python, just like in math in general, you have several categories of numbers to work with, and when you work them into code, you have to be aware of which one you're using. For instance, there are integers, floats, complex numbers and others. However, the most commonly used ones are integers and floats.

Integers, written int for short, are whole numbers that can either be positive or negative. So make sure that when you declare a number as an integer you don’t type a float instead. Floats are decimal or fractional numbers.

Now let's discuss the mathematical operators. Just like in elementary school, you will often use basic mathematical operators such as addition, subtraction, multiplication and so on. Keep in mind that these are different from the comparison operators, such as greater than, less than, or equal to. Now let's see some examples in code:

x = 99

y = 26

print (x + y)

This basic operation simply prints the sum of x and y. You can use this syntax for all the other mathematical operators, no matter how complex your calculation is. Now let’s type a command using a comparison operator instead:

x = 99

y = 26

print (x > 100)

As you can see, the syntax is the same; however, we aren't performing a calculation. Instead we are verifying whether the value of x is greater than 100. The result you will get is False, because 99 is not greater than 100.

Next, you will learn what strings are and how you can work with them.

Strings

Strings have everything to do with text, whether it’s a letter, number or punctuation mark. However, take note that numbers written as strings are not the same as the numbers data type. Anything can be defined as a string, but to do so you need to place quotation marks before and after your declaration. Let’s take a look at the syntax:

n = "20"

x = 10

Notice that our n variable is a string data type and not a number, while x is defined as an integer because it lacks the quotation marks. There are many operations you can do on strings. For instance you can verify how long a string is, or you can concatenate several strings. Let’s see how many characters there are in the word “hello” by using the following function:

len("Hello")

The “len” function is used to determine the number of characters, which in this case is five. Here’s an example of string concatenation. You’ll notice that it looks similar to a mathematical operation, but with text:

'42 ' + 'is ' + 'the ' + 'answer'

The result will be “42 is the answer”. Pay attention to the syntax, because you will notice we left a space after each string, minus the last one. Spaces are taken into consideration when writing strings. If we didn’t add them, all of our strings would be concatenated into one word.

Another popular operation is the string iteration. Here’s an example:

bookTitle = "Lord of the Rings"

for c in bookTitle:
    print(c)

The result will be an iteration of every single character found in the string. Python contains many more string operations, however these are the ones you will use most often.

Now let’s progress to lists.

Lists

This is a data type that you will be using often. Lists are needed to store data and they can be manipulated as needed. Furthermore, you can store objects of different types in them. Here’s what a Python list looks like:

n = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

The list is defined by the square brackets and every object separated by a comma is a list element. Here’s an example of a list containing different data types:

myBook = ["title", "somePages", 1, 2.1, 5, 22, 42]

This is a list that holds string objects as well as integers and floats. You can also perform a number of operations on lists and most of them follow the same syntax as for the strings. Try them out!
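For instance, here are a few common list operations, shown as an illustrative sketch using the list above (results in the comments):

myBook = ["title", "somePages", 1, 2.1, 5, 22, 42]

len(myBook)           # 7, the number of elements
myBook[0]             # 'title', the first element
myBook + [99]         # concatenation produces a new, longer list
myBook.append(99)     # append adds an element to the end, in place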

Dictionaries

This data type is nearly identical to a list, however, you cannot access the elements the same way. What you need is to know the key which is linked to a dictionary object. Take a look at the following example:

dict = {'weapon': 'sword', 'soldier': 'archer'}

dict['weapon']

The first line contains the dictionary’s definition, and as you can see the objects and their keys have to be stored between curly braces. You can identify the keys as “weapon” and “soldier” because after them you need to place a colon, followed by the attribute. Keep in mind that while in this example our keys are in fact strings, they can be other data types as well.

Tuples

This data type is similar to a list, except its elements cannot be changed once defined. Here’s an example of a tuple:

n = (1, 43, 'someText', 99, [1, 2, 3])

A tuple is defined between parentheses and in this case, we have three different data types, namely a few integers, a string, and a list. You can perform a number of operations on a tuple, and most of them are the same as for the lists and strings. They are similar data types, except that once you declare the tuple, you cannot change it later.
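To see that immutability in practice, try reassigning one of the elements; this short snippet (not from the original text) raises an error:

n = (1, 43, 'someText', 99, [1, 2, 3])

n[0] = 7    # TypeError: 'tuple' object does not support item assignment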

Conditional Statements

Now that you know the basic data types, it's time to take a crash course on more complex operations that involve conditional statements. A conditional statement gives an application a limited ability to think for itself and make a decision based on its assessment of the situation. In other words, it evaluates a condition involving one or more variables and tells the program how to react based on the outcome of that evaluation.

Python statements are simple to understand because they are logical and the syntax reflects human thinking. For instance, the logic written in English looks something like this: "If I don't feel well, I won't go anywhere, else I will have to go to work." In this example, we instruct the program to check the condition "I don't feel well". If that condition turns out to be false, meaning you do feel well, execution moves on to the "else" branch, which says you have to go to work. Both "if" and "if else" conditionals are frequently used when programming in general. Here's an example of the syntax:

x = 100

if (x < 100):
    print("x is small")

This is the most basic form of the statement. It checks whether it’s true, and if it is then something will happen and if it’s not, then nothing will happen. Here’s an example using the else statement as well:

x = 100

if (x < 100):
    print("x is small")
else:
    print("x is large")

print("Print this no matter what")

With the added “else” keyword, we instruct the application to perform a different task if a false value is returned. Furthermore, we have a separate declaration that lies outside of the conditional statement. This will be executed no matter the outcome.

Another type of conditional involves the use of “elif” which allows the application to analyse a number of statements before it makes a decision. Here’s an example:

if (condition1):

add a statement here

elif (condition2):

add another statement for this condition

elif (condition3):

add another statement for this condition

else:

if none of the conditions apply, do this

Take note that this time we did not use code. You already know enough about Python syntax and conditionals to turn all of this into code. What we have here is pseudo code, which is very handy whether you are writing simple Python exercises or working with machine learning algorithms. Pseudo code allows you to place your thoughts on “paper” by following the Python programming structure. This makes it a lot easier for you to organize your ideas and your application, by writing the code after you’ve outlined it. With that being said, here’s the actual code:

x = 10

if (x > 10):
    print("x is larger than ten")
elif x < 4:
    print("x is smaller")
else:
    print("x is pretty small")

Now you have everything you need to know about conditionals. Use them in combination with what you learned about data types in order to practice. Keep in mind that you always need to practice these basic Python concepts in order to later understand how machine learning algorithms work.

Loops

Code sometimes needs to be executed repeatedly until a specific condition is met. This is what loops are for. There are two types, the for loop and the while loop. Let’s begin with the first example:

for x in range(1, 10):
    print(x)

This code will be executed several times, printing the value of x each time; it prints the numbers 1 through 9, because range(1, 10) stops before reaching ten.

The while loop, on the other hand, is used to repeat the execution of a code block only if the condition we set is still true. Therefore, when the condition is no longer met, the loop will break and the application will continue with the next lines of code. Here’s a while loop in action:

x = 1

while x < 10:
    print(x)
    x += 1

The x variable is declared as an integer and then we instruct the program that as long as x is less than ten, the result should be printed. Take note that if you do not continue with any other statement at this point you will create an infinite loop and that is not something you want. The final statement makes sure that the application will print the new value with one added to it with every execution. When the variable stops being less than ten, the condition will no longer be met and the loop will break, allowing the application to continue executing any code that follows.

Keep in mind that infinite loops can easily happen due to mistakes and oversight. Luckily, Python has a solution, namely the "break" statement, which exits the loop as soon as it is executed. Here's an example:

while True:
    answer = input("Type command:")
    if answer == "Yes":
        break

Now the loop can be broken by typing a command.

Functions

As a beginner machine learner this is the final Python component you need to understand before learning the cool stuff. Functions allow you to make your programs a great deal more efficient, optimized, and easier to work with. They can significantly reduce the amount of code you have to type, and therefore make the application less demanding when it comes to system resources. Here’s an example of the most basic function to get an idea about the syntax:

def myFunction():
    print("Hello, I am now a function!")

Functions are first declared by using the “def” statement, followed by its name. Whenever we want to call this block of code, we simply call the function instead of writing the whole code again. For instance, you simply type:

myFunction()

The parentheses after the function represent the section where you can store a number of parameters. They can alter the definition of the function like this:

def myName(firstname):
    print(firstname + " Smith")

myName("Andrew")
myName("Peter")
myName("Sam")

Here we have a firstname parameter, and whenever we call the function it prints that parameter together with the word "Smith". Take note that this is a really basic example just so you get a feel for the syntax. More complex functions are written the same way, however.

Here's another example where we have a default parameter, which will be used only when no argument is supplied in its place.

def myHobby(hobby="leatherworking"):
    print("My hobby is " + hobby)

myHobby("archery")
myHobby("gaming")
myHobby()
myHobby("fishing")

When we run this code, the output is:

My hobby is archery

My hobby is gaming

My hobby is leatherworking

My hobby is fishing

You can see here that the call without an argument uses the default value we set.

In addition, you can also have functions that return something. So far we only wrote functions that perform an action but don't return any values or results. Functions that return a value are far more useful, because the result can be placed into a variable which is later used in another operation. Here's how the syntax looks in this case:

def square(x):

return x * x

print(square (5))

Now that you’ve gone through a brief Python crash course and you understand the basics, it’s time to learn how to use the right tools and how to set up your machine learning environment. Don’t forget that Python is only one component of machine learning, however it’s an important one because it’s the foundation and without it everything falls apart.

Data Structures and the A* Algorithm

 

In this chapter, you will learn how to create abstract data structures using the same Python data types you already know. Abstract data structures allow your programs to process data in intuitive ways and rely on the Don’t Repeat Yourself (DRY) principle. That is, using less code, and not typing out the same operations repeatedly for each case. As you study the examples given, you will begin to notice a pattern emerging: the use of classes that complement each other with one acting as a node and another as a container of nodes. In computer science, a data structure that uses nodes is generally referred to as a tree. There are many different types of trees, each with specialized use cases. You may have already heard of binary trees if you are interested in programming or computer science at all.

One possible type of tree is called an n-ary tree, or n-dimensional tree. Unlike the binary tree, the n-ary tree contains nodes that have an arbitrary number of children. A child is simply another instance of a node that is linked to another node, sometimes called a parent. The parent must have some mechanism for linking up to child nodes. The easiest way to do this is with a list of objects.

Example Coding #1: A Mock File-System

A natural application of the n-ary tree is a traditional Windows or UNIX file system. Nodes can be either directories (folders) or individual files. To keep things simple, the following program assumes a single directory as the tree's root.

# ch1a.py

The FileSystem acts as the tree, and the Node class does most of the work, which is common with tree data structures. Notice also that FileSystem keeps track of individual IDs for each node. The IDs can be used as a way to count the number of nodes in the file system or to provide lookup functionality.

When it comes to trees, the most onerous task is usually programming a solution for traversal. The usual way a tree is structured is with a single node as a root, and from that single node, the rest of the tree can be accessed. Here the function look_up_parent uses a loop to traverse the mock directory structure, but it can easily be adapted to a recursive solution as well.

General usage of the program is as follows: instantiate the FileSystem class, declare Node objects using the directory path syntax (written so that Python won't mistake the separators for escape characters), and then call the add method on them.
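The original listing (ch1a.py) is not reproduced in this text; the sketch below is a hypothetical, simplified version of such a mock file system, with class and method names chosen purely for illustration:

class Node:
    def __init__(self, name, node_id):
        self.name = name          # file or directory name
        self.node_id = node_id    # unique ID assigned by the FileSystem
        self.parent = None
        self.children = []        # an arbitrary number of child nodes

    def add_child(self, node):
        node.parent = self
        self.children.append(node)

class FileSystem:
    def __init__(self):
        self.next_id = 0
        self.root = self._new_node("root")

    def _new_node(self, name):
        node = Node(name, self.next_id)
        self.next_id += 1
        return node

    def add(self, path):
        # Walk the path from the root, creating missing nodes along the way.
        current = self.root
        for part in path.strip("/").split("/"):
            match = next((c for c in current.children if c.name == part), None)
            if match is None:
                match = self._new_node(part)
                current.add_child(match)
            current = match
        return current

    def look_up_parent(self, node):
        # Traverse upward from a node to the root, collecting the path.
        parts = []
        while node is not None:
            parts.append(node.name)
            node = node.parent
        return "/".join(reversed(parts))

# Usage: create the file system and add a path; each segment becomes a node.
fs = FileSystem()
f = fs.add("home/user/notes.txt")
print(fs.look_up_parent(f))   # root/home/user/notes.txt
print(fs.next_id)             # 4 nodes created so far (root plus three)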

Example Coding # 2: Binary Search Tree (BST)

The binary search tree gets its name from the fact that a node can contain at most two children. While this may sound like a restriction, it is actually a good one because the tree becomes intuitive to traverse. An n-ary tree, in contrast, can be messy.

# ch1b.py

As before, the Node class does most of the heavy lifting. This program uses a BST primarily to sort a list of numbers but can be generalized to sorting any data type. There are also a number of auxiliary methods for finding out the size of the tree and which nodes are childless (leaves).

This implementation of a tree better illustrates the role recursion plays when traversing a tree: each node calls a method (for example, insert) on one of its children, creating a chain of calls until a base case is reached.
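Again, the book's listing (ch1b.py) is not shown here; the following is a hypothetical, minimal binary search tree used to sort numbers, with names invented for illustration:

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None     # child holding a smaller value
        self.right = None    # child holding a larger or equal value

    def insert(self, value):
        # Recurse left or right until a free slot is found (the base case).
        if value < self.value:
            if self.left is None:
                self.left = Node(value)
            else:
                self.left.insert(value)
        else:
            if self.right is None:
                self.right = Node(value)
            else:
                self.right.insert(value)

    def in_order(self):
        # Left subtree, then this node, then right subtree: sorted order.
        left = self.left.in_order() if self.left else []
        right = self.right.in_order() if self.right else []
        return left + [self.value] + right

class BST:
    def __init__(self):
        self.root = None

    def insert(self, value):
        if self.root is None:
            self.root = Node(value)
        else:
            self.root.insert(value)

    def sorted_values(self):
        return self.root.in_order() if self.root else []

# Usage: sort a list of numbers.
tree = BST()
for n in [42, 7, 19, 3, 99]:
    tree.insert(n)
print(tree.sorted_values())   # [3, 7, 19, 42, 99]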

Example Coding # 3: A* Algorithm

The A* search algorithm can be thought of as Dijkstra's algorithm with brains. Whereas Dijkstra searches almost exhaustively until the path is found, A* uses what is called a heuristic, which is a fancy way of saying "educated guess." A* is fast because the heuristic lets it point an arrow at the target and prefer steps along that path.

First, here's a brief explanation of the algorithm. To simplify things, we will be using a square grid with orthogonal movement only (no diagonals). The object of A* is to find the shortest path between point A and point B. That is, we know the position of point B; it will be the end node, and A the start. In order to get from A to B, the algorithm must calculate distances for the nodes between A and B such that each node considered gets closer to B or is discarded. An easy way to program this is by using a heap, or priority queue, with some measure of distance determining the order.

After the start node is added to the heap, each neighbor node is evaluated for distance, and the one closest to B (by the combined cost) is taken from the heap and expanded next. The process repeats until the node taken from the heap is equal to B.

#ch1c.py

In this case, the heuristic is called Manhattan distance, which is just the sum of the absolute differences between the coordinates of the current node and the target. The heapq library is used to create a priority queue with f as the priority. Note that the backtrace function simply traverses a tree of nodes, each of which has a single parent.

You can think of the g variable as the cost of moving from the starting point to the current node along the path. Since we are using a grid with no variation in movement cost, the cost of each step is constant. The h variable is the estimated distance between the current node and the target. Adding these two together gives you the f variable, which is what controls the order in which nodes on the path are explored.
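The book's listing (ch1c.py) is not included here; the following is a hypothetical, minimal sketch of A* on a square grid with orthogonal movement, Manhattan distance as the heuristic h, and heapq as the priority queue (all names are illustrative):

import heapq

def manhattan(a, b):
    # Heuristic h: sum of absolute coordinate differences.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(grid, start, goal):
    # grid is a 2D list; 0 means walkable, 1 means blocked.
    rows, cols = len(grid), len(grid[0])
    open_heap = [(manhattan(start, goal), 0, start)]   # entries are (f, g, node)
    parents = {start: None}
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            # Backtrace: follow single parents back to the start.
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return list(reversed(path))
        for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]:   # orthogonal moves only
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                new_g = g + 1                     # every step costs the same
                if new_g < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = new_g
                    parents[(r, c)] = node
                    h = manhattan((r, c), goal)
                    heapq.heappush(open_heap, (new_g + h, new_g, (r, c)))
    return None   # no path exists

# Usage on a small grid with one wall of obstacles.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]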

Reading data in your script

Reading data from file

Let's make our data file using Microsoft Excel, LibreOffice Calc, or some other spreadsheet application and save it as a tab-delimited file called ingredients.txt:

 

food             carb  fat  protein  calories  serving size
pasta            39    1    7        210       56
parmesan grated  0     1.5  2        20        5
Sour cream       1     5    1        60        30
Chicken breast   0     3    22       120       112
Potato           28    0    3        110       148

Fire up your IPython notebook server. Using the New drop-down menu in the top right corner, create a new Python 3 notebook and type the following Python program into a code cell:

# open file ingredients.txt
with open('ingredients.txt', 'rt') as f:
    for line in f:       # read lines until the end of file
        print(line)      # print each line

Remember that indentation is important in Python programs and designates nested statements. Run the program using the menu option Cell/Run, the right arrow button, or the Shift-Enter keyboard shortcut. You can have many code cells in your IPython notebooks, but only the currently selected cell is run. Variables generated by previously run cells are accessible, but, if you just downloaded a notebook, you need to run all the cells that initialize the variables used in the current cell. You can run all the code cells in the notebook by using the menu option Cell/Run All or Cell/Run All Above.

This program will open a file called "ingredients.txt" and print it line by line. The with operator is a context manager: it opens the file and makes it known to the nested operators as f. Here, it is used as an idiom to ensure that the file is closed automatically after we are done reading it. Indentation before for is required; it shows that for is nested in with and has access to the variable f designating the file. The function print is nested inside for, which means it will be executed for every line read from the file, until the end of the file is reached and the for cycle quits. It takes just three lines of Python code to iterate over a file of any length.

Now, let's extract the fields from every line. To do this, we will use the string method split(), which splits a line and returns a list of substrings. By default, it splits the line at every white space character, but our data is delimited by the tab character, so we will use tab to split the fields. The tab character is written \t in Python.

with open('ingredients.txt', 'rt') as f:
    for line in f:
        fields = line.split('\t')   # split line in separate fields
        print(fields)               # print the fields

The output of this code is:

['food', 'carb', 'fat', 'protein', 'calories', 'serving size\n']
['pasta', '39', '1', '7', '210', '56\n']
['parmesan grated', '0', '1.5', '2', '20', '5\n']
['Sour cream', '1', '5', '1', '60', '30\n']
['Chicken breast', '0', '3', '22', '120', '112\n']
['Potato', '28', '0', '3', '110', '148\n']

Now, each string is split conveniently into a list of fields. The last field contains a pesky \n character designating the end of line. We will remove it using the strip() method, which strips white space characters from both ends of a string.

After splitting the string into a list of fields, we can access each field using an indexing operation. For example, fields[0] will give us the first field, in which a food's name is found. In Python, the first element of a list or an array has an index of 0.

This data is not directly usable yet. All the fields, including those containing numbers, are represented by strings of characters. This is indicated by single quotes surrounding the numbers. We want food names to be strings, but the amounts of nutrients, calories, and serving sizes must be numbers so we could sort them and do calculations with them. Another problem is that the first line holds column names. We need to treat it differently.

One way to do it is to use the file object's method readline() to read the first line before entering the for loop. Another way is to use the function enumerate(), which will return not only a line, but also its number, starting with zero:

with open('ingredients.txt', 'rt') as f:
    # get line number and a line itself
    # in i and line respectively
    for i, line in enumerate(f):
        fields = line.strip().split('\t')   # split line into fields
        print(i, fields)                    # print line number and the fields

This program produces following output:

0 ['food', 'carb', 'fat', 'protein', 'calories', 'serving size']
1 ['pasta', '39', '1', '7', '210', '56']
2 ['parmesan grated', '0', '1.5', '2', '20', '5']
3 ['Sour cream', '1', '5', '1', '60', '30']
4 ['Chicken breast', '0', '3', '22', '120', '112']
5 ['Potato', '28', '0', '3', '110', '148']

Now we know the number of the current line and can treat the first line differently from all the others. Let's use this knowledge to convert our data from strings to numbers. To do this, Python has the function float(). We have to convert more than one field, so we will use a powerful Python feature called list comprehension.

with open('ingredients.txt', 'rt') as f:
    for i, line in enumerate(f):
        fields = line.strip().split('\t')
        if i == 0:               # if it is the first line
            print(i, fields)     # treat it as a header
            continue             # go to the next line
        food = fields[0]         # keep food name in food
        # convert numeric fields to numbers
        numbers = [float(n) for n in fields[1:]]
        # print line number, food name and nutritional values
        print(i, food, numbers)

The if operator tests whether a condition is true. To check for equality, you need to use ==. The index i is 0 only for the first line, which is treated differently: we split it into fields, print it, and skip the rest of the cycle using the continue operator.

Lines describing foods are treated differently. After splitting the line into fields, fields[0] receives the food's name. We keep it in the variable food. All other fields contain numbers and must be converted.

In Python, we can easily get a subset of a list by using the slicing mechanism. For instance, list1[x:y] is a list of every element in list1 starting with index x and ending with index y-1. (You can also include a stride; see help.) If x is omitted, the slice contains elements from the beginning of the list up to element y-1. If y is omitted, the slice goes from element x to the end of the list. The expression fields[1:] therefore means every field except the first one, fields[0].

numbers=[float(n) for n in fields[1:]]

means we create a new list numbers by iterating over the fields starting from the second one and converting each of them to a floating point number.
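As a quick illustration of slicing and this comprehension, applied to one of the rows above (results shown in comments):

fields = ['pasta', '39', '1', '7', '210', '56']
fields[1:]      # ['39', '1', '7', '210', '56'], everything except the name
fields[:1]      # ['pasta'], just the first element
numbers = [float(n) for n in fields[1:]]
print(numbers)  # [39.0, 1.0, 7.0, 210.0, 56.0]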

Finally, we want to reassemble the food’s name with its nutritional values already converted to numbers. To do this, we can create a list containing a single element – food’s name – and add a list containing nutrition data. In Python, adding lists concatenates them.

[food] + numbers

Dealing with corrupt data

Sometimes, just one line in a huge file is formatted incorrectly. For instance, it might contain a string that cannot be converted to a number. Unless handled properly, such a situation will force the program to crash. In order to handle it, we must use Python's exception handling. Parts of a program that might fail should be embedded in a try ... except block. In our program, one such error-prone part is the conversion of strings into numbers.

numbers=[float(n) for n in fields[1:]]

Let's insulate this line:

with open('ingredients.txt', 'rt') as f:
    for i, line in enumerate(f):
        fields = line.strip().split('\t')
        if i == 0:
            print(i, fields)
            continue
        food = fields[0]
        try:                 # watch out for errors!
            numbers = [float(n) for n in fields[1:]]
        except:              # if there is an error
            print(i, line)   # print offending line and its number
            print(i, fields) # print how it was split
            continue         # go to the next line without crashing
        print(i, food, numbers)

Manipulating data

Sorting data

In order to do something meaningful with the data, we need a container to hold it. Let’s store information for each food in a list, and create a list of these lists to represent all the foods. Having all the data conveniently in one list allows us to sort it easily.

data = []      # create an empty list to hold data

with open('ingredients.txt', 'rt') as f:
    for i, line in enumerate(f):
        fields = line.strip().split('\t')
        if i == 0:
            header = fields        # remember the header
            continue
        food = fields[0].lower()   # convert to lower case
        try:
            numbers = [float(n) for n in fields[1:]]
        except:
            print(i, line)
            print(i, fields)
            continue
        # append food info to data list
        data.append([food] + numbers)

# sort list in place by food name
data.sort(key=lambda a: a[0])

for food in data:  # iterate over the sorted list of foods
    print(food)    # print info for each food

['chicken breast', 0.0, 3.0, 22.0, 120.0, 112.0]
['parmesan grated', 0.0, 1.5, 2.0, 20.0, 5.0]
['pasta', 39.0, 1.0, 7.0, 210.0, 56.0]
['potato', 28.0, 0.0, 3.0, 110.0, 148.0]
['sour cream', 1.0, 5.0, 1.0, 60.0, 30.0]

data = [] creates an empty list, and the append() method appends new items to the list. The sort() method sorts a list in place. If the list contains simple values (such as numbers or strings), they are sorted from small to large or alphabetically by default. We have a list of complex records, and it is not obvious how to sort it, so we pass a key parameter to the sort() method. This parameter is a function that takes an element of the list and returns a simple value that is used to order the elements in the list. In our case, we used a simple nameless lambda function that took the record for each food and returned its first element, which is the food's name. So we ended up with the list sorted alphabetically.

We could also sort the list by the second value, which represents the amount of carbohydrates per serving. All we have to do is change the lambda function that calculates the key:

data.sort(key=lambda a:a[1])

This will return foods in different order:

['parmesan grated', 0.0, 1.5, 2.0, 20.0, 5.0]
['chicken breast', 0.0, 3.0, 22.0, 120.0, 112.0]
['sour cream', 1.0, 5.0, 1.0, 60.0, 30.0]
['potato', 28.0, 0.0, 3.0, 110.0, 148.0]
['pasta', 39.0, 1.0, 7.0, 210.0, 56.0]

Of course, sorting by the amount of carbohydrates per serving doesn't make much sense, because serving sizes can be as different as 5 grams for parmesan and 148 grams for potatoes. Ordering foods by the amount of protein per calorie might make more sense; that value would reflect the "healthiness" of the food. Once again, all we need to do is change the key function:

data.sort(key=lambda a:a[3]/a[4])

The output is

['sour cream', 1.0, 5.0, 1.0, 60.0, 30.0]
['potato', 28.0, 0.0, 3.0, 110.0, 148.0]
['pasta', 39.0, 1.0, 7.0, 210.0, 56.0]
['parmesan grated', 0.0, 1.5, 2.0, 20.0, 5.0]
['chicken breast', 0.0, 3.0, 22.0, 120.0, 112.0]

We have the "unhealthiest" food on top. Perhaps we want to start with the healthiest one instead. To do this, we need to provide another parameter to the sort() method: reverse.

data.sort(key=lambda a:a[3]/a[4], reverse=True)

This will reverse the list.

['chicken breast', 0.0, 3.0, 22.0, 120.0, 112.0]
['parmesan grated', 0.0, 1.5, 2.0, 20.0, 5.0]
['pasta', 39.0, 1.0, 7.0, 210.0, 56.0]
['potato', 28.0, 0.0, 3.0, 110.0, 148.0]
['sour cream', 1.0, 5.0, 1.0, 60.0, 30.0]

Although it is easy to sort by one or several columns in traditional spreadsheet applications, it is much harder to sort by complex expressions that require calculations on values from several columns. Python allows you to easily do it.

Filtering data

Having our data in a list allows us to filter it with one line of code using list comprehension, but this time we will use a new option of list comprehension: an if clause that allows us to exclude some elements from the new list:

data_filtered = [a for a in data if a[3]/a[4] > 0.09]

for food in data_filtered:
    print(food)

The filtered list is:

['chicken breast', 0.0, 3.0, 22.0, 120.0, 112.0]
['parmesan grated', 0.0, 1.5, 2.0, 20.0, 5.0]

Probability – Fundamental – Statistics – Data Types

Things are quite straightforward in Knowledge Representation and Reasoning (KR&R) when there is no uncertainty: formulating and representing propositions is easy. Problems begin to arise when uncertainty enters the picture. Consider, for example, an expert system designed to replace a doctor. When diagnosing a patient, there is no complete formal knowledge and no official set of rules that maps symptoms to treatments. In this situation, the expert system uses probability to determine which condition the patient most likely has, and which treatment is most likely to cure it.

Real-Life Probability Examples

As a mathematical term, probability describes how likely it is that an event will occur, such as drawing a green piece from a bag of assorted colors or drawing an ace from a deck of cards. You use probability in everyday decision-making, often without thinking about it. You may not work through actual probability problems, but you constantly make judgment calls using subjective probability to determine the best course of action.

Organize around the weather

You can make plans around the weather because probability is used in forecasting almost every day. Meteorologists cannot predict the weather with certainty, so they use instruments and tools to establish the likelihood that there will be snow, hail, or rain. A 60 percent chance of rain, for example, means that under the same weather conditions it has rained on 60 out of 100 days. Knowing this, you might intuitively take an umbrella to work or wear closed-toed shoes rather than sandals. Meteorologists also examine historical databases, not just the probable weather patterns for that day or week, to estimate approximate high and low temperatures.

Strategies in sports

Coaches and athletes use probability to work out the best strategies for games and competitions. A baseball coach evaluates a player's batting average before putting that player in the lineup. For example, a player with a .200 batting average gets a base hit in about two of every ten at-bats, while a player with a .400 batting average is even more likely to connect, with about four hits in every ten at-bats. Another example: if a high-school football kicker makes nine of fifteen field goal attempts from over 40 yards during a season, his next attempt from the same distance has roughly a 60 percent chance of success. We can have an equation like this:

9/15 = 0.60 or 60 percent

Insurance option

Probability plays a vital role in analyzing insurance policies and in deciding which plans and deductible amounts are best for you and your family. For example, when you choose a car insurance policy, you use probability to estimate how likely it is that you will need to make a claim. If 12 out of every 100 drivers in your community (12 percent) have crashed into a deer over the past year, you will probably consider comprehensive coverage on your car, not just liability. And if repairs following a deer-related event run about $2,800, you might choose a lower deductible on car repairs so you are not left unable to cover the expense.

Recreational activities and games

Probability is also what you use when you play video games, card games, or board games that involve chance or luck. You must weigh the chances of landing a needed item in a video game or of getting the cards you need in poker, and how much risk you are willing to take depends on the likelihood of getting those tokens or cards. For example, as Wolfram MathWorld notes, the odds of getting three of a kind in a poker hand are about 46.3 to 1, roughly a 2 percent chance, whereas the odds of catching one pair are about 1.4 to 1, or roughly 42 percent. Assessing what is at stake with the help of probability lets you settle on how you intend to play the game.

Statistics

Modern science is built on statements of probability and statistical significance. For example, studies report that cigarette smokers are about 20 times more likely to develop lung cancer than nonsmokers, that a catastrophic meteorite impact on Earth is possible within the next 200,000 years, and that first-born male children score 2.82 points higher on IQ tests than second-born male children. But why do scientists speak in such hedged terms? Why don't they simply say that cigarette smoking causes lung cancer, or tell us plainly whether we need to establish a colony on the moon to escape an extraterrestrial disaster?

The reason is that this language accurately reflects the data. Absolute conclusions are rare in scientific data. Some smokers reduce their risk of lung cancer by quitting, some smokers never contract the disease at all, and some smokers die prematurely of cardiovascular disease rather than lung cancer. All data exhibit variability, and the function of statistics is to quantify that variability so that scientists can make more accurate statements about their data.

A common misconception is that statistics offer proof that something is true or false. Statistics have no such power. Instead, they provide a measure of the probability of observing a specific result. Through statistical techniques, scientists can put numbers to probability, moving from the statement that someone who smokes is more likely to develop lung cancer to a report that the probability of developing lung cancer is nearly 20 times greater in cigarette smokers than in nonsmokers. This quantification of probability is a powerful tool, one that scientists use thoroughly and yet frequently misunderstand.

Statistics in data analysis

A large number of statistical procedures have been developed for data analysis, and they fall into two broad groups: descriptive and inferential.

Descriptive statistics:

Descriptive statistics let scientists quickly summarize the significant attributes of a dataset using measures such as the mean, median, and standard deviation. They give a general sense of the group being studied and allow scientists to place their research in a broader context. For example, Cancer Prevention Study I (CPS-1) was a prospective mortality study initiated in 1959. Among other variables, the investigators reported the ages and demographics of the participants so that the study group could be compared with the broader United States population at the time. The volunteers ranged in age from 30 to 108, with a median age of 52 years; 57 percent of the subjects were female, 2 percent were black, and 97 percent were white. By comparison, in 1960 the US population was about 51 percent female, roughly 11 percent black, and 89 percent white. These descriptive statistics identified a recognized shortcoming of CPS-1: with 97 percent of participants white, the study could not adequately describe illness profiles in US minority groups.

Inferential statistics:

Scientists use inferential statistics when they want to reason beyond their data: to make inferences about larger populations from smaller samples, to discover relationships between variables in datasets, and to model patterns in data. In statistics, the term "population" differs from its ordinary meaning of a collection of people. A statistical population is simply the larger group that a dataset is used to make inferences about, whether that group is a society, the locations of an oil field, meteor impacts, corn plants, or some other set of measurements.

In scientific studies, this process of generalizing from small samples to larger populations is essential. For example, although about 1 million and 1.2 million individuals were enrolled in Cancer Prevention Studies I and II respectively, they represent only a tiny fraction of the 1960 and 1980 United States populations, which totaled about 179 and 226 million. Correlation, point estimation and hypothesis testing, and regression are some of the standard inferential techniques. For example, in 2007 Tor Bjerkedal and Petter Kristensen analyzed the IQ test scores of about 250,000 male Norwegian military personnel. According to their analysis, first-born male children scored 2.82 +/- 0.07 points higher than second-born male children, a statistically significant difference at the 95 percent confidence level.
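
To make the idea of an inferential technique concrete, here is a hedged sketch that uses SciPy to run a two-sample t-test and compute a 95 percent confidence interval on simulated "first-born" and "second-born" scores; the simulated means, spread, and sample sizes are invented purely for illustration and are not the study's data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated IQ scores: the 2.82-point gap mirrors the example, the rest is made up
first_born = rng.normal(loc=102.8, scale=15, size=10000)
second_born = rng.normal(loc=100.0, scale=15, size=10000)

# Two-sample t-test: is the observed difference likely to be due to chance?
t_stat, p_value = stats.ttest_ind(first_born, second_born)

# 95 percent confidence interval for the difference in means
diff = first_born.mean() - second_born.mean()
se = np.sqrt(first_born.var(ddof=1) / len(first_born) + second_born.var(ddof=1) / len(second_born))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"difference = {diff:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}], p = {p_value:.2e}")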

The phrase "statistically significant" is a vital concept in data analysis, and it is frequently misunderstood. Because the word "significant" is commonly used to mean important or momentous, most people assume that a statistically significant result is an important one. That is not what it means. Statistical significance is an estimate of the probability that an observed difference or association is due to chance rather than to any real connection. In other words, a significance test describes how likely a difference or an apparent link of that size would be if no real difference or link existed. Because "significance" carries its everyday meaning so strongly, this measurable quantity is often expressed in terms of confidence instead.

Data Types

To do Exploratory Data Analysis (EDA), you need a clear grasp of data types, also known as measurement scales, because specific statistical measurements only make sense for specific data types. You also need to identify the data types you are handling in order to choose the right visualization method. Data types are simply the way you categorize different kinds of variables. Let's take an in-depth look at the main types of variables, with examples; we will sometimes refer to them as measurement scales.

Categorical data

Categorical data represents characteristics, so it stands for things such as a person's language, gender, and so on. Categorical values can also be written as numbers, for example 0 for female and 1 for male, but be aware that those numbers carry no mathematical meaning.

Nominal data

Nominal values represent discrete units and are used to label variables that have no quantitative value. They are nothing but labels. It is important to note that nominal data has no order, so rearranging its values changes nothing about their meaning. For example, when a question asks for your gender and you choose between female and male, the order of the options carries no information.

Ordinal data

Ordinal values represent discrete, ordered units. Ordinal data is therefore almost the same as nominal data, except that its ordering matters. For example, a question about your educational background might offer the ordered options elementary, high school, undergraduate, and graduate. Notice that while we know undergraduate comes after high school and high school after elementary, we do not know how large those gaps are. This is the major limitation of ordinal data: the differences between its values are unknown. For that reason, ordinal scales are usually used to measure non-numeric features such as customer satisfaction, happiness, and so on.

Numerical Data

Discrete data

We speak of discrete data when its values are separate and distinct, in other words when the data can take on only particular values. Discrete data can be counted, but it cannot be measured. A classic example is the number of heads in 100 coin flips. To check whether you are dealing with discrete data, ask two questions: can you count it, and does it stop making sense if you divide it into smaller and smaller parts?

Continuous data

Continuous data represents measurements; its values can be measured but not counted. For example, you can describe someone's height using an interval on the real number line.

Interval data

Interval values represent ordered units with known, equal differences between them. Interval data is therefore a variable containing ordered numeric values where we know the exact differences between those values. For example, a feature containing the temperature of a given place might take the values -10, -5, 0, +5, +10, and +15. The drawback of interval values is that they have no "true zero": in the temperature example, there is no value that means "no temperature at all." Interval data can be added and subtracted, but it does not allow multiplication, division, or the calculation of ratios. Because there is no true zero, many descriptive and inferential statistics are hard to apply.

Ratio data

Ratio values are also ordered units with equal differences between them. They are the same as interval values except that they do have an absolute zero. Examples include weight, length, and height.

The Importance of Data Types

Data types are an essential concept because each statistical technique can only be used with specific data types. Analyzing a dataset as continuous when it is actually categorical, for example, gives you a wrong analysis. A clear understanding of the data you are dealing with therefore lets you choose the correct analysis technique. It is worth going over each data type once more, this time with regard to which statistical techniques can be applied to it. You need the basics of descriptive statistics to follow this discussion; you can read all about descriptive statistics later in this chapter.

Statistical Methods

Nominal data

When dealing with nominal data, the point is to gather information about your data with the aid of:

Frequencies:

The frequency is the rate at which an event occurs within a dataset or over a period of time.

Proportion:

When you divide the frequency by the total number of events, you can easily calculate the proportion. For example, how often an event occurs divided by how often the event could occur.

Percentage:

The percentage is simply the proportion expressed out of 100. For visualization, a bar chart or a pie chart is all you need for nominal data. In data science, you can use one-hot encoding to transform nominal data into a numeric feature.
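
As a quick illustration of the one-hot encoding mentioned above, here is a minimal sketch using pandas; the column name and values are made up for the example:

import pandas as pd

# A small nominal feature (the values here are just an illustration)
df = pd.DataFrame({"language": ["english", "spanish", "english", "mandarin"]})

# One-hot encoding turns each label into its own 0/1 column
encoded = pd.get_dummies(df, columns=["language"])
print(encoded)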

Ordinal data

With ordinal data you can apply the same techniques as with nominal data, but some additional tools become available. You can summarize ordinal data with frequencies, proportions, and percentages, and visualize it with bar charts and pie charts. In addition, you can use the median, percentiles, the interquartile range, and the mode to review your data.

Continuous data

When you are dealing with continuous data, you can use most techniques to describe it. You can summarize it with the mean, median, range, percentiles, interquartile range, and standard deviation.

Visualization techniques:

To visualize continuous data, a histogram or a box plot comes to mind: they let you check the variability, central tendency, modality, and kurtosis of a distribution. Be aware that a histogram may not reveal outliers in your data; that is the reason for also using box plots.
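
Here is a minimal matplotlib sketch of the two plots just mentioned, drawn on randomly generated data purely for illustration:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
values = rng.normal(loc=50, scale=10, size=500)   # made-up continuous data

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.hist(values, bins=30)          # shape of the distribution
ax1.set_title("Histogram")

ax2.boxplot(values, vert=True)     # median, quartiles, and any outliers
ax2.set_title("Box plot")

plt.show()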

Descriptive Statistics

Descriptive statistical analysis is an essential aspect of machine learning: machine learning is all about making predictions, and to make predictions you first need to understand your data. It is also the necessary first step when drawing conclusions from data with statistics, so every dataset should go through a descriptive statistical analysis. People who skip this step often lose a considerable amount of useful insight into their data and end up drawing wrong conclusions. Take your time, be careful when running your descriptive statistics, and make sure your data meets all the prerequisites for further analysis.

Normal Distribution

The normal distribution is one of the most important concepts in statistics, since almost all statistical tests assume normally distributed data. It is essentially the pattern that large samples of data tend to show when plotted, and it is sometimes called the "Gaussian curve" or the "bell curve."

Probability calculations and inferential statistics generally require a normal distribution. This means that if your data is not normally distributed, you must be careful about which statistical tests you apply to it, since the wrong test could lead to wrong conclusions.

A normal distribution is given if your data is symmetric, unimodal, bell-shaped, and centered on its mean. In a perfectly normal distribution, each side is an exact mirror of the other.
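
A quick way to check these properties in practice is to plot the data and run a normality test. The sketch below uses SciPy's Shapiro-Wilk test on simulated data; this is just one common choice of test, not something prescribed by the text:

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=0, scale=1, size=300)   # simulated, roughly normal data

# Visual check: a histogram of normal data should look symmetric and bell-shaped
plt.hist(sample, bins=30)
plt.title("Does this look like a bell curve?")
plt.show()

# Formal check: the Shapiro-Wilk null hypothesis is that the data is normal
stat, p_value = stats.shapiro(sample)
print(f"Shapiro-Wilk p-value: {p_value:.3f}")   # large p-value: no evidence against normality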

Central tendency

Next we need to tackle the mean, the median, and the mode. Together, these three are referred to as measures of "central tendency," and they are the most popular, though quite distinct, kinds of "average."

Central tendency describes the tendency of your data values to cluster around their mean, mode, or median. The mean is the ordinary average, and it is generally regarded as the most reliable measure of central tendency for forming a hypothesis about a population from a single sample. It is computed by summing all the values and dividing by the number of values.

The mode is the value or category that occurs most frequently in the data. A dataset has no mode when no value repeats, and a dataset can also have more than one mode. The mode is the only measure of central tendency that can be used for categorical variables, since you cannot compute, say, the average of the variable "gender." Categorical variables can only be reported as counts and percentages.

The median, also known as the 50th percentile, is the midpoint or "middle" value of your data. It is much less affected than the mean by outliers and skewed data. For example, suppose a dataset of housing prices mostly ranges from $100,000 to $300,000 but also contains a few houses worth more than $3 million. Because the mean is the sum of all values divided by the number of values, those expensive houses pull it up sharply, whereas the median, being simply the middle data point, is barely affected by them. Consequently, the median is often the better statistic for describing such data.
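
The following short sketch makes the point concrete using made-up housing prices in the spirit of the example above:

import numpy as np

# Mostly ordinary prices, plus one very expensive outlier (all values invented)
prices = np.array([100_000, 150_000, 180_000, 220_000, 260_000, 300_000, 3_000_000])

print(f"mean:   ${np.mean(prices):,.0f}")     # pulled far upward by the $3M house
print(f"median: ${np.median(prices):,.0f}")   # still reflects a typical house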

Distributed Systems & Big Data

 

Distributed System

A distributed system is a collection of autonomous computers interconnected by either a local network or a global network. Distributed systems allow different machines to carry out different processes. Examples of distributed systems include banking systems, airline reservation systems, and so on.

A distributed system has several goals. Some of them are given below.

Scalability – To expand and manage the system without degrading any of its services.

Heterogeneity – To handle a considerable variety of node types.

Transparency – To hide the internal workings so that the user does not see the complexity.

Availability – To make resources available so that users can access and share them effectively.

Openness – To offer services according to standard rules and interfaces.

A distributed system has numerous advantages. Some of them are given below:

Complexity is hidden from the user in a distributed system.

A distributed system guarantees scalability.

A distributed system provides consistency.

A distributed system can be more efficient than a single centralized system.

The drawbacks of a distributed system are given below:

Cost – It is more expensive, because developing a distributed system is difficult.

Security – It is more vulnerable to hacking, because resources are exposed through the network.

Complexity – It is harder to understand and implement.

Network dependence – Problems in the underlying network can cause issues for the whole system.

How do I get hands-on with distributed systems?

You can learn distributed systems (DS) concepts by building small projects such as the following:

  1. Building a simple chat application:

Step 1: Start small and implement a simple chat application (a minimal sketch of a chat server appears after this list).

If that works, modify it to support multi-user chat sessions.

You should start to see issues with message ordering here.

Step 2: After reading the DS theory on causal and other message-ordering schemes, implement each of them, one at a time, in your system.

  2. Building a storage simulator:

Step 1: Write an Android application (no fancy UI, just a few buttons) that can insert into and query an underlying content provider. This application should be able to communicate with other devices that run your application.

Step 2: After reading the theory of DHTs and the Chord protocol, simulate these protocols in your distributed setup.

For example, suppose you run your application in three emulators.

These three instances of your application should form a Chord ring and serve insert/query requests in a distributed fashion, as specified by the Chord protocol.

If an emulator goes down, you should be able to reassign its keys, based on your hashing scheme, to the instances that are still running.
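
As a starting point for step 1 of the chat project above, here is a minimal, hedged sketch of a multi-client chat relay using only Python's standard library. The port number and message format are arbitrary choices, and a real project would still need error handling and one of the message-ordering schemes discussed above:

import socket
import threading

clients = []                      # connected client sockets
lock = threading.Lock()

def handle_client(conn):
    """Relay every line received from one client to all other clients."""
    with conn:
        for line in conn.makefile("r", encoding="utf-8"):
            with lock:
                for other in clients:
                    if other is not conn:
                        other.sendall(line.encode("utf-8"))
    with lock:
        clients.remove(conn)

def main(host="127.0.0.1", port=5000):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))
    server.listen()
    print(f"Chat server listening on {host}:{port}")
    while True:
        conn, _ = server.accept()
        with lock:
            clients.append(conn)
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()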

WHAT ARE THE APPLICATIONS OF DISTRIBUTED SYSTEMS?

A distributed system is a group of computers working together in such a way that it appears as a single computer to the end user.

Whenever server traffic grows, one option is to upgrade the server's hardware and software to handle it, which is known as vertical scaling. Vertical scaling works well, but it cannot be carried on indefinitely: beyond a certain point, even the best hardware and software cannot handle enormous traffic on a single machine.

The following are some applications of distributed systems:

Global Positioning System (GPS)

World Wide Web

Air traffic control systems

Automated banking systems

In the World Wide Web, data and applications are distributed across a number of heterogeneous computer systems, yet to the end user or the browser it appears to be a single system from which the user gets information.

Multiple computers work simultaneously and share resources to make the World Wide Web possible.

All of these systems are fault tolerant: if any one system fails, the application does not fail with it, because another computer in the system can take over the failed computer's task, and all of this happens without the end user or browser ever knowing.

The elements of the World Wide Web are

Multiple computers

Common state

Interconnection of the multiple computers

There are three sorts of distributed systems:

Corporate systems

These use separate servers for databases, business intelligence, transaction processing, and web services. They are usually located at one site, but they may have multiple servers at several locations if continuous service is important.

Very large web sites: Google, Facebook, Quora, perhaps Wikipedia

These resemble the corporate systems but are so enormous that they have a character of their own. They are forced to be distributed because of their scale.

Systems serving distributed organizations that cannot rely on network connectivity or that need local IT resources

The military needs some unit-level command and control capability. The ideal is that every unit (soldier, transport, and so on) can act as a node, so that there is no central location whose destruction would bring everything down.

Mining operations often have significant industrial capacity in the remotest places and are best served by local IT for stock control, payroll and staff systems, and specialized accounting and planning systems.

Construction companies often have huge projects without significant communications, so they end up looking much like the mining operations above. In the worst case, they may depend on a driver hopping in his truck with a memory stick and connecting to the internet in some nearby town.

Data Visualization

What is Data Visualization?

Data Visualization is Interactive

Have you ever booked your flight online and noticed that you can now not only view seat availability but also pick your own seat? Perhaps you have noticed that when you look up information online about another country, you may find a site where all you need to do to get political, economic, geographic, and other information is drag your mouse over the region of the country you are interested in.

Perhaps you have put together a business presentation that condenses several layers of complicated marketing and budget data into a simple display, letting you review every part of your report just by clicking on one area of a map, chart, or diagram. You may even have made forecasts by adjusting some of the inputs and watching the chart change before your eyes.

Warehouses are tracking inventory. Businesses are tracking sales. People are building visual displays of the information that matters to them. The traveler, the student, the office worker, the marketing executive, the warehouse manager, and the CEO can now all interact with the information they are looking for through data visualization tools.

Data Visualization is Imaginative

If you can visualize it in your mind, you can visualize it on a computer screen. An eager skier might want to look at the average snowfall at Soldier Mountain, Idaho. Researchers and students might need to compare the average cancer death rates of men and women in Montana or Hawaii. The examples are endless.

Data visualization tools can help the entrepreneur present products on their website imaginatively and informatively. State and national government agencies have adopted data visualization to provide useful information to the public. Airlines take advantage of data visualization to be more accommodating. Businesses use data visualization for tracking and reporting. Children use data visualization tools on the home computer to complete research assignments or to satisfy their curiosity about far-flung corners of the world.

Wherever you go, data visualization will be there. Whatever you need, data visualization can present answers in a helpful way.

Data Visualization is Comprehensive

Each of us has looked up information online and found less than helpful presentation formats, which have a way of either presenting basic details in a complicated manner or showing complex information in an even more complex way. Each of us, at some point, has wished that a site had a more user-friendly way of presenting its information.

Information is the language of the 21st century: everybody is sending it, and everybody is searching through it. Data visualization can make both the senders and the searchers happy by providing a simple medium for conveying complex information.

Data Visualization Basics

Data visualization is the process of displaying data and information in graphical charts, figures, and bars.

It is used as a means of delivering visual reporting to users about the performance, operations, or general statistics of an application, system, piece of hardware, or virtually any IT asset. Data visualization is typically achieved by extracting data from the underlying IT system, usually in the form of numbers, statistics, and overall activity. The data is processed and then displayed on the system's dashboard by data visualization software.

This is done to help IT managers get quick, visual, and straightforward insight into the performance of the underlying system. Most IT performance monitoring applications use data visualization techniques to provide an at-a-glance understanding of the performance of the monitored system.
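
As a tiny illustration of the kind of dashboard chart described above, here is a hedged matplotlib sketch that plots invented CPU-usage readings over time; the metric, values, and labels are all made up for the example:

import matplotlib.pyplot as plt

# Invented monitoring data: CPU usage sampled once a minute
minutes = list(range(10))
cpu_usage = [22, 25, 31, 45, 62, 71, 68, 54, 40, 33]   # percent

plt.plot(minutes, cpu_usage, marker="o")
plt.title("CPU usage over the last 10 minutes")
plt.xlabel("Minutes ago")
plt.ylabel("CPU usage (%)")
plt.ylim(0, 100)
plt.show()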

Software Visualization

Software visualization is the practice of creating visual tools to map out elements of, or otherwise display aspects of, source code. This can be done for a wide range of programming languages, in different ways, with different criteria and tools.

The principal idea behind software visualization is that visual interfaces can help developers and others understand code or reverse-engineer applications. Much of the power of software visualization comes from understanding the relationships between pieces of code, which specific visual tools, such as window views, can present directly. Other features may include various kinds of charts or layouts that developers can use to compare existing code against a given standard.

Big Data Visualization

Big data visualization refers to the use of more contemporary visualization techniques to show the relationships within data. These techniques include applications that can display real-time changes and richer graphics, thereby going beyond pie charts, bar charts, and other conventional charts. Such depictions move away from hundreds of rows, columns, and attributes toward a more creative visual representation of the data.

Ordinarily, when businesses need to present relationships among data, they use graphs, bars, and charts to do it, along with an assortment of colors, terms, and symbols. The main problem with this arrangement, however, is that it does not do a good job of presenting very large datasets or data involving huge numbers. Big data visualization uses more interactive graphical representations, including personalization and animation, to display figures and establish connections among pieces of information.

The Many Faces of Data Visualization

Data visualization has become one of the main buzz phrases swirling around the web these days. With all the promises of big data and the IoT (Internet of Things), more organizations are trying to get more value from the voluminous data they generate. This frequently involves complex analysis, both real-time and historical, combined with automation.

A critical factor in turning this data into meaningful information, and then into informed action, is the way the data is visualized. Will it be viewed in real time? And by whom? Will it be displayed in colorful bubble charts and trend graphs, or will it be embedded in high-detail 3D graphics? What is the goal of the visualization? Is it to share information, enable collaboration, or support decision-making? Data visualization may be a familiar idea, but we do not all have the same notion of what it means.

For some organizations, effective data visualization is an important part of doing business; it can even be a matter of life and death (think health care and military applications). Data visualization (or information visualization) is a vital part of much scientific research. From particle physics to sociology, creating compact yet powerful visualizations of research data can help researchers quickly identify patterns or anomalies, and can sometimes produce that warm, satisfying feeling we get when we have finally wrapped our heads around something.

Today's Visual Culture

We live in a world that seems to generate new information at a pace that can be overwhelming. With television, the web, roadside billboards, and more all competing for our increasingly divided attention, the media and corporate America are forced to find new ways of getting their messages through the noise and into our perception. As a rule, when possible, the medium chosen to carry the message is visual. Whether it is through a picture, a video, a striking infographic, or a simple icon, we have all become highly skilled at processing information visually.

It is a busy world with many things we want to be informed about. While we all receive information in many ways over any given day, only individual pieces of that information will have any real impact on the way we think and act as we go about our everyday lives. The power of effective data visualization is that it can distill those significant details from enormous collections of data simply by placing them in the proper context.

Well-planned data visualization, executed in a visually engaging way, can lead to faster, more confident decisions. It can shed light on past failures and uncover new opportunities. It can provide a tool for collaboration, planning, and training. It is becoming a necessity for many organizations that want to compete in the marketplace, and those that do it well will set themselves apart.

Python in the Real World

 

Now that you know the basics behind Python programming, you might be wondering where exactly could you apply your knowledge. Keep in mind that you only started your journey, so right now you should focus on practicing all the concepts and techniques you learned. However, having a specific goal in mind can be extremely helpful and motivating.

As mentioned earlier in this book, Python is a powerful and versatile language with many practical applications. It is used in many fields from robotics to game development and web-based application design. In this chapter you are going to explore some of these fields to give you an idea about what you can do with your newly acquired skills.

What is Python Used For?

You’re on your way to work listening to your favorite Spotify playlist and scrolling through your Instagram feed. Once you arrive at the office, you head over to the coffee machine and while waiting for your daily boost you check your Facebook notifications. Finally, you head to your desk, take a sip of coffee and you think “Hey, I should Google to learn what Python is used for.” At this point you realize that every technology you just used has a little bit of Python in it.

Python is used in nearly everything, whether we are talking about a simple app created by a startup company or a giant corporation like Google. Let’s go through a brief list of all the ways you can use Python.

Robotics

Without a doubt, you’ve probably heard about tiny computers like the Raspberry Pi or Arduino board. They are tiny, inexpensive devices that can be used in a variety of projects. Some people create cool little weather stations or drones that can scan the area, while others build killer robots because why not. Once the hardware problems are solved, they all need to take care of the software component.

Python is the ideal solution here, and it is used by hobbyists and professionals alike. These tiny computers don't have much power, so they benefit from a language that gets a lot done with modest resources; after all, resources also consume power, and a tiny robot can only carry so much juice. Everything you have learned so far can be used in robotics, because Python combines easily with hardware components, and there are many Python extensions and libraries designed specifically for the field of robotics.

In addition, Google uses some Python magic in their AI-based self-driving car. If Python is good for Google and for creating killer robots, what more can you want?

Machine Learning

You’ve probably heard about machine learning because it is the new popular kid on the block that every tech company relies on for something. Machine learning is all about teaching computer programs to learn from experience based on data you already have. Thanks to this concept, computers can learn how to predict various actions and results.

Some of the most popular machine learning examples can be found in:

  1. Google Maps: Machine learning is used here to determine the speed of traffic and to predict the best route to your destination, based on several other factors as well.
  2. Gmail: Spam used to be a problem, but thanks to Google's machine learning algorithms, spam can now be easily detected and contained.
  3. Spotify or Netflix: Noticed how these streaming platforms seem to know what new things to recommend to you? That's all machine learning: algorithms predict what you will like based on what you have watched or listened to so far.

Machine learning involves programming as well as a great deal of mathematics. Python’s simplicity makes it attractive for both programmers and mathematicians. Furthermore, unlike other programming languages, Python has a number of add-ons and libraries specifically created for machine learning and data science, such as Tensorflow, NumPy, Pandas, and Scikit-learn.

Cybersecurity

Data security is one of the biggest concerns of our century. By integrating our lives and business into the digital world, we make it vulnerable to unauthorized access. You probably read every month about some governmental institution or company getting hacked or taken offline. Most of these situations involve terrible security due to outdated systems and working with antiquated programming languages.

Python's popularity itself helps make it more secure. How so? When a language is popular, it is backed by a large community of experts and testers, so Python is patched often and security issues tend to be fixed quickly. This makes it a popular language in the field of cybersecurity.

Web Development

As mentioned several times before, Python is simple yet powerful. Many companies throughout the world, no matter the size, rely on Python to build their applications, websites, and other tools. Even the giants like Google and Facebook rely on Python for many of their solutions.

We discussed the main advantages of working with Python earlier in the book, so we won't go over them again. It is worth mentioning, however, that Python is often used as a glue language, especially in web development. Building web tools usually involves several different programming languages, database query languages, and so on, and Python can act as the integration layer, for example by wrapping C++ components and combining them with other elements. C++ comes up here because in many areas the performance-critical components are written in C++ for its speed, while Python handles the high-level customization.

Linear Regression

 

The easiest and most basic machine learning algorithm is linear regression. It will be the first one that we are going to look at and it is a supervised learning algorithm. That means that we need both – inputs and outputs – to train the model.

Mathematical Explanation

Before we get into the coding, let us talk about the mathematics behind this algorithm.

In the figure above, you see a lot of different points, which all have an x-value and a y-value. The x-value is called the feature, whereas the y-value is our label. The label is the result for our feature. Our linear regression model is represented by the blue line that goes straight through our data. It is placed so that it is as close as possible to all points at the same time. So we “trained” the line to fit the existing points or the existing data.

The idea is now to take a new x-value without knowing the corresponding y-value. We then look at the line and find the resulting y-value there, which the model predicts for us. However, since this line is quite generalized, we will get a relatively inaccurate result.

However, one must also mention that linear models only really develop their effectiveness when we are dealing with numerous features (i.e. higher dimensions).

If we apply this model to school data and try to find a relationship between missed hours, study time, and the resulting grade, we will probably get a less accurate result than if we include 30 parameters. Logically, however, we then no longer have a straight line or a flat surface but a hyperplane, which is the equivalent of a straight line in higher dimensions.

Preparing Data

Our data is now fully loaded and selected. However, in order to use it as training and testing data for our model, we have to reformat it. The sklearn models do not accept Pandas data frames, but only NumPy arrays. That's why we turn our features into an X array and our label into a Y array.
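
Before we do that, note that the snippets in this chapter assume some setup from earlier in the book that is not repeated here. As a hedged sketch, the following imports and loading step would make them runnable; the file name, separator, and column selection are assumptions based on the student-grade columns (studytime, G2, G3) that the later examples use:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Assumed dataset: the UCI student performance CSV, reduced to a few numeric columns
data = pd.read_csv("student-mat.csv", sep=";")
data = data[["age", "sex", "studytime", "absences", "G1", "G2", "G3"]]
data["sex"] = data["sex"].map({"F": 0, "M": 1})   # encode the nominal column numerically

prediction = "G3"   # the label we want to predict (the final grade)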

X = np.array(data.drop(columns=[prediction]))

Y = np.array(data[prediction])

The method np.array converts the selected columns into an array. The drop function returns the data frame without the specified column. Our X array now contains all of our columns, except for the final grade. The final grade is in the Y array.

In order to train and test our model, we have to split our available data. The first part is used to get the hyperplane to fit our data as well as possible. The second part then checks the accuracy of the prediction, with previously unknown data.

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.1)

With the function train_test_split, we divide our X and Y arrays into four arrays. The order must be exactly as shown here. The test_size parameter specifies what percentage of records to use for testing. In this case, it is 10%. This is also a good and recommended value. We do this to test how accurate it is with data that our model has never seen before.

Training and Testing

Now we can start training and testing our model. For that, we first define our model.

model = LinearRegression()

model.fit(X_train, Y_train)

By using the constructor of the LinearRegression class, we create our model. We then use the fit function and pass our training data. Now our model is already trained. It has now adjusted its hyperplane so that it fits all of our values.

In order to test how well our model performs, we can use the score method and pass our testing data.

accuracy = model.score(X_test, Y_test)

print(accuracy)

Since the splitting of training and test data is always random, we will have slightly different results on each run. An average result could look like this:

0.9130676521162756

Actually, 91 percent is a pretty high and good accuracy. Now that we know that our model is somewhat reliable, we can enter new data and predict the final grade.

X_new = np.array([[18, 1, 3, 40, 15, 16]])

Y_new = model.predict(X_new)

print(Y_new)

Here we define a new NumPy array with values for our features in the right order. Then we use the predict method, to calculate the likely final grade for our inputs.

[17.12142363]

In this case, the final grade would probably be 17.

Visualizing Correlations

Since we are dealing with high dimensions here, we can’t draw a graph of our model. This is only possible in two or three dimensions. However, what we can visualize are relationships between individual features.

plt.scatter(data['studytime'], data['G3'])
plt.title("Correlation")
plt.xlabel("Study Time")
plt.ylabel("Final Grade")
plt.show()

Here we draw a scatter plot with the function scatter, which shows the relationship between the learning time and the final grade.

In this case, we see that the relationship is not really strong. The data is very diverse and you cannot see a clear pattern.

plt.scatter(data['G2'], data['G3'])
plt.title("Correlation")
plt.xlabel("Second Grade")
plt.ylabel("Final Grade")
plt.show()

However, if we look at the correlation between the second grade and the final grade, we see a much stronger correlation.

Here we can clearly see that the students with good second grades are very likely to end up with a good final grade as well. You can play around with the different columns of this data set if you want to.

Conclusion

 

In conclusion, Python offers some of the strongest computational capabilities available for big data analysis. If this is your first time programming with data, Python will be an easier language to learn than most, and far more user-friendly.

And so, we’ve come to the end of this book which was meant to give you a taste of data analysis techniques and visualization beyond the basics using Python. Python is a wonderful tool to use for data purposes and I hope this guide stands you in good stead as you go about using it for your purposes.

I have tried to go more in-depth in this book, give you more information on the fundamentals of data science, along with lots of useful practical examples for you to try out.

Please read this guide as often as you need to and don’t move on from a chapter until you fully understand it. And do try out the examples included – you will learn far more if you actually do it rather than just reading the theory.

This was just an overview to recap on what you learned in the first book, covering the datatypes in pandas and how they are used. We also looked at cleaning the data and manipulating it to handle missing values and do some string operations.

 

 

 
