Predicting Used Car Prices with Multiple Linear Regression
https://ijeremiah.com/portfolio/cars/

I built a Linear Regression model that predicts a used car's sales price based on some of its attributes, like make, mileage, engine type, engine displacement, and body type.

The data set I worked on contains real information on over 4,000 used car sales in the United States. Click the link below to download the data set as well as the notebook file for this project.

The following is an HTML version of the notebook file.
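For a flavor of the approach, here is a minimal sketch of fitting a multiple linear regression on attributes like these (the file name and column names below are placeholders for illustration, not necessarily the ones used in the actual notebook):

import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Placeholder file and column names
df = pd.read_csv('used_cars.csv')

# One-hot encode the categorical attributes (make, engine type, body type)
X = pd.get_dummies(df[['mileage', 'engine_v', 'brand', 'engine_type', 'body']], drop_first=True)
y = df['price']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)
print('R^2 on the test set:', model.score(X_test, y_test))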




Web-Scraping Real-Estate Data
https://ijeremiah.com/portfolio/scraper/

!-For educational purposes only-!

I wrote this program after learning about web-scraping with Python from this Python course. You can download the scripts and the executable file by clicking the link below.

Quick Notes About the Program:
  • Program works as intended as of: 1-20-2020
  • It is still a work-in-progress
  • To execute the program:
    • Download the ZIP file
    • Extract EVERYTHING into a single location
    • Run the REScraper.exe file
  • The GUI will freeze while scraping but the attached terminal will not. Allow the program to finish scraping before trying to close the GUI or the terminal.
  • Scraping typically takes around 10 minutes to finish.
  • The scraped data will be saved into the same folder as the .exe file
  • For educational purposes only. No harm intended.

The Motivation:

The Python course I took has a section on web-scraping a real-estate website. The course had us scrape a cached version of Century21.com. There is a very good reason for doing that as opposed to making us scrape a live website. Still, I didn't feel like I came out of that section with real, usable knowledge of web-scraping. Therefore, I decided to look for a real, live real-estate website that I could practice my web-scraping skills on.

Realtor(.com) seemed like an ideal candidate for my needs. The layout of the search results resembles that of the cached version of Century21.com. Its source code seemed fairly easy to navigate as well. The main difference is that now I'd have to deal with anti-scraping measures (something that the Python course didn't cover).

My plan is to (1) write a script that will scrape housing market data in the Reno-Sparks area, and (2) create a user interface for the script.

Concepts / Features:

  • Web Scraping
  • GUI Design
  • Proxy Rotation
  • Error Handling
  • Process Mapping
  • Data Preprocessing
  • Connecting to a Database
  • Creating a Standalone Executable

The Base URL:

Realtor(.com) allows you to filter your search by location, property type, listing age, and so on. Applying any of these filters changes the URL.

Location:

Since I'm only interested in properties around the Reno-Sparks area, the program will only look at properties within the area enclosed by these two GPS coordinates: [ 38.99819, -120.262437 ] & [ 39.898492, -118.944078 ]. You might be asking why I chose to use coordinates instead of city names or even postal codes. In the case of Realtor(.com), I found it easier to use coordinates than to specify city names or postal codes.

Let's say you want to search for properties within Reno and Sparks only. You would do this by first picking a base city; let's use Reno in this example. Your base URL would then be:
https://www.realtor.com/realestateandhomes-search/Reno_NV

If you want to add Sparks to your search, you would go to More Filters, then Expand Search, then find Sparks in the drop-down list.

Adding Sparks to your search will change the URL to:
https://www.realtor.com/realestateandhomes-search/Reno_NV?ncs=159135385

As you can see, it seems that Realtor(.com) assigns a unique number code to each city. Not only that, it also seems that they have a unique code for each small community. For example, Virginia Foothills is not a city in Nevada; it's just a small community in South Reno. Filtering the search this way means that I would have to determine the code for each city and community that I want to include in the search. Using coordinates seemed like a more reasonable approach. With coordinates, the URL looks something like this:
https://www.realtor.com/realestateandhomes-search/Reno_NV?pos=38.99819,-120.262437,39.898492,-118.944078,10&qdm=true

Property Type:

I decided to limit the scraping to residential properties only. Residential properties include single family houses, multi-family houses, condominiums, and mobile homes. Filtering for these property types changes the URL to:
https://www.realtor.com/realestateandhomes-search/Reno_NV/type-single-family-home,mfd-mobile-home,multi-family-home,condo-townhome-row-home-co-op?pos=38.99819,-120.262437,39.898492,-118.944078,10&qdm=true

Listing Status:

I also limited the program to only look at pre-existing homes (as opposed to homes that are still under construction) and homes that are still on the market (as opposed to those with a pending status). Adding these filters, the URL changes to:
https://www.realtor.com/realestateandhomes-search/Reno_NV/type-single-family-home,mfd-mobile-home,multi-family-home,condo-townhome-row-home-co-op/pnd-hide/nc-hide?pos=38.99819,-120.262437,39.898492,-118.944078,10&qdm=true

Listing Age:

As of December 2019, a property in Reno is sold after an average of 74 days on the market (according to Redfin). I initially wanted to filter for properties that were listed less than 75 days ago. However, this cannot be done easily on Realtor(.com); the maximum listing-age filter is only 30 days.

In theory, I could just have the program stop scraping once it sees a property with a listing age greater than 75 days. However, this would also mean a longer processing time, as there would be more pages to go through. For now, I'll settle for the 30-day filter. Adding in this filter changes the URL to:
https://www.realtor.com/realestateandhomes-search/Reno_NV/type-condo-townhome-row-home-co-op,multi-family-home,single-family-home,mfd-mobile-home/pnd-hide/nc-hide/dom-30?pos=38.99819,-120.262437,39.898492,-118.944078,10&qdm=true

Newest Listings:

Finally, I think it makes a lot of sense to see the most recent listings at the top of the search results. Sorting the results by newest listings changes the URL to:
https://www.realtor.com/realestateandhomes-search/Reno_NV/type-condo-townhome-row-home-co-op,multi-family-home,single-family-home,mfd-mobile-home/pnd-hide/nc-hide/dom-30/sby-6?pos=38.99819,-120.262437,39.898492,-118.944078,10&qdm=true

The Final Base URL:

The final URL string looks like this:
https://www.realtor.com/realestateandhomes-search/Reno_NV/type-condo-townhome-row-home-co-op,multi-family-home,single-family-home,mfd-mobile-home/pnd-hide/nc-hide/dom-30/sby-6/pg-?pos=38.99819,-120.262437,39.898492,-118.944078,10&qdm=true

Following this link will bring you to the first page of the search results. Adding a '2' right before the '?' will bring you to the 2nd page, and so on. This final URL string is stored in the variable base_url at the top of the back-end script.
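For illustration, this is the same string manipulation the back-end script uses later (inside begin_scrape) to build the URL for a given page:

# base_url is the final URL string shown above
page = 2
insert_page = base_url.find('?')   # the '?' sits right after 'pg-'
current_url = base_url[:insert_page] + str(page) + base_url[insert_page:]
# current_url now ends with '.../pg-2?pos=38.99819,-120.262437,...'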

The Interface:

The image above shows the GUI I have set up so far. It is still a work in progress, but the idea is that the user will have the option to save the scraped data into either a .csv file or a database file (.db). The scraping starts when the user clicks one of the two top buttons.

The image on the left simply shows the scope of the scrape. As mentioned before, I am only interested in properties located within the rectangle formed by these two coordinates: [ 38.99819, -120.262437 ] & [ 39.898492, -118.944078 ]. The image is just a satellite view of that rectangle.

Implementing the Status Box proved more difficult than I thought. So for now, it contains nothing more than a placeholder.

The Script:

I wrote two separate scripts: a front end and a back end. The front end script generates the GUI while the back end contains the main scraping functions. I prepared the flowchart below (using draw.io) to help visualize the programming logic.

The whole process starts when the user clicks either the Save to CSV button or the Save to DB File button. Once the user clicks one of these buttons, the program generates a list of IP addresses that will be used as proxies later on. Next, the program determines the total page count for the search results pre-defined by the base URL. Once the total page count has been determined, the main scraping function takes over. The scraping function returns data containing listing information for every property within the area of interest. This data then needs to be preprocessed, mainly for readability and consistency. The whole process ends with the preprocessed data being saved into a .csv file or a .db file. Each of these functions is discussed further in the following sections.
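As a rough sketch of that flow (the run_scrape wrapper and the pandas/sqlite3 saving step here are my own assumptions for illustration; only begin_scrape, shown further below, is the actual scraping function):

import sqlite3
import pandas as pd

def run_scrape(save_as='csv'):
    # begin_scrape() gathers the proxies, determines the page count, and scrapes the listings
    data = begin_scrape()
    # The real script preprocesses the data for readability and consistency before saving
    df = pd.DataFrame(data)
    if save_as == 'csv':
        df.to_csv('listings.csv', index=False)
    else:
        with sqlite3.connect('listings.db') as conn:
            df.to_sql('listings', conn, if_exists='replace', index=False)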

get_proxies():

I was able to prototype this program relatively quickly: I identified the tags I needed and wrote a simple script that successfully scraped the first page of the search results. Things became difficult once I started scraping the other pages. The problem was that my script kept getting recognized as a bot.

The Python course I took does not have a section on preventing your script from getting recognized as a bot. The first solution I came up with was to let my program sleep between pages with sleep(randint(3,6)). The idea was to replicate human behavior by not "clicking" too fast and not spending the same amount of time on each page. However, I was still getting the same error as before. I tried increasing the delay and widening the interval, but that didn't help either.

After searching through several Stack Overflow and GitHub posts, I eventually came across this article. The article explains the concept of rotating proxies and provides a basic example of how to implement one in Python. Now, I don't know much about networking and internet protocols (I can't tell you the difference between HTTP and HTTPS), but the article was so well written that I was able to absorb the information relatively quickly.

The first step in implementing a rotating proxy is to gather a list of proxy IP addresses. I wrote the get_proxies function to do this for me.

# Imports used by the functions shown in this post
import requests
from requests.exceptions import Timeout, ProxyError
from bs4 import BeautifulSoup
from random import randint
from time import sleep
from math import ceil

# 'header' holds the User-agent string sent with every request;
# its value is defined elsewhere in the back-end script.

def get_proxies():
    print('Gathering proxy IPs')
    
    # Start with an empty proxy list
    proxies = []
    
    # Free proxy websites and the respective table maximum rows 
    proxy_sites = {'https://www.us-proxy.org/': 200, 
                   'https://free-proxy-list.net/': 300,
                   'https://www.sslproxies.org/': 100} 
    
    
    # Loop through all the websites in proxy_sites
    for site, max_row in proxy_sites.items(): 
        r = requests.get(site, headers = {'User-agent': header})
        soup = BeautifulSoup(r.content, 'html.parser')
        
        # Read every single row in the IP addresses table
        for i in range(max_row): 
            prxi = soup.find('tbody').find_all('tr')[i].find_all('td')
            
            # Save only the https AND elite IPs
            if prxi[6].text == 'yes' and prxi[4].text == 'elite proxy': 
                proxies.append(':'.join([prxi[0].text, prxi[1].text]))      
    
    # Remove any duplicate IPs
    proxies = list(dict.fromkeys(proxies)) 

    # Show how many proxy IPs were found
    global init_proxy_count
    init_proxy_count = len(proxies)
    print('*** %s proxy IPs found' % init_proxy_count)

    return proxies

I identified three websites that provide free proxy IP addresses: us-proxy.org, free-proxy-list.net, and sslproxies.org. These websites are almost completely identical, making it easy to collect the IPs. The IP addresses (and their attributes such as port, anonymity, and country of origin) are all contained within a single table. The get_proxies function loops through each of these websites, identifies the elite + https IP addresses in the tables, and stores them in a list called proxies.
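One fragility worth noting: the function assumes each table has at least as many rows as the hard-coded maximum, so a shorter table would raise an IndexError. Below is a minimal variation (my own sketch, not the original script) that simply iterates over however many rows are actually present:

def get_proxies_safe():
    proxies = []
    sites = ['https://www.us-proxy.org/',
             'https://free-proxy-list.net/',
             'https://www.sslproxies.org/']
    for site in sites:
        r = requests.get(site, headers={'User-agent': header})
        soup = BeautifulSoup(r.content, 'html.parser')
        # Loop over the rows that actually exist instead of a fixed row count
        for row in soup.find('tbody').find_all('tr'):
            cells = row.find_all('td')
            # Same filter as before: https-capable, elite-anonymity proxies only
            if cells[6].text == 'yes' and cells[4].text == 'elite proxy':
                proxies.append(cells[0].text + ':' + cells[1].text)
    # Remove duplicates while preserving order
    return list(dict.fromkeys(proxies))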

get_page_count():

The program needs to know the last page of the search results in order to know when to stop. I wrote the get_page_count function for this very purpose.

def get_page_count():
    print('Determining page count')
    
    # This can be re-used when parsing the first page
    global page1_soup
    
    # Try the user's own IP first (faster)
    try: 
        r = requests.get(base_url, headers = {'User-agent': header})
        page1_soup = BeautifulSoup(r.content, 'html.parser')
        # Use a proxy IP if the script was flagged as a bot
        if (page1_soup.html['lang'] == 'zxx') or ('script' in r.text[0:50]):
            remove_bad_proxy(proxies)
            page1_soup = proxy_requests(base_url, proxies)
            result_count = page1_soup.find('span', {'id': 'search-result-count'})
        # Proceed if the script wasn't flagged as bot
        else:
            result_count = page1_soup.find('span', {'id': 'search-result-count'})
    # If the user's IP fails, use an IP from proxies list
    except: 
        page1_soup = proxy_requests(base_url, proxies)
        result_count = page1_soup.find('span', {'id': 'search-result-count'})
    
    # Determine the number of properties in the search result
    text = result_count.text
    global properties
    properties = int(text.split()[0])

    # Determine the total page count based on the number of properties. 44 is the maximum number of listings shown per page
    page_count = ceil(properties / 44)

    print('*** %s is the total page count\n*** %s properties found' % (page_count, properties))

    return page_count

This function goes to the first page of the search results, determines the total number of properties that match the filters, and then does some simple math to determine the page count from the number of properties.
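For example, if the search results report 417 properties, the page count would be ceil(417 / 44) = 10, since each results page shows at most 44 listings.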

The function first uses the user's own IP address to connect to the website. If the user's IP is blocked for some reason, or doesn't make a successful connection with the web page, get_page_count will then use the proxy_requests function to reconnect using one of the proxy IP addresses.

proxy_requests():

The proxy_requests function is what rotates the proxy. This function takes two arguments: the URL it will attempt to connect to, and a list of proxies that it can use.

def proxy_requests(url, proxies):
    print('... Searching for a valid IP')

    # Keep sending a request, using a different proxy every time, until a valid connection is made to the server
    while 1:
        # Try to make a connection using the first proxy in the proxies list
        try:
            r = requests.get(url, 
                             proxies = {'https': proxies[0]},
                             headers = {'User-agent': header},
                             timeout = (20,20))
            soup = BeautifulSoup(r.content, 'html.parser')
            # Try the next proxy if the script was flagged as a bot
            if (soup.html['lang'] == 'zxx') or ('script' in r.text[0:50]):
                print('... Script was recognized as a bot using IP - {}. Trying IP - {}'.format(proxies[0], proxies[1]))
                remove_bad_proxy(proxies)
                pass
            # Proceed if the script wasn't flagged as a bot
            else:
                print('... Valid IP found!')
                print('Current valid IP: ' + proxies[0])
                break
        # If first IP fails, remove the first proxy IP in the proxies list. Also notify the user of Timeout Error    
        except Timeout:
            print('... Timeout Error on IP - %s. Trying IP - %s' % (proxies[0],proxies[1]))
            remove_bad_proxy(proxies)
            pass
        # If first IP fails, remove the first proxy IP in the proxies list. Also notify the user of Proxy Error
        except ProxyError:
            print('... Proxy Error on IP - %s. Trying IP - %s' % (proxies[0],proxies[1]))
            remove_bad_proxy(proxies)
            pass
        # If first IP fails, remove the first proxy IP in the proxies list. Also notify the user of an Unknown Error
        except:
            print('... Unknown Error on IP - %s. Trying IP - %s' % (proxies[0],proxies[1]))
            remove_bad_proxy(proxies)
            pass

    return soup

The function tries to make a connection using the first proxy IP in the proxies list. If that proxy fails to make a valid connection, it is removed from the list and the function moves on to the next proxy down the list. This process continues until a valid connection is made, so the script always comes out of the proxy_requests function with a valid connection.

remove_bad_proxy():

Whenever a proxy IP fails to make a valid connection, that proxy is removed from the proxies list so it won't be reused. The remove_bad_proxy() function handles this task. If the proxies list is exhausted, the function terminates the entire program.

def remove_bad_proxy(proxies):
    proxies.remove(proxies[0])
    if not proxies:
        print('Ran out of proxy IPs. Terminating now')
        exit()

begin_scrape():

The begin_scrape() function is the centerpiece of this whole program. There are 3 for-loops within this function: the pages loop, the listings loop, and the information loop.

pages:

The outermost loop is the pages loop. This loop iterates through all the pages in the search result. At each iteration, the base_url string is manipulated to match the corresponding page. The current page number is inserted right before the ‘?’ in the base_url string, and the result is stored in the current_url variable.

When moving to a different page, the script first attempts to reuse the last successful proxy IP to connect to the new web page. If the last successful IP fails to make a valid connection, the script would then invoke the proxy_requests function to find a proxy IP that works.

listings:

In each page, there are at most 44 properties listed. The listings loop iterates through each of these listings.

information:

The information loop iterates through all the information that I want to scrape from each property. For now, this includes the street address, city, state, the price, the number of bedrooms, the number of bathrooms, the total living area, the total land area, the property type, the listing age, and finally, the broker.

I created the image below to help visualize how these loops are structured together.

def begin_scrape():

    # Gather a list of proxies
    global proxies
    proxies = get_proxies()
    
    # Determine the total page count
    page_count = get_page_count()
    
    # Start with an empty list of listings
    global result
    result = []

    # Determine where to insert the current page number in the base_url string
    insert_page = base_url.find('?')

    # Headers and tags
    header_tag = {'Street Address': 'listing-street-address', 
                    'City': 'listing-city', 
                    'State': 'listing-region', 
                    'Zip': 'listing-postal', 
                    'Price': 'data-price', 
                    'Bed': 'property-meta-beds', 
                    'Bath': 'property-meta-baths', 
                    'Living Area': 'property-meta-sqft',
                    'Lot Area': 'property-meta-lotsize', 
                    'Garage': 'property-meta-garage', 
                    'Property Type': 'property-type', 
                    'Listing Age': 'listing-age',
                    'Broker': 'property-broker' }
    
    # Pages loop
    for page in range(1,page_count + 1): 
        # Pause for 6 to 12 seconds to decrease chance of being flagged as a bot
        pause = randint(6,12)
        if page != 1: 
            print('... Pausing for {} seconds'.format(pause))
            sleep(pause)        
        print('Now parsing page %s' % page)

        # Manipulate the base url to match the current page
        current_url = base_url[:insert_page] + str(page) + base_url[insert_page:] 

        # Reuse the page1_soup object to instantly parse page 1
        if page == 1:
            listings = page1_soup.find_all('li', {'class': 'component_property-card js-component_property-card js-quick-view'})
            # Identify the current proxy IP as a valid IP 
            valid_proxy = proxies[0]
        else:
            # Attempt to reuse the last successful proxy IP
            try:
                r = requests.get(current_url, proxies = {'https': valid_proxy}, headers = {'User-agent': header})
                soup = BeautifulSoup(r.content, 'html.parser')
                # Get a fresh proxy if the script was recognized as a bot using the previous IP
                if (soup.html['lang'] == 'zxx') or ('script' in r.text[0:50]):
                    print('... Script was recognized as a bot using current IP - ' + valid_proxy)
                    remove_bad_proxy(proxies)
                    soup = proxy_requests(current_url, proxies)
                    listings = soup.find_all('li', {'class': 'component_property-card js-component_property-card js-quick-view'})
                    valid_proxy = proxies[0]
                # Proceed if the previous IP worked and the script was not recognized as a bot
                else:
                    listings = soup.find_all('li', {'class': 'component_property-card js-component_property-card js-quick-view'})
            # If the previous IP fails to make a connection, try a new proxy IP
            except Timeout:
                print('... Time Out Error with current IP - ' + valid_proxy)
                remove_bad_proxy(proxies)
                soup = proxy_requests(current_url, proxies)
                listings = soup.find_all('li', {'class': 'component_property-card js-component_property-card js-quick-view'})
                valid_proxy = proxies[0]
            except ProxyError:
                print('... Proxy Error with current IP - ' + valid_proxy)
                remove_bad_proxy(proxies)
                soup = proxy_requests(current_url, proxies)
                listings = soup.find_all('li', {'class': 'component_property-card js-component_property-card js-quick-view'})
                valid_proxy = proxies[0]
            except:
                print('... Unknown Error with current IP - ' + valid_proxy)
                remove_bad_proxy(proxies)
                soup = proxy_requests(current_url, proxies)
                listings = soup.find_all('li', {'class': 'component_property-card js-component_property-card js-quick-view'})
                valid_proxy = proxies[0]

        # Listings Loop
        for listing in range(len(listings)): 
            listing_info = {}

            # Information Loop
            for header, tag in header_tag.items(): 
                # Street Address - Price
                if header in list(header_tag)[0:5]: 
                    try:
                        listing_info[header] = listings[listing].find('span', {'class': tag}).text
                    # If information is not available, enter None
                    except:
                        listing_info[header] = None
                # Bed - Garage
                elif header in list(header_tag)[5:10]: 
                    try:
                        listing_info[header] = listings[listing].find('li', {'data-label': tag}).text
                    except:
                        listing_info[header] = None
                # Property Type, Listing Age
                elif header in list(header_tag)[10:12]: 
                    try:  
                        listing_info[header] = listings[listing].find('div', {'class': tag}).text
                    except:
                        listing_info[header] = None
                # Broker
                else: 
                    try:
                        listing_info[header] = listings[listing].find('span', {'data-label': tag}).text
                    except:
                        listing_info[header] = None
            # Add each listing_info into the result list
            result.append(listing_info)

            # Pause for 0.1 sec before moving on to the next listing
            sleep(0.1)

            # Print out the property that was just saved          
            print('--- ' + listing_info['Street Address'].strip().replace(',',''))
        
        # Remove any duplicates
        result = [i for n, i in enumerate(result) if i not in result[n + 1:]]

        # Notify the user when a page is parsed successfully and display the total number of properties saved so far
        print('Successfully parsed page {}. Total listings recorded: {}'.format(page, len(result)))
    
    # Notify the user if the scraping has been completed
    print('\n*** Scraping Complete!')

    # Notify the user of the total number of properties recorded versus the number of properties based on the search result
    if len(result) == properties: 
        print('*** {}/{} properties recorded'.format(len(result), properties))

    # Notify the user if new properties were added/removed during scraping
    else:
        print('*** {}/{} properties recorded. Properties may have been added/removed while scraping.'.format(len(result), properties))

    # Notify the user of the total number of IPs removed from the proxies list        
    bad_proxy_count = init_proxy_count - len(proxies) 
    print('*** {}/{} bad proxies'.format(bad_proxy_count, init_proxy_count))

    return result

What is a Valid Connection?

A valid connection is made when:

  1. A response is received within the Time Out threshold
  2. The correct response is received

You can determine whether your script is flagged as a bot from the type of response you get from the server.

Good Response:

A good response contains all the relevant HTML tags. Below are the first few characters of a good response:

'<!DOCTYPE html>\n<html lang="en">\n  <head>\n    <link rel="shortcut icon" type="ima ...

Bad Response:

The following is the beginning of a response given when the script is flagged as a bot:

'<!DOCTYPE html>\n<html lang="zxx">\n<head>\n  <title>Pardon Our Interruption</title> ...

As you can see, the first difference between a good response and a bad response is the value of the lang attribute within the html tag. However, differentiating on the value of this attribute alone is not enough.

Unknown Response:

The following is the beginning of the final type of response.

'<!DOCTYPE html><html lang="en"><head><script>window.RDCXCDN=\'/assets/prod/f935a02\' ...

To be honest, I don't know what's causing this type of response. All I know is that this type of response does not contain all the HTML tags that I need. I suspect JavaScript has something to do with it. I don't know JavaScript at the moment, so for now I'm treating this type of response as a bad response.

The script reads the first 50 characters of the response it receives to determine whether it has been flagged as a bot or not. If the response contains "zxx" or the word "script", the script will treat it as a bad response and try a different proxy IP.
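That check appears in a few places in the back-end script; as a small refactoring sketch of my own (not part of the current code), the same rule could live in one helper:

def is_bad_response(r, soup):
    # lang="zxx" marks the 'Pardon Our Interruption' page (flagged as a bot);
    # 'script' within the first 50 characters marks the JavaScript-heavy response
    return (soup.html['lang'] == 'zxx') or ('script' in r.text[0:50])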


Room for Improvement:

To be honest, I ended up spending WAY more time on this project than I thought I would. It is still a work-in-progress, as I mentioned before; I still have a lot more features that I would like to implement. However, I feel like I've put enough effort into it to make it presentable. Hopefully you feel the same!

Here are some features I would like to add in the near future:

Status Box:

As I mentioned before, the Status Box proved to be more difficult to implement than I thought. If I just replace all the print statements with textbox.insert(END, "Statement"), all of the statements get inserted into the Status Box only AFTER the script is done scraping. This defeats the purpose of a "Status" Box.

After a few Google searches, I found out that I can use multi-threading to get what I'm looking for. I have a general understanding of the concept, but not enough to be able to implement it. I will revisit this project and implement the Status Box once I've learned more about multi-threading.
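For reference, here is a minimal sketch of the pattern I have in mind, assuming a Tkinter front end (the widget names and messages are placeholders, not the ones in my actual front-end script): run the scraper in a worker thread, push status messages onto a queue, and have the GUI poll that queue so the Status Box updates while scraping is still running.

import threading
import queue
import tkinter as tk

status_queue = queue.Queue()

def scrape_worker():
    # The real worker would call begin_scrape() and push its progress messages
    status_queue.put('Gathering proxy IPs')
    # ... scraping happens here ...
    status_queue.put('Scraping Complete!')

def poll_queue():
    # Move any pending messages from the queue into the Status Box,
    # then schedule the next check without freezing the GUI
    while not status_queue.empty():
        status_box.insert(tk.END, status_queue.get() + '\n')
    root.after(200, poll_queue)

root = tk.Tk()
status_box = tk.Text(root, height=10, width=60)
status_box.pack()
tk.Button(root, text='Save to CSV',
          command=lambda: threading.Thread(target=scrape_worker, daemon=True).start()).pack()
root.after(200, poll_queue)
root.mainloop()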

User-defined Filters:

At this moment, the filters are pre-defined by the base_url string within the script. In the future, I would like to give the user the power to set their own filters (location, property type, listing age, price, etc.).
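A rough idea of what that could look like, pieced together from the URL segments documented above (the function and its parameters are hypothetical; a price filter would still need its own URL segment, which I haven't mapped out yet):

def build_base_url(prop_types, coords, max_dom=30, hide_pending=True, hide_new_construction=True):
    # Assemble the search URL from the same segments documented above
    url = 'https://www.realtor.com/realestateandhomes-search/Reno_NV'
    url += '/type-' + ','.join(prop_types)
    if hide_pending:
        url += '/pnd-hide'
    if hide_new_construction:
        url += '/nc-hide'
    url += '/dom-{}'.format(max_dom)
    url += '/sby-6/pg-'
    url += '?pos={},{},{},{},10&qdm=true'.format(*coords)
    return url

# Example: rebuild the base URL used in this post
base_url = build_base_url(
    ['condo-townhome-row-home-co-op', 'multi-family-home', 'single-family-home', 'mfd-mobile-home'],
    (38.99819, -120.262437, 39.898492, -118.944078))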

Year-Built:

Knowing when a home was built is one of the first things that home buyers want to know. It is also a very strong indicator of the price. As of now, the program does not scrape this information, mainly because it is relatively harder to access. The script is able to gather all of the other information from the search results page alone. To know when a property was built, the script would have to go to each individual listing page.

The additional “clicking” introduces a whole new set of problems.

  1. The program will slow down tremendously. 44 properties per page = 44 clicks = 44 attempts to connect. Multiply that by the total page count.
  2. A longer list of proxies will be required. More connection attempts means more chances of the script getting flagged as a bot. The proxies list will get exhausted very quickly.
  3. The three main for-loops would have to be restructured quite significantly.

Speed/Performance:

On average, the script takes around 10 minutes to finish scraping 9 – 10 pages (or around 400 properties). I would like to bring this down to under 5 minutes.

Since I'm using free proxies that are openly available on the internet, the chance of more than one person using the same proxy IP is very high. No doubt this contributes to the Timeout and Max Retry errors. The more of these errors, the longer it takes the program to finish scraping. One option is to purchase premium proxy IPs, but that doesn't seem reasonable for a small project like this.


Final Word:

I honestly had a lot of fun with this little project. I decided to learn Python after a recommendation from a data analyst I reached out to on LinkedIn. At that time I didn't know any Python at all. While I know there is more "basic" Python material I have yet to learn, I feel like I've built a very strong foundation from working on this project.

If you didn't find this little Python project of mine interesting, I hope you at least found the presentation interesting. I hope you enjoyed all the flowcharts and illustrations I made! All of them are included in the ZIP file.

If you would like to reach out to me, you can use the Contact Form down below, or email me directly (preferred) at: j@ijeremiah.com

Looking forward to speaking with you!
