author    leshe4ka46 <alex9102naid1@ya.ru> 2025-10-18 12:25:53 +0300
committer leshe4ka46 <alex9102naid1@ya.ru> 2025-10-18 12:25:53 +0300
commit    910a222fa60ce6ea0831f2956470b8a0b9f62670 (patch)
tree      1d6bbccafb667731ad127f93390761100fc11b53 /Fundamentals_of_Accelerated_Data_Science/1-07_etl.ipynb
parent    35b9040e4104b0e79bf243a2c9769c589f96e2c4 (diff)
nvidia2
Diffstat (limited to 'Fundamentals_of_Accelerated_Data_Science/1-07_etl.ipynb')
-rw-r--r--  Fundamentals_of_Accelerated_Data_Science/1-07_etl.ipynb  573
1 file changed, 573 insertions, 0 deletions
diff --git a/Fundamentals_of_Accelerated_Data_Science/1-07_etl.ipynb b/Fundamentals_of_Accelerated_Data_Science/1-07_etl.ipynb
new file mode 100644
index 0000000..514b15e
--- /dev/null
+++ b/Fundamentals_of_Accelerated_Data_Science/1-07_etl.ipynb
@@ -0,0 +1,573 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "4b2efdc2-313c-493b-9d6c-432ae77d342c",
+ "metadata": {},
+ "source": [
+ "<img src=\"./images/DLI_Header.png\" width=400/>"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b754f1a2-24e8-44d4-ae6a-257483573434",
+ "metadata": {},
+ "source": [
+ "# Fundamentals of Accelerated Data Science # "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "881e48fa-78ac-4d92-a8bb-d419c14df9e9",
+ "metadata": {},
+ "source": [
+ "## 07 - Extract, Transform, and Load ##\n",
+ "\n",
+ "**Table of Contents**\n",
+ "<br>\n",
+ "In this notebook, we will go through the basics of extract, transform, and load. This notebook covers the below sections: \n",
+ "1. [Extract, Transform, and Load (ETL)](#Extract,-Transform,-and-Load-(ETL))\n",
+ " * [Extract](#Extract)\n",
+ " * [Transform](#Transform)\n",
+ " * [Load](#Load)\n",
+ "2. [Save to Parquet Format](#Save-to-Parquet-Format)\n",
+ " * [Reading from Parquet](#Reading-from-Parquet)\n",
+ "3. [Accelerated ETL for Downstream Tasks](#Accelerated-ETL-for-Downstream-Tasks)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a083cd6e-73f4-42b8-be04-bdd2bdeb5185",
+ "metadata": {},
+ "source": [
+ "## Extract, Transform, and Load (ETL) ##\n",
+ "An important but perhaps not as highly glorified use case of RAPIDS is extract, transform, and load, or ETL for short. It is a data integration process used to combine data from multiple sources into a single, consistent data store. It's primary goals are: \n",
+ "* Consolidates data from multiple sources into a single, consistent format\n",
+ "* Improves data quality through cleaning and validation\n",
+ "* Enables more efficient data analysis and reporting\n",
+ "* Supports data-driven decision making"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a9d5d73e-1665-4706-8840-6e14c2fabaa7",
+ "metadata": {},
+ "source": [
+ "### Extract ###\n",
+ "**Extract** is the first step where data is collected from various source systems. These sources could include: \n",
+ "* Static files (csv, json)\n",
+ "* SQL RDBMS\n",
+ "* Webpages\n",
+ "* API\n",
+ "\n",
+ "**Note**: cuDF doesn't have a way to get transactions from external SQL databases directly to GPU. The workaround is reading with pandas and create cuDF dataframe with `cudf.from_pandas()`. "
+ ]
+ },
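+ {
+ "cell_type": "markdown",
+ "id": "7c2e90aa-5b1f-4f4e-9a3d-0d2c6a7e41b8",
+ "metadata": {},
+ "source": [
+ "As a minimal sketch of this workaround (the SQLite file `population.db` and table `people` are hypothetical placeholders), the pattern looks like: \n",
+ "\n",
+ "```python\n",
+ "import sqlite3\n",
+ "\n",
+ "import cudf\n",
+ "import pandas as pd\n",
+ "\n",
+ "# extract on the CPU with pandas ...\n",
+ "conn = sqlite3.connect('population.db')\n",
+ "cpu_df = pd.read_sql('SELECT * FROM people', conn)\n",
+ "conn.close()\n",
+ "\n",
+ "# ... then copy the result into GPU memory\n",
+ "gpu_df = cudf.from_pandas(cpu_df)\n",
+ "```"
+ ]
+ },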
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "17be68c6-49c0-429d-86d2-c2cb711a6dc3",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%load_ext cudf.pandas\n",
+ "# DO NOT CHANGE THIS CELL\n",
+ "import pandas as pd\n",
+ "import time"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "37c75532-1bb6-4aab-9cee-3215d7137cee",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "<div>\n",
+ "<style scoped>\n",
+ " .dataframe tbody tr th:only-of-type {\n",
+ " vertical-align: middle;\n",
+ " }\n",
+ "\n",
+ " .dataframe tbody tr th {\n",
+ " vertical-align: top;\n",
+ " }\n",
+ "\n",
+ " .dataframe thead th {\n",
+ " text-align: right;\n",
+ " }\n",
+ "</style>\n",
+ "<table border=\"1\" class=\"dataframe\">\n",
+ " <thead>\n",
+ " <tr style=\"text-align: right;\">\n",
+ " <th></th>\n",
+ " <th>age</th>\n",
+ " <th>sex</th>\n",
+ " <th>county</th>\n",
+ " <th>lat</th>\n",
+ " <th>long</th>\n",
+ " <th>name</th>\n",
+ " </tr>\n",
+ " </thead>\n",
+ " <tbody>\n",
+ " <tr>\n",
+ " <th>0</th>\n",
+ " <td>0</td>\n",
+ " <td>m</td>\n",
+ " <td>DARLINGTON</td>\n",
+ " <td>54.533638</td>\n",
+ " <td>-1.524400</td>\n",
+ " <td>FRANCIS</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>1</th>\n",
+ " <td>0</td>\n",
+ " <td>m</td>\n",
+ " <td>DARLINGTON</td>\n",
+ " <td>54.426254</td>\n",
+ " <td>-1.465314</td>\n",
+ " <td>EDWARD</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>2</th>\n",
+ " <td>0</td>\n",
+ " <td>m</td>\n",
+ " <td>DARLINGTON</td>\n",
+ " <td>54.555199</td>\n",
+ " <td>-1.496417</td>\n",
+ " <td>TEDDY</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>3</th>\n",
+ " <td>0</td>\n",
+ " <td>m</td>\n",
+ " <td>DARLINGTON</td>\n",
+ " <td>54.547909</td>\n",
+ " <td>-1.572342</td>\n",
+ " <td>ANGUS</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>4</th>\n",
+ " <td>0</td>\n",
+ " <td>m</td>\n",
+ " <td>DARLINGTON</td>\n",
+ " <td>54.477638</td>\n",
+ " <td>-1.605995</td>\n",
+ " <td>CHARLIE</td>\n",
+ " </tr>\n",
+ " </tbody>\n",
+ "</table>\n",
+ "</div>"
+ ],
+ "text/plain": [
+ " age sex county lat long name\n",
+ "0 0 m DARLINGTON 54.533638 -1.524400 FRANCIS\n",
+ "1 0 m DARLINGTON 54.426254 -1.465314 EDWARD\n",
+ "2 0 m DARLINGTON 54.555199 -1.496417 TEDDY\n",
+ "3 0 m DARLINGTON 54.547909 -1.572342 ANGUS\n",
+ "4 0 m DARLINGTON 54.477638 -1.605995 CHARLIE"
+ ]
+ },
+ "execution_count": 2,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# DO NOT CHANGE THIS CELL\n",
+ "dtype_dict={\n",
+ " 'age': 'int8', \n",
+ " 'sex': 'object', \n",
+ " 'county': 'object', \n",
+ " 'lat': 'float32', \n",
+ " 'long': 'float32', \n",
+ " 'name': 'object'\n",
+ "}\n",
+ " \n",
+ "df=pd.read_csv('./data/uk_pop.csv', dtype=dtype_dict)\n",
+ "df.head()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "dd97bfdd-5ead-4cd7-bc17-0e1eeb4aa62b",
+ "metadata": {},
+ "source": [
+ "When importing data, it is important to only include columns that are relevant to reduce the memory and compute burden. \n",
+ "\n",
+ "Below we read in the county centroid data. "
+ ]
+ },
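+ {
+ "cell_type": "markdown",
+ "id": "f3d1a5c0-88e2-4b6d-9f41-2a7c5e90d316",
+ "metadata": {},
+ "source": [
+ "For instance, if a task only needed each person's `age` and `county`, the `usecols` parameter would prune the other columns at read time (a quick illustration): "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0b9e6f2d-41c7-4a58-8d3e-6c1f2a84b590",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# illustration: read only the columns we need, skipping the rest\n",
+ "pd.read_csv('./data/uk_pop.csv', usecols=['age', 'county'], dtype={'age': 'int8', 'county': 'object'}).head()"
+ ]
+ },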
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "d064ff45-4cf4-48cd-9cbe-3f3f177fb875",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "ename": "FileNotFoundError",
+ "evalue": "[Errno 2] No such file or directory: 'county_centroid.csv'",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[0;31mFileNotFoundError\u001b[0m Traceback (most recent call last)",
+ "File \u001b[0;32m/opt/conda/lib/python3.10/site-packages/cudf/pandas/fast_slow_proxy.py:988\u001b[0m, in \u001b[0;36m_fast_slow_function_call\u001b[0;34m(func, *args, **kwargs)\u001b[0m\n\u001b[1;32m 987\u001b[0m fast_args, fast_kwargs \u001b[38;5;241m=\u001b[39m _fast_arg(args), _fast_arg(kwargs)\n\u001b[0;32m--> 988\u001b[0m result \u001b[38;5;241m=\u001b[39m \u001b[43mfunc\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mfast_args\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mfast_kwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 989\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m result \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28mNotImplemented\u001b[39m:\n\u001b[1;32m 990\u001b[0m \u001b[38;5;66;03m# try slow path\u001b[39;00m\n",
+ "File \u001b[0;32m/opt/conda/lib/python3.10/site-packages/cudf/pandas/fast_slow_proxy.py:28\u001b[0m, in \u001b[0;36mcall_operator\u001b[0;34m(fn, args, kwargs)\u001b[0m\n\u001b[1;32m 27\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mcall_operator\u001b[39m(fn, args, kwargs):\n\u001b[0;32m---> 28\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mfn\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n",
+ "File \u001b[0;32m/opt/conda/lib/python3.10/site-packages/cudf/utils/performance_tracking.py:51\u001b[0m, in \u001b[0;36m_performance_tracking.<locals>.wrapper\u001b[0;34m(*args, **kwargs)\u001b[0m\n\u001b[1;32m 44\u001b[0m stack\u001b[38;5;241m.\u001b[39menter_context(\n\u001b[1;32m 45\u001b[0m nvtx\u001b[38;5;241m.\u001b[39mannotate(\n\u001b[1;32m 46\u001b[0m message\u001b[38;5;241m=\u001b[39mfunc\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__qualname__\u001b[39m,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 49\u001b[0m )\n\u001b[1;32m 50\u001b[0m )\n\u001b[0;32m---> 51\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mfunc\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n",
+ "File \u001b[0;32m/opt/conda/lib/python3.10/site-packages/cudf/io/csv.py:67\u001b[0m, in \u001b[0;36mread_csv\u001b[0;34m(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, prefix, mangle_dupe_cols, dtype, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, skip_blank_lines, parse_dates, dayfirst, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, comment, delim_whitespace, byte_range, storage_options, bytes_per_thread)\u001b[0m\n\u001b[1;32m 65\u001b[0m bytes_per_thread \u001b[38;5;241m=\u001b[39m ioutils\u001b[38;5;241m.\u001b[39m_BYTES_PER_THREAD_DEFAULT\n\u001b[0;32m---> 67\u001b[0m filepath_or_buffer \u001b[38;5;241m=\u001b[39m \u001b[43mioutils\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_reader_filepath_or_buffer\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 68\u001b[0m \u001b[43m \u001b[49m\u001b[43mpath_or_data\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mfilepath_or_buffer\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 69\u001b[0m \u001b[43m \u001b[49m\u001b[43miotypes\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mBytesIO\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mStringIO\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 70\u001b[0m \u001b[43m \u001b[49m\u001b[43mstorage_options\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstorage_options\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 71\u001b[0m \u001b[43m \u001b[49m\u001b[43mbytes_per_thread\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mbytes_per_thread\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 72\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 73\u001b[0m filepath_or_buffer \u001b[38;5;241m=\u001b[39m ioutils\u001b[38;5;241m.\u001b[39m_select_single_source(\n\u001b[1;32m 74\u001b[0m filepath_or_buffer, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mread_csv\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 75\u001b[0m )\n",
+ "File \u001b[0;32m/opt/conda/lib/python3.10/site-packages/cudf/utils/ioutils.py:1723\u001b[0m, in \u001b[0;36mget_reader_filepath_or_buffer\u001b[0;34m(path_or_data, mode, fs, iotypes, allow_raw_text_input, storage_options, bytes_per_thread, warn_on_raw_text_input, warn_meta, expand_dir_pattern, prefetch_options)\u001b[0m\n\u001b[1;32m 1720\u001b[0m \u001b[38;5;28;01melif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m allow_raw_text_input \u001b[38;5;129;01mor\u001b[39;00m paths[\u001b[38;5;241m0\u001b[39m]\u001b[38;5;241m.\u001b[39mlower()\u001b[38;5;241m.\u001b[39mendswith(\n\u001b[1;32m 1721\u001b[0m \u001b[38;5;28mtuple\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m.json\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mc\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m \u001b[38;5;28;01mfor\u001b[39;00m c \u001b[38;5;129;01min\u001b[39;00m compression_extensions)\n\u001b[1;32m 1722\u001b[0m ):\n\u001b[0;32m-> 1723\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mFileNotFoundError\u001b[39;00m(\n\u001b[1;32m 1724\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;132;01m{\u001b[39;00minput_sources\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m could not be resolved to any files\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 1725\u001b[0m )\n\u001b[1;32m 1726\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n",
+ "\u001b[0;31mFileNotFoundError\u001b[0m: ['county_centroid.csv'] could not be resolved to any files",
+ "\nDuring handling of the above exception, another exception occurred:\n",
+ "\u001b[0;31mFileNotFoundError\u001b[0m Traceback (most recent call last)",
+ "Cell \u001b[0;32mIn[3], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m centroid_df\u001b[38;5;241m=\u001b[39m\u001b[43mpd\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mread_csv\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mcounty_centroid.csv\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m 2\u001b[0m centroid_df\u001b[38;5;241m.\u001b[39mcolumns\u001b[38;5;241m=\u001b[39m[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcounty\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mlat_county_center\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mlong_county_center\u001b[39m\u001b[38;5;124m'\u001b[39m]\n\u001b[1;32m 3\u001b[0m centroid_df\u001b[38;5;241m.\u001b[39mhead()\n",
+ "File \u001b[0;32m/opt/conda/lib/python3.10/site-packages/cudf/pandas/fast_slow_proxy.py:713\u001b[0m, in \u001b[0;36m_CallableProxyMixin.__call__\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m 712\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__call__\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;241m*\u001b[39margs, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Any:\n\u001b[0;32m--> 713\u001b[0m result, _ \u001b[38;5;241m=\u001b[39m \u001b[43m_fast_slow_function_call\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 714\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;66;43;03m# We cannot directly call self here because we need it to be\u001b[39;49;00m\n\u001b[1;32m 715\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;66;43;03m# converted into either the fast or slow object (by\u001b[39;49;00m\n\u001b[1;32m 716\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;66;43;03m# _fast_slow_function_call) to avoid infinite recursion.\u001b[39;49;00m\n\u001b[1;32m 717\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;66;43;03m# TODO: When Python 3.11 is the minimum supported Python version\u001b[39;49;00m\n\u001b[1;32m 718\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;66;43;03m# this can use operator.call\u001b[39;49;00m\n\u001b[1;32m 719\u001b[0m \u001b[43m \u001b[49m\u001b[43mcall_operator\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 720\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 721\u001b[0m \u001b[43m \u001b[49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 722\u001b[0m \u001b[43m \u001b[49m\u001b[43mkwargs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 723\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 724\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m result\n",
+ "File \u001b[0;32m/opt/conda/lib/python3.10/site-packages/cudf/pandas/fast_slow_proxy.py:1040\u001b[0m, in \u001b[0;36m_fast_slow_function_call\u001b[0;34m(func, *args, **kwargs)\u001b[0m\n\u001b[1;32m 1038\u001b[0m _slow_function_call()\n\u001b[1;32m 1039\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m disable_module_accelerator():\n\u001b[0;32m-> 1040\u001b[0m result \u001b[38;5;241m=\u001b[39m \u001b[43mfunc\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mslow_args\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mslow_kwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1041\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m _maybe_wrap_result(result, func, \u001b[38;5;241m*\u001b[39margs, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs), fast\n",
+ "File \u001b[0;32m/opt/conda/lib/python3.10/site-packages/cudf/pandas/fast_slow_proxy.py:28\u001b[0m, in \u001b[0;36mcall_operator\u001b[0;34m(fn, args, kwargs)\u001b[0m\n\u001b[1;32m 27\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mcall_operator\u001b[39m(fn, args, kwargs):\n\u001b[0;32m---> 28\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mfn\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n",
+ "File \u001b[0;32m/opt/conda/lib/python3.10/site-packages/pandas/io/parsers/readers.py:1026\u001b[0m, in \u001b[0;36mread_csv\u001b[0;34m(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, date_format, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options, dtype_backend)\u001b[0m\n\u001b[1;32m 1013\u001b[0m kwds_defaults \u001b[38;5;241m=\u001b[39m _refine_defaults_read(\n\u001b[1;32m 1014\u001b[0m dialect,\n\u001b[1;32m 1015\u001b[0m delimiter,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 1022\u001b[0m dtype_backend\u001b[38;5;241m=\u001b[39mdtype_backend,\n\u001b[1;32m 1023\u001b[0m )\n\u001b[1;32m 1024\u001b[0m kwds\u001b[38;5;241m.\u001b[39mupdate(kwds_defaults)\n\u001b[0;32m-> 1026\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43m_read\u001b[49m\u001b[43m(\u001b[49m\u001b[43mfilepath_or_buffer\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mkwds\u001b[49m\u001b[43m)\u001b[49m\n",
+ "File \u001b[0;32m/opt/conda/lib/python3.10/site-packages/pandas/io/parsers/readers.py:620\u001b[0m, in \u001b[0;36m_read\u001b[0;34m(filepath_or_buffer, kwds)\u001b[0m\n\u001b[1;32m 617\u001b[0m _validate_names(kwds\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mnames\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m))\n\u001b[1;32m 619\u001b[0m \u001b[38;5;66;03m# Create the parser.\u001b[39;00m\n\u001b[0;32m--> 620\u001b[0m parser \u001b[38;5;241m=\u001b[39m \u001b[43mTextFileReader\u001b[49m\u001b[43m(\u001b[49m\u001b[43mfilepath_or_buffer\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwds\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 622\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m chunksize \u001b[38;5;129;01mor\u001b[39;00m iterator:\n\u001b[1;32m 623\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m parser\n",
+ "File \u001b[0;32m/opt/conda/lib/python3.10/site-packages/pandas/io/parsers/readers.py:1620\u001b[0m, in \u001b[0;36mTextFileReader.__init__\u001b[0;34m(self, f, engine, **kwds)\u001b[0m\n\u001b[1;32m 1617\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moptions[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mhas_index_names\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m=\u001b[39m kwds[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mhas_index_names\u001b[39m\u001b[38;5;124m\"\u001b[39m]\n\u001b[1;32m 1619\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mhandles: IOHandles \u001b[38;5;241m|\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m\n\u001b[0;32m-> 1620\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_engine \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_make_engine\u001b[49m\u001b[43m(\u001b[49m\u001b[43mf\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mengine\u001b[49m\u001b[43m)\u001b[49m\n",
+ "File \u001b[0;32m/opt/conda/lib/python3.10/site-packages/pandas/io/parsers/readers.py:1880\u001b[0m, in \u001b[0;36mTextFileReader._make_engine\u001b[0;34m(self, f, engine)\u001b[0m\n\u001b[1;32m 1878\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mb\u001b[39m\u001b[38;5;124m\"\u001b[39m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;129;01min\u001b[39;00m mode:\n\u001b[1;32m 1879\u001b[0m mode \u001b[38;5;241m+\u001b[39m\u001b[38;5;241m=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mb\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m-> 1880\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mhandles \u001b[38;5;241m=\u001b[39m \u001b[43mget_handle\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 1881\u001b[0m \u001b[43m \u001b[49m\u001b[43mf\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 1882\u001b[0m \u001b[43m \u001b[49m\u001b[43mmode\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 1883\u001b[0m \u001b[43m \u001b[49m\u001b[43mencoding\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43moptions\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mencoding\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 1884\u001b[0m \u001b[43m \u001b[49m\u001b[43mcompression\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43moptions\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mcompression\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 1885\u001b[0m \u001b[43m \u001b[49m\u001b[43mmemory_map\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43moptions\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmemory_map\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 1886\u001b[0m \u001b[43m \u001b[49m\u001b[43mis_text\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mis_text\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 1887\u001b[0m \u001b[43m \u001b[49m\u001b[43merrors\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43moptions\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mencoding_errors\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mstrict\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 1888\u001b[0m \u001b[43m 
\u001b[49m\u001b[43mstorage_options\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43moptions\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mstorage_options\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 1889\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1890\u001b[0m \u001b[38;5;28;01massert\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mhandles \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m\n\u001b[1;32m 1891\u001b[0m f \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mhandles\u001b[38;5;241m.\u001b[39mhandle\n",
+ "File \u001b[0;32m/opt/conda/lib/python3.10/site-packages/pandas/io/common.py:873\u001b[0m, in \u001b[0;36mget_handle\u001b[0;34m(path_or_buf, mode, encoding, compression, memory_map, is_text, errors, storage_options)\u001b[0m\n\u001b[1;32m 868\u001b[0m \u001b[38;5;28;01melif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(handle, \u001b[38;5;28mstr\u001b[39m):\n\u001b[1;32m 869\u001b[0m \u001b[38;5;66;03m# Check whether the filename is to be opened in binary mode.\u001b[39;00m\n\u001b[1;32m 870\u001b[0m \u001b[38;5;66;03m# Binary mode does not support 'encoding' and 'newline'.\u001b[39;00m\n\u001b[1;32m 871\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m ioargs\u001b[38;5;241m.\u001b[39mencoding \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mb\u001b[39m\u001b[38;5;124m\"\u001b[39m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;129;01min\u001b[39;00m ioargs\u001b[38;5;241m.\u001b[39mmode:\n\u001b[1;32m 872\u001b[0m \u001b[38;5;66;03m# Encoding\u001b[39;00m\n\u001b[0;32m--> 873\u001b[0m handle \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mopen\u001b[39;49m\u001b[43m(\u001b[49m\n\u001b[1;32m 874\u001b[0m \u001b[43m \u001b[49m\u001b[43mhandle\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 875\u001b[0m \u001b[43m \u001b[49m\u001b[43mioargs\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmode\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 876\u001b[0m \u001b[43m \u001b[49m\u001b[43mencoding\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mioargs\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mencoding\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 877\u001b[0m \u001b[43m \u001b[49m\u001b[43merrors\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43merrors\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 878\u001b[0m \u001b[43m \u001b[49m\u001b[43mnewline\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 879\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 880\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 881\u001b[0m \u001b[38;5;66;03m# Binary mode\u001b[39;00m\n\u001b[1;32m 882\u001b[0m handle \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mopen\u001b[39m(handle, ioargs\u001b[38;5;241m.\u001b[39mmode)\n",
+ "\u001b[0;31mFileNotFoundError\u001b[0m: [Errno 2] No such file or directory: 'county_centroid.csv'"
+ ]
+ }
+ ],
+ "source": [
+ "centroid_df=pd.read_csv('county_centroid.csv')\n",
+ "centroid_df.columns=['county', 'lat_county_center', 'long_county_center']\n",
+ "centroid_df.head()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2dbf6f18-f28d-45ff-a4c4-1f50bfcf5860",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "%%cudf.pandas.line_profile\n",
+ "combined_df=df.merge(centroid_df, on='county')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3294d6e0-00d9-4e33-91d8-2dc02db51512",
+ "metadata": {},
+ "source": [
+ "### Transform ###\n",
+ "During the **Transform** step, the extract data is cleaned, validated, and converted into a suitable format for analysis. \n",
+ "\n",
+ "Below we add a new column, representing each persons's distance from their respective county center. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1e6ce3e0-625b-49ec-a755-2bfddee41431",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "%%cudf.pandas.line_profile\n",
+ "c=['lat', 'long']\n",
+ "combined_df['R']=((combined_df[c] - combined_df.groupby('county')[c].transform('mean')) ** 2).sum(axis=1) ** 0.5"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7e676b1b-143d-4cf4-a534-8c25dafac1a6",
+ "metadata": {},
+ "source": [
+ "Using joins to get lookup values can be faster than deriving those. It is not uncommon to store group statistics for this purpose. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4339b0c7-c391-48f4-9ba3-00f79ba50b5b",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "%%cudf.pandas.line_profile\n",
+ "\n",
+ "# read in centroid data\n",
+ "centroid_df=pd.read_csv('county_centroid.csv')\n",
+ "\n",
+ "# merge \n",
+ "combined_df=df.merge(centroid_df, on='county', suffixes=['', '_county_center'])\n",
+ "\n",
+ "# calculate distance from county center\n",
+ "combined_df['R']=((combined_df['lat']-combined_df['lat_county_center'])**2+(combined_df['long']-combined_df['long_county_center'])**2)**0.5"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "aa017ea0-4ad2-4184-9669-40747e31381a",
+ "metadata": {},
+ "source": [
+ "Below we filter the data to only include adults. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "917aa1ed-1778-40d4-bbbc-5892935ce7cd",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "%%cudf.pandas.line_profile\n",
+ "\n",
+ "senior_df_filter=combined_df['age'] >= 60\n",
+ "senior_df=combined_df.loc[senior_df_filter]\n",
+ "\n",
+ "display(senior_df.head())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bb474531-389e-44aa-a6b3-8de205d9cb0b",
+ "metadata": {},
+ "source": [
+ "### Load ###\n",
+ "The final **Load** step is where the transformed data is loaded into a target system. The target system can be a database or a file. The key is to develop a system that is efficient for downstream tasks. "
+ ]
+ },
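+ {
+ "cell_type": "markdown",
+ "id": "5e8a3c71-2d94-4f0b-b6a8-9c04d1e72f35",
+ "metadata": {},
+ "source": [
+ "As a sketch of a database target, we could load the transformed data into a local SQLite file (`senior_df.db` is a stand-in for a real target database): "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9a47d0b3-6e15-4c82-8f29-1d3b5a6c70e4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# sketch: load the transformed data into a SQLite database\n",
+ "import sqlite3\n",
+ "\n",
+ "conn = sqlite3.connect('senior_df.db')\n",
+ "senior_df.to_sql('seniors', conn, if_exists='replace', index=False)\n",
+ "conn.close()"
+ ]
+ },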
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "6f637229-1770-439b-9702-6356723c96f8",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "senior_df.head()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "348eac86-3d67-42e2-9c7f-e0acb14021cd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# DO NOT CHANGE THIS CELL\n",
+ "senior_df.to_csv('senior_df.csv', index=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e25ae20b-7eea-4775-9743-d43cbf6877a7",
+ "metadata": {},
+ "source": [
+ "**Note**: If the downstream task involves querying and analyzing the data further, the csv file format may not be the best choice. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "08413512-47f9-4543-a80c-442f1500b49a",
+ "metadata": {},
+ "source": [
+ "<a name='s1-6'></a>\n",
+ "## Save to Parquet Format ##\n",
+ "After processing the data, we persist it for later use. [Apache Parquet](https://parquet.apache.org/) is a columnar binary format and has become the de-facto standard for the storage of large volumes of tabular data. Converting to Parquet file format is important and csv files should generally be avoided in data products. While the csv file format is convenient and human-readable, importing csv files requires reading and parsing entire records, which can be a bottleneck. In fact, many developers will start their analysis by first converting csv files to the Parquet file format. There are many reasons to use Parquet format for analytics: \n",
+ "* The columnar nature of Parquet files allows for column pruning, which often yields big query performance gains. \n",
+ "* It uses metadata to store the schema and supports more advanced data types such as categorical, datetimes, and more. This means that importing data would not require schema inference or manual schema specification. \n",
+ "* It captures metadata related to row-group level statistics for each column. This enables predicate pushdown filtering, which is a form of query pushdown that allows computations to happen at the “database layer” instead of the “execution engine layer”. In this case, the database layer is Parquet files in a filesystem, and the execution engine is Dask. \n",
+ "* It supports flexible compression options, making it more compact to store and more portable than a database. \n",
+ "\n",
+ "We will use `.to_parquet(path)`[[doc]](https://docs.dask.org/en/stable/generated/dask.dataframe.to_parquet.html#dask-dataframe-to-parquet) to write to Parquet files. By default, files will be created in the specified output directory using the convention `part.0.parquet`, `part.1.parquet`, `part.2.parquet`, ... and so on for each partition in the DataFrame. This can be changed using the `name_function` parameter. Ouputting multiple files lets Dask write to multiple files in parallel, which is faster than writing to a single file. \n",
+ "\n",
+ "<p><img src='images/parquet.png' width=240></p>\n",
+ "\n",
+ "When working with large datasets, decoding and encoding is often an expensive task. This challenge tends to compound as the data size grows. A common pattern in data science is to subset the dataset by columns, row slices, or both. Moving these filtering operations to the read phase of the workflow can: 1) reduce I/O time, and 2) reduce the amount of memory required, which is important for GPUs where memory can be a limiting factor. Parquet file format enables filtered reading through **column pruning** and **statistic-based predicate filtering** to skip portions of the data that are irrelevant to the problem. Below are some tips for writing Parquet files: \n",
+ "* When writing data, sorting the data by the columns that expect the most filters to be applied or columns with the highest cardinality can lead to meaningful performance benefits. The metadata calculated for each row group will enable predicate pushdown filters to the fullest extent. \n",
+ "* Writing Parquet format, which requires reprocessing entire data sets, can be expensive. The format works remarkably well for read-intensive applications and low latency data storage and retrieval. \n",
+ "* Partitions in Dask DataFrame can write out files in parallel, so multiple Parquet files are written simultaneously.\n",
+ "\n",
+ "Below we write the data into Parquet format, after sorting by the county. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "35007914-f7af-4083-a42a-fc02048c7ec4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# DO NOT CHANGE THIS CELL\n",
+ "senior_df=senior_df.sort_values('county')\n",
+ "\n",
+ "senior_df.to_parquet('senior_df.parquet', index=False)"
+ ]
+ },
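+ {
+ "cell_type": "markdown",
+ "id": "c6f2b8d4-03a9-4e57-91cb-7d5e2f80a164",
+ "metadata": {},
+ "source": [
+ "To see the row-group statistics that make predicate pushdown possible, we can inspect the file's metadata directly with PyArrow (a quick sketch; PyArrow ships with RAPIDS environments): "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2d84a6e9-7b50-4c13-a8f6-e9c3017d52ba",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# the per-row-group min/max statistics below are what allow readers\n",
+ "# to skip row groups during filtered reads\n",
+ "import pyarrow.parquet as pq\n",
+ "\n",
+ "meta = pq.ParquetFile('senior_df.parquet').metadata\n",
+ "print('row groups:', meta.num_row_groups)\n",
+ "\n",
+ "county_stats = meta.row_group(0).column(2).statistics  # column 2 is 'county'\n",
+ "print('county range in row group 0:', county_stats.min, '-', county_stats.max)"
+ ]
+ },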
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7e82b557-a2c2-4a42-96d0-35174370aaee",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "# DO NOT CHANGE THIS CELL\n",
+ "import IPython\n",
+ "app = IPython.Application.instance()\n",
+ "app.kernel.do_shutdown(True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2a3438a6-02d8-4ea0-824f-08c4bcac1063",
+ "metadata": {},
+ "source": [
+ "### Reading from Parquet ###\n",
+ "Querying data in Parquet format can be significantly more performant, especially as the size of the data increases. \n",
+ "\n",
+ "Below we read from both the csv format and Parquet format for comparison. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "11c2aab8-3f67-4a57-8eaa-b5d4b224a7b0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%load_ext cudf.pandas\n",
+ "import pandas as pd\n",
+ "import time"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "3876ba06-ef43-4849-8a16-58ee2f3a8423",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "%%cudf.pandas.line_profile\n",
+ "\n",
+ "sel=[('county', '=', 'BLACKPOOL')]\n",
+ "parquet_df=pd.read_parquet('senior_df.parquet', columns=['age', 'sex', 'county', 'lat', 'long', 'name', 'R'], filters=sel)\n",
+ "parquet_df=parquet_df.loc[parquet_df['county']=='BLACKPOOL']"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9c67aaf7-4dda-4bf5-9205-fd097d567ea4",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "parquet_df['county'].unique()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "61a225a7-3106-48a3-b100-24c829a6089c",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "%%cudf.pandas.line_profile\n",
+ "\n",
+ "df=pd.read_csv('./senior_df.csv', usecols=['age', 'sex', 'county', 'lat', 'long', 'name', 'R'])\n",
+ "df=df.loc[df['county']=='BLACKPOOL']"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b1b5fff0-ccfa-47fe-b821-d68c5858f844",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "df['county'].unique()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8cdf4ff5-c182-41e5-b17d-3c0cd02dd9d5",
+ "metadata": {},
+ "source": [
+ "## Accelerated ETL for Downstream Tasks ##\n",
+ "Accelerating the ETL process is important for data science as it provides the below benefits: \n",
+ "* **Timely insights**: Faster ETL allows for more up-to-date data analysis, enabling data scientists to work with the most current information.\n",
+ "* **Increased productivity**: Reduced processing time means data scientists can spend more time on analysis and model development rather than waiting for data to be ready.\n",
+ "* **Handling larger datasets**: Accelerated ETL processes can manage larger volumes of data more efficiently.\n",
+ "* **Cost efficiency**: Accelerated ETL can reduce computational resources and time, leading to lower infrastructure costs.\n",
+ "\n",
+ "<p><img src='images/etl.png' width=720></p>"
+ ]
+ },
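+ {
+ "cell_type": "markdown",
+ "id": "4b7e1f93-a2c8-4d06-b5e1-38f60c92d7a5",
+ "metadata": {},
+ "source": [
+ "As a rough illustration of these benefits, we can time a full read of the two files written earlier (exact numbers will vary with hardware and data size): "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8c5d29f6-e013-47ab-9d84-b1f7a3c6e250",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# rough timing comparison on the files written earlier in this notebook\n",
+ "start = time.time()\n",
+ "_ = pd.read_csv('senior_df.csv')\n",
+ "print(f'csv: {time.time() - start:.3f} s')\n",
+ "\n",
+ "start = time.time()\n",
+ "_ = pd.read_parquet('senior_df.parquet')\n",
+ "print(f'parquet: {time.time() - start:.3f} s')"
+ ]
+ },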
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5105158c-ce64-4b8c-9ee7-6d5d684ea5cf",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import IPython\n",
+ "app = IPython.Application.instance()\n",
+ "app.kernel.do_shutdown(True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c88554b0-23c6-4316-912e-f962b4c97456",
+ "metadata": {},
+ "source": [
+ "**Well Done!** Let's move to the [next notebook](1-08_dask-cudf.ipynb). "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "391f35d9-6768-4bf3-8d96-b706351c2ad6",
+ "metadata": {},
+ "source": [
+ "<img src=\"./images/DLI_Header.png\" width=400/>"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.15"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}