Commit 5a890e96 authored by Almouhannad Hafez

(Organize folders) Add association rules

parent a09a5eb4
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ***Contents***\n",
"- **[Setup](#0.-Setup)**\n",
"- **[Extracting rules using Apriori](#1.-Extracting-rules-using-Apriori)**\n",
"- **[Extracting rules using FP-Growth](#2.-Extracting-rules-using-FP-Growth)**\n",
"- **[Performance comparison](#4.-Performance-comparison)**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ***0. Setup***\n",
"[Back to contents](#Contents)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Please note that the following cell may require a working VPN connection**"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# %pip install mlxtend==0.23.1\n",
"# (`time`, used below, is part of the Python standard library and needs no separate install)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import time"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from helpers import HELPERS\n",
"from constants import CONSTANTS\n",
"# Some more magic so that the notebook will reload external python modules;\n",
"# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n",
"%load_ext autoreload\n",
"%autoreload 2\n",
"%reload_ext autoreload"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ***1. Extracting rules using Apriori***\n",
"[Back to contents](#Contents)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ***1.1. Load dataset***"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Dataset loaded successfully with shape: (9465, 94)\n"
]
}
],
"source": [
"df = HELPERS.read_dataset_from_csv(CONSTANTS.PREPROCESSED_DATASET_PATH)\n",
"assert df.shape == CONSTANTS.PREPROCESSED_DATASET_SHAPE, f\"Expected shape {CONSTANTS.PREPROCESSED_DATASET_SHAPE}, but got {df.shape}\" \n",
"print(\"Dataset loaded successfully with shape:\", df.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**We'll work with only the first 15 transactions**"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"df = df.head(15)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ***1.2. Get repeated item sets***"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Repeated item sets using Apriori with min_support = 0.2:\n"
]
},
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>support</th>\n",
" <th>itemsets</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>0.466667</td>\n",
" <td>(Bread)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>0.266667</td>\n",
" <td>(Coffee)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>0.333333</td>\n",
" <td>(Medialuna)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>0.200000</td>\n",
" <td>(Muffin)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>0.400000</td>\n",
" <td>(Pastry)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>0.200000</td>\n",
" <td>(Scandinavian)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>0.200000</td>\n",
" <td>(Pastry, Bread)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>0.200000</td>\n",
" <td>(Pastry, Coffee)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>0.200000</td>\n",
" <td>(Pastry, Medialuna)</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" support itemsets\n",
"0 0.466667 (Bread)\n",
"1 0.266667 (Coffee)\n",
"2 0.333333 (Medialuna)\n",
"3 0.200000 (Muffin)\n",
"4 0.400000 (Pastry)\n",
"5 0.200000 (Scandinavian)\n",
"6 0.200000 (Pastry, Bread)\n",
"7 0.200000 (Pastry, Coffee)\n",
"8 0.200000 (Pastry, Medialuna)"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"min_support = CONSTANTS.MIN_SUPPORT_VALUE\n",
"repeated_item_sets_apriori = HELPERS.find_repeated_item_sets(\n",
" algorithm = 'apriori', data = df, min_support = min_support)\n",
"print(f\"Repeated item sets using Apriori with min_support = {min_support}:\")\n",
"repeated_item_sets_apriori"
]
},
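{
"cell_type": "markdown",
"metadata": {},
"source": [
"**To make the idea concrete, here is a minimal pure-Python sketch of Apriori-style frequent-itemset mining (an illustration only; the helper above is `mlxtend`-backed, and `frequent_itemsets` below is a hypothetical name):**\n",
"```python\n",
"from itertools import combinations\n",
"\n",
"def frequent_itemsets(transactions, min_support):\n",
"    # Apriori idea: an itemset can be frequent only if all of its\n",
"    # subsets are, so candidates are grown one level at a time\n",
"    n = len(transactions)\n",
"    level = [frozenset([i]) for i in {i for t in transactions for i in t}]\n",
"    result = {}\n",
"    while level:  # note: every level rescans all transactions\n",
"        support = {c: sum(1 for t in transactions if c <= t) / n for c in level}\n",
"        kept = [c for c in level if support[c] >= min_support]\n",
"        result.update({c: support[c] for c in kept})\n",
"        level = list({a | b for a, b in combinations(kept, 2) if len(a | b) == len(a) + 1})\n",
"    return result\n",
"```"
]
},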
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ***1.3. Get rules***"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Association rules using Apriori with min_support = 0.2 and min_confidence = 0.5:\n"
]
},
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>antecedents</th>\n",
" <th>consequents</th>\n",
" <th>antecedent support</th>\n",
" <th>consequent support</th>\n",
" <th>support</th>\n",
" <th>confidence</th>\n",
" <th>lift</th>\n",
" <th>leverage</th>\n",
" <th>conviction</th>\n",
" <th>zhangs_metric</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>(Pastry)</td>\n",
" <td>(Bread)</td>\n",
" <td>0.400000</td>\n",
" <td>0.466667</td>\n",
" <td>0.2</td>\n",
" <td>0.50</td>\n",
" <td>1.071429</td>\n",
" <td>0.013333</td>\n",
" <td>1.066667</td>\n",
" <td>0.111111</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>(Pastry)</td>\n",
" <td>(Coffee)</td>\n",
" <td>0.400000</td>\n",
" <td>0.266667</td>\n",
" <td>0.2</td>\n",
" <td>0.50</td>\n",
" <td>1.875000</td>\n",
" <td>0.093333</td>\n",
" <td>1.466667</td>\n",
" <td>0.777778</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>(Coffee)</td>\n",
" <td>(Pastry)</td>\n",
" <td>0.266667</td>\n",
" <td>0.400000</td>\n",
" <td>0.2</td>\n",
" <td>0.75</td>\n",
" <td>1.875000</td>\n",
" <td>0.093333</td>\n",
" <td>2.400000</td>\n",
" <td>0.636364</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>(Pastry)</td>\n",
" <td>(Medialuna)</td>\n",
" <td>0.400000</td>\n",
" <td>0.333333</td>\n",
" <td>0.2</td>\n",
" <td>0.50</td>\n",
" <td>1.500000</td>\n",
" <td>0.066667</td>\n",
" <td>1.333333</td>\n",
" <td>0.555556</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>(Medialuna)</td>\n",
" <td>(Pastry)</td>\n",
" <td>0.333333</td>\n",
" <td>0.400000</td>\n",
" <td>0.2</td>\n",
" <td>0.60</td>\n",
" <td>1.500000</td>\n",
" <td>0.066667</td>\n",
" <td>1.500000</td>\n",
" <td>0.500000</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" antecedents consequents antecedent support consequent support support \\\n",
"0 (Pastry) (Bread) 0.400000 0.466667 0.2 \n",
"1 (Pastry) (Coffee) 0.400000 0.266667 0.2 \n",
"2 (Coffee) (Pastry) 0.266667 0.400000 0.2 \n",
"3 (Pastry) (Medialuna) 0.400000 0.333333 0.2 \n",
"4 (Medialuna) (Pastry) 0.333333 0.400000 0.2 \n",
"\n",
" confidence lift leverage conviction zhangs_metric \n",
"0 0.50 1.071429 0.013333 1.066667 0.111111 \n",
"1 0.50 1.875000 0.093333 1.466667 0.777778 \n",
"2 0.75 1.875000 0.093333 2.400000 0.636364 \n",
"3 0.50 1.500000 0.066667 1.333333 0.555556 \n",
"4 0.60 1.500000 0.066667 1.500000 0.500000 "
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"min_confidence = CONSTANTS.MIN_CONFIDENCE_VALUE\n",
"rules_apriori = HELPERS.get_rules(\n",
" repeated_item_sets = repeated_item_sets_apriori, \n",
" min_confidence = min_confidence\n",
" )\n",
"print(f\"Association rules using Apriori with min_support = {min_support} and min_confidence = {min_confidence}:\")\n",
"rules_apriori"
]
},
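{
"cell_type": "markdown",
"metadata": {},
"source": [
"**For a rule `A -> B`, the main metrics in the table above are: `support` = fraction of transactions containing both `A` and `B`; `confidence` = `support / antecedent support`; `lift` = `confidence / consequent support`. A quick check against the `(Pastry) -> (Coffee)` row (plain Python, values read from the table):**\n",
"```python\n",
"support_both, support_pastry, support_coffee = 0.2, 0.4, 4 / 15\n",
"confidence = support_both / support_pastry  # 0.5\n",
"lift = confidence / support_coffee          # 1.875, matching the table\n",
"```"
]
},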
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ***2. Extracting rules using FP Growth***\n",
"[Back to contents](#Contents)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ***2.1. Load dataset***"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Dataset loaded successfully with shape: (9465, 94)\n"
]
}
],
"source": [
"df = HELPERS.read_dataset_from_csv(CONSTANTS.PREPROCESSED_DATASET_PATH)\n",
"assert df.shape == CONSTANTS.PREPROCESSED_DATASET_SHAPE, f\"Expected shape {CONSTANTS.PREPROCESSED_DATASET_SHAPE}, but got {df.shape}\" \n",
"print(\"Dataset loaded successfully with shape:\", df.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**We'll work with only the first 15 transactions**"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"df = df.head(15)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ***2.2. Get repeated item sets***\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Repeated item sets using FP Growth with min_support = 0.2:\n"
]
},
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>support</th>\n",
" <th>itemsets</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>0.466667</td>\n",
" <td>(Bread)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>0.200000</td>\n",
" <td>(Scandinavian)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>0.200000</td>\n",
" <td>(Muffin)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>0.400000</td>\n",
" <td>(Pastry)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>0.266667</td>\n",
" <td>(Coffee)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>0.333333</td>\n",
" <td>(Medialuna)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>0.200000</td>\n",
" <td>(Pastry, Bread)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>0.200000</td>\n",
" <td>(Pastry, Coffee)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>0.200000</td>\n",
" <td>(Pastry, Medialuna)</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" support itemsets\n",
"0 0.466667 (Bread)\n",
"1 0.200000 (Scandinavian)\n",
"2 0.200000 (Muffin)\n",
"3 0.400000 (Pastry)\n",
"4 0.266667 (Coffee)\n",
"5 0.333333 (Medialuna)\n",
"6 0.200000 (Pastry, Bread)\n",
"7 0.200000 (Pastry, Coffee)\n",
"8 0.200000 (Pastry, Medialuna)"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"min_support = CONSTANTS.MIN_SUPPORT_VALUE\n",
"repeated_item_sets_fpg = HELPERS.find_repeated_item_sets(\n",
" algorithm = 'fpgrowth', data = df, min_support = min_support)\n",
"print(f\"Repeated item sets using FP Growth with min_support = {min_support}:\")\n",
"repeated_item_sets_fpg"
]
},
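{
"cell_type": "markdown",
"metadata": {},
"source": [
"**For intuition, here is a minimal sketch of the FP-tree construction that gives FP Growth its name (an illustration only, not the `mlxtend` implementation; `build_fp_tree` is a hypothetical name):**\n",
"```python\n",
"from collections import Counter\n",
"\n",
"class FPNode:\n",
"    def __init__(self, item):\n",
"        self.item, self.count, self.children = item, 0, {}\n",
"\n",
"def build_fp_tree(transactions, min_support):\n",
"    n = len(transactions)\n",
"    freq = Counter(i for t in transactions for i in t)   # pass 1: item counts\n",
"    keep = {i for i, c in freq.items() if c / n >= min_support}\n",
"    root = FPNode(None)\n",
"    for t in transactions:                               # pass 2: insert paths\n",
"        # a fixed frequency order makes shared prefixes overlap in the tree\n",
"        node = root\n",
"        for item in sorted((i for i in t if i in keep), key=lambda i: (-freq[i], i)):\n",
"            node = node.children.setdefault(item, FPNode(item))\n",
"            node.count += 1\n",
"    return root\n",
"```\n",
"**After these two passes over the data, frequent itemsets are mined from the tree alone.**"
]
},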
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ***2.3. Get rules***"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Association rules using FP Growth with min_support = 0.2 and min_confidence = 0.5:\n"
]
},
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>antecedents</th>\n",
" <th>consequents</th>\n",
" <th>antecedent support</th>\n",
" <th>consequent support</th>\n",
" <th>support</th>\n",
" <th>confidence</th>\n",
" <th>lift</th>\n",
" <th>leverage</th>\n",
" <th>conviction</th>\n",
" <th>zhangs_metric</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>(Pastry)</td>\n",
" <td>(Bread)</td>\n",
" <td>0.400000</td>\n",
" <td>0.466667</td>\n",
" <td>0.2</td>\n",
" <td>0.50</td>\n",
" <td>1.071429</td>\n",
" <td>0.013333</td>\n",
" <td>1.066667</td>\n",
" <td>0.111111</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>(Pastry)</td>\n",
" <td>(Coffee)</td>\n",
" <td>0.400000</td>\n",
" <td>0.266667</td>\n",
" <td>0.2</td>\n",
" <td>0.50</td>\n",
" <td>1.875000</td>\n",
" <td>0.093333</td>\n",
" <td>1.466667</td>\n",
" <td>0.777778</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>(Coffee)</td>\n",
" <td>(Pastry)</td>\n",
" <td>0.266667</td>\n",
" <td>0.400000</td>\n",
" <td>0.2</td>\n",
" <td>0.75</td>\n",
" <td>1.875000</td>\n",
" <td>0.093333</td>\n",
" <td>2.400000</td>\n",
" <td>0.636364</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>(Pastry)</td>\n",
" <td>(Medialuna)</td>\n",
" <td>0.400000</td>\n",
" <td>0.333333</td>\n",
" <td>0.2</td>\n",
" <td>0.50</td>\n",
" <td>1.500000</td>\n",
" <td>0.066667</td>\n",
" <td>1.333333</td>\n",
" <td>0.555556</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>(Medialuna)</td>\n",
" <td>(Pastry)</td>\n",
" <td>0.333333</td>\n",
" <td>0.400000</td>\n",
" <td>0.2</td>\n",
" <td>0.60</td>\n",
" <td>1.500000</td>\n",
" <td>0.066667</td>\n",
" <td>1.500000</td>\n",
" <td>0.500000</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" antecedents consequents antecedent support consequent support support \\\n",
"0 (Pastry) (Bread) 0.400000 0.466667 0.2 \n",
"1 (Pastry) (Coffee) 0.400000 0.266667 0.2 \n",
"2 (Coffee) (Pastry) 0.266667 0.400000 0.2 \n",
"3 (Pastry) (Medialuna) 0.400000 0.333333 0.2 \n",
"4 (Medialuna) (Pastry) 0.333333 0.400000 0.2 \n",
"\n",
" confidence lift leverage conviction zhangs_metric \n",
"0 0.50 1.071429 0.013333 1.066667 0.111111 \n",
"1 0.50 1.875000 0.093333 1.466667 0.777778 \n",
"2 0.75 1.875000 0.093333 2.400000 0.636364 \n",
"3 0.50 1.500000 0.066667 1.333333 0.555556 \n",
"4 0.60 1.500000 0.066667 1.500000 0.500000 "
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"min_confidence = CONSTANTS.MIN_CONFIDENCE_VALUE\n",
"rules_fpg = HELPERS.get_rules(\n",
" repeated_item_sets = repeated_item_sets_fpg, \n",
" min_confidence = min_confidence\n",
" )\n",
"print(f\"Association rules using FP Growth with min_support = {min_support} and min_confidence = {min_confidence}:\")\n",
"rules_fpg"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ***4. Performance comparison***\n",
"[Back to contents](#Contents)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ***4.1. Load dataset***"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Dataset loaded successfully with shape: (9465, 94)\n"
]
}
],
"source": [
"df = HELPERS.read_dataset_from_csv(CONSTANTS.PREPROCESSED_DATASET_PATH)\n",
"assert df.shape == CONSTANTS.PREPROCESSED_DATASET_SHAPE, f\"Expected shape {CONSTANTS.PREPROCESSED_DATASET_SHAPE}, but got {df.shape}\" \n",
"print(\"Dataset loaded successfully with shape:\", df.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ***4.2. Measure time for Apriori***"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Execution time for Apriori: 10.025006294250488 seconds\n"
]
}
],
"source": [
"start_time = time.time()\n",
"min_support = 0.0001\n",
"repeated_item_sets_apriori = HELPERS.find_repeated_item_sets(\n",
" algorithm = 'apriori', data = df, min_support = min_support)\n",
"\n",
"min_confidence = 0.0001\n",
"rules_apriori = HELPERS.get_rules(\n",
" repeated_item_sets = repeated_item_sets_apriori, \n",
" min_confidence = min_confidence\n",
" )\n",
"\n",
"end_time = time.time()\n",
"execution_time = end_time - start_time\n",
"print(f\"Execution time for Apriori: {execution_time} seconds\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ***4.3. Measure time for FP Growth***"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Execution time for FP Growth: 2.471029281616211 seconds\n"
]
}
],
"source": [
"start_time = time.time()\n",
"min_support = 0.0001\n",
"repeated_item_sets_fpg = HELPERS.find_repeated_item_sets(\n",
" algorithm = 'fpgrowth', data = df, min_support = min_support)\n",
"\n",
"min_confidence = 0.0001\n",
"rules_fpg = HELPERS.get_rules(\n",
" repeated_item_sets = repeated_item_sets_fpg, \n",
" min_confidence = min_confidence\n",
" )\n",
"\n",
"end_time = time.time()\n",
"execution_time = end_time - start_time\n",
"print(f\"Execution time for FP Growth: {execution_time} seconds\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ***4.4. Results***"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **As we can notice, `FP Growth` is much faster than `Apriori`** ***(about 4 times faster!)***. \n",
"> **This is because `Apriori` must rescan the dataset at every candidate-itemset level, while `FP Growth` scans it only twice to build an FP-tree and then mines frequent itemsets from the tree alone, never touching the dataset again**"
]
}
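,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**A quick check of the ratio from the measured times above (plain Python; timings are machine-dependent):**\n",
"```python\n",
"apriori_seconds, fpgrowth_seconds = 10.025, 2.471  # measured above\n",
"print(f\"FP Growth speedup: {apriori_seconds / fpgrowth_seconds:.1f}x\")\n",
"```"
]
}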
],
"metadata": {
"kernelspec": {
"display_name": "ML",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.20"
}
},
"nbformat": 4,
"nbformat_minor": 2
}