API Reference¶
Auto-generated code documentation.
analytics_project ¶
data_prep ¶
Data prep pipeline.
File: src/analytics_project/data_prep.py.
main ¶
main() -> None
Process raw data.
Source code in src/analytics_project/data_prep.py, lines 51–77.
read_and_log ¶
read_and_log(path: Path) -> pd.DataFrame
Read a CSV at the given path into a DataFrame, with friendly logging.
Reading a CSV file can fail (the file might not exist, or it could be corrupted), so the read statement sits in a try block that catches FileNotFoundError and other exceptions. On success, the shape of the DataFrame is logged; on failure, an error is logged and an empty DataFrame is returned.
Source code in src/analytics_project/data_prep.py, lines 21–45.
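A minimal sketch of the pattern the docstring describes, assuming Loguru for logging (see utils_logger below); the exact messages are illustrative, not the project's actual code:

```python
from pathlib import Path

import pandas as pd
from loguru import logger


def read_and_log(path: Path) -> pd.DataFrame:
    """Read a CSV into a DataFrame, logging success or failure."""
    try:
        df = pd.read_csv(path)  # may raise FileNotFoundError, parser errors, etc.
        logger.info(f"Read {path.name}: {df.shape[0]} rows x {df.shape[1]} columns")
        return df
    except FileNotFoundError:
        logger.error(f"File not found: {path}")
    except Exception as exc:
        logger.error(f"Failed to read {path}: {exc}")
    return pd.DataFrame()  # empty DataFrame on any failure, as described
```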
data_preparation ¶
prepare_customers ¶
src/analytics_project/data_preparation/prepare_customers.py.
This script reads customer data from the data/raw folder, cleans the data, and writes the cleaned version to the data/prepared folder.
Tasks:

- Remove duplicates
- Handle missing values
- Remove outliers
- Ensure consistent formatting
handle_missing_values ¶
handle_missing_values(df: DataFrame) -> pd.DataFrame
Handle missing values by filling or dropping.
This logic is specific to the actual data and business rules.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | Input DataFrame. | required |
Returns:

| Type | Description |
|---|---|
| `DataFrame` | DataFrame with missing values handled. |
Source code in src/analytics_project/data_preparation/prepare_customers.py, lines 111–137.
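Because the real rules are data-specific, here is only a hedged sketch of the fill-or-drop pattern; the column names (`region`, `loyalty_points`, `customer_id`) are hypothetical:

```python
import pandas as pd


def handle_missing_values(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative fill-or-drop strategy; column names are made up."""
    if "region" in df.columns:
        # Fill a categorical gap with a sentinel value
        df["region"] = df["region"].fillna("Unknown")
    if "loyalty_points" in df.columns:
        # Fill a numeric gap with the column median
        df["loyalty_points"] = df["loyalty_points"].fillna(df["loyalty_points"].median())
    if "customer_id" in df.columns:
        # Drop rows missing a required key
        df = df.dropna(subset=["customer_id"])
    return df
```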
main ¶
main() -> None
Process customer data from raw to prepared format.
Source code in src/analytics_project/data_preparation/prepare_customers.py, lines 169–223.
read_raw_data ¶
read_raw_data(file_name: str) -> pd.DataFrame
Read raw data from CSV.
Source code in src/analytics_project/data_preparation/prepare_customers.py, lines 54–65.
remove_duplicates ¶
remove_duplicates(df: DataFrame) -> pd.DataFrame
Remove duplicate rows from the DataFrame.
How do you decide if a row is duplicated? Which do you keep? Which do you delete?
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | Input DataFrame. | required |
Returns:

| Type | Description |
|---|---|
| `DataFrame` | DataFrame with duplicates removed. |
Source code in src/analytics_project/data_preparation/prepare_customers.py, lines 83–108.
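Pandas' default answer to the questions above, as a sketch: two rows are duplicates when every column matches, and `keep="first"` retains the first occurrence while deleting the rest:

```python
import pandas as pd


def remove_duplicates(df: pd.DataFrame) -> pd.DataFrame:
    """Drop exact duplicate rows, keeping the first occurrence."""
    # Pass subset=["customer_id"] instead to define duplicates by key columns only.
    return df.drop_duplicates(keep="first")
```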
remove_outliers ¶
remove_outliers(df: DataFrame) -> pd.DataFrame
Remove outliers based on thresholds.
This logic is very specific to the actual data and business rules.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | Input DataFrame. | required |
Returns:

| Type | Description |
|---|---|
| `DataFrame` | DataFrame with outliers removed. |
Source code in src/analytics_project/data_preparation/prepare_customers.py, lines 140–161.
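A sketch of threshold-based filtering; the columns and bounds are invented examples, since the real thresholds come from the business rules:

```python
import pandas as pd


def remove_outliers(df: pd.DataFrame) -> pd.DataFrame:
    """Keep rows whose values fall inside example thresholds."""
    if "age" in df.columns:
        df = df[df["age"].between(18, 100)]  # plausible customer ages
    if "annual_spend" in df.columns:
        df = df[df["annual_spend"] >= 0]     # spend cannot be negative
    return df
```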
save_prepared_data ¶
save_prepared_data(df: DataFrame, file_name: str) -> None
Save cleaned data to CSV.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | Cleaned DataFrame. | required |
| `file_name` | `str` | Name of the output file. | required |
Source code in src/analytics_project/data_preparation/prepare_customers.py, lines 68–80.
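A short sketch under the data/prepared layout assumed by the script docstring:

```python
from pathlib import Path

import pandas as pd

PREPARED_DIR = Path("data") / "prepared"  # assumed output folder


def save_prepared_data(df: pd.DataFrame, file_name: str) -> None:
    """Write the cleaned DataFrame to data/prepared/<file_name>."""
    PREPARED_DIR.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    df.to_csv(PREPARED_DIR / file_name, index=False)
```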
prepare_products ¶
src/analytics_project/data_preparation/prepare_products.py.
This script reads data from the data/raw folder, cleans the data, and writes the cleaned version to the data/prepared folder.
Tasks:

- Remove duplicates
- Handle missing values
- Remove outliers
- Ensure consistent formatting
handle_missing_values ¶
handle_missing_values(df: DataFrame) -> pd.DataFrame
Handle missing values by filling or dropping.
This logic is specific to the actual data and business rules.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | Input DataFrame. | required |
Returns:

| Type | Description |
|---|---|
| `DataFrame` | DataFrame with missing values handled. |
Source code in src/analytics_project/data_preparation/prepare_products.py, lines 116–148.
main ¶
main() -> None
Process product data from raw to prepared format.
Source code in src/analytics_project/data_preparation/prepare_products.py, lines 231–294.
read_raw_data ¶
read_raw_data(file_name: str) -> pd.DataFrame
Read raw data from CSV.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `file_name` | `str` | Name of the CSV file to read. | required |
Returns:

| Type | Description |
|---|---|
| `DataFrame` | Loaded DataFrame. |
Source code in src/analytics_project/data_preparation/prepare_products.py, lines 53–74.
remove_duplicates ¶
remove_duplicates(df: DataFrame) -> pd.DataFrame
Remove duplicate rows from the DataFrame.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | Input DataFrame. | required |
Returns:

| Type | Description |
|---|---|
| `DataFrame` | DataFrame with duplicates removed. |
Source code in src/analytics_project/data_preparation/prepare_products.py, lines 92–113.
remove_outliers ¶
remove_outliers(df: DataFrame) -> pd.DataFrame
Remove outliers based on thresholds.
This logic is very specific to the actual data and business rules.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | Input DataFrame. | required |
Returns:

| Type | Description |
|---|---|
| `DataFrame` | DataFrame with outliers removed. |
Source code in src/analytics_project/data_preparation/prepare_products.py, lines 151–183.
save_prepared_data ¶
save_prepared_data(df: DataFrame, file_name: str) -> None
Save cleaned data to CSV.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | Cleaned DataFrame. | required |
| `file_name` | `str` | Name of the output file. | required |
Source code in src/analytics_project/data_preparation/prepare_products.py, lines 77–89.
standardize_formats ¶
standardize_formats(df: DataFrame) -> pd.DataFrame
Standardize the formatting of various columns.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | Input DataFrame. | required |
Returns:

| Type | Description |
|---|---|
| `DataFrame` | DataFrame with standardized formatting. |
Source code in src/analytics_project/data_preparation/prepare_products.py, lines 186–206.
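A hedged sketch of typical standardization steps; the product columns named here are assumptions:

```python
import pandas as pd


def standardize_formats(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize hypothetical product columns to consistent formats."""
    if "product_name" in df.columns:
        # Trim whitespace and title-case display names
        df["product_name"] = df["product_name"].str.strip().str.title()
    if "category" in df.columns:
        # Lowercase categories so 'Tools' and 'tools' compare equal
        df["category"] = df["category"].str.strip().str.lower()
    if "unit_price" in df.columns:
        # Round prices to two decimal places
        df["unit_price"] = df["unit_price"].round(2)
    return df
```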
validate_data ¶
validate_data(df: DataFrame) -> pd.DataFrame
Validate data against business rules.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | Input DataFrame. | required |
Returns:

| Type | Description |
|---|---|
| `DataFrame` | Validated DataFrame. |
Source code in src/analytics_project/data_preparation/prepare_products.py, lines 209–228.
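A sketch of rule-based validation with invented rules; the real checks depend on the business:

```python
import pandas as pd


def validate_data(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only rows that satisfy example business rules."""
    if "unit_price" in df.columns:
        df = df[df["unit_price"] > 0]      # prices must be positive
    if "product_id" in df.columns:
        df = df[df["product_id"].notna()]  # every product needs an id
    return df
```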
prepare_sales ¶
src/analytics_project/data_preparation/prepare_sales.py.
This script reads data from the data/raw folder, cleans the data, and writes the cleaned version to the data/prepared folder.
Tasks:

- Remove duplicates
- Handle missing values
- Remove outliers
- Ensure consistent formatting
main ¶
main() -> None
Process sales data from raw to prepared format.
Source code in src/analytics_project/data_preparation/prepare_sales.py, lines 84–135.
read_raw_data ¶
read_raw_data(file_name: str) -> pd.DataFrame
Read raw data from CSV.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `file_name` | `str` | Name of the CSV file to read. | required |
Returns:

| Type | Description |
|---|---|
| `DataFrame` | Loaded DataFrame. |
Source code in src/analytics_project/data_preparation/prepare_sales.py, lines 55–76.
data_scrubber ¶
data_scrubber.py.
Reusable utility class for performing common data cleaning and preparation tasks on a pandas DataFrame.
This class provides methods for:

- Checking data consistency
- Removing duplicates
- Handling missing values
- Filtering outliers
- Renaming and reordering columns
- Formatting strings
- Parsing date fields
Use this class to perform similar cleaning operations across multiple files. You are not required to use it, but it shows one way to organize reusable data cleaning logic; alternatively, adapt the individual examples in your own code.
Example
```python
from .data_scrubber import DataScrubber

scrubber = DataScrubber(df)
df = scrubber.remove_duplicate_records()
df = scrubber.handle_missing_data(fill_value="N/A")
```
DataScrubber ¶
A utility class for performing common data cleaning and preparation tasks on pandas DataFrames.
This class provides methods for checking data consistency, removing duplicates, handling missing values, filtering outliers, renaming and reordering columns, formatting strings, and parsing date fields.
Attributes¶
df : pd.DataFrame
    The DataFrame to be scrubbed and cleaned.
Methods¶
check_data_consistency_before_cleaning() -> dict
    Check data consistency before cleaning by calculating counts of null and duplicate entries.
check_data_consistency_after_cleaning() -> dict
    Check data consistency after cleaning to ensure there are no null or duplicate entries.
remove_duplicate_records() -> pd.DataFrame
    Remove duplicate rows from the DataFrame.
handle_missing_data(drop: bool = False, fill_value=None) -> pd.DataFrame
    Handle missing data in the DataFrame by dropping or filling values.
Source code in src/analytics_project/data_scrubber.py, lines 31–308.
__init__ ¶
__init__(df: DataFrame)
Initialize the DataScrubber with a DataFrame.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | The DataFrame to be scrubbed. | required |
Source code in src/analytics_project/data_scrubber.py, lines 55–61.
check_data_consistency_after_cleaning ¶
check_data_consistency_after_cleaning() -> dict[
str, pd.Series | int
]
Check data consistency after cleaning to ensure there are no null or duplicate entries.
Returns:

| Name | Type | Description |
|---|---|---|
| `dict` | `dict[str, Series \| int]` | Dictionary with counts of null values and duplicate rows, expected to be zero for each. |
Source code in src/analytics_project/data_scrubber.py, lines 73–83.
check_data_consistency_before_cleaning ¶
check_data_consistency_before_cleaning() -> dict[
str, pd.Series | int
]
Check data consistency before cleaning by calculating counts of null and duplicate entries.
Returns:

| Name | Type | Description |
|---|---|---|
| `dict` | `dict[str, Series \| int]` | Dictionary with counts of null values and duplicate rows. |
Source code in src/analytics_project/data_scrubber.py, lines 63–71.
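A standalone sketch of the checks this method most likely performs (the class version operates on its internal `self.df`):

```python
import pandas as pd


def check_consistency(df: pd.DataFrame) -> dict[str, pd.Series | int]:
    """Count nulls per column and fully duplicated rows."""
    return {
        "null_counts": df.isnull().sum(),               # pd.Series: nulls per column
        "duplicate_count": int(df.duplicated().sum()),  # int: duplicated rows
    }
```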
convert_column_to_new_data_type ¶
convert_column_to_new_data_type(
column: str, new_type: type
) -> pd.DataFrame
Convert a specified column to a new data type.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `column` | `str` | Name of the column to convert. | required |
| `new_type` | `type` | The target data type (e.g., 'int', 'float', 'str'). | required |
Returns:

| Type | Description |
|---|---|
| `DataFrame` | Updated DataFrame with the column type converted. |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If the specified column is not found in the DataFrame. |
Source code in src/analytics_project/data_scrubber.py, lines 85–102.
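The standalone equivalent is a guarded `astype` call; this sketch assumes that is how the method works:

```python
import pandas as pd

df = pd.DataFrame({"price": ["1.5", "2.0", "3.25"]})
column, new_type = "price", float

if column not in df.columns:
    raise ValueError(f"Column '{column}' not found in the DataFrame")
df[column] = df[column].astype(new_type)  # cast the column to the new type
print(df["price"].dtype)  # float64
```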
drop_columns ¶
drop_columns(columns: list[str]) -> pd.DataFrame
Drop specified columns from the DataFrame.
Parameters¶
columns : list[str]
    List of column names to drop.
Returns¶
pd.DataFrame: Updated DataFrame with specified columns removed.
Raises¶
ValueError: If a specified column is not found in the DataFrame.
Source code in src/analytics_project/data_scrubber.py, lines 104–124.
filter_column_outliers ¶
filter_column_outliers(
column: str,
lower_bound: float | int,
upper_bound: float | int,
) -> pd.DataFrame
Filter outliers in a specified column based on lower and upper bounds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `column` | `str` | Name of the column to filter for outliers. | required |
| `lower_bound` | `float \| int` | Lower threshold for outlier filtering. | required |
| `upper_bound` | `float \| int` | Upper threshold for outlier filtering. | required |
Returns:

| Type | Description |
|---|---|
| `DataFrame` | Updated DataFrame with outliers filtered out. |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If the specified column is not found in the DataFrame. |
Source code in src/analytics_project/data_scrubber.py, lines 126–146.
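A sketch of the bounds check, assuming an inclusive [lower_bound, upper_bound] range:

```python
import pandas as pd

df = pd.DataFrame({"quantity": [1, 5, 999, -3, 12]})
column, lower_bound, upper_bound = "quantity", 0, 100

if column not in df.columns:
    raise ValueError(f"Column '{column}' not found in the DataFrame")
mask = (df[column] >= lower_bound) & (df[column] <= upper_bound)
print(df[mask])  # rows with quantity 1, 5, and 12 remain
```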
format_column_strings_to_lower_and_trim ¶
format_column_strings_to_lower_and_trim(
column: str,
) -> pd.DataFrame
Format strings in a specified column by converting to lowercase and trimming whitespace.
Parameters¶
column : str
    Name of the column to format.
Returns¶
pd.DataFrame: Updated DataFrame with formatted string column.
Raises¶
ValueError: If the specified column is not found in the DataFrame.
Source code in src/analytics_project/data_scrubber.py, lines 148–168.
format_column_strings_to_upper_and_trim ¶
format_column_strings_to_upper_and_trim(
column: str,
) -> pd.DataFrame
Format strings in a specified column by converting to uppercase and trimming whitespace.
Parameters¶
column : str
    Name of the column to format.
Returns¶
pd.DataFrame: Updated DataFrame with formatted string column.
Raises¶
ValueError: If the specified column is not found in the DataFrame.
Source code in src/analytics_project/data_scrubber.py, lines 170–192.
handle_missing_data ¶
handle_missing_data(
drop: bool = False,
fill_value: None | float | int | str = None,
) -> pd.DataFrame
Handle missing data in the DataFrame.
Parameters¶
drop : bool, optional
    If True, drop rows with missing values. Default is False.
fill_value : None | float | int | str, optional
    Value to fill in for missing entries if drop is False.
Returns¶
pd.DataFrame: Updated DataFrame with missing data handled.
Source code in src/analytics_project/data_scrubber.py, lines 194–214.
inspect_data ¶
inspect_data() -> tuple[str, str]
Inspect the data by providing DataFrame information and summary statistics.
Returns:

| Name | Type | Description |
|---|---|---|
| `tuple` | `tuple[str, str]` | (info_str, describe_str): the DataFrame's info output and its summary statistics, each as a string. |
Source code in src/analytics_project/data_scrubber.py, lines 216–231.
parse_dates_to_add_standard_datetime ¶
parse_dates_to_add_standard_datetime(
column: str,
) -> pd.DataFrame
Parse a specified column as datetime format and add it as a new column named 'StandardDateTime'.
Parameters¶
column : str
    Name of the column to parse as datetime.
Returns¶
pd.DataFrame: Updated DataFrame with a new 'StandardDateTime' column containing parsed datetime values.
Raises¶
ValueError: If the specified column is not found in the DataFrame.
Source code in src/analytics_project/data_scrubber.py, lines 233–253.
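A sketch using pandas' parser; `errors="coerce"` is an assumption that turns unparseable values into NaT rather than raising:

```python
import pandas as pd

df = pd.DataFrame({"SaleDate": ["2024-01-05", "2024-01-06", "not a date"]})

# Parse the column and add the standardized result as a new column
df["StandardDateTime"] = pd.to_datetime(df["SaleDate"], errors="coerce")
print(df["StandardDateTime"].dtype)  # datetime64[ns]; "not a date" became NaT
```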
remove_duplicate_records ¶
remove_duplicate_records() -> pd.DataFrame
Remove duplicate rows from the DataFrame.
Returns:

| Type | Description |
|---|---|
| `DataFrame` | Updated DataFrame with duplicates removed. |
Source code in src/analytics_project/data_scrubber.py, lines 255–263.
rename_columns ¶
rename_columns(
column_mapping: dict[str, str],
) -> pd.DataFrame
Rename columns in the DataFrame based on a provided mapping.
Parameters¶
column_mapping : dict[str, str]
    Dictionary where keys are old column names and values are new names.
Returns¶
pd.DataFrame: Updated DataFrame with renamed columns.
Raises¶
ValueError: If a specified column is not found in the DataFrame.
Source code in src/analytics_project/data_scrubber.py, lines 265–286.
reorder_columns ¶
reorder_columns(columns: list[str]) -> pd.DataFrame
Reorder columns in the DataFrame based on the specified order.
Parameters¶
columns : list[str]
    List of column names in the desired order.
Returns¶
pd.DataFrame: Updated DataFrame with reordered columns.
Raises¶
ValueError: If a specified column is not found in the DataFrame.
Source code in src/analytics_project/data_scrubber.py, lines 288–308.
dw ¶
etl_to_dw ¶
ETL script to load prepared data into the data warehouse (SQLite database).
File: src/analytics_project/dw/etl_to_dw.py
This file assumes the following structure (yours may vary):
```
project_root/
│
├─ data/
│  ├─ raw/
│  ├─ prepared/
│  └─ warehouse/
│
└─ src/
   └─ analytics_project/
      ├─ data_preparation/
      ├─ dw/
      ├─ analytics/
      └─ utils_logger.py
```
By switching to a modern src/ layout and using `__init__.py` files, we no longer need any sys.path modifications.
Remember to put `__init__.py` files (empty is fine) in each folder to make them packages.
NOTE on column names: This example uses inconsistent naming conventions for column names in the cleaned data. A good business intelligence project would standardize these during data preparation. Your names should be more standard after cleaning and pre-processing the data.
Database names generally follow snake_case conventions for SQL compatibility. "snake_case" = all lowercase with underscores between words.
create_schema ¶
create_schema(cursor: Cursor) -> None
Create tables in the data warehouse if they don't exist.
Source code in src/analytics_project/dw/etl_to_dw.py, lines 75–105.
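A sketch of the idempotent-schema pattern; the table and column definitions below are assumptions, not the project's actual schema:

```python
import sqlite3


def create_schema(cursor: sqlite3.Cursor) -> None:
    """Create warehouse tables only if they are missing."""
    cursor.execute(
        """
        CREATE TABLE IF NOT EXISTS customer (
            customer_id INTEGER PRIMARY KEY,
            name        TEXT,
            region      TEXT
        )
        """
    )
    # The product and sale tables would follow the same
    # CREATE TABLE IF NOT EXISTS pattern.
```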
delete_existing_records ¶
delete_existing_records(cursor: Cursor) -> None
Delete all existing records from the customer, product, and sale tables.
Source code in src/analytics_project/dw/etl_to_dw.py, lines 108–112.
insert_customers ¶
insert_customers(
customers_df: DataFrame, cursor: Cursor
) -> None
Insert customer data into the customer table.
Source code in src/analytics_project/dw/etl_to_dw.py, lines 115–118.
insert_products ¶
insert_products(
products_df: DataFrame, cursor: Cursor
) -> None
Insert product data into the product table.
Source code in src/analytics_project/dw/etl_to_dw.py, lines 121–124.
insert_sales ¶
insert_sales(sales_df: DataFrame, cursor: Cursor) -> None
Insert sales data into the sales table.
Source code in src/analytics_project/dw/etl_to_dw.py, lines 127–130.
load_data_to_db ¶
load_data_to_db() -> None
Load clean data into the data warehouse.
Source code in src/analytics_project/dw/etl_to_dw.py, lines 133–205.
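A compressed sketch of the load step; it folds the documented create/delete/insert helpers into pandas' `to_sql` for brevity, and the database and CSV file names are assumptions:

```python
import sqlite3
from pathlib import Path

import pandas as pd

WAREHOUSE_DB = Path("data") / "warehouse" / "analytics.db"  # assumed file name
PREPARED_DIR = Path("data") / "prepared"

WAREHOUSE_DB.parent.mkdir(parents=True, exist_ok=True)
with sqlite3.connect(WAREHOUSE_DB) as conn:
    for table, csv_name in [
        ("customer", "customers_prepared.csv"),
        ("product", "products_prepared.csv"),
        ("sale", "sales_prepared.csv"),
    ]:
        df = pd.read_csv(PREPARED_DIR / csv_name)
        # replace = drop and recreate the table, giving a clean reload each run
        df.to_sql(table, conn, if_exists="replace", index=False)
```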
utils_logger ¶
Provide centralized logging for professional analytics projects.
This module configures project-wide logging to track events, debug issues, and maintain audit trails during data analysis workflows.
Module Information
- Filename: utils_logger.py
- Module: utils_logger
- Location: src/analytics_project/
Key Concepts
- Centralized logging configuration
- Log levels (DEBUG, INFO, WARNING, ERROR)
- File-based log persistence
- Colorized console output with Loguru
Professional Applications
- Production debugging and troubleshooting
- Audit trails for regulatory compliance
- Performance monitoring and optimization
- Error tracking in data pipelines
get_log_file_path ¶
get_log_file_path() -> pathlib.Path
Return the path to the active log file, or default path if not initialized.
Source code in src/analytics_project/utils_logger.py, lines 48–53.
init_logger ¶
init_logger(
level: str = 'INFO',
*,
log_dir: str | Path = project_root,
log_file_name: str = 'project.log',
) -> pathlib.Path
Initialize the logger and return the log file path.
Ensures the log folder exists and configures logging to write to a file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `level` | `str` | Logging level (e.g., "INFO", "DEBUG"). | `'INFO'` |
| `log_dir` | `str \| Path` | Directory where the log file will be written. | `project_root` |
| `log_file_name` | `str` | File name for the log file. | `'project.log'` |
Returns:

| Type | Description |
|---|---|
| `Path` | The resolved path to the log file. |
Source code in src/analytics_project/utils_logger.py, lines 56–111.
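A minimal sketch assuming Loguru (named under Key Concepts); the real implementation adds console formatting and other details this omits, and the `"."` default stands in for `project_root`:

```python
import pathlib
from pathlib import Path

from loguru import logger


def init_logger(
    level: str = "INFO",
    *,
    log_dir: str | Path = ".",  # the documented default is project_root
    log_file_name: str = "project.log",
) -> pathlib.Path:
    """Ensure the log folder exists and attach a file sink."""
    log_dir = Path(log_dir)
    log_dir.mkdir(parents=True, exist_ok=True)  # create the log folder if missing
    log_path = log_dir / log_file_name
    logger.add(log_path, level=level)  # Loguru writes to the file from here on
    return log_path.resolve()
```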
log_example ¶
log_example() -> None
Demonstrate logging behavior with example messages.
Source code in src/analytics_project/utils_logger.py, lines 114–118.
main ¶
main() -> None
Execute logger setup and demonstrate its usage.
Source code in src/analytics_project/utils_logger.py, lines 121–125.