
demo.py

Source: Notion | Last edited: 2024-11-06 | ID: 1362d2dc-3ef...


@Ron Maharik Based on our conversation on Lark (https://applink.larksuite.com/client/message/link/open?token=AmLs1qQllMAFZyucByDAwAw%3D), here’s the analysis.

Raw OHLCV data file used:

sample_data_paths = [
f"{sample_data_dir}/resampled_coinbasepro_BTC-15m.csv",
]

Feature used:

feature_set_path = "ohlcv_size84_indicators_v2"

Analyze the memory usage patterns from the output:

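The RSS/VMS/USS figures below are psutil readings. As a point of reference, a minimal snapshot helper might look like this (the name memory_snapshot_mb is just for illustration; the full script further down uses its own get_process_memory):

```python
import psutil

def memory_snapshot_mb() -> dict:
    """Read the current process's memory counters in MiB via psutil."""
    proc = psutil.Process()
    info = proc.memory_info()
    try:
        # USS (unique set size) needs the slower memory_full_info() call
        # and may be unavailable on some platforms or in some sandboxes.
        uss = proc.memory_full_info().uss / (1024 * 1024)
    except (AttributeError, psutil.AccessDenied):
        uss = 0.0
    return {
        "rss": info.rss / (1024 * 1024),  # pages currently resident in RAM
        "vms": info.vms / (1024 * 1024),  # total virtual address space reserved
        "uss": uss,                       # memory unique to this process
    }
```

Note that VMS counts reserved address space, not RAM, which is why it can read hundreds of GB while RSS stays small.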
  1. Initial Memory State:
  • RSS: 71.8 MB
  • VMS: 402,073.8 MB
  • USS: 53.2 MB
  2. After Loading CSV (10,752 rows × 6 columns):
  • Total DataFrame Memory: 1.11 MB
  • Memory breakdown per column:
    • date (object): 0.70 MB
    • numeric columns (float64): 0.08 MB each
  • Minimal memory increase after loading
  3. During Validation:
  • Major memory increase during the validate() function:
    • RSS increased by 649.5 MB (from 77.3 MB to 726.8 MB)
    • USS increased by 144.2 MB (from ~55 MB to 199.3 MB)
  • The memory profiler shows the spike occurs at line 543: is_valid = validate(feature_set_path, sample_data_path, logger)
  4. Memory Usage Pattern Analysis:
  • The bulk of memory usage comes from feature computation, not data loading
  • The USS/RSS ratio is 27.4%, indicating significant shared memory usage
  • Garbage collection had minimal impact (0.0 MB changes), suggesting the memory is actively used

Scaling Prediction:

For increasing CSV row counts, we can expect:

  1. Linear Data Loading Impact:
  • Current: 10,752 rows ≈ 1.11 MB
  • Memory per row ≈ 108 bytes
  • For 100,000 rows: expected raw data ≈ 10.3 MB
  • For 1,000,000 rows: expected raw data ≈ 103 MB
  2. Feature Computation Impact:
  • Current validation memory increase: 649.5 MB for 10,752 rows
  • Memory usage appears to scale linearly with input size
  • Rough scaling estimate:
    • 100,000 rows: ~6 GB additional memory
    • 1,000,000 rows: ~60 GB additional memory
  3. Bottlenecks:
  • Feature computation is the main memory consumer, not data loading
  • The validate() function creates multiple intermediate arrays
  • Memory usage is primarily from feature calculations and comparisons

Recommendations for Scaling:

  1. Consider implementing batch processing for large datasets
  2. Add memory-efficient validation options
  3. Optimize feature computation to reduce intermediate array copies
  4. Monitor the USS/RSS ratio for memory efficiency
  5. Consider implementing data streaming for very large datasets

Overall, memory usage will scale roughly linearly with input size, with feature computation, rather than raw data loading, being the dominant factor.
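Recommendations 1 and 5 (batch processing / streaming) could be sketched with pandas' chunked reader. This is only an illustration under assumed names: validate_chunk and chunk_size are hypothetical stand-ins, not part of ml_feature_set:

```python
import pandas as pd

def validate_in_chunks(csv_path: str, chunk_size: int = 50_000) -> list:
    """Stream the CSV in fixed-size chunks so peak memory is bounded by
    chunk_size rows plus their feature arrays, instead of the whole file."""
    results = []
    # chunksize makes read_csv return an iterator of DataFrames
    for chunk in pd.read_csv(csv_path, chunksize=chunk_size):
        results.append(validate_chunk(chunk))  # hypothetical per-chunk validation
    return results

def validate_chunk(df: pd.DataFrame) -> bool:
    # Placeholder for per-chunk validation work. A real implementation would
    # need to handle indicator warm-up windows that span chunk boundaries,
    # e.g. by overlapping consecutive chunks.
    return not df.empty
```

One caveat: the windowed indicators here (SMA deviations, z-scores with lookbacks up to 720 bars) mean consecutive chunks would need to overlap by at least the longest lookback, otherwise each chunk starts with a fresh run of NaN warm-up values.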
import gc
import logging
import os.path
import sys
import time
from datetime import datetime
from typing import Any, Dict

import numpy as np
import pandas as pd
import psutil

try:
    from memory_profiler import profile
    from tqdm import tqdm

    MEMORY_PROFILER_AVAILABLE = True
except ImportError:
    MEMORY_PROFILER_AVAILABLE = False

    def profile(func):
        # No-op fallback when memory_profiler is not installed
        return func

    tqdm = None

from ml_feature_set.validate_feature_set import validate


def get_process_memory() -> Dict[str, Any]:
    try:
        process = psutil.Process()
        memory_info = process.memory_info()
        memory_full_info = process.memory_full_info() if hasattr(process, 'memory_full_info') else None
        # Get system memory info
        system_memory = psutil.virtual_memory()
        return {
            'rss': memory_info.rss / (1024 * 1024),  # MB
            'vms': memory_info.vms / (1024 * 1024),  # MB
            'shared': getattr(memory_info, 'shared', 0) / (1024 * 1024),  # MB
            'data': getattr(memory_info, 'data', 0) / (1024 * 1024),  # MB
            'uss': memory_full_info.uss / (1024 * 1024) if memory_full_info else 0,  # MB
            'system_total': system_memory.total / (1024 * 1024),  # MB
            'system_available': system_memory.available / (1024 * 1024),  # MB
            'system_percent': system_memory.percent,  # Percentage used
        }
    except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.TimeoutExpired) as e:
        logging.error(f"Error getting memory info: {e}")
        return {k: 0 for k in ['rss', 'vms', 'shared', 'data', 'uss', 'system_total', 'system_available', 'system_percent']}


def log_memory_usage(logger: logging.Logger, stage: str) -> None:
    memory_stats = get_process_memory()
    logger.warning(f"\n{'=' * 20} Memory Usage at {stage} {'=' * 20}")
    logger.warning("Process Memory:")
    logger.warning(f"  RSS (Resident Set Size): {memory_stats['rss']:.1f} MB")
    logger.warning(f"  VMS (Virtual Memory Size): {memory_stats['vms']:.1f} MB")
    logger.warning(f"  USS (Unique Set Size): {memory_stats['uss']:.1f} MB")
    logger.warning(f"  Shared Memory: {memory_stats['shared']:.1f} MB")
    logger.warning(f"  Data Memory: {memory_stats['data']:.1f} MB")
    logger.warning("\nSystem Memory:")
    logger.warning(f"  Total: {memory_stats['system_total']:.1f} MB")
    logger.warning(f"  Available: {memory_stats['system_available']:.1f} MB")
    logger.warning(f"  Usage: {memory_stats['system_percent']}%")
    logger.warning('=' * 60)


def get_dataframe_memory_info(df: pd.DataFrame) -> Dict[str, Any]:
    memory_usage = df.memory_usage(deep=True)
    total_memory = memory_usage.sum() / 1024 / 1024  # Convert to MB
    per_column = {col: mem / 1024 / 1024 for col, mem in memory_usage.items()}
    # Get dtypes information
    dtypes_info = {col: str(dtype) for col, dtype in df.dtypes.items()}
    return {
        'total_mb': total_memory,
        'per_column_mb': per_column,
        'dtypes': dtypes_info,
        'num_rows': len(df),
        'num_cols': len(df.columns),
    }


@profile
def run_validation(feature_set_path: str, sample_data_path: str, logger: logging.Logger) -> bool:
    start_time = time.time()
    start_memory = get_process_memory()
    logger.warning(f"\n{'=' * 20} Starting Validation {'=' * 20}")
    log_memory_usage(logger, "Before validation")
    try:
        is_valid = validate(feature_set_path, sample_data_path, logger)
        end_time = time.time()
        end_memory = get_process_memory()
        execution_time = end_time - start_time
        memory_diff = {k: end_memory[k] - start_memory[k] for k in start_memory}
        logger.warning(f"\n{'=' * 20} Validation Complete {'=' * 20}")
        logger.warning(f"Execution time: {execution_time:.2f} seconds")
        logger.warning("\nMemory Changes:")
        for metric, change in memory_diff.items():
            if metric.startswith('system'):
                continue  # Skip system-wide metrics for diff
            logger.warning(f"  {metric.upper()}: {change:+.1f} MB")
        log_memory_usage(logger, "After validation")
        return is_valid
    except Exception as e:
        logger.critical(f"Validation failed: {e}")
        logger.critical("Stack trace:", exc_info=True)
        return False


def main() -> int:
    sample_data_dir = os.path.normpath(os.path.join(__file__, "..", "sample_data"))
    logger = logging.getLogger()
    # Initial memory snapshot
    logger.warning("\n=== Initial Memory State ===")
    log_memory_usage(logger, "startup")
    if not MEMORY_PROFILER_AVAILABLE:
        logger.warning("memory_profiler and/or tqdm not installed. Install with:")
        logger.warning("pip install memory_profiler tqdm")
    sample_data_paths = [
        f"{sample_data_dir}/resampled_coinbasepro_BTC-15m.csv",
    ]
    # Verify sample data paths exist
    for path in sample_data_paths:
        if not os.path.exists(path):
            logger.critical(f"Sample data file not found: {path}")
            return 1
    feature_set_path = "ohlcv_size84_indicators_v2"
    results = {}
    total_start_time = time.time()
    for i, data_path in enumerate(sample_data_paths, 1):
        logger.warning(f"\n{'=' * 20} Processing File {i}/{len(sample_data_paths)} {'=' * 20}")
        logger.warning(f"File: {os.path.basename(data_path)}")
        file_start_time = time.time()
        log_memory_usage(logger, f"Before loading {os.path.basename(data_path)}")
        try:
            df = pd.read_csv(data_path)
            df_memory_info = get_dataframe_memory_info(df)
            logger.warning("\nDataset Information:")
            logger.warning(f"Shape: {df_memory_info['num_rows']} rows × {df_memory_info['num_cols']} columns")
            logger.warning(f"Total memory: {df_memory_info['total_mb']:.2f} MB")
            logger.warning("\nColumn Information:")
            for col in df.columns:
                logger.warning(f"  {col}:")
                logger.warning(f"    Memory: {df_memory_info['per_column_mb'][col]:.2f} MB")
                logger.warning(f"    Type: {df_memory_info['dtypes'][col]}")
        except Exception as e:
            logger.critical(f"Error loading {data_path}: {e}")
            logger.critical("Stack trace:", exc_info=True)
            continue
        is_valid = run_validation(feature_set_path, data_path, logger)
        results[data_path] = is_valid
        file_execution_time = time.time() - file_start_time
        logger.warning(f"\nFile processing completed in {file_execution_time:.2f} seconds")
        # Force garbage collection and log memory
        gc.collect()
        log_memory_usage(logger, f"After processing {os.path.basename(data_path)}")
    total_execution_time = time.time() - total_start_time
    logger.warning("\nValidation Results Summary:")
    logger.warning('=' * 50)
    logger.warning(f"Total execution time: {total_execution_time:.2f} seconds")
    success_count = sum(1 for result in results.values() if result)
    if success_count < len(sample_data_paths):
        logger.warning(f"Only {success_count}/{len(sample_data_paths)} files validated successfully")
    # Enhanced final memory report
    logger.warning("\n" + "=" * 20 + " FINAL MEMORY REPORT " + "=" * 20)
    # Get current memory state
    final_memory = get_process_memory()
    # System memory summary
    logger.warning("\nSystem Memory Summary:")
    logger.warning(f"  Total System Memory: {final_memory['system_total']:.1f} MB")
    logger.warning(f"  Available Memory: {final_memory['system_available']:.1f} MB")
    logger.warning(f"  System Memory Usage: {final_memory['system_percent']}%")
    # Process memory summary
    logger.warning("\nProcess Memory Summary:")
    logger.warning(f"  Final RSS (Resident Set Size): {final_memory['rss']:.1f} MB")
    logger.warning(f"  Final USS (Unique Set Size): {final_memory['uss']:.1f} MB")
    logger.warning(f"  Final VMS (Virtual Memory Size): {final_memory['vms']:.1f} MB")
    # Memory efficiency metrics
    memory_efficiency = (final_memory['uss'] / final_memory['rss']) * 100 if final_memory['rss'] > 0 else 0
    logger.warning("\nMemory Efficiency Metrics:")
    logger.warning(f"  Memory Efficiency (USS/RSS): {memory_efficiency:.1f}%")
    logger.warning(f"  Shared Memory Usage: {final_memory['shared']:.1f} MB")
    # Force final garbage collection
    gc.collect()
    # Memory after garbage collection
    post_gc_memory = get_process_memory()
    memory_freed = {k: final_memory[k] - post_gc_memory[k] for k in ['rss', 'uss', 'vms']}
    logger.warning("\nGarbage Collection Impact:")
    for metric, change in memory_freed.items():
        logger.warning(f"  {metric.upper()}: {change:+.1f} MB")
    logger.warning("\nPer-File Memory Stats:")
    for path in sample_data_paths:
        if path in results:
            logger.warning(f"  {os.path.basename(path)}: {'PASS' if results[path] else 'FAIL'}")
    logger.warning("=" * 60)
    return 0 if all(results.values()) else 1


if __name__ == "__main__":
    sys.exit(main())
❯ LOG_LEVEL=DEBUG python -m ml_feature_set.demo
WARNING:root:
=== Initial Memory State ===
WARNING:root:
==================== Memory Usage at startup ====================
WARNING:root:Process Memory:
WARNING:root: RSS (Resident Set Size): 71.8 MB
WARNING:root: VMS (Virtual Memory Size): 402073.8 MB
WARNING:root: USS (Unique Set Size): 53.2 MB
WARNING:root: Shared Memory: 0.0 MB
WARNING:root: Data Memory: 0.0 MB
WARNING:root:
System Memory:
WARNING:root: Total: 36864.0 MB
WARNING:root: Available: 14400.5 MB
WARNING:root: Usage: 60.9%
WARNING:root:============================================================
WARNING:root:
==================== Processing File 1/1 ====================
WARNING:root:File: resampled_coinbasepro_BTC-15m.csv
WARNING:root:
==================== Memory Usage at Before loading resampled_coinbasepro_BTC-15m.csv ====================
WARNING:root:Process Memory:
WARNING:root: RSS (Resident Set Size): 71.8 MB
WARNING:root: VMS (Virtual Memory Size): 402073.8 MB
WARNING:root: USS (Unique Set Size): 53.2 MB
WARNING:root: Shared Memory: 0.0 MB
WARNING:root: Data Memory: 0.0 MB
WARNING:root:
System Memory:
WARNING:root: Total: 36864.0 MB
WARNING:root: Available: 14400.5 MB
WARNING:root: Usage: 60.9%
WARNING:root:============================================================
WARNING:root:
Dataset Information:
WARNING:root:Shape: 10752 rows × 6 columns
WARNING:root:Total memory: 1.11 MB
WARNING:root:
Column Information:
WARNING:root: date:
WARNING:root: Memory: 0.70 MB
WARNING:root: Type: object
WARNING:root: open:
WARNING:root: Memory: 0.08 MB
WARNING:root: Type: float64
WARNING:root: high:
WARNING:root: Memory: 0.08 MB
WARNING:root: Type: float64
WARNING:root: low:
WARNING:root: Memory: 0.08 MB
WARNING:root: Type: float64
WARNING:root: close:
WARNING:root: Memory: 0.08 MB
WARNING:root: Type: float64
WARNING:root: volume:
WARNING:root: Memory: 0.08 MB
WARNING:root: Type: float64
WARNING:root:
==================== Starting Validation ====================
WARNING:root:
==================== Memory Usage at Before validation ====================
WARNING:root:Process Memory:
WARNING:root: RSS (Resident Set Size): 77.3 MB
WARNING:root: VMS (Virtual Memory Size): 402211.8 MB
WARNING:root: USS (Unique Set Size): 55.0 MB
WARNING:root: Shared Memory: 0.0 MB
WARNING:root: Data Memory: 0.0 MB
WARNING:root:
System Memory:
WARNING:root: Total: 36864.0 MB
WARNING:root: Available: 14397.3 MB
WARNING:root: Usage: 60.9%
WARNING:root:============================================================
[23:57:56] DEBUG Logger 'ml_feature_set.utils.logger_setup' configured with level=0, show_path=None, rich_tracebacks=True logger_setup.py:50
[23:57:58] DEBUG matplotlib data path: /Users/terryli/.pyenv/versions/3.12.6/envs/eon-3.12.6/lib/python3.12/site-packages/matplotlib/mpl-data __init__.py:341
DEBUG CONFIGDIR=/Users/terryli/.matplotlib __init__.py:341
DEBUG interactive is False __init__.py:1509
DEBUG platform is darwin __init__.py:1510
[23:57:59] DEBUG CACHEDIR=/Users/terryli/.matplotlib __init__.py:341
DEBUG Using fontManager instance from /Users/terryli/.matplotlib/fontlist-v390.json font_manager.py:1580
[23:58:04] DEBUG Logger 'ml_feature_set.helpers.fetch_binance_ohlcv_parallel' configured with level=10, show_path=False, rich_tracebacks=True
[23:58:05] DEBUG GRANULARITY_MAP: {'1s': datetime.timedelta(seconds=1), '1m': datetime.timedelta(seconds=60), '3m': datetime.timedelta(seconds=180), '5m': datetime.timedelta(seconds=300), '15m': datetime.timedelta(seconds=900), '30m':
datetime.timedelta(seconds=1800), '1h': datetime.timedelta(seconds=3600), '2h': datetime.timedelta(seconds=7200), '4h': datetime.timedelta(seconds=14400), '6h': datetime.timedelta(seconds=21600), '8h': datetime.timedelta(seconds=28800), '12h':
datetime.timedelta(seconds=43200), '1d': datetime.timedelta(days=1)}
DEBUG Initializing DiskCache with cache directory: /Users/terryli/eon-features/ml-feature-set/ml_feature_set/helpers/.cache
DEBUG DiskCache initialized with size_limit=1073741824 bytes, ttl=86400 seconds
[23:58:07] INFO feature_constructor.py:63
Initializing feature_set ohlcv_size84_indicators_v2
INFO Training Data: date open high low close volume validate_feature_set.py:33
0 2022-01-13 15:15:00 43798.71 43802.42 43208.00 43312.84 670.47
1 2022-01-13 15:30:00 43317.02 43317.02 43016.00 43155.93 437.82
2 2022-01-13 15:45:00 43155.92 43272.73 42892.00 43221.74 658.67
3 2022-01-13 16:00:00 43219.05 43512.69 43085.05 43208.48 507.95
4 2022-01-13 16:15:00 43206.16 43390.00 43153.04 43347.01 409.68
... ... ... ... ... ... ...
3543 2022-02-19 13:00:00 39768.08 39879.78 39655.00 39876.78 124.85
3544 2022-02-19 13:15:00 39876.78 39997.05 39817.36 39892.16 96.25
3545 2022-02-19 13:30:00 39891.25 40000.00 39874.72 39985.71 43.04
3546 2022-02-19 13:45:00 39985.38 40086.57 39956.29 40037.16 65.76
3547 2022-02-19 14:00:00 40037.16 40037.16 39900.00 39917.43 57.62
[3548 rows x 6 columns]
INFO Validation Data: date open high low close volume validate_feature_set.py:34
2168 2022-02-05 05:15:00 41483.94 41582.23 41450.86 41548.65 101.98
2169 2022-02-05 05:30:00 41548.65 41590.57 41502.05 41574.28 94.70
2170 2022-02-05 05:45:00 41574.28 41692.65 41523.65 41557.95 105.46
2171 2022-02-05 06:00:00 41559.01 41665.41 41559.00 41662.78 77.20
2172 2022-02-05 06:15:00 41662.78 41710.28 41619.65 41692.34 95.67
... ... ... ... ... ... ...
7091 2022-03-28 12:00:00 47281.77 47407.23 47260.71 47282.97 123.97
7092 2022-03-28 12:15:00 47282.95 47320.00 47218.19 47285.31 125.53
7093 2022-03-28 12:30:00 47287.38 47295.00 47141.66 47180.19 127.24
7094 2022-03-28 12:45:00 47177.96 47263.52 47137.81 47227.47 111.59
7095 2022-03-28 13:00:00 47230.21 47319.49 47188.84 47319.49 128.31
[4928 rows x 6 columns]
[23:58:08] INFO Testing Data: date open high low close volume validate_feature_set.py:35
5716 2022-03-14 04:15:00 38677.29 38923.41 38582.15 38589.65 225.39
5717 2022-03-14 04:30:00 38589.65 38635.92 38415.84 38502.60 125.42
5718 2022-03-14 04:45:00 38502.60 38554.10 38445.94 38535.33 117.37
5719 2022-03-14 05:00:00 38532.33 38619.57 38457.67 38582.16 138.94
5720 2022-03-14 05:15:00 38582.28 38597.26 38533.57 38536.74 58.42
... ... ... ... ... ... ...
10747 2022-05-05 14:45:00 38378.25 38401.54 38090.00 38201.76 608.63
10748 2022-05-05 15:00:00 38197.33 38214.30 36509.00 37228.15 2227.20
10749 2022-05-05 15:15:00 37231.42 37386.71 36852.95 37096.57 1111.23
10750 2022-05-05 15:30:00 37100.32 37234.92 36918.60 37021.22 856.31
10751 2022-05-05 15:45:00 37021.22 37064.40 36580.00 36668.45 732.45
[5036 rows x 6 columns]
DEBUG resampled_df.shape (3548, 6) feature_constructor.py:140
[00:00:33] INFO Z-scores of SMA values: [ nan nan nan ... 0.99910406 1.47782574 0.80828614] indicators_bop.py:188
[00:00:34] INFO Z-scores of SMA values: [ nan nan nan ... 0.21182993 0.72705542 0.58042884] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -0.07629932 0.44672318 indicators_bop.py:188
0.13214487]
INFO Z-scores of SMA values: [ nan nan nan ... -0.18722701 -0.39543955 indicators_bop.py:188
-0.93212671]
INFO Z-scores of SMA values: [ nan nan nan ... 0.31011223 0.42945872 0.02778796] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... 0.04127346 0.23371412 0.18079464] indicators_bop.py:188
[00:00:35] INFO Z-scores of SMA values: [ nan nan nan ... -0.14713169 -0.0760005 indicators_bop.py:188
-0.37803282]
INFO Z-scores of SMA values: [ nan nan nan ... -0.60336291 -0.51717377 indicators_bop.py:188
-0.5534101 ]
INFO Z-scores of SMA values: [ nan nan nan ... -0.99745087 -0.92171845 indicators_bop.py:188
-1.05609645]
INFO Z-scores of SMA values: [ nan nan nan ... -0.48574726 -0.49578474 indicators_bop.py:188
-0.55471757]
[00:00:35] DEBUG feature_df.shape (3548, 84) feature_constructor.py:166
INFO [0_th feature (rocp)] mean: -1.645971252110757e-05, var: 1.3102159813231216e-05, max: 0.024266124767865167, min: -0.025543751136822005 feature_constructor.py:227
INFO [1_th feature (orocp)] mean: -1.8771553033498045e-05, var: 1.3079061669666858e-05, max: 0.02426785425329074, min: -0.025641048445048373 feature_constructor.py:227
INFO [2_th feature (hrocp)] mean: -1.9681571046148404e-05, var: 1.1321711159860723e-05, max: 0.030825589554148772, min: -0.022228036495655898 feature_constructor.py:227
INFO [3_th feature (lrocp)] mean: -1.6408888680549894e-05, var: 1.2062236625368765e-05, max: 0.023291730735120024, min: -0.0326644434299606 feature_constructor.py:227
INFO [4_th feature (minute_daily_sin)] mean: 0.0007132988326705042, var: 0.5001082889316196, max: 1.0, min: -1.0 feature_constructor.py:227
INFO [5_th feature (minute_daily_cos)] mean: 0.0008691571424067482, var: 0.49989044683901745, max: 1.0, min: -1.0 feature_constructor.py:227
INFO [6_th feature (minute_weekly_sin)] mean: -0.03895860225532153, var: 0.5045311336613739, max: 1.0, min: -1.0 feature_constructor.py:227
INFO [7_th feature (minute_weekly_cos)] mean: -0.02524825529566819, var: 0.49331361925346245, max: 1.0, min: -1.0 feature_constructor.py:227
INFO [8_th feature (run_lengths_0.1)] mean: 1.4906989853438557, var: 0.6540848552751825, max: 7, min: 1 feature_constructor.py:227
INFO [9_th feature (trk_high_0.1)] mean: 0.9112175873731679, var: 1.3261086640424318, max: 6, min: 0 feature_constructor.py:227
INFO [10_th feature (trk_low_0.1)] mean: 0.9131905298759865, var: 1.5691270245777351, max: 10, min: 0 feature_constructor.py:227
INFO [11_th feature (uihr_0.1)] mean: 0.5095828635851184, var: 0.24990816872550906, max: 1, min: 0 feature_constructor.py:227
INFO [12_th feature (uilr_0.1)] mean: 0.49013528748590757, var: 0.24990268744701435, max: 1, min: 0 feature_constructor.py:227
INFO [13_th feature (uihr_bound_0.1)] mean: 0.40107102593010147, var: 0.24021305808947732, max: 1, min: 0 feature_constructor.py:227
INFO [14_th feature (uilr_bound_0.1)] mean: 0.3931792559188275, var: 0.23858932863394464, max: 1, min: 0 feature_constructor.py:227
INFO [15_th feature (sma_deviation_0.1_3)] mean: -1.1360211462666743e-05, var: 9.42958045299316e-06, max: 0.020078296943044777, min: -0.021099771293751186 feature_constructor.py:227
INFO [16_th feature (sma_deviation_0.1_6)] mean: -6.384695393314522e-05, var: 2.5374774952788858e-05, max: 0.03302386305245169, min: -0.03291421587309657 feature_constructor.py:227
[00:00:36] INFO [17_th feature (sma_deviation_0.1_12)] mean: -0.00014946194320927843, var: 5.5594320697626424e-05, max: 0.04493279545329653, min: -0.036462106853262004 feature_constructor.py:227
INFO [18_th feature (sma_deviation_0.1_24)] mean: -0.000283933194216064, var: 0.0001249597098870781, max: 0.05781591869776211, min: -0.05248601211215737 feature_constructor.py:227
INFO [19_th feature (run_lengths_0.382)] mean: 1.1654453213077791, var: 0.2028984206291809, max: 5, min: 1 feature_constructor.py:227
INFO [20_th feature (trk_high_0.382)] mean: 0.9346110484780158, var: 1.573514589415699, max: 8, min: 0 feature_constructor.py:227
INFO [21_th feature (trk_low_0.382)] mean: 0.818489289740699, var: 1.2505938846090785, max: 7, min: 0 feature_constructor.py:227
INFO [22_th feature (uihr_0.382)] mean: 0.52423900789177, var: 0.2494124704964227, max: 1, min: 0 feature_constructor.py:227
INFO [23_th feature (uilr_0.382)] mean: 0.4746335963923337, var: 0.249356545568013, max: 1, min: 0 feature_constructor.py:227
INFO [24_th feature (uihr_bound_0.382)] mean: 0.34019165727170236, var: 0.22446129359443498, max: 1, min: 0 feature_constructor.py:227
INFO [25_th feature (uilr_bound_0.382)] mean: 0.31510710259301017, var: 0.21581461648844832, max: 1, min: 0 feature_constructor.py:227
INFO [26_th feature (sma_deviation_0.382_3)] mean: 9.189880774073078e-05, var: 2.0810240562911966e-05, max: 0.023234471688375984, min: -0.021099771293751186 feature_constructor.py:227
INFO [27_th feature (sma_deviation_0.382_6)] mean: 6.529824902119158e-05, var: 5.1997829634293946e-05, max: 0.03890462405816153, min: -0.03278033996175028 feature_constructor.py:227
INFO [28_th feature (sma_deviation_0.382_12)] mean: 3.9440580166126344e-05, var: 0.00012282458906708458, max: 0.05811550305433742, min: -0.04371594118542513 feature_constructor.py:227
INFO [29_th feature (sma_deviation_0.382_24)] mean: 0.00034817545332228357, var: 0.00030694925522985704, max: 0.07646636734407396, min: -0.07820859538721764 feature_constructor.py:227
INFO [30_th feature (run_lengths_0.618)] mean: 1.0944193912063134, var: 0.10974337766231256, max: 4, min: 1 feature_constructor.py:227
INFO [31_th feature (trk_high_0.618)] mean: 0.9247463359639233, var: 1.4918002456883785, max: 6, min: 0 feature_constructor.py:227
INFO [32_th feature (trk_low_0.618)] mean: 0.8444193912063134, var: 1.457756342713045, max: 9, min: 0 feature_constructor.py:227
INFO [33_th feature (uihr_0.618)] mean: 0.5152198421645998, var: 0.2497683564044847, max: 1, min: 0 feature_constructor.py:227
INFO [34_th feature (uilr_0.618)] mean: 0.48280721533258175, var: 0.24970440815537984, max: 1, min: 0 feature_constructor.py:227
INFO [35_th feature (uihr_bound_0.618)] mean: 0.31200676437429536, var: 0.21465854335897824, max: 1, min: 0 feature_constructor.py:227
INFO [36_th feature (uilr_bound_0.618)] mean: 0.3015783540022548, var: 0.21062885039954551, max: 1, min: 0 feature_constructor.py:227
INFO [37_th feature (sma_deviation_0.618_3)] mean: 6.889423710933039e-05, var: 3.5654953426791226e-05, max: 0.029349150169277374, min: -0.021099771293751186 feature_constructor.py:227
INFO [38_th feature (sma_deviation_0.618_6)] mean: 0.0001308407629578411, var: 8.632045247066769e-05, max: 0.04493279545329653, min: -0.03687569192626 feature_constructor.py:227
INFO [39_th feature (sma_deviation_0.618_12)] mean: 0.0006045417811757632, var: 0.00022843456974280257, max: 0.06893304611447036, min: -0.05646128997334363 feature_constructor.py:227
INFO [40_th feature (sma_deviation_0.618_24)] mean: 0.0011967762926283717, var: 0.0005917511069848126, max: 0.09633272298906705, min: -0.08157810918254735 feature_constructor.py:227
INFO [41_th feature (run_lengths_1.236)] mean: 1.0369222096956032, var: 0.03837744941653775, max: 3, min: 1 feature_constructor.py:227
INFO [42_th feature (trk_high_1.236)] mean: 1.02423900789177, var: 1.8049367095043143, max: 8, min: 0 feature_constructor.py:227
INFO [43_th feature (trk_low_1.236)] mean: 0.766065388951522, var: 1.2592543046307114, max: 8, min: 0 feature_constructor.py:227
INFO [44_th feature (uihr_1.236)] mean: 0.5312852311161218, var: 0.24902123431401088, max: 1, min: 0 feature_constructor.py:227
INFO [45_th feature (uilr_1.236)] mean: 0.4664599774520857, var: 0.2488750668874853, max: 1, min: 0 feature_constructor.py:227
INFO [46_th feature (uihr_bound_1.236)] mean: 0.2798759864712514, var: 0.20154541866799533, max: 1, min: 0 feature_constructor.py:227
INFO [47_th feature (uilr_bound_1.236)] mean: 0.29284103720405863, var: 0.20708516413330977, max: 1, min: 0 feature_constructor.py:227
INFO [48_th feature (sma_deviation_1.236_3)] mean: 0.00040499362911153633, var: 8.899085060638149e-05, max: 0.03189163631276427, min: -0.036631310617849897 feature_constructor.py:227
INFO [49_th feature (sma_deviation_1.236_6)] mean: 0.000723037078657492, var: 0.0002285587225902986, max: 0.055196486122783925, min: -0.05591657735602777 feature_constructor.py:227
INFO [50_th feature (sma_deviation_1.236_12)] mean: 0.0011850509473283724, var: 0.0006366807833819777, max: 0.08575868639848491, min: -0.08150038825186942 feature_constructor.py:227
INFO [51_th feature (sma_deviation_1.236_24)] mean: 0.0030349023897918454, var: 0.0012932263186017916, max: 0.09845436953613704, min: -0.11133927869996832 feature_constructor.py:227
INFO [52_th feature (run_lengths_1.786)] mean: 1.0217023675310033, var: 0.02179507263250078, max: 3, min: 1 feature_constructor.py:227
INFO [53_th feature (trk_high_1.786)] mean: 0.9036076662908681, var: 1.682929487562423, max: 6, min: 0 feature_constructor.py:227
INFO [54_th feature (trk_low_1.786)] mean: 0.8365276211950394, var: 1.2596352932055026, max: 6, min: 0 feature_constructor.py:227
INFO [55_th feature (uihr_1.786)] mean: 0.4887260428410372, var: 0.2498728978899779, max: 1, min: 0 feature_constructor.py:227
INFO [56_th feature (uilr_1.786)] mean: 0.5059188275084555, var: 0.2499649674809251, max: 1, min: 0 feature_constructor.py:227
INFO [57_th feature (uihr_bound_1.786)] mean: 0.2790304396843292, var: 0.2011724534138991, max: 1, min: 0 feature_constructor.py:227
INFO [58_th feature (uilr_bound_1.786)] mean: 0.3221533258173619, var: 0.21837056048217457, max: 1, min: 0 feature_constructor.py:227
INFO [59_th feature (sma_deviation_1.786_3)] mean: 0.0003709571755637479, var: 0.00015684536395808968, max: 0.04389384617646097, min: -0.04254167903529469 feature_constructor.py:227
INFO [60_th feature (sma_deviation_1.786_6)] mean: 0.0009453199660646318, var: 0.0004724766956243843, max: 0.08100594070840668, min: -0.07820859538721764 feature_constructor.py:227
INFO [61_th feature (sma_deviation_1.786_12)] mean: 0.0024559995083856144, var: 0.0011102090117338494, max: 0.1004623531699759, min: -0.08250030728249183 feature_constructor.py:227
INFO [62_th feature (sma_deviation_1.786_24)] mean: 0.0048626714369890914, var: 0.0021812985506935913, max: 0.13034482166069195, min: -0.14658283710996484 feature_constructor.py:227
INFO [63_th feature (run_lengths_2.0)] mean: 1.0186020293122886, var: 0.018819691675701508, max: 3, min: 1 feature_constructor.py:227
INFO [64_th feature (trk_high_2.0)] mean: 1.4484216459977453, var: 5.78003414915941, max: 12, min: 0 feature_constructor.py:227
INFO [65_th feature (trk_low_2.0)] mean: 0.9447576099210823, var: 1.8267115252380304, max: 6, min: 0 feature_constructor.py:227
INFO [66_th feature (uihr_2.0)] mean: 0.5214205186020293, var: 0.24954116138282004, max: 1, min: 0 feature_constructor.py:227
INFO [67_th feature (uilr_2.0)] mean: 0.47322435174746336, var: 0.24928306466065642, max: 1, min: 0 feature_constructor.py:227
INFO [68_th feature (uihr_bound_2.0)] mean: 0.24492671927846674, var: 0.1849376214619539, max: 1, min: 0 feature_constructor.py:227
INFO [69_th feature (uilr_bound_2.0)] mean: 0.2564825253664036, var: 0.19069923954807572, max: 1, min: 0 feature_constructor.py:227
INFO [70_th feature (sma_deviation_2.0_3)] mean: 0.0006353923845658394, var: 0.00019959974620816355, max: 0.05811550305433742, min: -0.04847198281887538 feature_constructor.py:227
INFO [71_th feature (sma_deviation_2.0_6)] mean: 0.0015368810605286382, var: 0.0006329815429096847, max: 0.08090432206333902, min: -0.07757306332496912 feature_constructor.py:227
INFO [72_th feature (sma_deviation_2.0_12)] mean: 0.003658252870069286, var: 0.0013276162673703095, max: 0.10072450883233883, min: -0.0944645583478224 feature_constructor.py:227
INFO [73_th feature (sma_deviation_2.0_24)] mean: 0.005620521043931555, var: 0.0028496900041978623, max: 0.14658968156182975, min: -0.16764630418852747 feature_constructor.py:227
INFO [74_th feature (zscore_sma_bop_5)] mean: 2.803719926900057e-17, var: 0.9988726042841038, max: 3.237176554729404, min: -3.3341035908885353 feature_constructor.py:227
INFO [75_th feature (zscore_sma_bop_10)] mean: 2.0026570906428978e-17, var: 0.9974633596392334, max: 3.53254528172385, min: -3.9621452464344724 feature_constructor.py:227
INFO [76_th feature (zscore_sma_bop_20)] mean: -1.6021256725143183e-17, var: 0.9946448703494924, max: 3.1034332775105558, min: -3.8097874409825336 feature_constructor.py:227
INFO [77_th feature (zscore_sma_bop_30)] mean: -1.6021256725143183e-17, var: 0.9918263810597522, max: 2.8442384628832955, min: -4.332488328362715 feature_constructor.py:227
INFO [78_th feature (zscore_sma_bop_60)] mean: -1.6021256725143183e-17, var: 0.9833709131905298, max: 2.692706697347662, min: -3.975430385816447 feature_constructor.py:227
INFO [79_th feature (zscore_sma_bop_90)] mean: 3.2042513450286365e-17, var: 0.9749154453213079, max: 2.573094469069334, min: -3.146088191650204 feature_constructor.py:227
INFO [80_th feature (zscore_sma_bop_120)] mean: 1.6021256725143183e-17, var: 0.9664599774520857, max: 2.424167945091546, min: -3.14438680118214 feature_constructor.py:227
INFO [81_th feature (zscore_sma_bop_180)] mean: 3.2042513450286365e-17, var: 0.9495490417136414, max: 2.292700830421823, min: -2.973012565193964 feature_constructor.py:227
INFO [82_th feature (zscore_sma_bop_360)] mean: 3.2042513450286365e-17, var: 0.8988162344983089, max: 2.2790306552093833, min: -2.862006831527506 feature_constructor.py:227
INFO [83_th feature (zscore_sma_bop_720)] mean: 0.0, var: 0.7973506200676436, max: 1.5321141115264654, min: -2.1609446728256922 feature_constructor.py:227
INFO feature_constructor.py:228
[00:00:39] INFO moving_features.shape (2168, 30, 84) feature_constructor.py:171
INFO dates length 2168 feature_constructor.py:172
DEBUG resampled_df.shape (4928, 6) feature_constructor.py:140
[00:04:47] INFO Z-scores of SMA values: [ nan nan nan ... -1.3640264 -0.47995694 indicators_bop.py:188
0.3768709 ]
INFO Z-scores of SMA values: [ nan nan nan ... -0.00340564 -0.26531447 indicators_bop.py:188
0.52083976]
[00:04:48] INFO Z-scores of SMA values: [ nan nan nan ... 0.95673823 1.08644614 1.91250466] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... 0.20710846 0.65999745 1.2641416 ] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... 1.29228887 1.37853247 1.54836628] indicators_bop.py:188
[00:04:49] INFO Z-scores of SMA values: [ nan nan nan ... 1.04356306 1.08032177 1.48637076] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... 0.88645534 1.00244978 1.05898983] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... 1.35436358 1.56265643 1.72662622] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... 1.45867409 1.4294402 1.49749422] indicators_bop.py:188
[00:04:50] INFO Z-scores of SMA values: [ nan nan nan ... 2.0499271 2.00413577 2.07866923] indicators_bop.py:188
[00:04:50] DEBUG feature_df.shape (4928, 84) feature_constructor.py:166
INFO [0_th feature (rocp)] mean: 3.2839123398421223e-05, var: 1.2917817725245939e-05, max: 0.04729238251161218, min: -0.033216994946436665 feature_constructor.py:227
INFO [1_th feature (orocp)] mean: 3.27773391701896e-05, var: 1.2928370714446734e-05, max: 0.047546643691161766, min: -0.03326698334595749 feature_constructor.py:227
INFO [2_th feature (hrocp)] mean: 3.288645279980793e-05, var: 1.3400831584510102e-05, max: 0.06230050106243414, min: -0.028323862353437885 feature_constructor.py:227
INFO [3_th feature (lrocp)] mean: 3.2011200059776534e-05, var: 1.1390285148375055e-05, max: 0.037497024487832084, min: -0.034400464828667994 feature_constructor.py:227
INFO [4_th feature (minute_daily_sin)] mean: 0.003671605188701063, var: 0.4998986514781037, max: 1.0, min: -1.0 feature_constructor.py:227
INFO [5_th feature (minute_daily_cos)] mean: -0.003920132962928295, var: 0.5000725003947877, max: 1.0, min: -1.0 feature_constructor.py:227
INFO [6_th feature (minute_weekly_sin)] mean: -0.019850463089246043, var: 0.49544942352421245, max: 1.0, min: -1.0 feature_constructor.py:227
INFO [7_th feature (minute_weekly_cos)] mean: 0.031922141328883054, var: 0.5031375124839088, max: 1.0, min: -1.0 feature_constructor.py:227
INFO [8_th feature (run_lengths_0.1)] mean: 1.468547077922078, var: 0.6621600643421108, max: 7, min: 1 feature_constructor.py:227
INFO [9_th feature (trk_high_0.1)] mean: 0.9135551948051948, var: 1.4052708021483384, max: 8, min: 0 feature_constructor.py:227
INFO [10_th feature (trk_low_0.1)] mean: 0.8587662337662337, var: 1.3550530232754259, max: 8, min: 0 feature_constructor.py:227
INFO [11_th feature (uihr_0.1)] mean: 0.5117694805194806, var: 0.2498614793283016, max: 1, min: 0 feature_constructor.py:227
INFO [12_th feature (uilr_0.1)] mean: 0.4876217532467532, var: 0.24984677900731567, max: 1, min: 0 feature_constructor.py:227
INFO [13_th feature (uihr_bound_0.1)] mean: 0.4107142857142857, var: 0.2420280612244898, max: 1, min: 0 feature_constructor.py:227
INFO [14_th feature (uilr_bound_0.1)] mean: 0.3944805194805195, var: 0.23886563923089899, max: 1, min: 0 feature_constructor.py:227
INFO [15_th feature (sma_deviation_0.1_3)] mean: 2.332873432316905e-05, var: 9.582328479368493e-06, max: 0.0418984942960648, min: -0.02631712204135203 feature_constructor.py:227
INFO [16_th feature (sma_deviation_0.1_6)] mean: 5.07416926421665e-05, var: 2.6208563959363696e-05, max: 0.046710037889132264, min: -0.02914959263555866 feature_constructor.py:227
INFO [17_th feature (sma_deviation_0.1_12)] mean: 0.0001628287909034288, var: 5.953168268246483e-05, max: 0.0634547685913714, min: -0.0412625399443162 feature_constructor.py:227
INFO [18_th feature (sma_deviation_0.1_24)] mean: 0.00047351272778344325, var: 0.00012818205374411144, max: 0.07559493703455025, min: -0.05400340124475177 feature_constructor.py:227
INFO [19_th feature (run_lengths_0.382)] mean: 1.1540178571428572, var: 0.18183856461618736, max: 6, min: 1 feature_constructor.py:227
INFO [20_th feature (trk_high_0.382)] mean: 0.9389204545454546, var: 1.571553380036157, max: 11, min: 0 feature_constructor.py:227
INFO [21_th feature (trk_low_0.382)] mean: 0.8125, var: 1.329697646103896, max: 9, min: 0 feature_constructor.py:227
INFO [22_th feature (uihr_0.382)] mean: 0.5237418831168831, var: 0.24943632298606425, max: 1, min: 0 feature_constructor.py:227
INFO [23_th feature (uilr_0.382)] mean: 0.4750405844155844, var: 0.24937702757368443, max: 1, min: 0 feature_constructor.py:227
INFO [24_th feature (uihr_bound_0.382)] mean: 0.3319805194805195, var: 0.22176945416596391, max: 1, min: 0 feature_constructor.py:227
INFO [25_th feature (uilr_bound_0.382)] mean: 0.3236607142857143, var: 0.21890445631377553, max: 1, min: 0 feature_constructor.py:227
INFO [26_th feature (sma_deviation_0.382_3)] mean: 0.0001463288965893051, var: 2.131549965851181e-05, max: 0.04578317270901591, min: -0.030196501879550305 feature_constructor.py:227
INFO [27_th feature (sma_deviation_0.382_6)] mean: 0.00022699761273387188, var: 5.767784559558008e-05, max: 0.052976600009880824, min: -0.04225341740143524 feature_constructor.py:227
INFO [28_th feature (sma_deviation_0.382_12)] mean: 0.000730579635960519, var: 0.0001394391773415162, max: 0.06670043798392002, min: -0.05400340124475177 feature_constructor.py:227
INFO [29_th feature (sma_deviation_0.382_24)] mean: 0.0014645231748294872, var: 0.0003238586920398472, max: 0.09476666971195499, min: -0.0773582070565265 feature_constructor.py:227
INFO [30_th feature (run_lengths_0.618)] mean: 1.0876623376623376, var: 0.10554583403609376, max: 6, min: 1 feature_constructor.py:227
INFO [31_th feature (trk_high_0.618)] mean: 0.966112012987013, var: 1.549785045894649, max: 8, min: 0 feature_constructor.py:227
INFO [32_th feature (trk_low_0.618)] mean: 0.801948051948052, var: 1.4262786726260752, max: 9, min: 0 feature_constructor.py:227
INFO [33_th feature (uihr_0.618)] mean: 0.5322646103896104, var: 0.24895899491640663, max: 1, min: 0 feature_constructor.py:227
INFO [34_th feature (uilr_0.618)] mean: 0.46570616883116883, var: 0.24882393314376375, max: 1, min: 0 feature_constructor.py:227
INFO [35_th feature (uihr_bound_0.618)] mean: 0.3096590909090909, var: 0.21377033832644626, max: 1, min: 0 feature_constructor.py:227
INFO [36_th feature (uilr_bound_0.618)] mean: 0.3015422077922078, var: 0.2106145047120087, max: 1, min: 0 feature_constructor.py:227
INFO [37_th feature (sma_deviation_0.618_3)] mean: 0.00017759250990563222, var: 3.679224941400795e-05, max: 0.04578317270901591, min: -0.030196501879550305 feature_constructor.py:227
INFO [38_th feature (sma_deviation_0.618_6)] mean: 0.0005063276183203708, var: 9.937520687338201e-05, max: 0.06592210251435425, min: -0.049138420004497856 feature_constructor.py:227
INFO [39_th feature (sma_deviation_0.618_12)] mean: 0.0011728256527779545, var: 0.00024263367601929961, max: 0.08026667217521972, min: -0.06278331999862721 feature_constructor.py:227
INFO [40_th feature (sma_deviation_0.618_24)] mean: 0.0019003081535093574, var: 0.0005651701553879339, max: 0.11112083452343167, min: -0.08852517172074412 feature_constructor.py:227
INFO [41_th feature (run_lengths_1.236)] mean: 1.0359172077922079, var: 0.03665638275583995, max: 3, min: 1 feature_constructor.py:227
INFO [42_th feature (trk_high_1.236)] mean: 1.1000405844155845, var: 1.5985551931581001, max: 7, min: 0 feature_constructor.py:227
INFO [43_th feature (trk_low_1.236)] mean: 0.7495941558441559, var: 1.478287172952859, max: 9, min: 0 feature_constructor.py:227
INFO [44_th feature (uihr_1.236)] mean: 0.567775974025974, var: 0.24540641734483049, max: 1, min: 0 feature_constructor.py:227
INFO [45_th feature (uilr_1.236)] mean: 0.42288961038961037, var: 0.24405398781413395, max: 1, min: 0 feature_constructor.py:227
INFO [46_th feature (uihr_bound_1.236)] mean: 0.2621753246753247, var: 0.19343942380671278, max: 1, min: 0 feature_constructor.py:227
INFO [47_th feature (uilr_bound_1.236)] mean: 0.26521915584415584, var: 0.1948779552174692, max: 1, min: 0 feature_constructor.py:227
INFO [48_th feature (sma_deviation_1.236_3)] mean: 0.0008823924335931757, var: 9.65927235068117e-05, max: 0.04965747472688901, min: -0.03624559445791177 feature_constructor.py:227
INFO [49_th feature (sma_deviation_1.236_6)] mean: 0.0017427807584968816, var: 0.00027893543956821544, max: 0.07479812020155736, min: -0.05900912316543884 feature_constructor.py:227
INFO [50_th feature (sma_deviation_1.236_12)] mean: 0.0022648468896500026, var: 0.0006960448854222781, max: 0.11855315867420531, min: -0.08809383666756258 feature_constructor.py:227
INFO [51_th feature (sma_deviation_1.236_24)] mean: 0.0024882605797614333, var: 0.0014870678430232683, max: 0.11931848604180273, min: -0.09617729539927479 feature_constructor.py:227
INFO [52_th feature (run_lengths_1.786)] mean: 1.0184659090909092, var: 0.01812491929235537, max: 2, min: 1 feature_constructor.py:227
INFO [53_th feature (trk_high_1.786)] mean: 1.0075081168831168, var: 2.0626465502587914, max: 9, min: 0 feature_constructor.py:227
INFO [54_th feature (trk_low_1.786)] mean: 0.8743912337662337, var: 1.7413247105725034, max: 7, min: 0 feature_constructor.py:227
INFO [55_th feature (uihr_1.786)] mean: 0.5142045454545454, var: 0.24979823088842978, max: 1, min: 0 feature_constructor.py:227
INFO [56_th feature (uilr_1.786)] mean: 0.47625811688311687, var: 0.2494363229860643, max: 1, min: 0 feature_constructor.py:227
INFO [57_th feature (uihr_bound_1.786)] mean: 0.27374188311688313, var: 0.1988072645445058, max: 1, min: 0 feature_constructor.py:227
INFO [58_th feature (uilr_bound_1.786)] mean: 0.27637987012987014, var: 0.19999403751686626, max: 1, min: 0 feature_constructor.py:227
INFO [59_th feature (sma_deviation_1.786_3)] mean: 0.0010219686266787673, var: 0.00018819329101253042, max: 0.05092335537281128, min: -0.049138420004497856 feature_constructor.py:227
INFO [60_th feature (sma_deviation_1.786_6)] mean: 0.0015982181478605656, var: 0.0005200034106914636, max: 0.08349618824670989, min: -0.08268766328944392 feature_constructor.py:227
INFO [61_th feature (sma_deviation_1.786_12)] mean: 0.0030301090585056045, var: 0.001340424046001609, max: 0.11974823444371509, min: -0.0892590267836034 feature_constructor.py:227
INFO [62_th feature (sma_deviation_1.786_24)] mean: 0.006431837421929269, var: 0.002287860871988186, max: 0.1304204126186899, min: -0.11002816807369588 feature_constructor.py:227
INFO [63_th feature (run_lengths_2.0)] mean: 1.0148133116883118, var: 0.014593877485136617, max: 2, min: 1 feature_constructor.py:227
INFO [64_th feature (trk_high_2.0)] mean: 1.1499594155844155, var: 2.3027962645866715, max: 7, min: 0 feature_constructor.py:227
INFO [65_th feature (trk_low_2.0)] mean: 0.796875, var: 1.6285860135957793, max: 6, min: 0 feature_constructor.py:227
INFO [66_th feature (uihr_2.0)] mean: 0.5529626623376623, var: 0.24719495639810676, max: 1, min: 0 feature_constructor.py:227
INFO [67_th feature (uilr_2.0)] mean: 0.41294642857142855, var: 0.24242167570153056, max: 1, min: 0 feature_constructor.py:227
INFO [68_th feature (uihr_bound_2.0)] mean: 0.31392045454545453, var: 0.2153744027634297, max: 1, min: 0 feature_constructor.py:227
INFO [69_th feature (uilr_bound_2.0)] mean: 0.2203733766233766, var: 0.17180895149898798, max: 1, min: 0 feature_constructor.py:227
INFO [70_th feature (sma_deviation_2.0_3)] mean: 0.0016240489793025198, var: 0.0002350776088889082, max: 0.058799468082515995, min: -0.05437886680177965 feature_constructor.py:227
INFO [71_th feature (sma_deviation_2.0_6)] mean: 0.002727028307051793, var: 0.0007063871713295819, max: 0.11068531792642812, min: -0.08713002417968349 feature_constructor.py:227
INFO [72_th feature (sma_deviation_2.0_12)] mean: 0.0036344724994676176, var: 0.0016336396542746534, max: 0.12023325328872887, min: -0.09617729539927479 feature_constructor.py:227
INFO [73_th feature (sma_deviation_2.0_24)] mean: 0.006539931711174295, var: 0.002676817683305747, max: 0.13546040903963827, min: -0.12072567140143058 feature_constructor.py:227
[00:04:51] INFO [74_th feature (zscore_sma_bop_5)] mean: 1.586032892321652e-17, var: 0.9991883116883117, max: 3.8288741864902143, min: -3.249633330282404 feature_constructor.py:227
INFO [75_th feature (zscore_sma_bop_10)] mean: 1.7302177007145296e-17, var: 0.9981737012987013, max: 3.82607180204382, min: -3.235983550551086 feature_constructor.py:227
INFO [76_th feature (zscore_sma_bop_20)] mean: -1.7302177007145296e-17, var: 0.9961444805194806, max: 4.532225108926233, min: -3.3954106930758816 feature_constructor.py:227
INFO [77_th feature (zscore_sma_bop_30)] mean: 0.0, var: 0.9941152597402597, max: 4.2161147605747304, min: -3.6297744695238814 feature_constructor.py:227
INFO [78_th feature (zscore_sma_bop_60)] mean: -1.7302177007145296e-17, var: 0.9880275974025974, max: 3.7435365910469396, min: -3.128361267285731 feature_constructor.py:227
INFO [79_th feature (zscore_sma_bop_90)] mean: 2.883696167857549e-17, var: 0.981939935064935, max: 3.2332017286566312, min: -3.041131766901641 feature_constructor.py:227
INFO [80_th feature (zscore_sma_bop_120)] mean: 0.0, var: 0.9758522727272729, max: 3.017452425739117, min: -3.0105696005865044 feature_constructor.py:227
INFO [81_th feature (zscore_sma_bop_180)] mean: -4.6139138685720794e-17, var: 0.9636769480519478, max: 2.8698208993362186, min: -2.95938399574756 feature_constructor.py:227
INFO [82_th feature (zscore_sma_bop_360)] mean: -2.3069569342860397e-17, var: 0.927150974025974, max: 1.9373606904098655, min: -2.4499537718304585 feature_constructor.py:227
INFO [83_th feature (zscore_sma_bop_720)] mean: 2.3069569342860397e-17, var: 0.8540990259740258, max: 2.25011148786217, min: -2.7436082707817904 feature_constructor.py:227
INFO feature_constructor.py:228
[00:04:55] INFO moving_features.shape (3548, 30, 84) feature_constructor.py:171
INFO dates length 3548 feature_constructor.py:172
DEBUG resampled_df.shape (5036, 6) feature_constructor.py:140
[00:09:12] INFO Z-scores of SMA values: [ nan nan nan ... -2.59307067 -2.21699286 -2.36592462] indicators_bop.py:188
[00:09:13] INFO Z-scores of SMA values: [ nan nan nan ... -4.20467104 -3.98521375 -4.14429343] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -3.35800606 -3.68090679 -3.64885243] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -2.75205995 -2.97076512 -3.08141113] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -2.13136059 -2.19208339 -2.36407293] indicators_bop.py:188
[00:09:14] INFO Z-scores of SMA values: [ nan nan nan ... -2.25515923 -2.13953297 -2.33191177] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -1.97792585 -2.1279685 -2.16937062] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -0.63057373 -0.69602426 -0.9398828 ] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -1.64069846 -1.63025702 -1.81668117] indicators_bop.py:188
[00:09:15] INFO Z-scores of SMA values: [ nan nan nan ... -2.2895538 -2.22862384 -2.2602846 ] indicators_bop.py:188
[00:09:15] DEBUG feature_df.shape (5036, 84) feature_constructor.py:166
INFO [0_th feature (rocp)] mean: -6.468574901917964e-06, var: 7.352344251151097e-06, max: 0.04729238251161218, min: -0.033216994946436665 feature_constructor.py:227
INFO [1_th feature (orocp)] mean: -5.014966244852763e-06, var: 7.35843655291026e-06, max: 0.047546643691161766, min: -0.03326698334595749 feature_constructor.py:227
INFO [2_th feature (hrocp)] mean: -6.169630462694168e-06, var: 7.1239521119481885e-06, max: 0.04720338658005074, min: -0.027970175304085344 feature_constructor.py:227
INFO [3_th feature (lrocp)] mean: -6.921891751529885e-06, var: 7.304067313470737e-06, max: 0.037497024487832084, min: -0.04150695720661591 feature_constructor.py:227
INFO [4_th feature (minute_daily_sin)] mean: 0.0033949756693011203, var: 0.5000154825466473, max: 1.0, min: -1.0 feature_constructor.py:227
INFO [5_th feature (minute_daily_cos)] mean: -0.005724980578654659, var: 0.49994021619093165, max: 1.0, min: -1.0 feature_constructor.py:227
INFO [6_th feature (minute_weekly_sin)] mean: 0.04157128994114426, var: 0.4983555661665533, max: 1.0, min: -1.0 feature_constructor.py:227
INFO [7_th feature (minute_weekly_cos)] mean: -0.006743797350955131, var: 0.4998707828833654, max: 1.0, min: -1.0 feature_constructor.py:227
INFO [8_th feature (run_lengths_0.1)] mean: 1.4169976171564733, var: 0.6338969428060774, max: 12, min: 1 feature_constructor.py:227
INFO [9_th feature (trk_high_0.1)] mean: 0.9293089753772835, var: 1.447345908505622, max: 8, min: 0 feature_constructor.py:227
INFO [10_th feature (trk_low_0.1)] mean: 0.8625893566322478, var: 1.4557013174721039, max: 12, min: 0 feature_constructor.py:227
INFO [11_th feature (uihr_0.1)] mean: 0.5164813343923749, var: 0.2497283656166467, max: 1, min: 0 feature_constructor.py:227
INFO [12_th feature (uilr_0.1)] mean: 0.48332009531374104, var: 0.24972178077965732, max: 1, min: 0 feature_constructor.py:227
INFO [13_th feature (uihr_bound_0.1)] mean: 0.3868149324861001, var: 0.2371891404918739, max: 1, min: 0 feature_constructor.py:227
INFO [14_th feature (uilr_bound_0.1)] mean: 0.375099285146942, var: 0.23439981142919514, max: 1, min: 0 feature_constructor.py:227
INFO [15_th feature (sma_deviation_0.1_3)] mean: 1.9918478615814377e-06, var: 5.960993480387455e-06, max: 0.0418984942960648, min: -0.02631712204135203 feature_constructor.py:227
INFO [16_th feature (sma_deviation_0.1_6)] mean: -2.16838513597315e-05, var: 1.5689256062787868e-05, max: 0.046710037889132264, min: -0.029985650560956 feature_constructor.py:227
INFO [17_th feature (sma_deviation_0.1_12)] mean: -8.42048860865936e-06, var: 3.503020300657573e-05, max: 0.04852798514199898, min: -0.046778690245834274 feature_constructor.py:227
INFO [18_th feature (sma_deviation_0.1_24)] mean: 7.985281347264589e-05, var: 7.173829977765986e-05, max: 0.04721280046052803, min: -0.06332368759048061 feature_constructor.py:227
INFO [19_th feature (run_lengths_0.382)] mean: 1.1151707704527403, var: 0.13367771110750806, max: 5, min: 1 feature_constructor.py:227
INFO [20_th feature (trk_high_0.382)] mean: 0.9195790309769658, var: 1.4230400134125638, max: 7, min: 0 feature_constructor.py:227
INFO [21_th feature (trk_low_0.382)] mean: 0.8026211278792693, var: 1.244202819918982, max: 11, min: 0 feature_constructor.py:227
INFO [22_th feature (uihr_0.382)] mean: 0.5240270055599683, var: 0.24942270300382124, max: 1, min: 0 feature_constructor.py:227
INFO [23_th feature (uilr_0.382)] mean: 0.4743844320889595, var: 0.24934384268059495, max: 1, min: 0 feature_constructor.py:227
INFO [24_th feature (uihr_bound_0.382)] mean: 0.32247815726767276, var: 0.21848599535291888, max: 1, min: 0 feature_constructor.py:227
INFO [25_th feature (uilr_bound_0.382)] mean: 0.3109610802223987, var: 0.2142642868093176, max: 1, min: 0 feature_constructor.py:227
INFO [26_th feature (sma_deviation_0.382_3)] mean: 0.00011660629090337512, var: 1.585960624944283e-05, max: 0.04578317270901591, min: -0.030196501879550305 feature_constructor.py:227
INFO [27_th feature (sma_deviation_0.382_6)] mean: 0.00014308719984766678, var: 4.098999146494891e-05, max: 0.04785913562347494, min: -0.032968431257025727 feature_constructor.py:227
INFO [28_th feature (sma_deviation_0.382_12)] mean: 0.0003643545144149187, var: 9.721852458893904e-05, max: 0.049368500925753823, min: -0.06656992542733514 feature_constructor.py:227
INFO [29_th feature (sma_deviation_0.382_24)] mean: 0.000670661707399787, var: 0.00021763753992413222, max: 0.06035296142570201, min: -0.06514417277453544 feature_constructor.py:227
INFO [30_th feature (run_lengths_0.618)] mean: 1.0589753772835584, var: 0.06026296921103716, max: 4, min: 1 feature_constructor.py:227
INFO [31_th feature (trk_high_0.618)] mean: 0.971803018268467, var: 1.4503566379257586, max: 6, min: 0 feature_constructor.py:227
INFO [32_th feature (trk_low_0.618)] mean: 0.7551628276409849, var: 1.2948998742020124, max: 9, min: 0 feature_constructor.py:227
INFO [33_th feature (uihr_0.618)] mean: 0.5442811755361397, var: 0.2480391774931376, max: 1, min: 0 feature_constructor.py:227
INFO [34_th feature (uilr_0.618)] mean: 0.45353455123113584, var: 0.24784096207070805, max: 1, min: 0 feature_constructor.py:227
INFO [35_th feature (uihr_bound_0.618)] mean: 0.30559968228752976, var: 0.2122085164732907, max: 1, min: 0 feature_constructor.py:227
INFO [36_th feature (uilr_bound_0.618)] mean: 0.2972597299444003, var: 0.20889638289778253, max: 1, min: 0 feature_constructor.py:227
INFO [37_th feature (sma_deviation_0.618_3)] mean: -1.638830503057705e-05, var: 2.871220073731214e-05, max: 0.04578317270901591, min: -0.030196501879550305 feature_constructor.py:227
INFO [38_th feature (sma_deviation_0.618_6)] mean: 9.707788529638914e-05, var: 7.612722206740057e-05, max: 0.047946242169178026, min: -0.06047952479013304 feature_constructor.py:227
INFO [39_th feature (sma_deviation_0.618_12)] mean: 0.00029542326244455226, var: 0.00018646110198323071, max: 0.05295711810108939, min: -0.06919024240559069 feature_constructor.py:227
INFO [40_th feature (sma_deviation_0.618_24)] mean: 0.000564492588861091, var: 0.0003580581325293861, max: 0.06132751183882941, min: -0.06012338132083256 feature_constructor.py:227
INFO [41_th feature (run_lengths_1.236)] mean: 1.0248212867355044, var: 0.024205190460298242, max: 2, min: 1 feature_constructor.py:227
INFO [42_th feature (trk_high_1.236)] mean: 1.1721604447974583, var: 2.3839827034075856, max: 7, min: 0 feature_constructor.py:227
INFO [43_th feature (trk_low_1.236)] mean: 1.0528196981731532, var: 2.458687442471394, max: 7, min: 0 feature_constructor.py:227
INFO [44_th feature (uihr_1.236)] mean: 0.5212470214455918, var: 0.2495485640796906, max: 1, min: 0 feature_constructor.py:227
INFO [45_th feature (uilr_1.236)] mean: 0.4761715647339158, var: 0.24943220567277005, max: 1, min: 0 feature_constructor.py:227
INFO [46_th feature (uihr_bound_1.236)] mean: 0.23590150913423352, var: 0.18025198712242463, max: 1, min: 0 feature_constructor.py:227
INFO [47_th feature (uilr_bound_1.236)] mean: 0.2237887212073074, var: 0.17370732946770542, max: 1, min: 0 feature_constructor.py:227
INFO [48_th feature (sma_deviation_1.236_3)] mean: 0.00035608803139069105, var: 9.589425106113523e-05, max: 0.047111622072849205, min: -0.03415000086105783 feature_constructor.py:227
INFO [49_th feature (sma_deviation_1.236_6)] mean: 0.0009665196967629282, var: 0.0002652676358968766, max: 0.058799468082515995, min: -0.07039583746881434 feature_constructor.py:227
INFO [50_th feature (sma_deviation_1.236_12)] mean: 0.0010030907480735342, var: 0.0004954466893251735, max: 0.06990518709485419, min: -0.06833366044246368 feature_constructor.py:227
INFO [51_th feature (sma_deviation_1.236_24)] mean: 0.0010670414173973497, var: 0.0009361971843732896, max: 0.09131887010279807, min: -0.07883940108002087 feature_constructor.py:227
INFO [52_th feature (run_lengths_1.786)] mean: 1.0133042096902303, var: 0.013524348282516797, max: 3, min: 1 feature_constructor.py:227
INFO [53_th feature (trk_high_1.786)] mean: 0.9021048451151708, var: 1.2753649103736653, max: 4, min: 0 feature_constructor.py:227
INFO [54_th feature (trk_low_1.786)] mean: 1.0921366163621922, var: 1.95775389396504, max: 6, min: 0 feature_constructor.py:227
INFO [55_th feature (uihr_1.786)] mean: 0.49066719618745036, var: 0.24991289877299647, max: 1, min: 0 feature_constructor.py:227
INFO [56_th feature (uilr_1.786)] mean: 0.5051628276409849, var: 0.24997334521074951, max: 1, min: 0 feature_constructor.py:227
INFO [57_th feature (uihr_bound_1.786)] mean: 0.23689436060365368, var: 0.1807754225178398, max: 1, min: 0 feature_constructor.py:227
INFO [58_th feature (uilr_bound_1.786)] mean: 0.20095313741064336, var: 0.16057097397546244, max: 1, min: 0 feature_constructor.py:227
INFO [59_th feature (sma_deviation_1.786_3)] mean: 0.00017909683948686352, var: 0.0001822753214858376, max: 0.05092335537281128, min: -0.04569484996870447 feature_constructor.py:227
INFO [60_th feature (sma_deviation_1.786_6)] mean: 0.0007467496056423934, var: 0.00043435049397725007, max: 0.06343403197090192, min: -0.0666321554198634 feature_constructor.py:227
[00:09:16] INFO [61_th feature (sma_deviation_1.786_12)] mean: 0.0017238365766146493, var: 0.0008878144451330428, max: 0.09093014300247129, min: -0.07821655744842158 feature_constructor.py:227
INFO [62_th feature (sma_deviation_1.786_24)] mean: 0.0013518542648670903, var: 0.0020149950585647038, max: 0.12560404936062197, min: -0.11275978959073354 feature_constructor.py:227
INFO [63_th feature (run_lengths_2.0)] mean: 1.01131850675139, var: 0.011190398156308732, max: 2, min: 1 feature_constructor.py:227
INFO [64_th feature (trk_high_2.0)] mean: 1.0355440826052422, var: 2.0259407484538645, max: 7, min: 0 feature_constructor.py:227
INFO [65_th feature (trk_low_2.0)] mean: 1.1002779984114377, var: 1.9643287551235553, max: 6, min: 0 feature_constructor.py:227
INFO [66_th feature (uihr_2.0)] mean: 0.48848292295472595, var: 0.24986735693633327, max: 1, min: 0 feature_constructor.py:227
INFO [67_th feature (uilr_2.0)] mean: 0.49682287529785546, var: 0.24998990587862702, max: 1, min: 0 feature_constructor.py:227
INFO [68_th feature (uihr_bound_2.0)] mean: 0.22855440826052423, var: 0.17631729072520586, max: 1, min: 0 feature_constructor.py:227
INFO [69_th feature (uilr_bound_2.0)] mean: 0.17791898332009531, var: 0.14626381869443894, max: 1, min: 0 feature_constructor.py:227
INFO [70_th feature (sma_deviation_2.0_3)] mean: 0.0006381743076329999, var: 0.00021937780577889308, max: 0.058799468082515995, min: -0.07034762650722802 feature_constructor.py:227
INFO [71_th feature (sma_deviation_2.0_6)] mean: 0.0021172428101556076, var: 0.0005405554238058241, max: 0.07575877703870157, min: -0.0666062928816517 feature_constructor.py:227
INFO [72_th feature (sma_deviation_2.0_12)] mean: 0.003018095992715889, var: 0.0010742335400715077, max: 0.1015617374240085, min: -0.07883940108002087 feature_constructor.py:227
INFO [73_th feature (sma_deviation_2.0_24)] mean: 0.0019842069191884455, var: 0.0025073881803872084, max: 0.13319230672598953, min: -0.11323368939178119 feature_constructor.py:227
INFO [74_th feature (zscore_sma_bop_5)] mean: -2.3985755575698378e-17, var: 0.9992057188244641, max: 3.1563098706289927, min: -3.2668655521319807 feature_constructor.py:227
INFO [75_th feature (zscore_sma_bop_10)] mean: -1.6931121582845915e-17, var: 0.9982128673550437, max: 3.5787777925665867, min: -4.351332730012573 feature_constructor.py:227
INFO [76_th feature (zscore_sma_bop_20)] mean: 2.2574828777127884e-17, var: 0.9962271644162032, max: 3.173037211200701, min: -3.788536878413478 feature_constructor.py:227
INFO [77_th feature (zscore_sma_bop_30)] mean: 8.465560791422958e-18, var: 0.994241461477363, max: 2.9938031812804824, min: -3.836983256238234 feature_constructor.py:227
INFO [78_th feature (zscore_sma_bop_60)] mean: 1.1287414388563942e-17, var: 0.9882843526608418, max: 2.557981176131879, min: -3.204637556463508 feature_constructor.py:227
INFO [79_th feature (zscore_sma_bop_90)] mean: -2.2574828777127884e-17, var: 0.9823272438443206, max: 2.5192135335510817, min: -2.7971828485248302 feature_constructor.py:227
INFO [80_th feature (zscore_sma_bop_120)] mean: 0.0, var: 0.9763701350277998, max: 2.763263131802383, min: -2.902777598943311 feature_constructor.py:227
INFO [81_th feature (zscore_sma_bop_180)] mean: 3.386224316569183e-17, var: 0.9644559173947578, max: 2.7682858932686445, min: -2.790286501717149 feature_constructor.py:227
INFO [82_th feature (zscore_sma_bop_360)] mean: -2.2574828777127884e-17, var: 0.9287132644956314, max: 2.464289322957429, min: -2.802906241288221 feature_constructor.py:227
INFO [83_th feature (zscore_sma_bop_720)] mean: 0.0, var: 0.857227958697379, max: 2.1367773194194526, min: -2.3026896330747038 feature_constructor.py:227
INFO feature_constructor.py:228
[00:09:21] INFO moving_features.shape (3656, 30, 84) feature_constructor.py:171
INFO dates length 3656 feature_constructor.py:172
INFO (2168, 30, 84) validate_feature_set.py:41
INFO (3548, 30, 84) validate_feature_set.py:42
INFO (3656, 30, 84) validate_feature_set.py:43
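The three shapes logged above are (N, 30, 84) window tensors built from (rows, 84) feature frames. One plausible way to build such a tensor is a sliding window over the time axis; the sketch below is an assumption about the mechanism, not the project's code, and the smaller N values in the log suggest NaN warm-up rows are dropped before windowing:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)
feature_df = rng.normal(size=(5036, 84))  # rows x features, as in feature_df.shape
lookback = 30  # illustrative; the real lookback comes from the feature-set config

# Slide a 30-step window along the time axis; sliding_window_view returns
# (rows - lookback + 1, 84, 30), so move the window axis into the middle.
windows = sliding_window_view(feature_df, window_shape=lookback, axis=0)
moving_features = windows.transpose(0, 2, 1)  # -> (N, 30, 84)
```

`sliding_window_view` produces a zero-copy view, so the window tensor itself adds little memory; materializing it (e.g. via `.copy()` or downstream ops) is where a `lookback`-fold blow-up would appear.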
[00:24:40] INFO Z-scores of SMA values: [ nan nan nan ... -2.55858916 -2.18610782 -2.33361533] indicators_bop.py:188
[00:24:41] INFO Z-scores of SMA values: [ nan nan nan ... -4.19420836 -3.97447764 -4.13375553] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -3.29978329 -3.61933631 -3.58761427] indicators_bop.py:188
[00:24:42] INFO Z-scores of SMA values: [ nan nan nan ... -2.66611562 -2.8803025 -2.98866264] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -2.02195843 -2.08087032 -2.24773067] indicators_bop.py:188
[00:24:43] INFO Z-scores of SMA values: [ nan nan nan ... -2.02198158 -1.91527092 -2.09281597] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -1.67230012 -1.8046901 -1.84122121] indicators_bop.py:188
[00:24:44] INFO Z-scores of SMA values: [ nan nan nan ... -0.44532398 -0.50025682 -0.70492803] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -1.25535587 -1.24665676 -1.40197289] indicators_bop.py:188
[00:24:45] INFO Z-scores of SMA values: [ nan nan nan ... -1.74138756 -1.69117031 -1.71726447] indicators_bop.py:188
[00:25:19] INFO Z-scores of SMA values: [ nan nan nan ... -2.46648547 -2.09905271 -2.24456092] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -3.81773974 -3.6136232 -3.76158272] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -2.86195946 -3.14915097 -3.12064146] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -2.21423149 -2.40252264 -2.49778176] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -1.59695357 -1.6481748 -1.79325235] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -1.73156855 -1.62992746 -1.79903776] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -1.4981852 -1.6355407 -1.67344197] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -0.18740975 -0.25300435 -0.4973996 ] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -1.18178655 -1.16997184 -1.38091458] indicators_bop.py:188
INFO Z-scores of SMA values: [ nan nan nan ... -2.6872562 -2.58360005 -2.63746242] indicators_bop.py:188
[00:25:19] ERROR Feature mismatch detected between full_data and pred_data (possible lookback_length issue): validate_feature_set.py:53
ERROR zscore_sma_bop_5: full=-2.333615333679634, pred=-2.244560917520696 validate_feature_set.py:55
ERROR zscore_sma_bop_10: full=-4.133755526073797, pred=-3.7615827201505203 validate_feature_set.py:55
ERROR zscore_sma_bop_20: full=-3.587614271830727, pred=-3.1206414555270423 validate_feature_set.py:55
ERROR zscore_sma_bop_30: full=-2.988662640709326, pred=-2.4977817630041024 validate_feature_set.py:55
ERROR zscore_sma_bop_60: full=-2.247730674536672, pred=-1.7932523492631383 validate_feature_set.py:55
[00:25:20] ERROR zscore_sma_bop_90: full=-2.0928159696010242, pred=-1.7990377581220496 validate_feature_set.py:55
ERROR zscore_sma_bop_120: full=-1.8412212064183198, pred=-1.673441974971348 validate_feature_set.py:55
ERROR zscore_sma_bop_180: full=-0.7049280340023957, pred=-0.49739960119478566 validate_feature_set.py:55
ERROR zscore_sma_bop_360: full=-1.401972886832164, pred=-1.3809145782301107 validate_feature_set.py:55
ERROR zscore_sma_bop_720: full=-1.717264472893876, pred=-2.637462423560629 validate_feature_set.py:55
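The errors above show the last row of the zscore_sma_bop_* features diverging when recomputed from a shorter slice — i.e. the prediction slice does not carry enough history for the longer SMA windows. A comparison of this kind can be sketched as follows (the function name and tolerance are illustrative; the sample values are abridged from the log):

```python
import numpy as np

def report_feature_mismatches(full_row, pred_row, names, atol=1e-6):
    """Return (name, full, pred) for features where the two paths disagree."""
    return [
        (name, f, p)
        for name, f, p in zip(names, full_row, pred_row)
        if not np.isclose(f, p, atol=atol, equal_nan=True)
    ]

names = ["rocp", "zscore_sma_bop_5", "zscore_sma_bop_720"]
full_row = np.array([0.001, -2.3336, -1.7173])  # computed from full history
pred_row = np.array([0.001, -2.2446, -2.6375])  # computed from a short slice
bad = report_feature_mismatches(full_row, pred_row, names)
```

Features whose rolling windows fit entirely inside the slice should agree to numerical tolerance; anything flagged here points to a lookback_length shorter than the largest window plus its warm-up.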
WARNING demo.py:93
==================== Validation Complete ====================
WARNING Execution time: 1645.21 seconds demo.py:94
WARNING demo.py:95
Memory Changes:
WARNING RSS: +649.5 MB demo.py:99
WARNING VMS: +996.5 MB demo.py:99
WARNING SHARED: +0.0 MB demo.py:99
WARNING DATA: +0.0 MB demo.py:99
WARNING USS: +144.2 MB demo.py:99
WARNING demo.py:47
==================== Memory Usage at After validation ====================
WARNING Process Memory: demo.py:48
WARNING RSS (Resident Set Size): 726.8 MB demo.py:49
WARNING VMS (Virtual Memory Size): 403208.2 MB demo.py:50
WARNING USS (Unique Set Size): 199.3 MB demo.py:51
WARNING Shared Memory: 0.0 MB demo.py:52
WARNING Data Memory: 0.0 MB demo.py:53
WARNING demo.py:54
System Memory:
WARNING Total: 36864.0 MB demo.py:55
WARNING Available: 15499.4 MB demo.py:56
WARNING Usage: 58.0% demo.py:57
WARNING ============================================================ demo.py:58
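The process-memory figures above (RSS, VMS, USS) are the kind psutil exposes per process. A get_process_memory-style helper could look like the following sketch — the name mirrors the helper in the profiler trace, but the body is an assumption, not the project's implementation:

```python
import os

import psutil

def get_process_memory() -> dict:
    """Snapshot per-process memory in MB: RSS, VMS, and USS."""
    info = psutil.Process(os.getpid()).memory_full_info()
    mb = 1024 ** 2
    return {
        "rss": info.rss / mb,  # resident set size: pages currently in RAM
        "vms": info.vms / mb,  # virtual memory size: total address space
        "uss": info.uss / mb,  # unique set size: pages private to this process
    }

snapshot = get_process_memory()
```

USS is the most honest per-process figure here: the low USS/RSS ratio in the log indicates much of the resident memory is shared (e.g. loaded libraries), so USS rather than RSS is the better basis for scaling estimates.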
Filename: /Users/terryli/eon-features/ml-feature-set/ml_feature_set/demo.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
76 77.2 MiB 77.2 MiB 1 @profile
77 def run_validation(feature_set_path: str, sample_data_path: str, logger: logging.Logger) -> bool:
78 77.2 MiB 0.0 MiB 1 start_time = time.time()
79 77.3 MiB 0.0 MiB 1 start_memory = get_process_memory()
80
81 77.3 MiB 0.0 MiB 1 logger.warning(f"\n{'='*20} Starting Validation {'='*20}")
82 77.3 MiB 0.0 MiB 1 log_memory_usage(logger, "Before validation")
83
84 77.3 MiB 0.0 MiB 1 try:
85 726.8 MiB 649.5 MiB 1 is_valid = validate(feature_set_path, sample_data_path, logger)
86
87 726.8 MiB 0.0 MiB 1 end_time = time.time()
88 726.8 MiB 0.0 MiB 1 end_memory = get_process_memory()
89
90 726.8 MiB 0.0 MiB 1 execution_time = end_time - start_time
91 726.8 MiB 0.0 MiB 9 memory_diff = {k: end_memory[k] - start_memory[k] for k in start_memory}
92
93 726.8 MiB 0.0 MiB 1 logger.warning(f"\n{'='*20} Validation Complete {'='*20}")
94 726.8 MiB 0.0 MiB 1 logger.warning(f"Execution time: {execution_time:.2f} seconds")
95 726.8 MiB 0.0 MiB 1 logger.warning("\nMemory Changes:")
96 726.8 MiB 0.0 MiB 9 for metric, change in memory_diff.items():
97 726.8 MiB 0.0 MiB 8 if metric.startswith('system'):
98 726.8 MiB 0.0 MiB 3 continue # Skip system-wide metrics for diff
99 726.8 MiB 0.0 MiB 5 logger.warning(f" {metric.upper()}: {change:+.1f} MB")
100
101 726.8 MiB 0.0 MiB 1 log_memory_usage(logger, "After validation")
102 726.8 MiB 0.0 MiB 1 return is_valid
103
104 except Exception as e:
105 logger.critical(f"Validation failed: {str(e)}")
106 logger.critical("Stack trace:", exc_info=True)
107 return False
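The profiler table pinpoints the 649.5 MiB jump at the `validate()` call (profiler line 85); everything after it is flat. The memory-diff report that follows (profiler lines 91–99) can be reproduced with a small sketch. The start snapshot values below are illustrative assumptions, chosen only so the deltas match the logged +649.5/+996.5/+144.2 MB:

```python
# Illustrative before/after snapshots in MB; "system_*" keys stand in
# for the system-wide counters that the report skips.
start = {"rss": 77.3, "vms": 402211.7, "uss": 55.1, "system_total": 36864.0}
end = {"rss": 726.8, "vms": 403208.2, "uss": 199.3, "system_total": 36864.0}

# Same per-metric delta as profiler line 91.
memory_diff = {k: end[k] - start[k] for k in start}

# Same filtering/printing as profiler lines 96-99.
for metric, change in memory_diff.items():
    if metric.startswith("system"):
        continue  # system-wide metrics are not per-process deltas
    print(f"  {metric.upper()}: {change:+.1f} MB")
```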
WARNING File processing completed in 1645.41 seconds demo.py:165
WARNING ==================== Memory Usage at After processing resampled_coinbasepro_BTC-15m.csv ==================== demo.py:47
WARNING Process Memory: demo.py:48
WARNING RSS (Resident Set Size): 726.8 MB demo.py:49
WARNING VMS (Virtual Memory Size): 403208.2 MB demo.py:50
WARNING USS (Unique Set Size): 199.2 MB demo.py:51
WARNING Shared Memory: 0.0 MB demo.py:52
WARNING Data Memory: 0.0 MB demo.py:53
WARNING System Memory: demo.py:54
WARNING Total: 36864.0 MB demo.py:55
WARNING Available: 15499.4 MB demo.py:56
WARNING Usage: 58.0% demo.py:57
WARNING ============================================================ demo.py:58
WARNING Validation Results Summary: demo.py:174
WARNING ================================================== demo.py:175
WARNING Total execution time: 1645.43 seconds demo.py:176
WARNING Only 0/1 files validated successfully demo.py:180
WARNING ==================== FINAL MEMORY REPORT ==================== demo.py:183
WARNING System Memory Summary: demo.py:189
WARNING Total System Memory: 36864.0 MB demo.py:190
WARNING Available Memory: 15499.4 MB demo.py:191
WARNING System Memory Usage: 58.0% demo.py:192
WARNING Process Memory Summary: demo.py:195
WARNING Final RSS (Resident Set Size): 726.8 MB demo.py:196
WARNING Final USS (Unique Set Size): 199.2 MB demo.py:197
WARNING Final VMS (Virtual Memory Size): 403208.2 MB demo.py:198
WARNING Memory Efficiency Metrics: demo.py:202
WARNING Memory Efficiency (USS/RSS): 27.4% demo.py:203
WARNING Shared Memory Usage: 0.0 MB demo.py:204
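The 27.4% efficiency figure is simply the ratio of unique to resident memory; checking it against the reported final values:

```python
# Final USS and RSS from the report above, in MB.
rss_mb, uss_mb = 726.8, 199.2
efficiency_pct = uss_mb / rss_mb * 100
print(f"Memory Efficiency (USS/RSS): {efficiency_pct:.1f}%")  # 27.4%
```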
WARNING Garbage Collection Impact: demo.py:214
WARNING RSS: +0.0 MB demo.py:217
WARNING USS: +0.0 MB demo.py:217
WARNING VMS: +0.0 MB demo.py:217
WARNING Per-File Memory Stats: demo.py:219
WARNING resampled_coinbasepro_BTC-15m.csv: ✗ demo.py:222
WARNING ============================================================ demo.py:224
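Given the measured +649.5 MB RSS for 10,752 rows, the scaling prediction in the analysis above follows from a simple linear extrapolation (linearity itself is an assumption; feature computation with long lookback windows may not scale exactly linearly):

```python
# Extrapolate validation memory linearly from the measured run.
rows_measured = 10_752
mem_measured_mb = 649.5  # RSS increase during validate()
mb_per_row = mem_measured_mb / rows_measured

for rows in (100_000, 1_000_000):
    print(f"{rows:>9,} rows: ~{mb_per_row * rows / 1024:.1f} GB")
```

This reproduces the ~6 GB and ~60 GB estimates quoted in the analysis.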