How Neural Networks Learn Your Spending Patterns

Whistl's neural networks don't just track transactions—they learn your unique behavioural patterns, predict impulses before they happen, and adapt to your changing habits. This technical deep dive explains exactly how machine learning models process 56 input features to forecast spending urges with 84% accuracy.

What Is a Neural Network?

A neural network is a machine learning model inspired by the human brain. It consists of layers of interconnected nodes (neurons) that process information:

  • Input layer: Receives raw data (time, location, biometrics, transactions)
  • Hidden layers: Process patterns through weighted connections
  • Output layer: Produces predictions (impulse likelihood 0.0-1.0)

Unlike traditional algorithms with fixed rules, neural networks learn from data—discovering patterns humans might miss.

The 56-Feature Input Vector

Whistl's neural network processes 56 distinct features across five categories. Each feature is normalised to a 0-1 scale before entering the network.
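Whistl's exact preprocessing code isn't published, but min-max scaling is the standard way to map a raw feature onto a 0-1 range. A minimal sketch (the feature ranges below are illustrative):

```python
def normalise(value, lo, hi):
    """Min-max scale a raw feature into [0, 1], clamping out-of-range values."""
    if hi == lo:
        return 0.0
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

# Illustrative ranges: hour of day (0-23), days since payday (0-30)
print(normalise(23, 0, 23))   # 1.0 -> late night
print(normalise(15, 0, 30))   # 0.5 -> mid pay cycle
```

Clamping matters because live inputs can drift outside the range seen during training.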

Category 1: Temporal Features (8 inputs)

Time-based patterns are powerful predictors of impulse behaviour:

Feature                   Description                       Predictive Power
Hour of day               Current hour (0-23)               High - late night = reduced inhibition
Day of week               Monday-Sunday encoding            Medium - weekends often higher risk
Days since payday         0-30 days                         High - fresh funds increase spending
Days until payday         0-30 days                         Medium - financial stress indicator
Time since last impulse   Hours since last detected urge    Very high - recent urges predict near-term risk
Seasonal indicator        Spring/Summer/Autumn/Winter       Low - seasonal mood effects
Holiday proximity         Days to nearest holiday           Medium - celebration spending triggers
Weekend flag              Binary: weekend vs weekday        Medium - routine disruption

Category 2: Location Features (6 inputs)

Physical location strongly correlates with impulse behaviour:

  • GPS coordinates: Latitude/longitude normalised to grid
  • Distance to gambling venues: Meters to nearest casino, TAB, betting shop
  • Distance to shopping centres: Meters to nearest retail complex
  • Home/away status: Binary based on geofence
  • Venue density: Number of risky venues within 1km
  • Location history match: Similarity to past impulse locations

Research shows proximity to gambling venues increases impulse likelihood by 340% (University of Sydney, 2024).

Category 3: Biometric Features (8 inputs)

Physiological state directly affects impulse control:

  • Heart rate variability (HRV): Lower HRV = reduced self-control
  • Resting heart rate: Elevated RHR indicates stress
  • Sleep duration: Hours slept last night
  • Sleep quality score: 0-100 from Oura/Apple Health
  • Oura readiness score: Composite recovery metric
  • Stress level: Self-reported or HRV-derived
  • Activity level: Steps/movement today
  • Recovery status: Training readiness from wearables

Studies show sleep deprivation reduces prefrontal cortex activity by 22%, impairing decision-making (Nature Neuroscience, 2023).

Category 4: Financial Features (18 inputs)

Transaction data reveals spending patterns and vulnerability:

Feature Group          Specific Inputs
Balance metrics        Current balance, protected floor, days to overdraft
Velocity metrics       7-day spending rate, 30-day spending rate, category velocity
Budget ratios          Category budget utilisation (gambling, shopping, dining)
Debt indicators        BNPL active plans, credit utilisation, recent large transactions
Income timing          Subscription renewals, bill deadlines, payday countdown
Goal progress          Savings goal percentage, investment changes
Transaction patterns   Cash withdrawal frequency, ATM usage, online vs in-person ratio

Category 5: Behavioural & Context Features (16 inputs)

Contextual signals complete the picture:

  • Calendar stress events: Meetings, deadlines, social obligations
  • Screen time patterns: Total usage, app-specific time
  • Social media spikes: Unusual usage increases
  • App session duration: Time in Whistl and other finance apps
  • DNS query bursts: Gambling/shopping domain searches
  • Weather conditions: Rain, temperature, seasonal affective indicators
  • Mood check-in scores: Self-reported emotional state
  • Journal sentiment: AI-analysed emotional tone
  • Intervention history: Recent blocks, bypass attempts
  • Partner interaction: Recent accountability check-ins
  • Goal engagement: Dream board interaction frequency
  • Alternative action success: Historical coping strategy effectiveness
  • Cooldown compliance: Timer completion rate
  • Urge intensity: Self-reported craving strength

Network Architecture

Whistl uses a feedforward neural network with the following structure:

Layer Configuration

Input Layer:     56 neurons (one per feature)
Hidden Layer 1:  128 neurons, ReLU activation
Hidden Layer 2:  64 neurons, ReLU activation
Hidden Layer 3:  32 neurons, ReLU activation
Output Layer:    1 neuron, Sigmoid activation (0.0-1.0)

Total parameters: ~17,700 (17,440 weights + 225 biases)
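The layer configuration above can be sketched in pure Python. Weights here are random, not trained, and the production model runs via Core ML or TensorFlow Lite rather than Python; this only shows the shape of the computation:

```python
import math
import random

LAYER_SIZES = [56, 128, 64, 32, 1]   # input -> three hidden layers -> output

def relu(vec):
    return [max(0.0, v) for v in vec]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def init_layers(sizes, rng):
    """One (weights, biases) pair per layer, with small random weights."""
    layers = []
    for n_in, n_out in zip(sizes, sizes[1:]):
        weights = [[rng.gauss(0.0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
        biases = [0.0] * n_out
        layers.append((weights, biases))
    return layers

def forward(features, layers):
    """Propagate a 56-feature vector to a single impulse-risk score."""
    x = features
    for i, (weights, biases) in enumerate(layers):
        x = [sum(w * xi for w, xi in zip(row, x)) + b
             for row, b in zip(weights, biases)]
        if i < len(layers) - 1:
            x = relu(x)              # ReLU in the hidden layers
    return sigmoid(x[0])             # sigmoid squashes the output to (0, 1)

layers = init_layers(LAYER_SIZES, random.Random(0))
n_params = sum(len(w) * len(w[0]) + len(b) for w, b in layers)
score = forward([0.5] * 56, layers)
print(n_params)              # 17665 parameters for the 56-128-64-32-1 stack
print(0.0 < score < 1.0)     # True
```

Counting parameters this way (fan-in times fan-out per layer, plus one bias per neuron) is also how the total above is derived.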

Activation Functions

ReLU (Rectified Linear Unit): Used in hidden layers for efficient gradient flow

f(x) = max(0, x)

Sigmoid: Used in output layer to produce probability score

f(x) = 1 / (1 + e^(-x))

Training Process

The neural network learns through supervised training on historical data:

Step 1: Data Collection

Training data consists of timestamped feature vectors paired with outcomes:

  • Positive examples: Feature states that preceded actual impulses
  • Negative examples: Feature states that did not lead to impulses

Dataset: 2.3 million data points from 10,000+ users over 12 months.

Step 2: Forward Pass

For each training example, the network:

  1. Receives 56 input features
  2. Propagates through hidden layers with weighted connections
  3. Produces output prediction (0.0-1.0)

Step 3: Loss Calculation

Binary cross-entropy loss measures prediction error:

Loss = -[y × log(prediction) + (1-y) × log(1-prediction)]

Where:
y = actual outcome (1 = impulse occurred, 0 = no impulse)
prediction = network output (0.0-1.0)
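In code, the same loss, with a small epsilon to keep the logarithm defined at the boundaries:

```python
import math

def bce_loss(y, prediction, eps=1e-7):
    """Binary cross-entropy for one example; eps guards against log(0)."""
    p = min(1 - eps, max(eps, prediction))
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

print(round(bce_loss(1, 0.9), 3))   # 0.105 -> confident and correct, small loss
print(round(bce_loss(1, 0.1), 3))   # 2.303 -> confident and wrong, large loss
```

The asymmetry is the point: confidently wrong predictions are punished far harder than hesitant ones.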

Step 4: Backpropagation

The network adjusts weights to minimise loss:

  1. Calculate gradient of loss with respect to each weight
  2. Update weights in opposite direction of gradient
  3. Repeat for thousands of iterations (epochs)

Optimizer: Adam (adaptive learning rate)

Learning rate: 0.001

Batch size: 32
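Whistl's training pipeline isn't public, but the Adam update rule itself is standard. A per-weight sketch at the stated learning rate, applied here to a toy one-dimensional loss rather than the real network:

```python
import math

def adam_step(w, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a single weight; state carries (m, v, t)."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad            # momentum (first moment)
    v = b2 * v + (1 - b2) * grad * grad     # variance (second moment)
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, (m, v, t)

# Minimise loss = (w - 3)^2, whose gradient is 2 * (w - 3)
w, state = 0.0, (0.0, 0.0, 0)
for _ in range(5000):
    w, state = adam_step(w, 2 * (w - 3), state)
print(round(w, 2))   # close to 3.0, the loss minimum
```

The adaptive denominator is what makes the learning rate "adaptive": weights with consistently large gradients take proportionally smaller steps.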

On-Device Inference

Once trained, the model runs entirely on your device:

iOS Neural Engine

Whistl leverages Apple's Neural Engine for efficient inference:

  • Framework: Core ML with .mlmodel format
  • Latency: <10ms per prediction
  • Power consumption: <1% battery per hour
  • Privacy: No data leaves device

Android TensorFlow Lite

Android devices use TensorFlow Lite:

  • Framework: TFLite with .tflite format
  • Hardware acceleration: GPU delegate when available
  • Model size: 450KB compressed
  • Quantisation: 8-bit integer for efficiency

Personalisation Through Fine-Tuning

The base model is pre-trained on aggregate data, but personalisation happens through fine-tuning:

Personalisation Timeline

Time Period   Learning Focus                             Accuracy Improvement
Days 1-3      Baseline pattern establishment             Base model (72% accuracy)
Days 4-14     Individual trigger identification          +5-8% accuracy
Days 15-30    Weight calibration for personal patterns   +10-12% accuracy
Months 2-3    Intervention effectiveness optimisation    +15% accuracy
Months 4+     Long-term pattern refinement               84% accuracy plateau

Fine-Tuning Process

Personal fine-tuning uses transfer learning:

  1. Freeze early layers: First two hidden layers remain fixed
  2. Train final layer: Last hidden layer + output adapt to personal data
  3. Low learning rate: 0.0001 to avoid catastrophic forgetting
  4. Small batches: 8 examples per update for stability
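A toy illustration of the freeze-and-tune idea; the layer names and values are hypothetical, and a real implementation would freeze whole weight matrices rather than scalars:

```python
# Hypothetical layer names mirroring the architecture above
FROZEN = {"hidden1", "hidden2"}   # step 1: early layers stay fixed

def apply_update(layer_name, weight, grad, lr=0.0001):
    """Skip frozen layers; otherwise apply the low fine-tuning learning rate."""
    if layer_name in FROZEN:
        return weight
    return weight - lr * grad

print(apply_update("hidden1", 0.5, 1.0))   # 0.5 -> frozen, unchanged
print(apply_update("output", 0.5, 1.0))    # 0.4999 -> nudged by lr * grad
```

Freezing the early layers preserves the general patterns learned from aggregate data; the tiny learning rate on the remaining layers is what prevents catastrophic forgetting.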

Feature Importance Analysis

Not all 56 features contribute equally. Whistl uses permutation importance to rank features:

Top 10 Most Predictive Features

Rank   Feature                        Importance Score
1      Time since last impulse        0.187
2      Distance to gambling venue     0.156
3      HRV (heart rate variability)   0.143
4      Spending velocity (7-day)      0.128
5      Sleep quality score            0.112
6      Hour of day                    0.098
7      DNS query burst count          0.089
8      Days since payday              0.076
9      Mood check-in score            0.067
10     Category budget ratio          0.054
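Permutation importance itself is simple to sketch: shuffle one feature column, re-score the model, and record the accuracy drop. A self-contained toy version (the model and data here are synthetic, not Whistl's):

```python
import random

def permutation_importance(model, X, y, col, rng):
    """Accuracy drop after shuffling one feature column."""
    def accuracy(rows):
        return sum(model(r) == label for r, label in zip(rows, y)) / len(y)
    base = accuracy(X)
    shuffled = [row[col] for row in X]
    rng.shuffle(shuffled)
    X_perm = [row[:col] + [v] + row[col + 1:] for row, v in zip(X, shuffled)]
    return base - accuracy(X_perm)

# Synthetic stand-in: a "model" that only looks at feature 0
model = lambda row: int(row[0] > 0.5)
data_rng = random.Random(1)
X = [[data_rng.random(), data_rng.random()] for _ in range(500)]
y = [int(row[0] > 0.5) for row in X]

print(permutation_importance(model, X, y, 0, random.Random(2)) > 0.3)  # True
print(permutation_importance(model, X, y, 1, random.Random(2)))        # 0.0
```

Shuffling the feature the model relies on destroys its accuracy; shuffling an ignored feature changes nothing, which is exactly what the importance scores measure.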

Prediction Output Interpretation

The neural network outputs a single probability score (0.0-1.0):

Risk Level Thresholds

Output Range   Risk Level   Action Triggered
0.00-0.40      Low          Passive monitoring
0.40-0.60      Elevated     Proactive check-in message
0.60-0.80      High         Activate blocking + negotiation
0.80-1.00      Critical     Full protection + partner alert
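In code, the mapping is a simple threshold ladder. Which tier owns each boundary value is our assumption, since the published ranges share their endpoints:

```python
def risk_level(score):
    """Map the network's 0.0-1.0 output score to an intervention tier."""
    if score < 0.40:
        return "Low"
    if score < 0.60:
        return "Elevated"
    if score < 0.80:
        return "High"
    return "Critical"

print(risk_level(0.25))   # Low
print(risk_level(0.72))   # High
print(risk_level(0.85))   # Critical
```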

Model Performance Metrics

Whistl's neural network achieves strong performance across multiple metrics:

Confusion Matrix (Test Set)

                    Predicted
                 Impulse  No Impulse
Actual Impulse     1,847      353
Actual No Impulse   289     5,511

Accuracy: 84.2%
Precision: 86.4%
Recall: 83.9%
F1 Score: 85.1%
AUC-ROC: 0.91
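Precision, recall, and F1 can be recomputed directly from the confusion matrix above; they land within a rounding step of the stated figures:

```python
tp, fn = 1847, 353    # actual impulses: caught vs missed
fp, tn = 289, 5511    # actual non-impulses: false alarms vs correct passes

precision = tp / (tp + fp)   # of flagged impulses, how many were real
recall = tp / (tp + fn)      # of real impulses, how many were caught
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 3))   # 0.865
print(round(recall, 3))      # 0.84
print(round(f1, 3))          # 0.852
```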

Calibration Curve

Model predictions are well-calibrated:

  • Predicted 0.70 risk → Actual impulse rate 68%
  • Predicted 0.80 risk → Actual impulse rate 79%
  • Predicted 0.90 risk → Actual impulse rate 88%

Continuous Learning

The model improves over time through federated learning:

Federated Learning Process

  1. Local training: Each device trains on personal data
  2. Gradient upload: Only weight updates (not raw data) sent to server
  3. Aggregation: Server averages updates from thousands of devices
  4. Model update: Improved global model distributed via OTA
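Step 3 (aggregation) reduces to an element-wise mean over the uploaded updates. A minimal sketch with three hypothetical devices, each contributing a small vector of weight deltas:

```python
def federated_average(client_updates):
    """Element-wise mean of weight-update vectors from many devices."""
    n = len(client_updates)
    return [sum(vals) / n for vals in zip(*client_updates)]

updates = [
    [0.10, -0.20, 0.00],   # device A's weight deltas
    [0.30,  0.00, 0.30],   # device B
    [0.20,  0.20, 0.30],   # device C
]
avg = federated_average(updates)
print([round(v, 2) for v in avg])   # [0.2, 0.0, 0.2]
```

Because only these averaged deltas exist server-side, no single user's behaviour can be read back out of the global model update.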

This preserves privacy while enabling collective learning.

Handling Concept Drift

User behaviour changes over time. Whistl detects and adapts to concept drift:

Drift Detection

  • Monitoring: Track prediction accuracy weekly
  • Threshold: >5% accuracy drop triggers retraining
  • Causes: Life changes, new habits, seasonal shifts

Adaptation Strategy

  • Recent data weighting: Last 30 days weighted 2x higher
  • Exponential decay: Older data gradually discounted
  • Periodic retraining: Full model refresh every 90 days
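A sketch of how the recency boost and exponential decay might combine. Only the 2x boost for the last 30 days comes from the list above; the 90-day half-life is our assumption, chosen to echo the retraining cadence:

```python
def sample_weight(age_days, half_life=90.0):
    """Exponential decay, with the flat 2x boost for the last 30 days."""
    decay = 0.5 ** (age_days / half_life)
    return 2.0 * decay if age_days <= 30 else decay

print(sample_weight(0))                        # 2.0 -> today's data
print(sample_weight(90))                       # 0.5 -> one half-life old
print(sample_weight(10) > sample_weight(60))   # True -> recent data dominates
```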

Privacy and Security

All neural network processing happens on-device:

  • No raw data transmission: Features never leave your phone
  • Encrypted model storage: Weights stored in secure enclave
  • Differential privacy: Federated updates include noise for anonymity
  • Model attestation: Verify model integrity before loading
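The differential-privacy step can be sketched as clip-then-noise. This is a simplification: production systems calibrate the noise to a formal privacy budget, and the constants here are illustrative:

```python
import random

def privatise_update(update, clip=1.0, noise_std=0.1, rng=random):
    """Clip each weight delta, then add Gaussian noise before upload."""
    return [max(-clip, min(clip, d)) + rng.gauss(0.0, noise_std) for d in update]

out = privatise_update([0.05, 2.0, -0.3], rng=random.Random(42))
print(len(out))                                    # 3 -> shape is preserved
print(privatise_update([5.0], noise_std=0.0)[0])   # 1.0 -> clipped, no noise
```

Clipping bounds any one user's influence on the aggregate; the noise then masks whatever signal remains.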

User Testimonials

"The AI caught patterns I didn't even notice. It knows I'm vulnerable on paydays after 9pm—spot on every time." — Sarah, 34

"I was sceptical about AI predicting my behaviour. But after two weeks, it was scarily accurate. In a good way." — Marcus, 28

"Knowing my data stays on my phone made me comfortable connecting my bank. Privacy matters." — Emma, 26

Conclusion

Whistl's neural networks represent the cutting edge of behavioural finance AI. By processing 56 features through sophisticated deep learning architectures, the system predicts impulses with 84% accuracy—all while keeping your data private on your device.

The result: proactive intervention that learns your unique patterns and gets smarter every day.

Experience AI-Powered Protection

Whistl's neural networks learn your patterns and predict impulses before they happen. Download free and let AI protect your financial future.

Download Whistl Free

Related: AI Financial Coach Guide | 27 Risk Signals Explained | On-Device ML Implementation