Logistic Regression
A binary classification algorithm that uses the sigmoid function to model the probability of class membership. Despite its name, it performs classification, not regression. It is a foundation for neural networks: a single sigmoid output unit is exactly a logistic regression on its inputs.
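To make the sigmoid's role concrete, here is a minimal standalone sketch (the `sigmoid` helper mirrors the one used in the implementation below): it squashes any real-valued score into (0, 1), which is what lets the model's output be read as a probability.

```typescript
// The sigmoid maps any real z into (0, 1).
function sigmoid(z: number): number {
  return 1 / (1 + Math.exp(-z));
}

console.log(sigmoid(0));  // 0.5: maximal uncertainty, right on the decision boundary
console.log(sigmoid(4));  // ≈ 0.982: confidently class 1
console.log(sigmoid(-4)); // ≈ 0.018: confidently class 0
```

Note the symmetry σ(−z) = 1 − σ(z): pushing the score in one direction raises the probability of one class exactly as much as it lowers the other's.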
Visualization
[Interactive visualization with step-by-step execution: binary classification via the sigmoid σ(z) = 1/(1+e^(-z)).]
Implementation
class LogisticRegression {
  private weights: number[] = [];
  private bias: number = 0;

  private sigmoid(z: number): number {
    return 1 / (1 + Math.exp(-z));
  }

  fit(X: number[][], y: number[], learningRate: number = 0.01, epochs: number = 1000): void {
    const m = X.length;    // number of samples
    const n = X[0].length; // number of features
    this.weights = new Array(n).fill(0);

    for (let epoch = 0; epoch < epochs; epoch++) {
      // Forward pass: z = w·x + b, then sigmoid
      const z = X.map(x => x.reduce((sum, val, i) => sum + val * this.weights[i], this.bias));
      const predictions = z.map(val => this.sigmoid(val));

      // Gradient of the log-loss with respect to each weight
      for (let j = 0; j < n; j++) {
        let gradient = 0;
        for (let i = 0; i < m; i++) {
          gradient += (predictions[i] - y[i]) * X[i][j];
        }
        this.weights[j] -= (learningRate / m) * gradient;
      }

      const biasGradient = predictions.reduce((sum, pred, i) => sum + (pred - y[i]), 0);
      this.bias -= (learningRate / m) * biasGradient;
    }
  }

  predictProba(X: number[][]): number[] {
    return X.map(x => {
      const z = x.reduce((sum, val, i) => sum + val * this.weights[i], this.bias);
      return this.sigmoid(z);
    });
  }

  predict(X: number[][]): number[] {
    return this.predictProba(X).map(p => p >= 0.5 ? 1 : 0);
  }
}
Deep Dive
Theoretical Foundation
Logistic regression models P(y=1|x) = σ(z) using the sigmoid σ(z) = 1/(1+e⁻ᶻ), where z = βᵀx. The output lies in (0, 1) and is interpreted as a probability. Training minimizes the log-loss (binary cross-entropy) J = -(1/m) Σ [y log(ŷ) + (1-y) log(1-ŷ)], whose gradient ∂J/∂βⱼ = (1/m) Σᵢ (ŷᵢ - yᵢ) xᵢⱼ is exactly what the inner loop of fit accumulates; gradient descent then steps the weights against it.
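The loss and gradient above can be checked numerically. Below is a self-contained sketch on a hypothetical one-feature toy dataset (labels separable around x = 2; the data and learning rate are illustrative, not from the text): it runs batch gradient descent on the log-loss and verifies that the loss actually decreases.

```typescript
// Toy data: one feature, labels 0 below x = 2 and 1 above it (illustrative).
const X = [[0], [1], [3], [4]];
const y = [0, 0, 1, 1];

const sigmoid = (z: number) => 1 / (1 + Math.exp(-z));

let w = 0, b = 0;
const lr = 0.5; // assumed learning rate for the sketch

// Mean binary cross-entropy over the dataset.
const logLoss = (): number => {
  let J = 0;
  for (let i = 0; i < X.length; i++) {
    const p = sigmoid(w * X[i][0] + b);
    J += -(y[i] * Math.log(p) + (1 - y[i]) * Math.log(1 - p));
  }
  return J / X.length;
};

const lossBefore = logLoss(); // ln 2 ≈ 0.693 at w = b = 0
for (let epoch = 0; epoch < 200; epoch++) {
  let gw = 0, gb = 0;
  for (let i = 0; i < X.length; i++) {
    const err = sigmoid(w * X[i][0] + b) - y[i]; // ŷ − y, the gradient's core term
    gw += err * X[i][0];
    gb += err;
  }
  w -= (lr / X.length) * gw;
  b -= (lr / X.length) * gb;
}
const lossAfter = logLoss();

console.log(lossBefore > lossAfter); // true: gradient descent reduced the loss
console.log(sigmoid(w * 4 + b) > 0.5, sigmoid(w * 0 + b) < 0.5);
```

With all parameters at zero every prediction is 0.5, so the starting loss is ln 2; after a few hundred epochs the boundary settles near x = 2 and both training points on each side are classified correctly.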
Complexity
Time: O(n·m·epochs), where m is the number of samples and n the number of features.
Space: O(n + m) during training (n weights plus m cached predictions per epoch); O(n) for the fitted model.
Applications
Industry Use
Email spam detection
Medical diagnosis (disease/no disease)
Marketing response prediction
Credit approval systems
A/B testing analysis
Fraud detection
Customer churn prediction
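Several of these applications (fraud, medical diagnosis) have asymmetric error costs, so in practice the default 0.5 cutoff in predict is often replaced by a stricter threshold applied to predictProba. A hedged sketch (the probabilities and the 0.9 threshold are illustrative assumptions, not from the text):

```typescript
// Illustrative probabilities a trained model might assign to four transactions.
const fraudProbs = [0.05, 0.40, 0.55, 0.92];

// Default threshold: flag anything the model leans toward, even slightly.
const flaggedDefault = fraudProbs.map(p => (p >= 0.5 ? 1 : 0));

// Stricter threshold: flag only high-confidence cases, trading recall
// for precision when false alarms are expensive.
const threshold = 0.9; // assumed, tuned per application
const flaggedStrict = fraudProbs.map(p => (p >= threshold ? 1 : 0));

console.log(flaggedDefault); // [0, 0, 1, 1]
console.log(flaggedStrict);  // [0, 0, 0, 1]
```

The model itself is unchanged; only the decision rule on its probabilities moves, which is one reason logistic regression's calibrated outputs are valued in these domains.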
Related Algorithms
K-Nearest Neighbors (KNN)
Simple, instance-based learning algorithm that classifies new data points based on k closest training examples. Non-parametric, lazy learning method used for classification and regression.
Linear Regression
Fundamental supervised learning algorithm modeling relationship between dependent variable and independent variables using linear equation. Foundational for predictive modeling and statistics.
Decision Tree
Tree-based model making decisions through sequence of if-else questions. Splits data based on feature values to create hierarchical structure. Interpretable and handles non-linear relationships.
K-Means Clustering
Unsupervised learning algorithm partitioning n observations into k clusters. Each observation belongs to cluster with nearest mean. Widely used for data segmentation and pattern discovery.