Compare commits
No commits in common. "cleanup" and "main" have entirely different histories.
@@ -1,37 +0,0 @@
name: Build Go Binaries

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: docker
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: "1.23"

      - name: Cross-Compile
        run: |
          mkdir -p dist
          for GOOS in linux windows darwin; do
            for GOARCH in arm64 amd64; do
              OUTPUT="dist/myapp_${GOOS}_${GOARCH}"
              if [ "$GOOS" = "windows" ]; then
                OUTPUT="${OUTPUT}.exe"
              fi
              echo "Building ${OUTPUT}"
              GOOS=$GOOS GOARCH=$GOARCH go build -o "$OUTPUT" ./cmd/main
            done
          done

      - name: Upload Artifact
        uses: actions/upload-artifact@v4
        with:
          name: flo_download-binaries
          path: dist
CLAUDE.md
@@ -4,140 +4,31 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co

## Project Overview

This is a Go-based HLS (HTTP Live Streaming) recorder that monitors M3U8 playlists and downloads video segments in real time, with automatic NAS transfer capabilities. The program takes a master M3U8 playlist URL, parses all available stream variants (different qualities/bitrates), continuously monitors each variant's chunklist for new segments, downloads them locally, and optionally transfers them to network storage for long-term archival.
This is a Go-based M3U8 downloader that parses HLS (HTTP Live Streaming) playlists to extract video and audio stream metadata. The end goal is a REST API that accepts M3U8 URLs, parses them, and eventually forwards the results to a conversion service.

## Architecture

The project follows a modular architecture with clear separation of concerns:
The project follows a clean separation of concerns:

- **cmd/**: Entry points for different execution modes
  - **main/main.go**: Primary CLI entry point with URL input, event naming, and mode selection
  - **downloader/download.go**: Core download orchestration logic with transfer service integration
  - **processor/process.go**: Alternative processing entry point
  - **transfer/transfer.go**: Transfer-only mode entry point
- **pkg/**: Core packages containing the application logic
  - **media/**: HLS streaming and download logic
    - **stream.go**: Stream variant parsing and downloading orchestration (`GetAllVariants`, `VariantDownloader`)
    - **playlist.go**: M3U8 playlist loading and parsing (`LoadMediaPlaylist`)
    - **segment.go**: Individual segment downloading logic (`DownloadSegment`, `SegmentJob`)
    - **manifest.go**: Manifest generation and segment tracking (`ManifestWriter`, `ManifestItem`)
  - **transfer/**: NAS transfer system (complete implementation available)
    - **service.go**: Transfer service orchestration
    - **watcher.go**: File system monitoring for new downloads
    - **queue.go**: Priority queue with worker pool management
    - **nas.go**: NAS file transfer with retry logic
    - **cleanup.go**: Local file cleanup after successful transfer
    - **types.go**: Transfer system data structures
  - **processing/**: Video processing and concatenation system
    - **service.go**: Processing service orchestration with FFmpeg integration
    - **segment.go**: Individual segment processing logic
    - **types.go**: Processing system data structures
  - **nas/**: NAS connection and file operations
    - **config.go**: NAS configuration structure
    - **nas.go**: NAS service with connection management and file operations
  - **config/**: Centralized configuration management with validation
    - **config.go**: Configuration loading, validation, and path resolution
  - **utils/**: Utility functions for cross-platform compatibility
    - **paths.go**: Path manipulation and validation utilities
  - **constants/constants.go**: Configuration constants and singleton access
  - **httpClient/error.go**: HTTP error handling utilities
- **main.go**: Entry point that demonstrates usage of the media package
- **media/**: Core package containing M3U8 parsing logic
  - **types.go**: Contains the main parsing logic and data structures (`StreamSet`, `VideoURL`, `AudioURL`)
  - **utils.go**: Utility functions for parsing attributes and resolution calculations
## Core Functionality

### Download Workflow
1. **Parse Master Playlist**: `GetAllVariants()` fetches and parses the master M3U8 to extract all stream variants with different qualities/bitrates
2. **Concurrent Monitoring**: Each variant gets its own goroutine running `VariantDownloader()`, which continuously polls for playlist updates
3. **Segment Detection**: When new segments appear in a variant's playlist, they are queued for download
4. **Parallel Downloads**: Segments are downloaded concurrently with configurable worker pools and retry logic
5. **Quality Organization**: Downloaded segments are organized by resolution (1080p, 720p, etc.) in separate directories
6. **Manifest Generation**: `ManifestWriter` tracks all downloaded segments with sequence numbers and resolutions

### NAS Transfer Workflow (Optional)
1. **File Watching**: `FileWatcher` monitors download directories for new `.ts` files
2. **Transfer Queuing**: New files are added to a priority queue after a settling delay
3. **Background Transfer**: A worker pool transfers files to the NAS with retry logic and verification
4. **Local Cleanup**: Successfully transferred files are automatically cleaned up locally
5. **State Persistence**: Queue state is persisted to survive crashes and restarts

### Video Processing Workflow (Optional)
1. **Segment Collection**: The processing service reads downloaded segments from NAS storage
2. **Quality Selection**: Automatically selects the highest-quality variant available
3. **FFmpeg Processing**: Uses FFmpeg to concatenate segments into a single MP4 file
4. **Output Management**: Processed videos are saved to the configured output directory
5. **Concurrent Processing**: Multiple events can be processed simultaneously with worker pools
## Key Data Structures

- `StreamVariant`: Represents a stream quality variant with URL, bandwidth, resolution, output directory, and manifest writer
- `SegmentJob`: Represents a segment download task with URI, sequence number, and variant info
- `ManifestWriter`: Tracks downloaded segments and generates JSON manifests
- `ManifestItem`: Individual segment record with sequence number and resolution
- `TransferItem`: Transfer queue item with source, destination, retry count, and status
- `TransferService`: Orchestrates file watching, queuing, transfer, and cleanup
- `ProcessingService`: Manages video processing operations with FFmpeg integration
- `ProcessConfig`: Configuration for processing operations, including worker count and paths
- `NASService`: Handles NAS connection, authentication, and file operations
- `NASConfig`: Configuration structure for NAS connection parameters
## Configuration

Configuration is managed through a centralized system in `pkg/config/config.go`, with environment variable support for deployment flexibility. The system provides validation, cross-platform path resolution, and sensible defaults:

### Core Settings
- `Core.WorkerCount`: Number of concurrent segment downloaders per variant (4) - ENV: `WORKER_COUNT`
- `Core.RefreshDelay`: How often to check for playlist updates (3 seconds) - ENV: `REFRESH_DELAY_SECONDS`

### Path Configuration
- `Paths.LocalOutput`: Base directory for local downloads (`data/`) - ENV: `LOCAL_OUTPUT_DIR`
- `Paths.ProcessOutput`: Directory for processed videos (`out/`) - ENV: `PROCESS_OUTPUT_DIR`
- `Paths.ManifestDir`: Directory for manifest JSON files (`data/`)
- `Paths.PersistenceFile`: Transfer queue state file location

### HTTP Settings
- `HTTPUserAgent`: User agent string for HTTP requests
- `REFERRER`: Referer header for HTTP requests (`https://www.flomarching.com`)

### NAS Transfer Settings
- `NAS.EnableTransfer`: Enable/disable automatic NAS transfer (true) - ENV: `ENABLE_NAS_TRANSFER`
- `NAS.OutputPath`: UNC path to NAS storage (`\\HomeLabNAS\dci\streams`) - ENV: `NAS_OUTPUT_PATH`
- `NAS.Username`/`NAS.Password`: NAS credentials for authentication - ENV: `NAS_USERNAME`/`NAS_PASSWORD`
- `Transfer.WorkerCount`: Concurrent transfer workers (2)
- `Transfer.RetryLimit`: Max retry attempts per file (3)
- `Transfer.Timeout`: Timeout per file transfer (30 seconds)
- `Transfer.FileSettlingDelay`: Wait before queuing new files (5 seconds)
- `Transfer.QueueSize`: Maximum queue size (100000)
- `Transfer.BatchSize`: Batch processing size (1000)

### Processing Settings
- `Processing.AutoProcess`: Enable automatic processing after download (true)
- `Processing.Enabled`: Enable processing functionality (true)
- `Processing.WorkerCount`: Concurrent processing workers (2)
- `Processing.FFmpegPath`: Path to the FFmpeg executable (`ffmpeg`) - ENV: `FFMPEG_PATH`

### Cleanup Settings
- `Cleanup.AfterTransfer`: Delete local files after NAS transfer (true)
- `Cleanup.BatchSize`: Files processed per cleanup batch (1000)
- `Cleanup.RetainHours`: Hours to keep local files (0 = immediate cleanup)

### Configuration Access
```go
cfg := constants.MustGetConfig()          // Get the validated config singleton
eventPath := cfg.GetEventPath("my-event") // Get a cross-platform event path
```

See `DEPLOYMENT.md` for detailed environment variable configuration and deployment examples.
The `GetStreamMetadata()` function is the main entry point. It:
1. Fetches the M3U8 master playlist via HTTP
2. Parses the content line by line
3. Extracts video streams (`#EXT-X-STREAM-INF`) and audio streams (`#EXT-X-MEDIA`)
4. Returns a `StreamSet` containing all parsed metadata
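The line-by-line variant extraction described above can be sketched with only the standard library (the project itself uses `github.com/grafov/m3u8` for this). The `VariantInfo` type and `parseMaster` function are illustrative stand-ins, not the project's actual API, and the attribute splitting is deliberately naive:

```go
package main

import (
	"fmt"
	"strings"
)

// VariantInfo is a hypothetical stand-in for the project's variant data.
type VariantInfo struct {
	Bandwidth  string
	Resolution string
	URI        string
}

// parseMaster extracts #EXT-X-STREAM-INF entries from a master playlist.
// Per RFC 8216, each tag line is followed by the variant's chunklist URI.
// Note: splitting attributes on "," breaks on quoted values that contain
// commas (e.g. CODECS); a real parser must handle quoting.
func parseMaster(playlist string) []VariantInfo {
	var variants []VariantInfo
	lines := strings.Split(playlist, "\n")
	for i, line := range lines {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "#EXT-X-STREAM-INF:") {
			continue
		}
		v := VariantInfo{}
		attrs := strings.TrimPrefix(line, "#EXT-X-STREAM-INF:")
		for _, attr := range strings.Split(attrs, ",") {
			if kv := strings.SplitN(attr, "=", 2); len(kv) == 2 {
				switch kv[0] {
				case "BANDWIDTH":
					v.Bandwidth = kv[1]
				case "RESOLUTION":
					v.Resolution = kv[1]
				}
			}
		}
		if i+1 < len(lines) {
			v.URI = strings.TrimSpace(lines[i+1])
		}
		variants = append(variants, v)
	}
	return variants
}

func main() {
	master := "#EXTM3U\n#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080\nchunklist_1080p.m3u8\n"
	for _, v := range parseMaster(master) {
		fmt.Printf("%s %s %s\n", v.Bandwidth, v.Resolution, v.URI)
	}
}
```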
## Common Development Commands

```bash
# Build the main application
go build -o stream-recorder ./cmd/main

# Build the project
go build -o m3u8-downloader

# Run with URL prompt
go run ./cmd/main/main.go

# Run with command line arguments
go run ./cmd/main/main.go -url="https://example.com/playlist.m3u8" -event="my-event" -debug=true

# Run the project
go run main.go

# Tidy module dependencies
go mod tidy
@@ -149,80 +40,12 @@ go test ./...
go fmt ./...
```

## Command Line Options
## Key Data Structures

- `-url`: M3U8 playlist URL (if not provided, prompts for input)
- `-event`: Event name for organizing downloads (defaults to the current date)
- `-debug`: Debug mode (only downloads the 1080p variant for easier testing)
- `-transfer`: Transfer-only mode (transfer existing files without downloading)
- `-process`: Process-only mode (process existing files without downloading)
## Monitoring and Downloads

The application implements comprehensive real-time stream monitoring:

### Download Features
- **Continuous Polling**: Each variant playlist is checked every 3 seconds for new segments
- **Deduplication**: Uses segment URIs and sequence numbers to avoid re-downloading
- **Graceful Shutdown**: Responds to SIGINT/SIGTERM signals for a clean exit
- **Error Resilience**: Retries failed downloads and handles HTTP 403 errors specially
- **Quality Detection**: Automatically determines resolution from bandwidth or explicit resolution data
- **Context Cancellation**: Proper timeout and cancellation handling for clean shutdowns

### Transfer Features (when enabled)
- **Real-time Transfer**: Files are transferred to the NAS as soon as they're downloaded
- **Queue Persistence**: The transfer queue survives application restarts
- **Retry Logic**: Failed transfers are retried with exponential backoff
- **Verification**: File sizes are verified after transfer
- **Automatic Cleanup**: Local files are removed after successful NAS transfer
- **Statistics Reporting**: Transfer progress and statistics are logged regularly

### Manifest Generation
- **Segment Tracking**: All downloaded segments are tracked with sequence numbers
- **Resolution Mapping**: Segments are associated with their quality variants
- **JSON Output**: Manifest files are generated as sorted JSON arrays for easy processing
- `StreamSet`: Root structure containing the playlist URL and all streams
- `VideoURL`: Represents a video stream with bandwidth, codecs, resolution, and frame rate
- `AudioURL`: Represents an audio stream with media type, group ID, name, and selection flags
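The sorted-JSON manifest output described above can be sketched as follows. The `manifestItem` fields here are guesses at the shape of the project's `ManifestItem`, not its actual schema:

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// manifestItem mirrors the idea of the project's ManifestItem:
// one record per downloaded segment. Field names are illustrative.
type manifestItem struct {
	Sequence   uint64 `json:"sequence"`
	Resolution string `json:"resolution"`
}

// renderManifest sorts items by sequence number and returns them as a
// JSON array, matching the "sorted JSON arrays" behavior the docs describe.
func renderManifest(items []manifestItem) ([]byte, error) {
	sort.Slice(items, func(i, j int) bool { return items[i].Sequence < items[j].Sequence })
	return json.Marshal(items)
}

func main() {
	out, err := renderManifest([]manifestItem{
		{Sequence: 2, Resolution: "720p"},
		{Sequence: 1, Resolution: "1080p"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```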
## Error Handling

The implementation uses proper Go error handling patterns:
- **Custom HTTP Errors**: Structured error types for HTTP failures
- **Context-Aware Cancellation**: Proper handling of shutdown scenarios
- **Retry Logic**: Exponential backoff for transient failures
- **Logging**: Clear status indicators (✓ for success, ✗ for failure)
- **Graceful Degradation**: Transfer service failures don't stop downloads
## Dependencies

- `github.com/grafov/m3u8`: M3U8 playlist parsing
- `github.com/fsnotify/fsnotify`: File system event monitoring for NAS transfers
## Data Organization

Downloaded files are organized as:
```
./data/
├── {event-name}.json       # Manifest file
├── {event-name}/           # Event-specific directory
│   ├── 1080p/              # High quality segments
│   ├── 720p/               # Medium quality segments
│   └── 480p/               # Lower quality segments
├── transfer_queue.json     # Transfer queue state
├── refresh_token.txt       # Authentication tokens
└── tokens.txt              # Session tokens
```

NAS files mirror the local structure:
```
\\HomeLabNAS\dci\streams\
└── {event-name}/
    ├── 1080p/
    ├── 720p/
    └── 480p/
```

Processed files are output to:
```
./out/
└── {event-name}/
    └── concatenated_segments.mp4  # Final processed video
```

The current implementation uses `panic()` for error handling. When extending functionality, consider implementing proper error handling with returned error values, following Go conventions.
DEPLOYMENT.md
@@ -1,192 +0,0 @@
# Deployment Guide

This document outlines how to deploy and configure the StreamRecorder application in different environments.

## Environment Variables

The application supports configuration through environment variables for flexible deployment:

### Core Settings
- `WORKER_COUNT`: Number of concurrent segment downloaders per variant (default: 4)
- `REFRESH_DELAY_SECONDS`: How often to check for playlist updates, in seconds (default: 3)

### NAS Transfer Settings
- `NAS_OUTPUT_PATH`: UNC path to NAS storage (default: "\\\\HomeLabNAS\\dci\\streams")
- `NAS_USERNAME`: NAS authentication username
- `NAS_PASSWORD`: NAS authentication password
- `ENABLE_NAS_TRANSFER`: Enable/disable automatic NAS transfer (default: true)

### Path Configuration
- `LOCAL_OUTPUT_DIR`: Base directory for local downloads (default: "data")
- `PROCESS_OUTPUT_DIR`: Output directory for processed videos (default: "out")

### Processing Settings
- `FFMPEG_PATH`: Path to the FFmpeg executable (default: "ffmpeg")
## Docker Deployment

### Dockerfile Example

```dockerfile
FROM golang:1.23-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o stream-recorder ./cmd/main

FROM alpine:latest
RUN apk --no-cache add ca-certificates ffmpeg
WORKDIR /root/
COPY --from=builder /app/stream-recorder .
CMD ["./stream-recorder"]
```
### Docker Compose Example

```yaml
version: '3.8'
services:
  stream-recorder:
    build: .
    environment:
      - NAS_OUTPUT_PATH=/mnt/nas/streams
      - NAS_USERNAME=${NAS_USERNAME}
      - NAS_PASSWORD=${NAS_PASSWORD}
      - LOCAL_OUTPUT_DIR=/app/data
      - PROCESS_OUTPUT_DIR=/app/out
      - FFMPEG_PATH=ffmpeg
    volumes:
      - ./data:/app/data
      - ./out:/app/out
      - nas_mount:/mnt/nas
    networks:
      - stream_network

volumes:
  nas_mount:
    driver: local
    driver_opts:
      type: cifs
      device: "//HomeLabNAS/dci"
      o: username=${NAS_USERNAME},password=${NAS_PASSWORD},iocharset=utf8

networks:
  stream_network:
    driver: bridge
```
## Kubernetes Deployment

### ConfigMap Example

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: stream-recorder-config
data:
  WORKER_COUNT: "4"
  REFRESH_DELAY_SECONDS: "3"
  ENABLE_NAS_TRANSFER: "true"
  LOCAL_OUTPUT_DIR: "/app/data"
  PROCESS_OUTPUT_DIR: "/app/out"
  FFMPEG_PATH: "ffmpeg"
```

### Secret Example

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: stream-recorder-secrets
type: Opaque
data:
  NAS_USERNAME: <base64-encoded-username>
  NAS_PASSWORD: <base64-encoded-password>
```

### Deployment Example

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stream-recorder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stream-recorder
  template:
    metadata:
      labels:
        app: stream-recorder
    spec:
      containers:
        - name: stream-recorder
          image: stream-recorder:latest
          envFrom:
            - configMapRef:
                name: stream-recorder-config
            - secretRef:
                name: stream-recorder-secrets
          volumeMounts:
            - name: data-storage
              mountPath: /app/data
            - name: output-storage
              mountPath: /app/out
      volumes:
        - name: data-storage
          persistentVolumeClaim:
            claimName: stream-data-pvc
        - name: output-storage
          persistentVolumeClaim:
            claimName: stream-output-pvc
```
## Production Considerations

### Security
- Never commit credentials to version control
- Use environment variables or secret management systems for sensitive data
- Consider using service accounts or IAM roles for cloud deployments
- Rotate credentials regularly

### Monitoring
- Implement health checks for the application
- Monitor disk space for download directories
- Set up alerts for failed transfers or processing
- Log to centralized logging systems

### Scaling
- Use horizontal scaling for multiple concurrent streams
- Consider using message queues for segment processing
- Implement distributed storage for high availability
- Use load balancers for multiple instances

### Backup and Recovery
- Take regular backups of configuration and state files
- Test recovery procedures
- Document rollback processes
- Maintain disaster recovery plans

## Configuration Validation

The application validates configuration at startup and will fail fast if:
- Required directories cannot be created
- NAS paths are invalid when transfer is enabled
- FFmpeg is not found when processing is enabled
- Critical environment variables are malformed
## Troubleshooting

### Common Issues
1. **Path Permission Errors**: Ensure the application has write access to configured directories
2. **NAS Connection Failures**: Verify network connectivity and credentials
3. **FFmpeg Not Found**: Install FFmpeg or set the correct `FFMPEG_PATH`
4. **Environment Variable Format**: Check for typos and correct boolean values

### Debug Mode
Run with `-debug=true` to enable debug logging and download only the 1080p variant for testing.
Makefile
@@ -1,128 +0,0 @@
# StreamRecorder Makefile

.PHONY: all build build-windows build-linux build-all test test-verbose test-coverage test-pkg benchmark clean fmt lint tidy security install-tools dev ci help

# Default target
all: build

# Build the application
build:
	@echo "🔨 Building StreamRecorder..."
	go build -o stream-recorder.exe ./cmd/main

# Build for different platforms
build-windows:
	@echo "🔨 Building for Windows..."
	GOOS=windows GOARCH=amd64 go build -o stream-recorder-windows.exe ./cmd/main

build-linux:
	@echo "🔨 Building for Linux..."
	GOOS=linux GOARCH=amd64 go build -o stream-recorder-linux ./cmd/main

build-all: build-windows build-linux

# Run unit tests
test:
	@echo "🧪 Running unit tests..."
	go run test_runner.go

# Run tests with verbose output
test-verbose:
	@echo "🧪 Running unit tests (verbose)..."
	go test -v ./pkg/...

# Run tests with coverage
test-coverage:
	@echo "🧪 Running tests with coverage..."
	go test -coverprofile=coverage.out ./pkg/...
	go tool cover -html=coverage.out -o coverage.html
	@echo "📊 Coverage report generated: coverage.html"

# Run tests for a specific package
test-pkg:
	@if [ -z "$(PKG)" ]; then \
		echo "❌ Please specify package: make test-pkg PKG=./pkg/config"; \
		exit 1; \
	fi
	@echo "🧪 Testing package: $(PKG)"
	go test -v $(PKG)

# Run benchmarks
benchmark:
	@echo "🏃 Running benchmarks..."
	go test -bench=. -benchmem ./pkg/...

# Clean build artifacts
clean:
	@echo "🧹 Cleaning build artifacts..."
	rm -f stream-recorder.exe stream-recorder-windows.exe stream-recorder-linux
	rm -f coverage.out coverage.html
	rm -rf data/ out/ *.json

# Format code
fmt:
	@echo "🎨 Formatting code..."
	go fmt ./...

# Lint code (requires golangci-lint)
lint:
	@echo "🔍 Linting code..."
	golangci-lint run

# Tidy dependencies
tidy:
	@echo "📦 Tidying dependencies..."
	go mod tidy

# Run security check (requires gosec)
security:
	@echo "🔒 Running security check..."
	gosec ./...

# Install development tools
install-tools:
	@echo "🛠️ Installing development tools..."
	go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
	go install github.com/securego/gosec/v2/cmd/gosec@latest

# Quick development cycle: format, tidy, build, test
dev: fmt tidy build test

# CI pipeline: format check, lint, security, test, build
ci: fmt tidy lint security test build

# Help
help:
	@echo "StreamRecorder Build Commands"
	@echo "============================="
	@echo ""
	@echo "Build Commands:"
	@echo "  build              - Build the main application"
	@echo "  build-windows      - Build for Windows (x64)"
	@echo "  build-linux        - Build for Linux (x64)"
	@echo "  build-all          - Build for all platforms"
	@echo ""
	@echo "Test Commands:"
	@echo "  test               - Run unit tests with custom runner"
	@echo "  test-verbose       - Run tests with verbose output"
	@echo "  test-coverage      - Run tests with coverage report"
	@echo "  test-pkg PKG=<pkg> - Test specific package"
	@echo "  benchmark          - Run benchmarks"
	@echo ""
	@echo "Quality Commands:"
	@echo "  fmt                - Format code"
	@echo "  lint               - Lint code (requires golangci-lint)"
	@echo "  security           - Security analysis (requires gosec)"
	@echo "  tidy               - Tidy dependencies"
	@echo ""
	@echo "Development Commands:"
	@echo "  dev                - Quick dev cycle (fmt, tidy, build, test)"
	@echo "  ci                 - Full CI pipeline"
	@echo "  clean              - Clean build artifacts"
	@echo "  install-tools      - Install development tools"
	@echo ""
	@echo "Examples:"
	@echo "  make test"
	@echo "  make test-pkg PKG=./pkg/config"
	@echo "  make build-all"
	@echo "  make dev"
TESTING.md
@@ -1,253 +0,0 @@
# Testing Guide

This document describes the test suite for the StreamRecorder application.

## Overview

The test suite provides comprehensive coverage of core application components without requiring external dependencies such as video files, NAS connectivity, or FFmpeg. All tests are self-contained and clean up after themselves.

## Test Structure

### Unit Tests by Package

#### `pkg/config`
- **File**: `config_test.go`
- **Coverage**: Configuration loading, environment variable overrides, path validation, validation errors
- **Key Tests**:
  - Default config loading
  - Environment variable overrides
  - Path resolution and creation
  - Validation error scenarios

#### `pkg/utils`
- **File**: `paths_test.go`
- **Coverage**: Cross-platform path utilities, directory operations, validation
- **Key Tests**:
  - Safe path joining
  - Directory creation
  - Path existence checking
  - Path validation
  - Write permission testing
#### `pkg/constants`
- **File**: `constants_test.go`
- **Coverage**: Constant values, configuration singleton, integration
- **Key Tests**:
  - Constant value verification
  - Singleton pattern testing
  - Config integration
  - Concurrent access safety

#### `pkg/httpClient`
- **File**: `error_test.go`
- **Coverage**: HTTP error handling, status code management
- **Key Tests**:
  - HTTP error creation and formatting
  - Error comparison and detection
  - Status code extraction
  - Error wrapping support

#### `pkg/media`
- **File**: `manifest_test.go`
- **Coverage**: Manifest generation, segment tracking, JSON serialization
- **Key Tests**:
  - Manifest writer initialization
  - Segment addition and updates
  - Quality resolution logic
  - JSON file generation
  - Sorting and deduplication

#### `pkg/processing`
- **File**: `service_test.go`
- **Coverage**: Processing service logic, path resolution, FFmpeg handling
- **Key Tests**:
  - Service initialization
  - Event directory scanning
  - Resolution detection
  - Segment aggregation
  - File concatenation list generation
  - FFmpeg path resolution
## Running Tests

### Quick Test Run
```bash
make test
```

### Verbose Output
```bash
make test-verbose
```

### Coverage Report
```bash
make test-coverage
```
Generates `coverage.html` with a detailed coverage report.

### Test a Specific Package
```bash
make test-pkg PKG=./pkg/config
```

### Manual Test Execution
```bash
# Run the custom test runner
go run test_runner.go

# Run standard go test
go test ./pkg/...

# Run with coverage
go test -coverprofile=coverage.out ./pkg/...
go tool cover -html=coverage.out
```
## Test Features

### ✅ Self-Contained
- No external file dependencies
- No network connections required
- No NAS or FFmpeg installation needed

### ✅ Automatic Cleanup
- All temporary files/directories removed after tests
- Original environment variables restored
- No side effects on the host system

### ✅ Isolated Environment
- Tests use temporary directories
- Environment variables safely overridden
- Configuration isolated from production settings

### ✅ Cross-Platform
- Path handling tested on Windows/Unix
- Platform-specific behavior validated
- Cross-platform compatibility verified

### ✅ Comprehensive Coverage
- Configuration management
- Path utilities and validation
- Error handling patterns
- Data structures and serialization
- Business logic without external dependencies

## Test Environment

The test suite automatically:

1. **Creates a Temporary Workspace**: Each test run uses a fresh temporary directory
2. **Sets the Test Environment**: Overrides environment variables to use test settings
3. **Disables External Dependencies**: Sets flags to disable NAS transfer and processing
4. **Cleans Up Completely**: Removes all test artifacts and restores the environment

### Environment Variables Set During Tests
- `LOCAL_OUTPUT_DIR`: Points to a temp directory
- `PROCESS_OUTPUT_DIR`: Points to a temp directory
- `ENABLE_NAS_TRANSFER`: Set to `false`
- `PROCESSING_ENABLED`: Set to `false`
## Extending Tests

### Adding New Test Cases

1. **Create a test file**: `pkg/yourpackage/yourfile_test.go`
2. **Follow the naming convention**: `TestFunctionName`
3. **Use temp directories**: Always clean up created files
4. **Mock external dependencies**: Avoid real file operations where possible

### Test Template
```go
package yourpackage

import (
	"os"
	"testing"
)

func TestYourFunction(t *testing.T) {
	// Setup
	tempDir, err := os.MkdirTemp("", "test_*")
	if err != nil {
		t.Fatalf("Setup failed: %v", err)
	}
	defer os.RemoveAll(tempDir)

	// Test logic
	expected := "expected value" // replace with the value YourFunction should return
	result := YourFunction()

	// Assertions
	if result != expected {
		t.Errorf("Expected %v, got %v", expected, result)
	}
}
```
### Best Practices

- **Always clean up**: Use `defer os.RemoveAll()` for temp directories
- **Test error cases**: Don't just test happy paths
- **Use table-driven tests**: For multiple similar test cases
- **Mock external dependencies**: Use echo/dummy commands instead of real tools
- **Validate cleanup**: Ensure tests don't leave artifacts

## CI/CD Integration

The test suite is designed for automated environments:

```bash
# Complete CI pipeline
make ci

# Just run tests in CI
make test
```

The custom test runner provides:
- ✅ Colored output for easy reading
- ✅ Test count and timing statistics
- ✅ Failure details and summaries
- ✅ Automatic environment management
- ✅ Exit codes for CI integration
## Troubleshooting

### Common Issues

**Tests fail with permission errors**
- Ensure write permissions in temp directory
- Check antivirus software isn't blocking file operations

**Config tests fail**
- Verify no conflicting environment variables are set
- Check that temp directories can be created

**Path tests fail on Windows**
- Confirm path separator handling is correct
- Verify Windows path validation logic
### Debug Mode

```bash
# Run with verbose output to see detailed failures
go test -v ./pkg/...

# Run a specific failing test
go test -v -run TestSpecificFunction ./pkg/config
```
## Coverage Goals

Current test coverage targets:
- **Configuration**: 95%+ (critical for startup validation)
- **Path utilities**: 90%+ (cross-platform compatibility critical)
- **Constants**: 85%+ (verify all values and singleton behavior)
- **HTTP client**: 90%+ (error handling is critical)
- **Media handling**: 85%+ (core business logic)
- **Processing**: 70%+ (limited by external FFmpeg dependency)

Generate a coverage report to verify:
```bash
make test-coverage
open coverage.html
```
BIN bin/flo_download
Binary file not shown.
@@ -1,4 +1,4 @@
package downloader
package main

import (
	"context"
@@ -6,7 +6,6 @@ import (
	"m3u8-downloader/pkg/constants"
	"m3u8-downloader/pkg/media"
	"m3u8-downloader/pkg/transfer"
	"m3u8-downloader/pkg/utils"
	"os"
	"os/signal"
	"sync"
@@ -14,7 +13,7 @@ import (
	"time"
)

func Download(masterURL string, eventName string, debug bool) {
func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

@@ -27,12 +26,10 @@ func Download(masterURL string, eventName string, debug bool) {
		cancel()
	}()

	cfg := constants.MustGetConfig()

	var wg sync.WaitGroup
	var transferService *transfer.TransferService
	if cfg.NAS.EnableTransfer {
		ts, err := transfer.NewTrasferService(cfg.NAS.OutputPath, eventName)
	if constants.EnableNASTransfer {
		ts, err := transfer.NewTrasferService(constants.NASPath)
		if err != nil {
			log.Printf("Failed to create transfer service: %v", err)
			log.Println("Continuing without transfer service...")
@@ -49,14 +46,7 @@ func Download(masterURL string, eventName string, debug bool) {
		}
	}

	manifestWriter := media.NewManifestWriter(eventName)

	eventPath := cfg.GetEventPath(eventName)
	if err := utils.EnsureDir(eventPath); err != nil {
		log.Fatalf("Failed to create event directory: %v", err)
	}

	variants, err := media.GetAllVariants(masterURL, eventPath, manifestWriter)
	variants, err := media.GetAllVariants(constants.MasterURL)
	if err != nil {
		log.Fatalf("Failed to get variants: %v", err)
	}
@@ -64,19 +54,11 @@

	sem := make(chan struct{}, constants.WorkerCount*len(variants))

	manifest := media.NewManifestWriter(eventName)

	for _, variant := range variants {
		// Debug mode only tracks one variant for easier debugging
		if debug {
			if variant.Resolution != "1080p" {
				continue
			}
		}
		wg.Add(1)
		go func(v *media.StreamVariant) {
			defer wg.Done()
			media.VariantDownloader(ctx, v, sem, manifest)
			media.VariantDownloader(ctx, v, sem)
		}(variant)
	}

@@ -90,7 +72,4 @@ func Download(masterURL string, eventName string, debug bool) {
	}

	log.Println("All Services shut down.")

	manifestWriter.WriteManifest()
	log.Println("Manifest written.")
}
@@ -1,43 +0,0 @@
package main

import (
	"bufio"
	"flag"
	"fmt"
	"m3u8-downloader/cmd/downloader"
	"m3u8-downloader/cmd/processor"
	"m3u8-downloader/cmd/transfer"
	"os"
	"strings"
)

func main() {
	url := flag.String("url", "", "M3U8 playlist URL")
	eventName := flag.String("event", "", "Event name")
	debug := flag.Bool("debug", false, "Enable debug mode")
	transferOnly := flag.Bool("transfer", false, "Transfer-only mode: transfer existing files without downloading")
	processOnly := flag.Bool("process", false, "Process-only mode: process existing files without downloading")

	flag.Parse()

	if *transferOnly {
		transfer.RunTransferOnly(*eventName)
		return
	}

	if *processOnly {
		processor.Process(*eventName)
		return
	}

	if *url == "" {
		reader := bufio.NewReader(os.Stdin)
		fmt.Print("Enter M3U8 playlist URL: ")
		inputUrl, _ := reader.ReadString('\n')
		inputUrl = strings.TrimSpace(inputUrl)
		downloader.Download(inputUrl, *eventName, *debug)
		return
	}

	downloader.Download(*url, *eventName, *debug)
}
1 cmd/proc/main.go Normal file
@@ -0,0 +1 @@
package proc
@@ -1,20 +0,0 @@
package processor

import (
	"context"
	"log"
	"m3u8-downloader/pkg/constants"
	"m3u8-downloader/pkg/processing"
)

func Process(eventName string) {
	log.Printf("Starting processing for event: %s", eventName)
	cfg := constants.MustGetConfig()
	ps, err := processing.NewProcessingService(eventName, cfg)
	if err != nil {
		log.Fatalf("Failed to create processing service: %v", err)
	}
	if err := ps.Start(context.Background()); err != nil {
		log.Fatalf("Failed to run processing service: %v", err)
	}
}
@@ -1,114 +0,0 @@
package transfer

import (
	"bufio"
	"context"
	"fmt"
	"log"
	"m3u8-downloader/pkg/config"
	"m3u8-downloader/pkg/constants"
	"m3u8-downloader/pkg/transfer"
	"m3u8-downloader/pkg/utils"
	"os"
	"os/signal"
	"strconv"
	"strings"
	"syscall"
	"time"
)

func getEventDirs(cfg *config.Config) ([]string, error) {
	dirs, err := os.ReadDir(cfg.Paths.LocalOutput)
	if err != nil {
		return nil, fmt.Errorf("failed to read directory: %w", err)
	}
	var eventDirs []string
	for _, dir := range dirs {
		if dir.IsDir() {
			eventDirs = append(eventDirs, dir.Name())
		}
	}
	return eventDirs, nil
}

func RunTransferOnly(eventName string) {
	cfg := constants.MustGetConfig()

	// Check if NAS transfer is enabled
	if !cfg.NAS.EnableTransfer {
		log.Fatal("NAS transfer is disabled in configuration. Please enable it to use transfer-only mode.")
	}

	if eventName == "" {
		events, err := getEventDirs(cfg)
		if err != nil {
			log.Fatalf("Failed to get event directories: %v", err)
		}
		if len(events) == 0 {
			log.Fatal("No events found")
		}
		if len(events) > 1 {
			fmt.Println("Multiple events found, please select one:")
			for i, event := range events {
				fmt.Printf("%d. %s\n", i+1, event)
			}
			reader := bufio.NewReader(os.Stdin)
			input, _ := reader.ReadString('\n')
			input = strings.TrimSpace(input)
			index, err := strconv.Atoi(input)
			if err != nil {
				log.Fatalf("Failed to parse input: %v", err)
			}
			if index < 1 || index > len(events) {
				log.Fatal("Invalid input")
			}
			eventName = events[index-1]
		} else {
			eventName = events[0]
		}
	}

	log.Printf("Starting transfer-only mode for event: %s", eventName)

	// Setup context and signal handling
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	sigChan := make(chan os.Signal, 1)
	signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
	go func() {
		<-sigChan
		log.Println("Shutting down transfer service...")
		cancel()
	}()

	// Verify local event directory exists
	localEventPath := cfg.GetEventPath(eventName)
	if !utils.PathExists(localEventPath) {
		log.Fatalf("Local event directory does not exist: %s", localEventPath)
	}

	// Create transfer service
	transferService, err := transfer.NewTrasferService(cfg.NAS.OutputPath, eventName)
	if err != nil {
		log.Fatalf("Failed to create transfer service: %v", err)
	}

	// Find and queue existing files
	if err := transferService.QueueExistingFiles(localEventPath); err != nil {
		log.Fatalf("Failed to queue existing files: %v", err)
	}

	// Start transfer service
	log.Println("Starting transfer service...")
	if err := transferService.Start(ctx); err != nil && err != context.Canceled {
		log.Printf("Transfer service error: %v", err)
	}

	// Graceful shutdown
	shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer shutdownCancel()
	transferService.Shutdown(shutdownCtx)

	log.Println("Transfer-only mode completed.")
}
6 go.sum
@@ -1,6 +0,0 @@
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/grafov/m3u8 v0.12.1 h1:DuP1uA1kvRRmGNAZ0m+ObLv1dvrfNO0TPx0c/enNk0s=
github.com/grafov/m3u8 v0.12.1/go.mod h1:nqzOkfBiZJENr52zTVd/Dcl03yzphIMbJqkXGu+u080=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -1,238 +0,0 @@
package config

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"time"
)

type Config struct {
	Core       CoreConfig
	HTTP       HTTPConfig
	NAS        NASConfig
	Processing ProcessingConfig
	Transfer   TransferConfig
	Cleanup    CleanupConfig
	Paths      PathsConfig
}

type CoreConfig struct {
	WorkerCount  int
	RefreshDelay time.Duration
}

type HTTPConfig struct {
	UserAgent string
	Referer   string
}

type NASConfig struct {
	EnableTransfer bool
	OutputPath     string
	Username       string
	Password       string
	Timeout        time.Duration
	RetryLimit     int
}

type ProcessingConfig struct {
	Enabled     bool
	AutoProcess bool
	WorkerCount int
	FFmpegPath  string
}

type TransferConfig struct {
	WorkerCount       int
	RetryLimit        int
	Timeout           time.Duration
	FileSettlingDelay time.Duration
	QueueSize         int
	BatchSize         int
}

type CleanupConfig struct {
	AfterTransfer bool
	BatchSize     int
	RetainHours   int
}

type PathsConfig struct {
	BaseDir         string
	LocalOutput     string
	ProcessOutput   string
	ManifestDir     string
	PersistenceFile string
}

var defaultConfig = Config{
	Core: CoreConfig{
		WorkerCount:  4,
		RefreshDelay: 3 * time.Second,
	},
	HTTP: HTTPConfig{
		UserAgent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36",
		Referer:   "https://www.flomarching.com",
	},
	NAS: NASConfig{
		EnableTransfer: true,
		OutputPath:     "\\\\HomeLabNAS\\dci\\streams",
		Username:       "NASAdmin",
		Password:       "s3tkY6tzA&KN6M",
		Timeout:        30 * time.Second,
		RetryLimit:     3,
	},
	Processing: ProcessingConfig{
		Enabled:     true,
		AutoProcess: true,
		WorkerCount: 2,
		FFmpegPath:  "ffmpeg",
	},
	Transfer: TransferConfig{
		WorkerCount:       2,
		RetryLimit:        3,
		Timeout:           30 * time.Second,
		FileSettlingDelay: 5 * time.Second,
		QueueSize:         100000,
		BatchSize:         1000,
	},
	Cleanup: CleanupConfig{
		AfterTransfer: true,
		BatchSize:     1000,
		RetainHours:   0,
	},
	Paths: PathsConfig{
		BaseDir:         "data",
		LocalOutput:     "data",
		ProcessOutput:   "out",
		ManifestDir:     "data",
		PersistenceFile: "transfer_queue.json",
	},
}

func Load() (*Config, error) {
	cfg := defaultConfig

	if err := cfg.loadFromEnvironment(); err != nil {
		return nil, fmt.Errorf("failed to load environment config: %w", err)
	}

	if err := cfg.resolveAndValidatePaths(); err != nil {
		return nil, fmt.Errorf("path validation failed: %w", err)
	}

	return &cfg, nil
}

func (c *Config) loadFromEnvironment() error {
	if val := os.Getenv("WORKER_COUNT"); val != "" {
		if parsed, err := strconv.Atoi(val); err == nil {
			c.Core.WorkerCount = parsed
		}
	}

	if val := os.Getenv("REFRESH_DELAY_SECONDS"); val != "" {
		if parsed, err := strconv.Atoi(val); err == nil {
			c.Core.RefreshDelay = time.Duration(parsed) * time.Second
		}
	}

	if val := os.Getenv("NAS_OUTPUT_PATH"); val != "" {
		c.NAS.OutputPath = val
	}

	if val := os.Getenv("NAS_USERNAME"); val != "" {
		c.NAS.Username = val
	}

	if val := os.Getenv("NAS_PASSWORD"); val != "" {
		c.NAS.Password = val
	}

	if val := os.Getenv("ENABLE_NAS_TRANSFER"); val != "" {
		c.NAS.EnableTransfer = val == "true"
	}

	if val := os.Getenv("LOCAL_OUTPUT_DIR"); val != "" {
		c.Paths.LocalOutput = val
	}

	if val := os.Getenv("PROCESS_OUTPUT_DIR"); val != "" {
		c.Paths.ProcessOutput = val
	}

	if val := os.Getenv("FFMPEG_PATH"); val != "" {
		c.Processing.FFmpegPath = val
	}

	return nil
}

func (c *Config) resolveAndValidatePaths() error {
	cwd, err := os.Getwd()
	if err != nil {
		return fmt.Errorf("failed to get working directory: %w", err)
	}

	// Only join with cwd if path is not already absolute
	if !filepath.IsAbs(c.Paths.BaseDir) {
		c.Paths.BaseDir = filepath.Join(cwd, c.Paths.BaseDir)
	}
	if !filepath.IsAbs(c.Paths.LocalOutput) {
		c.Paths.LocalOutput = filepath.Join(cwd, c.Paths.LocalOutput)
	}
	if !filepath.IsAbs(c.Paths.ProcessOutput) {
		c.Paths.ProcessOutput = filepath.Join(cwd, c.Paths.ProcessOutput)
	}
	if !filepath.IsAbs(c.Paths.ManifestDir) {
		c.Paths.ManifestDir = filepath.Join(cwd, c.Paths.ManifestDir)
	}
	if !filepath.IsAbs(c.Paths.PersistenceFile) {
		c.Paths.PersistenceFile = filepath.Join(c.Paths.BaseDir, c.Paths.PersistenceFile)
	}

	requiredDirs := []string{
		c.Paths.BaseDir,
		c.Paths.LocalOutput,
		c.Paths.ProcessOutput,
		c.Paths.ManifestDir,
	}

	for _, dir := range requiredDirs {
		if err := os.MkdirAll(dir, 0755); err != nil {
			return fmt.Errorf("failed to create directory %s: %w", dir, err)
		}
	}

	if c.NAS.EnableTransfer && c.NAS.OutputPath == "" {
		return fmt.Errorf("NAS output path is required when transfer is enabled")
	}

	if c.Processing.Enabled && c.Processing.FFmpegPath == "" {
		return fmt.Errorf("FFmpeg path is required when processing is enabled")
	}

	return nil
}

func (c *Config) GetEventPath(eventName string) string {
	return filepath.Join(c.Paths.LocalOutput, eventName)
}

func (c *Config) GetManifestPath(eventName string) string {
	return filepath.Join(c.Paths.ManifestDir, eventName+".json")
}

func (c *Config) GetNASEventPath(eventName string) string {
	return filepath.Join(c.NAS.OutputPath, eventName)
}

func (c *Config) GetProcessOutputPath(eventName string) string {
	return filepath.Join(c.Paths.ProcessOutput, eventName)
}

func (c *Config) GetQualityPath(eventName, quality string) string {
	return filepath.Join(c.GetEventPath(eventName), quality)
}
@@ -1,181 +0,0 @@
package config

import (
	"os"
	"path/filepath"
	"strings"
	"testing"
	"time"
)

func TestConfig_Load(t *testing.T) {
	// Save original env vars
	originalVars := map[string]string{
		"WORKER_COUNT":        os.Getenv("WORKER_COUNT"),
		"NAS_USERNAME":        os.Getenv("NAS_USERNAME"),
		"LOCAL_OUTPUT_DIR":    os.Getenv("LOCAL_OUTPUT_DIR"),
		"ENABLE_NAS_TRANSFER": os.Getenv("ENABLE_NAS_TRANSFER"),
	}
	defer func() {
		// Restore original env vars
		for key, value := range originalVars {
			if value == "" {
				os.Unsetenv(key)
			} else {
				os.Setenv(key, value)
			}
		}
	}()

	// Test default config load
	cfg, err := Load()
	if err != nil {
		t.Fatalf("Load() failed: %v", err)
	}

	// Verify defaults
	if cfg.Core.WorkerCount != 4 {
		t.Errorf("Expected WorkerCount=4, got %d", cfg.Core.WorkerCount)
	}
	if cfg.Core.RefreshDelay != 3*time.Second {
		t.Errorf("Expected RefreshDelay=3s, got %v", cfg.Core.RefreshDelay)
	}
	if !cfg.NAS.EnableTransfer {
		t.Errorf("Expected NAS.EnableTransfer=true, got false")
	}

	// Test environment variable override
	os.Setenv("WORKER_COUNT", "8")
	os.Setenv("NAS_USERNAME", "testuser")
	os.Setenv("ENABLE_NAS_TRANSFER", "false")
	os.Setenv("LOCAL_OUTPUT_DIR", "custom_data")

	cfg2, err := Load()
	if err != nil {
		t.Fatalf("Load() with env vars failed: %v", err)
	}

	if cfg2.Core.WorkerCount != 8 {
		t.Errorf("Expected WorkerCount=8 from env, got %d", cfg2.Core.WorkerCount)
	}
	if cfg2.NAS.Username != "testuser" {
		t.Errorf("Expected NAS.Username='testuser' from env, got %s", cfg2.NAS.Username)
	}
	if cfg2.NAS.EnableTransfer {
		t.Errorf("Expected NAS.EnableTransfer=false from env, got true")
	}
	if !strings.Contains(cfg2.Paths.LocalOutput, "custom_data") {
		t.Errorf("Expected LocalOutput to contain 'custom_data', got %s", cfg2.Paths.LocalOutput)
	}
}

func TestConfig_PathMethods(t *testing.T) {
	cfg, err := Load()
	if err != nil {
		t.Fatalf("Load() failed: %v", err)
	}

	testEvent := "test-event"
	testQuality := "1080p"

	// Test GetEventPath
	eventPath := cfg.GetEventPath(testEvent)
	if !strings.Contains(eventPath, testEvent) {
		t.Errorf("GetEventPath should contain event name, got %s", eventPath)
	}

	// Test GetManifestPath
	manifestPath := cfg.GetManifestPath(testEvent)
	if !strings.Contains(manifestPath, testEvent) {
		t.Errorf("GetManifestPath should contain event name, got %s", manifestPath)
	}
	if !strings.HasSuffix(manifestPath, ".json") {
		t.Errorf("GetManifestPath should end with .json, got %s", manifestPath)
	}

	// Test GetNASEventPath
	nasPath := cfg.GetNASEventPath(testEvent)
	if !strings.Contains(nasPath, testEvent) {
		t.Errorf("GetNASEventPath should contain event name, got %s", nasPath)
	}

	// Test GetProcessOutputPath
	processPath := cfg.GetProcessOutputPath(testEvent)
	if !strings.Contains(processPath, testEvent) {
		t.Errorf("GetProcessOutputPath should contain event name, got %s", processPath)
	}

	// Test GetQualityPath
	qualityPath := cfg.GetQualityPath(testEvent, testQuality)
	if !strings.Contains(qualityPath, testEvent) {
		t.Errorf("GetQualityPath should contain event name, got %s", qualityPath)
	}
	if !strings.Contains(qualityPath, testQuality) {
		t.Errorf("GetQualityPath should contain quality, got %s", qualityPath)
	}
}

func TestConfig_PathValidation(t *testing.T) {
	// Create a temporary directory for testing
	tempDir, err := os.MkdirTemp("", "config_test_*")
	if err != nil {
		t.Fatalf("Failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tempDir)

	// Set environment variables to use temp directory
	os.Setenv("LOCAL_OUTPUT_DIR", filepath.Join(tempDir, "data"))
	defer os.Unsetenv("LOCAL_OUTPUT_DIR")

	cfg, err := Load()
	if err != nil {
		t.Fatalf("Load() failed: %v", err)
	}

	// Verify directories were created
	if _, err := os.Stat(cfg.Paths.LocalOutput); os.IsNotExist(err) {
		t.Errorf("LocalOutput directory should have been created: %s", cfg.Paths.LocalOutput)
	}
	if _, err := os.Stat(cfg.Paths.ProcessOutput); os.IsNotExist(err) {
		t.Errorf("ProcessOutput directory should have been created: %s", cfg.Paths.ProcessOutput)
	}
}

func TestConfig_ValidationErrors(t *testing.T) {
	// Save original env vars
	originalNASPath := os.Getenv("NAS_OUTPUT_PATH")
	originalFFmpegPath := os.Getenv("FFMPEG_PATH")
	defer func() {
		if originalNASPath == "" {
			os.Unsetenv("NAS_OUTPUT_PATH")
		} else {
			os.Setenv("NAS_OUTPUT_PATH", originalNASPath)
		}
		if originalFFmpegPath == "" {
			os.Unsetenv("FFMPEG_PATH")
		} else {
			os.Setenv("FFMPEG_PATH", originalFFmpegPath)
		}
	}()

	// Note: Validation tests are limited because the default config
	// has working defaults. We can test that Load() works with valid configs.

	// Test that Load works with proper paths set
	tempDir2, err := os.MkdirTemp("", "config_validation_test_*")
	if err != nil {
		t.Fatalf("Failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tempDir2)

	os.Setenv("NAS_OUTPUT_PATH", "\\\\test\\path")
	os.Setenv("LOCAL_OUTPUT_DIR", tempDir2)

	cfg, err := Load()
	if err != nil {
		t.Errorf("Load() should work with valid config: %v", err)
	}
	if cfg == nil {
		t.Error("Config should not be nil")
	}
}
@@ -1,51 +1,29 @@
package constants

import (
	"m3u8-downloader/pkg/config"
	"sync"
)

var (
	globalConfig *config.Config
	configOnce   sync.Once
	configError  error
)

func GetConfig() (*config.Config, error) {
	configOnce.Do(func() {
		globalConfig, configError = config.Load()
	})
	return globalConfig, configError
}

func MustGetConfig() *config.Config {
	cfg, err := GetConfig()
	if err != nil {
		panic("Failed to load configuration: " + err.Error())
	}
	return cfg
}
import "time"

const (
	MasterURL = "https://live-fastly.flosports.tv/streams/mr159021-260419/playlist.m3u8?token=st%3D1753571418%7Eexp%3D1753571448%7Eacl%3D%2Fstreams%2Fmr159021-260419%2Fplaylist.m3u8%7Edata%3Dssai%3A0%3BuserId%3A14025903%3BstreamId%3A260419%3BmediaPackageRegion%3Afalse%3BdvrMinutes%3A360%3BtokenId%3Abadd289a-ade5-48fe-852f-7dbd1d57aca8%3Bpv%3A86400%7Ehmac2%3D8de65c26b185084a6be77e788cb0ba41be5fcac3ab86159b06f7572ca925d77ba7bd182124af2a432953d4223548f198742d1a238e937d875976cd42fe549838&mid_origin=media_store&keyName=FLOSPORTS_TOKEN_KEY_2023-08-02&streamCode=mr159021-260419"
	WorkerCount = 4
	RefreshDelay = 3
	RefreshDelay = 3 * time.Second

	HTTPUserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36"
	REFERRER = "https://www.flomarching.com"
	OutputDirPath = "./data/flo_radio"

	DefaultNASOutputPath = "\\\\HomeLabNAS\\dci\\streams"
	DefaultNASUsername = "NASAdmin"
	EnableNASTransfer = true
	NASPath = "\\\\HomeLabNAS\\dci\\streams\\2025_Atlanta"
	NASUsername = ""
	NASPassword = ""
	TransferWorkerCount = 2
	TransferRetryLimit = 3
	TransferTimeout = 30 * time.Second
	FileSettlingDelay = 5 * time.Second
	PersistencePath = "./data/transfer_queue.json"
	TransferQueueSize = 1000
	BatchSize = 10

	DefaultTransferWorkerCount = 2
	DefaultTransferRetryLimit = 3
	DefaultTransferTimeout = 30
	DefaultFileSettlingDelay = 5
	DefaultTransferQueueSize = 100000
	DefaultBatchSize = 1000

	DefaultCleanupBatchSize = 1000
	DefaultRetainLocalHours = 0

	DefaultProcessWorkerCount = 2
	DefaultFFmpegPath = "ffmpeg"
	CleanupAfterTransfer = true
	CleanupBatchSize = 10
	RetainLocalHours = 0
)
@@ -1,240 +0,0 @@
package constants

import (
	"os"
	"strings"
	"testing"
	"time"
)

func TestGetConfig(t *testing.T) {
	// Test successful config loading
	cfg, err := GetConfig()
	if err != nil {
		t.Fatalf("GetConfig() failed: %v", err)
	}
	if cfg == nil {
		t.Fatal("GetConfig() returned nil config")
	}

	// Test that subsequent calls return the same instance (singleton)
	cfg2, err := GetConfig()
	if err != nil {
		t.Fatalf("Second GetConfig() call failed: %v", err)
	}

	// Both should be the same instance due to sync.Once
	if cfg != cfg2 {
		t.Error("GetConfig() should return the same instance (singleton)")
	}
}

func TestMustGetConfig(t *testing.T) {
	// This should not panic with valid environment
	cfg := MustGetConfig()
	if cfg == nil {
		t.Fatal("MustGetConfig() returned nil")
	}

	// Verify it returns a properly initialized config
	if cfg.Core.WorkerCount <= 0 {
		t.Errorf("Expected positive WorkerCount, got %d", cfg.Core.WorkerCount)
	}
	if cfg.Core.RefreshDelay <= 0 {
		t.Errorf("Expected positive RefreshDelay, got %v", cfg.Core.RefreshDelay)
	}
}

func TestMustGetConfig_Panic(t *testing.T) {
	// We can't easily test the panic scenario without breaking the singleton,
	// but we can test that MustGetConfig works normally
	defer func() {
		if r := recover(); r != nil {
			t.Errorf("MustGetConfig() panicked unexpectedly: %v", r)
		}
	}()

	cfg := MustGetConfig()
	if cfg == nil {
		t.Fatal("MustGetConfig() returned nil without panicking")
	}
}

func TestConfigSingleton(t *testing.T) {
	// Reset the singleton for this test (this is a bit hacky but necessary for testing)
	// We'll create multiple goroutines to test concurrent access

	configs := make(chan interface{}, 10)

	// Launch multiple goroutines to call GetConfig concurrently
	for i := 0; i < 10; i++ {
		go func() {
			cfg, _ := GetConfig()
			configs <- cfg
		}()
	}

	// Collect all configs
	var allConfigs []interface{}
	for i := 0; i < 10; i++ {
		allConfigs = append(allConfigs, <-configs)
	}

	// All should be the same instance
	firstConfig := allConfigs[0]
	for i, cfg := range allConfigs {
		if cfg != firstConfig {
			t.Errorf("Config %d is different from first config", i)
		}
	}
}

func TestConstants_Values(t *testing.T) {
	// Test that constants have expected values
	if WorkerCount != 4 {
		t.Errorf("Expected WorkerCount=4, got %d", WorkerCount)
	}
	if RefreshDelay != 3 {
		t.Errorf("Expected RefreshDelay=3, got %d", RefreshDelay)
	}

	// Test HTTP constants
	if HTTPUserAgent == "" {
		t.Error("HTTPUserAgent should not be empty")
	}
	if !strings.Contains(HTTPUserAgent, "Mozilla") {
		t.Error("HTTPUserAgent should contain 'Mozilla'")
	}
	if REFERRER != "https://www.flomarching.com" {
		t.Errorf("Expected REFERRER='https://www.flomarching.com', got '%s'", REFERRER)
	}

	// Test default NAS constants
	if DefaultNASOutputPath != "\\\\HomeLabNAS\\dci\\streams" {
		t.Errorf("Expected DefaultNASOutputPath='\\\\HomeLabNAS\\dci\\streams', got '%s'", DefaultNASOutputPath)
	}
	if DefaultNASUsername != "NASAdmin" {
		t.Errorf("Expected DefaultNASUsername='NASAdmin', got '%s'", DefaultNASUsername)
	}

	// Test transfer constants
	if DefaultTransferWorkerCount != 2 {
		t.Errorf("Expected DefaultTransferWorkerCount=2, got %d", DefaultTransferWorkerCount)
	}
	if DefaultTransferRetryLimit != 3 {
		t.Errorf("Expected DefaultTransferRetryLimit=3, got %d", DefaultTransferRetryLimit)
	}
	if DefaultTransferTimeout != 30 {
		t.Errorf("Expected DefaultTransferTimeout=30, got %d", DefaultTransferTimeout)
	}
	if DefaultFileSettlingDelay != 5 {
		t.Errorf("Expected DefaultFileSettlingDelay=5, got %d", DefaultFileSettlingDelay)
	}
	if DefaultTransferQueueSize != 100000 {
		t.Errorf("Expected DefaultTransferQueueSize=100000, got %d", DefaultTransferQueueSize)
	}
	if DefaultBatchSize != 1000 {
		t.Errorf("Expected DefaultBatchSize=1000, got %d", DefaultBatchSize)
	}

	// Test cleanup constants
	if DefaultCleanupBatchSize != 1000 {
		t.Errorf("Expected DefaultCleanupBatchSize=1000, got %d", DefaultCleanupBatchSize)
	}
	if DefaultRetainLocalHours != 0 {
		t.Errorf("Expected DefaultRetainLocalHours=0, got %d", DefaultRetainLocalHours)
	}

	// Test processing constants
	if DefaultProcessWorkerCount != 2 {
		t.Errorf("Expected DefaultProcessWorkerCount=2, got %d", DefaultProcessWorkerCount)
	}
	if DefaultFFmpegPath != "ffmpeg" {
		t.Errorf("Expected DefaultFFmpegPath='ffmpeg', got '%s'", DefaultFFmpegPath)
	}
}

func TestConfig_Integration(t *testing.T) {
	cfg := MustGetConfig()

	// Test that config values match or override constants appropriately
	if cfg.Core.WorkerCount != WorkerCount && os.Getenv("WORKER_COUNT") == "" {
		t.Errorf("Config WorkerCount (%d) should match constant (%d) when no env override", cfg.Core.WorkerCount, WorkerCount)
	}

	if cfg.Core.RefreshDelay != time.Duration(RefreshDelay)*time.Second && os.Getenv("REFRESH_DELAY_SECONDS") == "" {
		t.Errorf("Config RefreshDelay (%v) should match constant (%v) when no env override", cfg.Core.RefreshDelay, time.Duration(RefreshDelay)*time.Second)
	}

	// Test HTTP settings
	if cfg.HTTP.UserAgent != HTTPUserAgent {
		t.Errorf("Config UserAgent (%s) should match constant (%s)", cfg.HTTP.UserAgent, HTTPUserAgent)
	}
	if cfg.HTTP.Referer != REFERRER {
		t.Errorf("Config Referer (%s) should match constant (%s)", cfg.HTTP.Referer, REFERRER)
	}
}

func TestConfig_PathMethods(t *testing.T) {
	cfg := MustGetConfig()

	testEvent := "test-event-123"
	testQuality := "1080p"

	// Test GetEventPath
	eventPath := cfg.GetEventPath(testEvent)
	if !strings.Contains(eventPath, testEvent) {
		t.Errorf("GetEventPath should contain event name '%s', got: %s", testEvent, eventPath)
	}

	// Test GetManifestPath
	manifestPath := cfg.GetManifestPath(testEvent)
	if !strings.Contains(manifestPath, testEvent) {
		t.Errorf("GetManifestPath should contain event name '%s', got: %s", testEvent, manifestPath)
	}
	if !strings.HasSuffix(manifestPath, ".json") {
		t.Errorf("GetManifestPath should end with '.json', got: %s", manifestPath)
	}

	// Test GetNASEventPath
	nasPath := cfg.GetNASEventPath(testEvent)
	if !strings.Contains(nasPath, testEvent) {
		t.Errorf("GetNASEventPath should contain event name '%s', got: %s", testEvent, nasPath)
	}

	// Test GetProcessOutputPath
	processPath := cfg.GetProcessOutputPath(testEvent)
	if !strings.Contains(processPath, testEvent) {
		t.Errorf("GetProcessOutputPath should contain event name '%s', got: %s", testEvent, processPath)
	}

	// Test GetQualityPath
	qualityPath := cfg.GetQualityPath(testEvent, testQuality)
	if !strings.Contains(qualityPath, testEvent) {
		t.Errorf("GetQualityPath should contain event name '%s', got: %s", testEvent, qualityPath)
|
||||
}
|
||||
if !strings.Contains(qualityPath, testQuality) {
|
||||
t.Errorf("GetQualityPath should contain quality '%s', got: %s", testQuality, qualityPath)
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfig_DefaultValues(t *testing.T) {
|
||||
cfg := MustGetConfig()
|
||||
|
||||
// Test that default values are reasonable
|
||||
if cfg.Transfer.QueueSize != DefaultTransferQueueSize {
|
||||
t.Errorf("Expected transfer queue size %d, got %d", DefaultTransferQueueSize, cfg.Transfer.QueueSize)
|
||||
}
|
||||
|
||||
if cfg.Transfer.BatchSize != DefaultBatchSize {
|
||||
t.Errorf("Expected transfer batch size %d, got %d", DefaultBatchSize, cfg.Transfer.BatchSize)
|
||||
}
|
||||
|
||||
if cfg.Processing.WorkerCount != DefaultProcessWorkerCount {
|
||||
t.Errorf("Expected processing worker count %d, got %d", DefaultProcessWorkerCount, cfg.Processing.WorkerCount)
|
||||
}
|
||||
|
||||
if cfg.Cleanup.BatchSize != DefaultCleanupBatchSize {
|
||||
t.Errorf("Expected cleanup batch size %d, got %d", DefaultCleanupBatchSize, cfg.Cleanup.BatchSize)
|
||||
}
|
||||
}
|
@ -5,50 +5,6 @@ import (
	"fmt"
)

// HTTPError represents an HTTP error with status code and message
type HTTPError struct {
	StatusCode int
	Message    string
}

// Error returns the string representation of the HTTP error
func (e *HTTPError) Error() string {
	return fmt.Sprintf("HTTP %d: %s", e.StatusCode, e.Message)
}

// Is implements error comparison for errors.Is
func (e *HTTPError) Is(target error) bool {
	var httpErr *HTTPError
	if errors.As(target, &httpErr) {
		return e.StatusCode == httpErr.StatusCode
	}
	return false
}

// NewHTTPError creates a new HTTP error
func NewHTTPError(statusCode int, message string) error {
	return &HTTPError{
		StatusCode: statusCode,
		Message:    message,
	}
}

// IsHTTPError checks if an error is an HTTP error
func IsHTTPError(err error) bool {
	var httpErr *HTTPError
	return errors.As(err, &httpErr)
}

// GetHTTPStatusCode extracts the status code from an HTTP error
func GetHTTPStatusCode(err error) int {
	var httpErr *HTTPError
	if errors.As(err, &httpErr) {
		return httpErr.StatusCode
	}
	return 0
}

// Legacy support for existing code
type HttpError struct {
	Code int
}

@ -1,318 +0,0 @@
package httpClient

import (
	"fmt"
	"net/http"
	"strings"
	"testing"
)

func TestHTTPError_Error(t *testing.T) {
	tests := []struct {
		name       string
		statusCode int
		message    string
		want       string
	}{
		{
			name:       "basic http error",
			statusCode: 404,
			message:    "Not Found",
			want:       "HTTP 404: Not Found",
		},
		{
			name:       "server error",
			statusCode: 500,
			message:    "Internal Server Error",
			want:       "HTTP 500: Internal Server Error",
		},
		{
			name:       "unauthorized error",
			statusCode: 401,
			message:    "Unauthorized",
			want:       "HTTP 401: Unauthorized",
		},
		{
			name:       "empty message",
			statusCode: 400,
			message:    "",
			want:       "HTTP 400: ",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			err := &HTTPError{
				StatusCode: tt.statusCode,
				Message:    tt.message,
			}

			got := err.Error()
			if got != tt.want {
				t.Errorf("HTTPError.Error() = %q, want %q", got, tt.want)
			}
		})
	}
}

func TestHTTPError_Is(t *testing.T) {
	err404 := &HTTPError{StatusCode: 404, Message: "Not Found"}
	err500 := &HTTPError{StatusCode: 500, Message: "Server Error"}
	otherErr404 := &HTTPError{StatusCode: 404, Message: "Different message"}
	regularError := fmt.Errorf("regular error")

	tests := []struct {
		name   string
		err    error
		target error
		want   bool
	}{
		{
			name:   "same error instance",
			err:    err404,
			target: err404,
			want:   true,
		},
		{
			name:   "different HTTP errors with same status",
			err:    err404,
			target: otherErr404,
			want:   true,
		},
		{
			name:   "different HTTP errors with different status",
			err:    err404,
			target: err500,
			want:   false,
		},
		{
			name:   "HTTP error vs regular error",
			err:    err404,
			target: regularError,
			want:   false,
		},
		{
			name:   "regular error vs HTTP error",
			err:    regularError,
			target: err404,
			want:   false,
		},
		{
			name:   "nil target",
			err:    err404,
			target: nil,
			want:   false,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			var got bool
			if httpErr, ok := tt.err.(*HTTPError); ok {
				got = httpErr.Is(tt.target)
			} else {
				got = false // Non-HTTP errors return false
			}
			if got != tt.want {
				t.Errorf("HTTPError.Is() = %v, want %v", got, tt.want)
			}
		})
	}
}

func TestIsHTTPError(t *testing.T) {
	httpErr := &HTTPError{StatusCode: 404, Message: "Not Found"}
	regularErr := fmt.Errorf("regular error")

	tests := []struct {
		name string
		err  error
		want bool
	}{
		{
			name: "http error",
			err:  httpErr,
			want: true,
		},
		{
			name: "regular error",
			err:  regularErr,
			want: false,
		},
		{
			name: "nil error",
			err:  nil,
			want: false,
		},
		{
			name: "wrapped http error",
			err:  fmt.Errorf("wrapped: %w", httpErr),
			want: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := IsHTTPError(tt.err)
			if got != tt.want {
				t.Errorf("IsHTTPError() = %v, want %v", got, tt.want)
			}
		})
	}
}

func TestGetHTTPStatusCode(t *testing.T) {
	tests := []struct {
		name string
		err  error
		want int
	}{
		{
			name: "http error 404",
			err:  &HTTPError{StatusCode: 404, Message: "Not Found"},
			want: 404,
		},
		{
			name: "http error 500",
			err:  &HTTPError{StatusCode: 500, Message: "Server Error"},
			want: 500,
		},
		{
			name: "wrapped http error",
			err:  fmt.Errorf("wrapped: %w", &HTTPError{StatusCode: 403, Message: "Forbidden"}),
			want: 403,
		},
		{
			name: "regular error",
			err:  fmt.Errorf("regular error"),
			want: 0,
		},
		{
			name: "nil error",
			err:  nil,
			want: 0,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := GetHTTPStatusCode(tt.err)
			if got != tt.want {
				t.Errorf("GetHTTPStatusCode() = %v, want %v", got, tt.want)
			}
		})
	}
}

func TestNewHTTPError(t *testing.T) {
	statusCode := 404
	message := "Page not found"

	err := NewHTTPError(statusCode, message)

	// Check type
	httpErr, ok := err.(*HTTPError)
	if !ok {
		t.Fatalf("NewHTTPError should return *HTTPError, got %T", err)
	}

	// Check fields
	if httpErr.StatusCode != statusCode {
		t.Errorf("Expected StatusCode=%d, got %d", statusCode, httpErr.StatusCode)
	}
	if httpErr.Message != message {
		t.Errorf("Expected Message=%q, got %q", message, httpErr.Message)
	}

	// Check error string
	expectedErrorString := fmt.Sprintf("HTTP %d: %s", statusCode, message)
	if httpErr.Error() != expectedErrorString {
		t.Errorf("Expected error string=%q, got %q", expectedErrorString, httpErr.Error())
	}
}

func TestHTTPError_StatusCodeChecks(t *testing.T) {
	tests := []struct {
		name       string
		statusCode int
		isClient   bool
		isServer   bool
	}{
		{"200 OK", 200, false, false},
		{"400 Bad Request", 400, true, false},
		{"401 Unauthorized", 401, true, false},
		{"404 Not Found", 404, true, false},
		{"499 Client Error", 499, true, false},
		{"500 Server Error", 500, false, true},
		{"502 Bad Gateway", 502, false, true},
		{"599 Server Error", 599, false, true},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			err := &HTTPError{StatusCode: tt.statusCode, Message: "test"}

			isClient := err.StatusCode >= 400 && err.StatusCode < 500
			isServer := err.StatusCode >= 500 && err.StatusCode < 600

			if isClient != tt.isClient {
				t.Errorf("Status %d: expected isClient=%v, got %v", tt.statusCode, tt.isClient, isClient)
			}
			if isServer != tt.isServer {
				t.Errorf("Status %d: expected isServer=%v, got %v", tt.statusCode, tt.isServer, isServer)
			}
		})
	}
}

func TestHTTPError_Integration(t *testing.T) {
	// Test that HTTPError integrates well with standard error handling
	err := NewHTTPError(http.StatusNotFound, "Resource not found")

	// Should be able to use with errors.Is
	target := &HTTPError{StatusCode: http.StatusNotFound}
	if !err.(*HTTPError).Is(target) {
		t.Error("HTTPError should match target with same status code")
	}

	// Should be detectable as HTTPError
	if !IsHTTPError(err) {
		t.Error("Should be detectable as HTTPError")
	}

	// Should return correct status code
	if GetHTTPStatusCode(err) != http.StatusNotFound {
		t.Error("Should return correct status code")
	}

	// Should have meaningful string representation
	errorString := err.Error()
	if !strings.Contains(errorString, "404") {
		t.Error("Error string should contain status code")
	}
	if !strings.Contains(errorString, "Resource not found") {
		t.Error("Error string should contain message")
	}
}

func TestHTTPError_EdgeCases(t *testing.T) {
	// Test with zero status code
	err := NewHTTPError(0, "Zero status")
	if err.Error() != "HTTP 0: Zero status" {
		t.Errorf("Unexpected error string for zero status: %s", err.Error())
	}

	// Test with very long message
	longMessage := strings.Repeat("a", 1000)
	err = NewHTTPError(500, longMessage)
	if !strings.Contains(err.Error(), longMessage) {
		t.Error("Long message should be preserved")
	}

	// Test status code boundaries
	for _, code := range []int{399, 400, 499, 500, 599, 600} {
		err := &HTTPError{StatusCode: code}
		// Should not panic
		_ = err.Error()
	}
}
@ -1,85 +0,0 @@
package media

import (
	"encoding/json"
	"log"
	"m3u8-downloader/pkg/constants"
	"m3u8-downloader/pkg/utils"
	"os"
	"sort"
)

type ManifestWriter struct {
	ManifestPath string
	Segments     []ManifestItem
	Index        map[string]*ManifestItem
}

type ManifestItem struct {
	SeqNo      string `json:"seqNo"`
	Resolution string `json:"resolution"`
}

func NewManifestWriter(eventName string) *ManifestWriter {
	cfg := constants.MustGetConfig()
	return &ManifestWriter{
		ManifestPath: cfg.GetManifestPath(eventName),
		Segments:     make([]ManifestItem, 0),
		Index:        make(map[string]*ManifestItem),
	}
}

func (m *ManifestWriter) AddOrUpdateSegment(seqNo string, resolution string) {
	if m.Index == nil {
		m.Index = make(map[string]*ManifestItem)
	}

	if m.Segments == nil {
		m.Segments = make([]ManifestItem, 0)
	}

	if existing, ok := m.Index[seqNo]; ok {
		if resolution > existing.Resolution {
			existing.Resolution = resolution
		}
		return
	} else {
		item := ManifestItem{
			SeqNo:      seqNo,
			Resolution: resolution,
		}
		m.Segments = append(m.Segments, item)
		m.Index[seqNo] = &item
	}
}

func (m *ManifestWriter) WriteManifest() {
	sort.Slice(m.Segments, func(i, j int) bool {
		return m.Segments[i].SeqNo < m.Segments[j].SeqNo
	})

	data, err := json.MarshalIndent(m.Segments, "", "  ")
	if err != nil {
		log.Printf("Failed to marshal manifest: %v", err)
		return
	}

	if err := utils.ValidateWritablePath(m.ManifestPath); err != nil {
		log.Printf("Manifest path validation failed: %v", err)
		return
	}

	file, err := os.Create(m.ManifestPath)
	if err != nil {
		log.Printf("Failed to create manifest file: %v", err)
		return
	}

	defer file.Close()

	_, err = file.Write(data)
	if err != nil {
		log.Printf("Failed to write manifest file: %v", err)
		return
	}
}
@ -1,241 +0,0 @@
package media

import (
	"encoding/json"
	"os"
	"path/filepath"
	"testing"
)

func TestManifestWriter_NewManifestWriter(t *testing.T) {
	// Set up temporary environment for testing
	tempDir, err := os.MkdirTemp("", "manifest_test_*")
	if err != nil {
		t.Fatalf("Failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tempDir)

	// Set environment variable to use temp directory
	os.Setenv("LOCAL_OUTPUT_DIR", tempDir)
	defer os.Unsetenv("LOCAL_OUTPUT_DIR")

	eventName := "test-event"
	writer := NewManifestWriter(eventName)

	if writer == nil {
		t.Fatal("NewManifestWriter() returned nil")
	}

	if writer.Segments == nil {
		t.Error("Segments should be initialized")
	}
	if writer.Index == nil {
		t.Error("Index should be initialized")
	}
	if len(writer.Segments) != 0 {
		t.Errorf("Segments should be empty, got %d items", len(writer.Segments))
	}
}

func TestManifestWriter_AddOrUpdateSegment(t *testing.T) {
	writer := &ManifestWriter{
		ManifestPath: "test.json",
		Segments:     make([]ManifestItem, 0),
		Index:        make(map[string]*ManifestItem),
	}

	// Test adding new segment
	writer.AddOrUpdateSegment("1001", "1080p")

	if len(writer.Segments) != 1 {
		t.Errorf("Expected 1 segment, got %d", len(writer.Segments))
	}
	if writer.Segments[0].SeqNo != "1001" {
		t.Errorf("Expected SeqNo '1001', got '%s'", writer.Segments[0].SeqNo)
	}
	if writer.Segments[0].Resolution != "1080p" {
		t.Errorf("Expected Resolution '1080p', got '%s'", writer.Segments[0].Resolution)
	}

	// Test updating existing segment with higher resolution
	writer.AddOrUpdateSegment("1001", "1440p")

	if len(writer.Segments) != 1 {
		t.Errorf("Segments count should remain 1 after update, got %d", len(writer.Segments))
	}
	if writer.Segments[0].Resolution != "1440p" {
		t.Errorf("Expected updated resolution '1440p', got '%s'", writer.Segments[0].Resolution)
	}

	// Test updating existing segment with lower resolution (should not change)
	writer.AddOrUpdateSegment("1001", "720p")

	if writer.Segments[0].Resolution != "1440p" {
		t.Errorf("Resolution should remain '1440p', got '%s'", writer.Segments[0].Resolution)
	}

	// Test adding different segment
	writer.AddOrUpdateSegment("1002", "720p")

	if len(writer.Segments) != 2 {
		t.Errorf("Expected 2 segments, got %d", len(writer.Segments))
	}
}

func TestManifestWriter_AddOrUpdateSegment_NilFields(t *testing.T) {
	writer := &ManifestWriter{
		ManifestPath: "test.json",
	}

	// Test with nil fields (should initialize them)
	writer.AddOrUpdateSegment("1001", "1080p")

	if writer.Segments == nil {
		t.Error("Segments should be initialized")
	}
	if writer.Index == nil {
		t.Error("Index should be initialized")
	}
	if len(writer.Segments) != 1 {
		t.Errorf("Expected 1 segment, got %d", len(writer.Segments))
	}
}

func TestManifestWriter_WriteManifest(t *testing.T) {
	tempDir, err := os.MkdirTemp("", "manifest_test_*")
	if err != nil {
		t.Fatalf("Failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tempDir)

	manifestPath := filepath.Join(tempDir, "test-manifest.json")
	writer := &ManifestWriter{
		ManifestPath: manifestPath,
		Segments:     make([]ManifestItem, 0),
		Index:        make(map[string]*ManifestItem),
	}

	// Add some test segments out of order
	writer.AddOrUpdateSegment("1003", "1080p")
	writer.AddOrUpdateSegment("1001", "720p")
	writer.AddOrUpdateSegment("1002", "1080p")

	// Write manifest
	writer.WriteManifest()

	// Verify file was created
	if _, err := os.Stat(manifestPath); os.IsNotExist(err) {
		t.Fatalf("Manifest file was not created: %s", manifestPath)
	}

	// Read and verify content
	content, err := os.ReadFile(manifestPath)
	if err != nil {
		t.Fatalf("Failed to read manifest file: %v", err)
	}

	var segments []ManifestItem
	err = json.Unmarshal(content, &segments)
	if err != nil {
		t.Fatalf("Failed to unmarshal manifest JSON: %v", err)
	}

	// Verify segments are sorted by sequence number
	if len(segments) != 3 {
		t.Errorf("Expected 3 segments in manifest, got %d", len(segments))
	}

	expectedOrder := []string{"1001", "1002", "1003"}
	for i, segment := range segments {
		if segment.SeqNo != expectedOrder[i] {
			t.Errorf("Segment %d: expected SeqNo '%s', got '%s'", i, expectedOrder[i], segment.SeqNo)
		}
	}

	// Verify content structure
	if segments[0].Resolution != "720p" {
		t.Errorf("Expected first segment resolution '720p', got '%s'", segments[0].Resolution)
	}
	if segments[1].Resolution != "1080p" {
		t.Errorf("Expected second segment resolution '1080p', got '%s'", segments[1].Resolution)
	}
}

func TestManifestWriter_WriteManifest_EmptySegments(t *testing.T) {
	tempDir, err := os.MkdirTemp("", "manifest_test_*")
	if err != nil {
		t.Fatalf("Failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tempDir)

	manifestPath := filepath.Join(tempDir, "empty-manifest.json")
	writer := &ManifestWriter{
		ManifestPath: manifestPath,
		Segments:     make([]ManifestItem, 0),
		Index:        make(map[string]*ManifestItem),
	}

	// Write empty manifest
	writer.WriteManifest()

	// Verify file was created
	if _, err := os.Stat(manifestPath); os.IsNotExist(err) {
		t.Fatalf("Manifest file was not created: %s", manifestPath)
	}

	// Read and verify content is empty array
	content, err := os.ReadFile(manifestPath)
	if err != nil {
		t.Fatalf("Failed to read manifest file: %v", err)
	}

	var segments []ManifestItem
	err = json.Unmarshal(content, &segments)
	if err != nil {
		t.Fatalf("Failed to unmarshal manifest JSON: %v", err)
	}

	if len(segments) != 0 {
		t.Errorf("Expected empty segments array, got %d items", len(segments))
	}
}

func TestManifestWriter_WriteManifest_InvalidPath(t *testing.T) {
	writer := &ManifestWriter{
		ManifestPath: "/invalid/path/that/does/not/exist/manifest.json",
		Segments:     []ManifestItem{{SeqNo: "1001", Resolution: "1080p"}},
		Index:        make(map[string]*ManifestItem),
	}

	// This should not panic, just fail gracefully
	writer.WriteManifest()

	// Test passes if no panic occurs
}

func TestManifestItem_JSONSerialization(t *testing.T) {
	item := ManifestItem{
		SeqNo:      "1001",
		Resolution: "1080p",
	}

	// Test marshaling
	data, err := json.Marshal(item)
	if err != nil {
		t.Fatalf("Failed to marshal ManifestItem: %v", err)
	}

	// Test unmarshaling
	var unmarshaled ManifestItem
	err = json.Unmarshal(data, &unmarshaled)
	if err != nil {
		t.Fatalf("Failed to unmarshal ManifestItem: %v", err)
	}

	if unmarshaled.SeqNo != item.SeqNo {
		t.Errorf("SeqNo mismatch: expected '%s', got '%s'", item.SeqNo, unmarshaled.SeqNo)
	}
	if unmarshaled.Resolution != item.Resolution {
		t.Errorf("Resolution mismatch: expected '%s', got '%s'", item.Resolution, unmarshaled.Resolution)
	}
}
@ -2,7 +2,6 @@ package media

import (
	"context"
	"errors"
	"fmt"
	"github.com/grafov/m3u8"
	"log"
@ -22,7 +21,6 @@ type StreamVariant struct {
	ID         int
	Resolution string
	OutputDir  string
	Writer     *ManifestWriter
}

func extractResolution(variant *m3u8.Variant) string {
@ -46,7 +44,7 @@ func extractResolution(variant *m3u8.Variant) string {
	}
}

func GetAllVariants(masterURL string, outputDir string, writer *ManifestWriter) ([]*StreamVariant, error) {
func GetAllVariants(masterURL string) ([]*StreamVariant, error) {
	client := &http.Client{}
	req, _ := http.NewRequest("GET", masterURL, nil)
	req.Header.Set("User-Agent", constants.HTTPUserAgent)
@ -71,8 +69,7 @@ func GetAllVariants(masterURL string, outputDir string, writer *ManifestWriter)
		BaseURL:    base,
		ID:         0,
		Resolution: "unknown",
		OutputDir:  path.Join(outputDir, "unknown"),
		Writer:     writer,
		OutputDir:  path.Join(constants.NASPath, "unknown"),
	}}, nil
}

@ -86,7 +83,7 @@ func GetAllVariants(masterURL string, outputDir string, writer *ManifestWriter)
	vURL, _ := url.Parse(v.URI)
	fullURL := base.ResolveReference(vURL).String()
	resolution := extractResolution(v)
	outputDir := path.Join(outputDir, resolution)
	outputDir := path.Join(constants.NASPath, resolution)
	variants = append(variants, &StreamVariant{
		URL:       fullURL,
		Bandwidth: v.Bandwidth,
@ -99,7 +96,7 @@ func GetAllVariants(masterURL string, outputDir string, writer *ManifestWriter)
	return variants, nil
}

func VariantDownloader(ctx context.Context, variant *StreamVariant, sem chan struct{}, manifest *ManifestWriter) {
func VariantDownloader(ctx context.Context, variant *StreamVariant, sem chan struct{}) {
	log.Printf("Starting %s variant downloader (bandwidth: %d)", variant.Resolution, variant.Bandwidth)
	ticker := time.NewTicker(constants.RefreshDelay)
	defer ticker.Stop()
@ -145,18 +142,9 @@ func VariantDownloader(ctx context.Context, variant *StreamVariant, sem chan str

	err := DownloadSegment(ctx, client, j.AbsoluteURL(), j.Variant.OutputDir)
	name := strings.TrimSuffix(path.Base(j.Key()), path.Ext(path.Base(j.Key())))

	if err == nil {
		log.Printf("✓ %s downloaded segment %s", j.Variant.Resolution, name)
		return
	}

	if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
		// Suppress log: shutdown in progress
		return
	}

	if httpClient.IsHTTPStatus(err, 403) {
	} else if httpClient.IsHTTPStatus(err, 403) {
		log.Printf("✗ %s failed to download segment %s (403)", j.Variant.Resolution, name)
	} else {
		log.Printf("✗ %s failed to download segment %s: %v", j.Variant.Resolution, name, err)

@ -1,12 +0,0 @@
package nas

import "time"

type NASConfig struct {
	Path       string
	Username   string
	Password   string
	Timeout    time.Duration
	RetryLimit int
	VerifySize bool
}
pkg/nas/nas.go
@ -1,202 +0,0 @@
package nas

import (
	"context"
	"fmt"
	"io"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

type NASService struct {
	Config    NASConfig
	connected bool
}

func NewNASService(config NASConfig) *NASService {
	nt := &NASService{
		Config: config,
	}

	// Establish network connection with credentials before accessing the path
	if err := nt.EstablishConnection(); err != nil {
		log.Fatalf("Failed to establish network connection to %s: %v", nt.Config.Path, err)
	}

	err := nt.EnsureDirectoryExists(nt.Config.Path)
	if err != nil {
		log.Fatalf("Failed to create directory %s: %v", nt.Config.Path, err)
	}
	return nt
}

func (nt *NASService) CopyFile(ctx context.Context, srcPath, destPath string) error {
	src, err := os.Open(srcPath)
	if err != nil {
		return fmt.Errorf("Failed to open source file: %w", err)
	}
	defer src.Close()

	dest, err := os.Create(destPath)
	if err != nil {
		return fmt.Errorf("Failed to create destination file: %w", err)
	}
	defer dest.Close()

	done := make(chan error, 1)
	go func() {
		_, err := io.Copy(dest, src)
		done <- err
	}()

	select {
	case <-ctx.Done():
		return ctx.Err()
	case err := <-done:
		if err != nil {
			return err
		}

		return dest.Sync()
	}
}

func (nt *NASService) VerifyTransfer(srcPath, destPath string) error {
	srcInfo, err := os.Stat(srcPath)
	if err != nil {
		return fmt.Errorf("Failed to stat source file: %w", err)
	}

	destInfo, err := os.Stat(destPath)
	if err != nil {
		return fmt.Errorf("Failed to stat destination file: %w", err)
	}

	if srcInfo.Size() != destInfo.Size() {
		return fmt.Errorf("size mismatch: source=%d, dest=%d", srcInfo.Size(), destInfo.Size())
	}

	return nil
}

func (nt *NASService) EnsureDirectoryExists(path string) error {
	if err := os.MkdirAll(path, 0755); err != nil {
		return fmt.Errorf("Failed to create directory: %w", err)
	}
	return nil
}

func (nt *NASService) EstablishConnection() error {
	networkPath := nt.ExtractNetworkPath(nt.Config.Path)
	if networkPath == "" {
		return nil // local path, no network mount needed
	}

	log.Printf("Establishing network connection to %s with user %s", networkPath, nt.Config.Username)

	var cmd *exec.Cmd
	if nt.Config.Username != "" && nt.Config.Password != "" {
		cmd = exec.Command("net", "use", networkPath, "/user:"+nt.Config.Username, nt.Config.Password, "/persistent:no")
	} else {
		cmd = exec.Command("net", "use", networkPath, "/persistent:no")
	}

	output, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("failed to establish network connection: %w\nOutput: %s", err, string(output))
	}

	log.Printf("Network connection established successfully")
	return nil
}

func (nt *NASService) ExtractNetworkPath(fullPath string) string {
	// Extract \\server\share from paths like \\server\share\folder\subfolder
	if !strings.HasPrefix(fullPath, "\\\\") {
		return "" // Not a UNC path
	}

	parts := strings.Split(fullPath[2:], "\\") // Remove leading \\
	if len(parts) < 2 {
		return "" // Invalid UNC path
	}

	return "\\\\" + parts[0] + "\\" + parts[1]
}

func (nt *NASService) TestConnection() error {
	testFile := filepath.Join(nt.Config.Path, ".connection_test")

	f, err := os.Create(testFile)
	if err != nil {
		return fmt.Errorf("Failed to create test file: %w", err)
	}
	f.Close()

	os.Remove(testFile)

	nt.connected = true
	log.Printf("Connected to NAS at %s", nt.Config.Path)
	return nil
}

func (nt *NASService) IsConnected() bool {
	return nt.connected
}

// Disconnect removes the network connection
func (nt *NASService) Disconnect() error {
	networkPath := nt.ExtractNetworkPath(nt.Config.Path)
	if networkPath == "" {
		return nil // Local path, nothing to disconnect
	}

	cmd := exec.Command("net", "use", networkPath, "/delete")
	output, err := cmd.CombinedOutput()
	if err != nil {
		log.Printf("Warning: failed to disconnect from %s: %v\nOutput: %s", networkPath, err, string(output))
		// Don't return error since this is cleanup
	} else {
		log.Printf("Disconnected from network path: %s", networkPath)
	}

	nt.connected = false
	return nil
}

// FileExists checks if a file already exists on the NAS and optionally verifies size
func (nt *NASService) FileExists(destinationPath string, expectedSize int64) (bool, error) {
	fullDestPath := filepath.Join(nt.Config.Path, destinationPath)

	destInfo, err := os.Stat(fullDestPath)
	if err != nil {
		if os.IsNotExist(err) {
			return false, nil // File doesn't exist, no error
		}
		return false, fmt.Errorf("failed to stat NAS file: %w", err)
	}

	// File exists, check size if expected size is provided
	if expectedSize > 0 && destInfo.Size() != expectedSize {
		log.Printf("NAS file size mismatch for %s: expected=%d, actual=%d",
			fullDestPath, expectedSize, destInfo.Size())
		return false, nil // File exists but wrong size, treat as not existing
	}

	return true, nil
}

// GetFileSize returns the size of a file on the NAS
func (nt *NASService) GetFileSize(destinationPath string) (int64, error) {
	fullDestPath := filepath.Join(nt.Config.Path, destinationPath)

	destInfo, err := os.Stat(fullDestPath)
	if err != nil {
		return 0, fmt.Errorf("failed to stat NAS file: %w", err)
	}

	return destInfo.Size(), nil
}
||||
@ -1,7 +0,0 @@
package processing

type SegmentInfo struct {
    Name       string
    SeqNo      int
    Resolution string
}
@ -1,337 +0,0 @@
package processing

import (
    "bufio"
    "context"
    "fmt"
    "log"
    "m3u8-downloader/pkg/config"
    "m3u8-downloader/pkg/nas"
    "m3u8-downloader/pkg/utils"
    "os"
    "os/exec"
    "path/filepath"
    "regexp"
    "runtime"
    "sort"
    "strconv"
    "strings"
    "sync"
)

type ProcessingService struct {
    config    *config.Config
    eventName string
    nas       *nas.NASService
}

func NewProcessingService(eventName string, cfg *config.Config) (*ProcessingService, error) {
    if cfg == nil {
        return nil, fmt.Errorf("configuration is required")
    }

    nasConfig := nas.NASConfig{
        Path:       cfg.NAS.OutputPath,
        Username:   cfg.NAS.Username,
        Password:   cfg.NAS.Password,
        Timeout:    cfg.NAS.Timeout,
        RetryLimit: cfg.NAS.RetryLimit,
        VerifySize: true,
    }

    nasService := nas.NewNASService(nasConfig)

    if err := nasService.TestConnection(); err != nil {
        return nil, fmt.Errorf("failed to connect to NAS: %w", err)
    }

    return &ProcessingService{
        config:    cfg,
        eventName: eventName,
        nas:       nasService,
    }, nil
}

func (ps *ProcessingService) GetSegmentInfo() (map[int]string, error) {
    return nil, nil
}

func (ps *ProcessingService) GetEventDirs() ([]string, error) {
    if ps.eventName == "" {
        sourcePath := ps.config.NAS.OutputPath
        dirs, err := os.ReadDir(sourcePath)
        if err != nil {
            return nil, fmt.Errorf("failed to read directory %s: %w", sourcePath, err)
        }
        var eventDirs []string
        for _, dir := range dirs {
            if dir.IsDir() {
                eventDirs = append(eventDirs, dir.Name())
            }
        }
        return eventDirs, nil
    } else {
        return []string{ps.eventName}, nil
    }
}

func (ps *ProcessingService) Start(ctx context.Context) error {
    if !ps.config.Processing.Enabled {
        log.Println("Processing service disabled")
        return nil
    }

    if ps.eventName == "" {
        events, err := ps.GetEventDirs()
        if err != nil {
            return fmt.Errorf("failed to get event directories: %w", err)
        }
        if len(events) == 0 {
            return fmt.Errorf("no events found")
        }
        if len(events) > 1 {
            fmt.Println("Multiple events found, please select one:")
            for i, event := range events {
                fmt.Printf("%d. %s\n", i+1, event)
            }
            reader := bufio.NewReader(os.Stdin)
            input, _ := reader.ReadString('\n')
            input = strings.TrimSpace(input)
            index, err := strconv.Atoi(input)
            if err != nil {
                return fmt.Errorf("failed to parse input: %w", err)
            }
            if index < 1 || index > len(events) {
                return fmt.Errorf("invalid input")
            }
            ps.eventName = events[index-1]
        } else {
            ps.eventName = events[0]
        }
    }

    // Get all present resolutions
    dirs, err := ps.GetResolutions()
    if err != nil {
        return fmt.Errorf("Failed to get resolutions: %w", err)
    }

    // Spawn a worker per resolution
    ch := make(chan SegmentInfo, 100)
    var wg sync.WaitGroup

    for _, resolution := range dirs {
        wg.Add(1)
        go ps.ParseResolutionDirectory(resolution, ch, &wg)
    }
    go func() {
        wg.Wait()
        close(ch)
    }()

    segments, err := ps.AggregateSegmentInfo(ch)
    if err != nil {
        return fmt.Errorf("Failed to aggregate segment info: %w", err)
    }

    aggFile, err := ps.WriteConcatFile(segments)
    if err != nil {
        return fmt.Errorf("Failed to write concat file: %w", err)
    }

    // Feed info to ffmpeg to stitch files together
    outPath := ps.config.GetProcessOutputPath(ps.eventName)
    if err := utils.EnsureDir(outPath); err != nil {
        return fmt.Errorf("failed to create output directory: %w", err)
    }

    concatErr := ps.RunFFmpeg(aggFile, outPath)
    if concatErr != nil {
        return concatErr
    }

    return nil
}

func (ps *ProcessingService) GetResolutions() ([]string, error) {
    eventPath := ps.config.GetNASEventPath(ps.eventName)
    dirs, err := os.ReadDir(eventPath)
    if err != nil {
        return nil, fmt.Errorf("failed to read source directory %s: %w", eventPath, err)
    }

    re := regexp.MustCompile(`^\d+p$`)

    var resolutions []string
    for _, dir := range dirs {
        if dir.IsDir() && re.MatchString(dir.Name()) {
            resolutions = append(resolutions, dir.Name())
        }
    }

    return resolutions, nil
}

func (ps *ProcessingService) ParseResolutionDirectory(resolution string, ch chan<- SegmentInfo, wg *sync.WaitGroup) {
    defer wg.Done()

    resolutionPath := utils.SafeJoin(ps.config.GetNASEventPath(ps.eventName), resolution)
    files, err := os.ReadDir(resolutionPath)
    if err != nil {
        log.Printf("Failed to read resolution directory %s: %v", resolutionPath, err)
        return
    }

    for _, file := range files {
        if !file.IsDir() {
            if !strings.HasSuffix(strings.ToLower(file.Name()), ".ts") {
                continue
            }
            no, err := strconv.Atoi(file.Name()[6:10])
            if err != nil {
                log.Printf("Failed to parse segment number: %v", err)
                continue
            }
            ch <- SegmentInfo{
                Name:       file.Name(),
                SeqNo:      no,
                Resolution: resolution,
            }
        }
    }
}

func (ps *ProcessingService) AggregateSegmentInfo(ch <-chan SegmentInfo) (map[int]SegmentInfo, error) {
    segmentMap := make(map[int]SegmentInfo)

    rank := map[string]int{
        "1080p": 1,
        "720p":  2,
        "540p":  3,
        "480p":  4,
        "450p":  5,
        "360p":  6,
        "270p":  7,
        "240p":  8,
    }

    for segment := range ch {
        fmt.Printf("Received segment %s in resolution %s\n", segment.Name, segment.Resolution)
        current, exists := segmentMap[segment.SeqNo]
        // Lower rank value means higher quality; keep the highest-quality copy of each segment.
        if !exists || rank[segment.Resolution] < rank[current.Resolution] {
            segmentMap[segment.SeqNo] = segment
        }
    }

    return segmentMap, nil
}

func (ps *ProcessingService) WriteConcatFile(segmentMap map[int]SegmentInfo) (string, error) {
    concatPath := ps.config.GetProcessOutputPath(ps.eventName)

    if err := utils.EnsureDir(concatPath); err != nil {
        return "", fmt.Errorf("failed to create directories for concat path: %w", err)
    }

    concatFilePath := utils.SafeJoin(concatPath, ps.eventName+".txt")
    f, err := os.Create(concatFilePath)
    if err != nil {
        return "", fmt.Errorf("failed to create concat file: %w", err)
    }
    defer f.Close()

    // Sort keys to preserve order
    keys := make([]int, 0, len(segmentMap))
    for k := range segmentMap {
        keys = append(keys, k)
    }
    sort.Ints(keys)

    for _, seq := range keys {
        segment := segmentMap[seq]
        filePath := utils.SafeJoin(ps.config.GetNASEventPath(ps.eventName), segment.Resolution, segment.Name)
        line := fmt.Sprintf("file '%s'\n", filePath)
        if _, err := f.WriteString(line); err != nil {
            return "", fmt.Errorf("failed to write to concat file: %w", err)
        }
    }

    return concatFilePath, nil
}

func (ps *ProcessingService) getFFmpegPath() (string, error) {
    // First try the configured path
    configuredPath := ps.config.Processing.FFmpegPath
    if configuredPath != "" {
        // Check if it's just the command name or a full path
        if filepath.IsAbs(configuredPath) {
            return configuredPath, nil
        }

        // Try to find it in PATH
        if fullPath, err := exec.LookPath(configuredPath); err == nil {
            return fullPath, nil
        }
    }

    // Fallback: try local bin directory
    var baseDir string
    exePath, err := os.Executable()
    if err == nil {
        baseDir = filepath.Dir(exePath)
    } else {
        baseDir, err = os.Getwd()
        if err != nil {
            return "", err
        }
    }

    ffmpeg := utils.SafeJoin(baseDir, "bin", "ffmpeg")
    if runtime.GOOS == "windows" {
        ffmpeg += ".exe"
    }

    if utils.PathExists(ffmpeg) {
        return ffmpeg, nil
    }

    // Try current working directory
    cwd, err := os.Getwd()
    if err != nil {
        return "", err
    }
    ffmpeg = utils.SafeJoin(cwd, "bin", "ffmpeg")
    if runtime.GOOS == "windows" {
        ffmpeg += ".exe"
    }

    if utils.PathExists(ffmpeg) {
        return ffmpeg, nil
    }

    return "", fmt.Errorf("FFmpeg not found. Please install FFmpeg or set FFMPEG_PATH environment variable")
}

func (ps *ProcessingService) RunFFmpeg(inputPath, outputPath string) error {
    fmt.Println("Running ffmpeg...")

    fileOutPath := utils.SafeJoin(outputPath, ps.eventName+".mp4")
    fmt.Println("Input path:", inputPath)
    fmt.Println("Output path:", fileOutPath)

    path, err := ps.getFFmpegPath()
    if err != nil {
        return fmt.Errorf("failed to find FFmpeg: %w", err)
    }

    cmd := exec.Command(path, "-f", "concat", "-safe", "0", "-i", inputPath, "-c", "copy", fileOutPath)
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr

    if err := cmd.Run(); err != nil {
        return fmt.Errorf("failed to run ffmpeg: %w", err)
    }

    fmt.Println("FFmpeg completed successfully")
    return nil
}
@ -1,384 +0,0 @@
package processing

import (
    "m3u8-downloader/pkg/config"
    "os"
    "path/filepath"
    "runtime"
    "strings"
    "testing"
    "time"
)

func createTestConfig(tempDir string) *config.Config {
    return &config.Config{
        Core: config.CoreConfig{
            WorkerCount:  2,
            RefreshDelay: 1 * time.Second,
        },
        NAS: config.NASConfig{
            OutputPath:     filepath.Join(tempDir, "nas"),
            Username:       "testuser",
            Password:       "testpass",
            Timeout:        10 * time.Second,
            RetryLimit:     2,
            EnableTransfer: false, // Disable to avoid NAS connection
        },
        Processing: config.ProcessingConfig{
            Enabled:     true,
            AutoProcess: true,
            WorkerCount: 1,
            FFmpegPath:  "echo", // Use echo command for testing
        },
        Paths: config.PathsConfig{
            LocalOutput:     filepath.Join(tempDir, "data"),
            ProcessOutput:   filepath.Join(tempDir, "out"),
            ManifestDir:     filepath.Join(tempDir, "data"),
            PersistenceFile: filepath.Join(tempDir, "queue.json"),
        },
    }
}

func TestNewProcessingService_Success(t *testing.T) {
    tempDir, err := os.MkdirTemp("", "processing_test_*")
    if err != nil {
        t.Fatalf("Failed to create temp dir: %v", err)
    }
    defer os.RemoveAll(tempDir)

    cfg := createTestConfig(tempDir)
    cfg.NAS.EnableTransfer = false // Disable NAS to avoid connection

    // We can't test actual NAS connection, so we'll skip the constructor test
    // that requires NAS connectivity. Instead, test the configuration handling.

    if cfg.Processing.FFmpegPath != "echo" {
        t.Errorf("Expected FFmpegPath='echo', got '%s'", cfg.Processing.FFmpegPath)
    }
}

func TestNewProcessingService_NilConfig(t *testing.T) {
    _, err := NewProcessingService("test-event", nil)
    if err == nil {
        t.Error("Expected error for nil config")
    }
    if !strings.Contains(err.Error(), "configuration is required") {
        t.Errorf("Expected 'configuration is required' error, got: %v", err)
    }
}

func TestProcessingService_GetEventDirs(t *testing.T) {
    tempDir, err := os.MkdirTemp("", "processing_test_*")
    if err != nil {
        t.Fatalf("Failed to create temp dir: %v", err)
    }
    defer os.RemoveAll(tempDir)

    cfg := createTestConfig(tempDir)

    // Create mock NAS directory structure
    nasDir := cfg.NAS.OutputPath
    os.MkdirAll(filepath.Join(nasDir, "event1"), 0755)
    os.MkdirAll(filepath.Join(nasDir, "event2"), 0755)
    os.MkdirAll(filepath.Join(nasDir, "event3"), 0755)
    // Create a file (should be ignored)
    os.WriteFile(filepath.Join(nasDir, "not_a_dir.txt"), []byte("test"), 0644)

    ps := &ProcessingService{
        config:    cfg,
        eventName: "", // Empty to test directory discovery
    }

    dirs, err := ps.GetEventDirs()
    if err != nil {
        t.Fatalf("GetEventDirs() failed: %v", err)
    }

    if len(dirs) != 3 {
        t.Errorf("Expected 3 event directories, got %d", len(dirs))
    }

    expectedDirs := []string{"event1", "event2", "event3"}
    for _, expected := range expectedDirs {
        found := false
        for _, actual := range dirs {
            if actual == expected {
                found = true
                break
            }
        }
        if !found {
            t.Errorf("Expected to find directory '%s' in results: %v", expected, dirs)
        }
    }
}

func TestProcessingService_GetEventDirs_WithEventName(t *testing.T) {
    cfg := createTestConfig("/tmp")
    eventName := "specific-event"

    ps := &ProcessingService{
        config:    cfg,
        eventName: eventName,
    }

    dirs, err := ps.GetEventDirs()
    if err != nil {
        t.Fatalf("GetEventDirs() failed: %v", err)
    }

    if len(dirs) != 1 {
        t.Errorf("Expected 1 directory, got %d", len(dirs))
    }
    if dirs[0] != eventName {
        t.Errorf("Expected directory '%s', got '%s'", eventName, dirs[0])
    }
}

func TestProcessingService_GetResolutions(t *testing.T) {
    tempDir, err := os.MkdirTemp("", "processing_test_*")
    if err != nil {
        t.Fatalf("Failed to create temp dir: %v", err)
    }
    defer os.RemoveAll(tempDir)

    cfg := createTestConfig(tempDir)
    eventName := "test-event"

    // Create mock event directory with quality subdirectories
    eventPath := filepath.Join(cfg.NAS.OutputPath, eventName)
    os.MkdirAll(filepath.Join(eventPath, "1080p"), 0755)
    os.MkdirAll(filepath.Join(eventPath, "720p"), 0755)
    os.MkdirAll(filepath.Join(eventPath, "480p"), 0755)
    os.MkdirAll(filepath.Join(eventPath, "not_resolution"), 0755) // Should be ignored
    os.WriteFile(filepath.Join(eventPath, "file.txt"), []byte("test"), 0644) // Should be ignored

    ps := &ProcessingService{
        config:    cfg,
        eventName: eventName,
    }

    resolutions, err := ps.GetResolutions()
    if err != nil {
        t.Fatalf("GetResolutions() failed: %v", err)
    }

    expectedResolutions := []string{"1080p", "720p", "480p"}
    if len(resolutions) != len(expectedResolutions) {
        t.Errorf("Expected %d resolutions, got %d: %v", len(expectedResolutions), len(resolutions), resolutions)
    }

    for _, expected := range expectedResolutions {
        found := false
        for _, actual := range resolutions {
            if actual == expected {
                found = true
                break
            }
        }
        if !found {
            t.Errorf("Expected to find resolution '%s' in results: %v", expected, resolutions)
        }
    }
}

func TestProcessingService_AggregateSegmentInfo(t *testing.T) {
    ps := &ProcessingService{}

    // Create test channel with segments
    ch := make(chan SegmentInfo, 5)

    // Add segments with different qualities for same sequence
    ch <- SegmentInfo{Name: "seg_1001.ts", SeqNo: 1001, Resolution: "720p"}
    ch <- SegmentInfo{Name: "seg_1001.ts", SeqNo: 1001, Resolution: "1080p"} // Higher quality, should win
    ch <- SegmentInfo{Name: "seg_1002.ts", SeqNo: 1002, Resolution: "480p"}
    ch <- SegmentInfo{Name: "seg_1003.ts", SeqNo: 1003, Resolution: "1080p"}
    ch <- SegmentInfo{Name: "seg_1001.ts", SeqNo: 1001, Resolution: "540p"} // Lower than 1080p, should not replace

    close(ch)

    segmentMap, err := ps.AggregateSegmentInfo(ch)
    if err != nil {
        t.Fatalf("AggregateSegmentInfo() failed: %v", err)
    }

    // Should have 3 unique sequence numbers
    if len(segmentMap) != 3 {
        t.Errorf("Expected 3 unique segments, got %d", len(segmentMap))
    }

    // Check sequence 1001 has the highest quality (1080p)
    seg1001, exists := segmentMap[1001]
    if !exists {
        t.Fatal("Segment 1001 should exist")
    }
    if seg1001.Resolution != "1080p" {
        t.Errorf("Expected segment 1001 to have resolution '1080p', got '%s'", seg1001.Resolution)
    }

    // Check sequence 1002 has 480p
    seg1002, exists := segmentMap[1002]
    if !exists {
        t.Fatal("Segment 1002 should exist")
    }
    if seg1002.Resolution != "480p" {
        t.Errorf("Expected segment 1002 to have resolution '480p', got '%s'", seg1002.Resolution)
    }

    // Check sequence 1003 has 1080p
    seg1003, exists := segmentMap[1003]
    if !exists {
        t.Fatal("Segment 1003 should exist")
    }
    if seg1003.Resolution != "1080p" {
        t.Errorf("Expected segment 1003 to have resolution '1080p', got '%s'", seg1003.Resolution)
    }
}

func TestProcessingService_WriteConcatFile(t *testing.T) {
    tempDir, err := os.MkdirTemp("", "processing_test_*")
    if err != nil {
        t.Fatalf("Failed to create temp dir: %v", err)
    }
    defer os.RemoveAll(tempDir)

    cfg := createTestConfig(tempDir)
    eventName := "test-event"

    ps := &ProcessingService{
        config:    cfg,
        eventName: eventName,
    }

    // Create test segment map
    segmentMap := map[int]SegmentInfo{
        1003: {Name: "seg_1003.ts", SeqNo: 1003, Resolution: "1080p"},
        1001: {Name: "seg_1001.ts", SeqNo: 1001, Resolution: "720p"},
        1002: {Name: "seg_1002.ts", SeqNo: 1002, Resolution: "1080p"},
    }

    concatFilePath, err := ps.WriteConcatFile(segmentMap)
    if err != nil {
        t.Fatalf("WriteConcatFile() failed: %v", err)
    }

    // Verify file was created
    if _, err := os.Stat(concatFilePath); os.IsNotExist(err) {
        t.Fatalf("Concat file was not created: %s", concatFilePath)
    }

    // Read and verify content
    content, err := os.ReadFile(concatFilePath)
    if err != nil {
        t.Fatalf("Failed to read concat file: %v", err)
    }

    contentStr := string(content)
    lines := strings.Split(strings.TrimSpace(contentStr), "\n")

    if len(lines) != 3 {
        t.Errorf("Expected 3 lines in concat file, got %d", len(lines))
    }

    // Verify segments are sorted by sequence number
    expectedOrder := []string{"seg_1001.ts", "seg_1002.ts", "seg_1003.ts"}
    for i, line := range lines {
        if !strings.Contains(line, expectedOrder[i]) {
            t.Errorf("Line %d should contain '%s', got: %s", i, expectedOrder[i], line)
        }
        if !strings.HasPrefix(line, "file '") {
            t.Errorf("Line %d should start with 'file ', got: %s", i, line)
        }
    }
}

func TestProcessingService_getFFmpegPath(t *testing.T) {
    cfg := createTestConfig("/tmp")

    tests := []struct {
        name          string
        ffmpegPath    string
        shouldFind    bool
        expectedError string
    }{
        {
            name:       "echo command (should be found in PATH)",
            ffmpegPath: "echo",
            shouldFind: true,
        },
        {
            name: "absolute path test",
            ffmpegPath: func() string {
                if runtime.GOOS == "windows" {
                    return "C:\\Windows\\System32\\cmd.exe"
                }
                return "/bin/echo"
            }(),
            shouldFind: true,
        },
        {
            name:          "nonexistent command",
            ffmpegPath:    "nonexistent_ffmpeg_command_12345",
            shouldFind:    false,
            expectedError: "FFmpeg not found",
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            testCfg := *cfg
            testCfg.Processing.FFmpegPath = tt.ffmpegPath

            ps := &ProcessingService{
                config:    &testCfg,
                eventName: "test",
            }

            path, err := ps.getFFmpegPath()

            if tt.shouldFind {
                if err != nil {
                    t.Errorf("Expected to find FFmpeg, but got error: %v", err)
                }
                if path == "" {
                    t.Error("Expected non-empty path")
                }
            } else {
                if err == nil {
                    t.Error("Expected error for nonexistent FFmpeg")
                }
                if tt.expectedError != "" && !strings.Contains(err.Error(), tt.expectedError) {
                    t.Errorf("Expected error containing '%s', got: %v", tt.expectedError, err)
                }
            }
        })
    }
}

func TestSegmentInfo_Structure(t *testing.T) {
    segment := SegmentInfo{
        Name:       "test_segment.ts",
        SeqNo:      1001,
        Resolution: "1080p",
    }

    if segment.Name != "test_segment.ts" {
        t.Errorf("Expected Name='test_segment.ts', got '%s'", segment.Name)
    }
    if segment.SeqNo != 1001 {
        t.Errorf("Expected SeqNo=1001, got %d", segment.SeqNo)
    }
    if segment.Resolution != "1080p" {
        t.Errorf("Expected Resolution='1080p', got '%s'", segment.Resolution)
    }
}

func TestProcessJob_Structure(t *testing.T) {
    job := ProcessJob{
        EventName: "test-event",
    }

    if job.EventName != "test-event" {
        t.Errorf("Expected EventName='test-event', got '%s'", job.EventName)
    }
}
@ -1,5 +0,0 @@
package processing

type ProcessJob struct {
    EventName string
}
@ -69,6 +69,7 @@ func (cs *CleanupService) ExecuteCleanup(ctx context.Context) error {
    if batchSize > len(cs.pendingFiles) {
        batchSize = len(cs.pendingFiles)
    }
    cs.mu.Unlock()

    log.Printf("Executing cleanup batch (size: %d)", batchSize)

@ -3,28 +3,39 @@ package transfer
import (
    "context"
    "fmt"
    "io"
    "log"
    "m3u8-downloader/pkg/nas"
    "os"
    "path/filepath"
)

func TransferFile(nt *nas.NASService, ctx context.Context, item *TransferItem) error {
    destPath := filepath.Join(nt.Config.Path, item.DestinationPath)
type NASTransfer struct {
    config    NASConfig
    connected bool
}

func NewNASTransfer(config NASConfig) *NASTransfer {
    return &NASTransfer{
        config: config,
    }
}

func (nt *NASTransfer) TransferFile(ctx context.Context, item *TransferItem) error {
    destPath := filepath.Join(nt.config.Path, item.DestinationPath)

    destDir := filepath.Dir(destPath)
    if err := nt.EnsureDirectoryExists(destDir); err != nil {
    if err := nt.ensureDirectoryExists(destDir); err != nil {
        return fmt.Errorf("Failed to create directory %s: %w", destDir, err)
    }

    transferCtx, cancel := context.WithTimeout(ctx, nt.Config.Timeout)
    transferCtx, cancel := context.WithTimeout(ctx, nt.config.Timeout)
    defer cancel()

    if err := nt.CopyFile(transferCtx, item.SourcePath, destPath); err != nil {
    if err := nt.copyFile(transferCtx, item.SourcePath, destPath); err != nil {
        return fmt.Errorf("Failed to copy file %s to %s: %w", item.SourcePath, destPath, err)
    }

    if nt.Config.VerifySize {
    if nt.config.VerifySize {
        if err := nt.VerifyTransfer(item.SourcePath, destPath); err != nil {
            os.Remove(destPath)
            return fmt.Errorf("Failed to verify transfer: %w", err)
@ -35,3 +46,79 @@ func TransferFile(nt *nas.NASService, ctx context.Context, item *TransferItem) e

    return nil
}

func (nt *NASTransfer) copyFile(ctx context.Context, srcPath, destPath string) error {
    src, err := os.Open(srcPath)
    if err != nil {
        return fmt.Errorf("Failed to open source file: %w", err)
    }
    defer src.Close()

    dest, err := os.Create(destPath)
    if err != nil {
        return fmt.Errorf("Failed to create destination file: %w", err)
    }
    defer dest.Close()

    done := make(chan error, 1)
    go func() {
        _, err := io.Copy(dest, src)
        done <- err
    }()

    select {
    case <-ctx.Done():
        return ctx.Err()
    case err := <-done:
        if err != nil {
            return err
        }

        return dest.Sync()
    }
}

func (nt *NASTransfer) VerifyTransfer(srcPath, destPath string) error {
    srcInfo, err := os.Stat(srcPath)
    if err != nil {
        return fmt.Errorf("Failed to stat source file: %w", err)
    }

    destInfo, err := os.Stat(destPath)
    if err != nil {
        return fmt.Errorf("Failed to stat destination file: %w", err)
    }

    if srcInfo.Size() != destInfo.Size() {
        return fmt.Errorf("size mismatch: source=%d, dest=%d", srcInfo.Size(), destInfo.Size())
    }

    return nil
}

func (nt *NASTransfer) ensureDirectoryExists(path string) error {
    if err := os.MkdirAll(path, 0755); err != nil {
        return fmt.Errorf("Failed to create directory: %w", err)
    }
    return nil
}

func (nt *NASTransfer) TestConnection() error {
    testFile := filepath.Join(nt.config.Path, ".connection_test")

    f, err := os.Create(testFile)
    if err != nil {
        return fmt.Errorf("Failed to create test file: %w", err)
    }
    f.Close()

    os.Remove(testFile)

    nt.connected = true
    log.Printf("Connected to NAS at %s", nt.config.Path)
    return nil
}

func (nt *NASTransfer) IsConnected() bool {
    return nt.connected
}

@ -6,20 +6,19 @@ import (
    "encoding/json"
    "fmt"
    "log"
    "m3u8-downloader/pkg/nas"
    "os"
    "sync"
    "time"
)

type TransferQueue struct {
    config      QueueConfig
    items       *PriorityQueue
    stats       *QueueStats
    nasService  *nas.NASService
    cleanup     *CleanupService
    workers     []chan TransferItem
    mu          sync.RWMutex
    config      QueueConfig
    items       *PriorityQueue
    stats       *QueueStats
    nasTransfer *NASTransfer
    cleanup     *CleanupService
    workers     []chan TransferItem
    mu          sync.RWMutex
}

type PriorityQueue []*TransferItem
@ -49,17 +48,17 @@ func (pq *PriorityQueue) Pop() interface{} {
    return item
}

func NewTransferQueue(config QueueConfig, nasTransfer *nas.NASService, cleanup *CleanupService) *TransferQueue {
func NewTransferQueue(config QueueConfig, nasTransfer *NASTransfer, cleanup *CleanupService) *TransferQueue {
    pq := &PriorityQueue{}
    heap.Init(pq)

    tq := &TransferQueue{
        config:     config,
        items:      pq,
        stats:      &QueueStats{},
        nasService: nasTransfer,
        cleanup:    cleanup,
        workers:    make([]chan TransferItem, config.WorkerCount),
        config:      config,
        items:       pq,
        stats:       &QueueStats{},
        nasTransfer: nasTransfer,
        cleanup:     cleanup,
        workers:     make([]chan TransferItem, config.WorkerCount),
    }

    if err := tq.LoadState(); err != nil {
@ -146,24 +145,6 @@ func (tq *TransferQueue) dispatchWork() {
}

func (tq *TransferQueue) processItem(ctx context.Context, item TransferItem) {
    // Check if file already exists on NAS before attempting transfer
    if exists, err := tq.nasService.FileExists(item.DestinationPath, item.FileSize); err != nil {
        log.Printf("Failed to check if file exists on NAS for %s: %v", item.SourcePath, err)
        // Continue with transfer attempt on error
    } else if exists {
        log.Printf("File already exists on NAS, skipping transfer: %s", item.SourcePath)
        item.Status = StatusCompleted
        tq.stats.IncrementCompleted(item.FileSize)

        // Schedule for cleanup
        if tq.cleanup != nil {
            if err := tq.cleanup.ScheduleCleanup(item.SourcePath); err != nil {
                log.Printf("Failed to schedule cleanup for existing file %s: %v", item.SourcePath, err)
            }
        }
        return
    }

    maxRetries := 3

    for attempt := 1; attempt <= maxRetries; attempt++ {
@ -179,7 +160,7 @@ func (tq *TransferQueue) processItem(ctx context.Context, item TransferItem) {
            }
        }

        err := TransferFile(tq.nasService, ctx, &item)
        err := tq.nasTransfer.TransferFile(ctx, &item)
        if err == nil {
            item.Status = StatusCompleted
            tq.stats.IncrementCompleted(item.FileSize)

@ -5,11 +5,6 @@ import (
    "fmt"
    "log"
    "m3u8-downloader/pkg/constants"
    nas2 "m3u8-downloader/pkg/nas"
    "m3u8-downloader/pkg/utils"
    "os"
    "path/filepath"
    "strings"
    "sync"
    "time"
)
@ -17,53 +12,45 @@ import (
type TransferService struct {
	watcher *FileWatcher
	queue   *TransferQueue
-	nas     *nas2.NASService
+	nas     *NASTransfer
	cleanup *CleanupService
	stats   *QueueStats
}

-func NewTransferService(outputDir string, eventName string) (*TransferService, error) {
-	cfg := constants.MustGetConfig()
-
-	nasConfig := nas2.NASConfig{
-		Path:       outputDir,
-		Username:   cfg.NAS.Username,
-		Password:   cfg.NAS.Password,
-		Timeout:    cfg.NAS.Timeout,
-		RetryLimit: cfg.NAS.RetryLimit,
+func NewTransferService(outputDir string) (*TransferService, error) {
+	nasConfig := NASConfig{
+		Path:       constants.NASPath,
+		Username:   constants.NASUsername,
+		Password:   constants.NASPassword,
+		Timeout:    constants.TransferTimeout,
+		RetryLimit: constants.TransferRetryLimit,
		VerifySize: true,
	}
-	nas := nas2.NewNASService(nasConfig)
+	nas := NewNASTransfer(nasConfig)

	if err := nas.TestConnection(); err != nil {
-		return nil, fmt.Errorf("failed to connect to NAS: %w", err)
+		return nil, fmt.Errorf("Failed to connect to NAS: %w", err)
	}

	cleanupConfig := CleanupConfig{
-		Enabled:         cfg.Cleanup.AfterTransfer,
-		RetentionPeriod: time.Duration(cfg.Cleanup.RetainHours) * time.Hour,
-		BatchSize:       cfg.Cleanup.BatchSize,
-		CheckInterval:   cfg.Transfer.FileSettlingDelay,
+		Enabled:         constants.CleanupAfterTransfer,
+		RetentionPeriod: constants.RetainLocalHours,
+		BatchSize:       constants.CleanupBatchSize,
+		CheckInterval:   constants.FileSettlingDelay,
	}
	cleanup := NewCleanupService(cleanupConfig)

	queueConfig := QueueConfig{
-		WorkerCount:     cfg.Transfer.WorkerCount,
-		PersistencePath: cfg.Paths.PersistenceFile,
-		MaxQueueSize:    cfg.Transfer.QueueSize,
-		BatchSize:       cfg.Transfer.BatchSize,
+		WorkerCount:     constants.TransferWorkerCount,
+		PersistencePath: constants.PersistencePath,
+		MaxQueueSize:    constants.TransferQueueSize,
+		BatchSize:       constants.BatchSize,
	}
	queue := NewTransferQueue(queueConfig, nas, cleanup)

-	// Create local output directory if it doesn't exist
-	localOutputPath := cfg.GetEventPath(eventName)
-	if err := utils.EnsureDir(localOutputPath); err != nil {
-		return nil, fmt.Errorf("failed to create local output directory: %w", err)
-	}
-
-	watcher, err := NewFileWatcher(localOutputPath, queue, cfg.Transfer.FileSettlingDelay)
+	watcher, err := NewFileWatcher(outputDir, queue)
	if err != nil {
-		return nil, fmt.Errorf("failed to create file watcher: %w", err)
+		return nil, fmt.Errorf("Failed to create file watcher: %w", err)
	}

	return &TransferService{
@ -143,112 +130,7 @@ func (ts *TransferService) Shutdown(ctx context.Context) error {
		return fmt.Errorf("Failed to force cleanup: %w", err)
	}

-	// Disconnect from NAS
-	if err := ts.nas.Disconnect(); err != nil {
-		log.Printf("Warning: failed to disconnect from NAS: %v", err)
-	}
-
	log.Println("Transfer service shut down")

	return nil
}
-
-// QueueExistingFiles scans a directory for .ts files and queues them for transfer
-func (ts *TransferService) QueueExistingFiles(localEventPath string) error {
-	cfg := constants.MustGetConfig()
-	log.Printf("Scanning for existing files in: %s", localEventPath)
-
-	var fileCount, alreadyTransferred, scheduledForCleanup int
-
-	// Extract event name from path for NAS destination
-	eventName := filepath.Base(localEventPath)
-
-	err := filepath.Walk(localEventPath, func(path string, info os.FileInfo, err error) error {
-		if err != nil {
-			log.Printf("Error accessing path %s: %v", path, err)
-			return nil // Continue walking
-		}
-
-		// Only process .ts files
-		if !info.IsDir() && strings.HasSuffix(strings.ToLower(info.Name()), ".ts") {
-			// Extract resolution from directory path
-			resolution := ts.extractResolutionFromPath(path)
-
-			// Get relative path from event directory
-			relPath, err := filepath.Rel(localEventPath, path)
-			if err != nil {
-				log.Printf("Failed to get relative path for %s: %v", path, err)
-				return nil
-			}
-
-			// Build NAS destination path (eventName/relPath)
-			nasDestPath := filepath.Join(eventName, relPath)
-
-			// Check if file already exists on NAS with matching size
-			exists, err := ts.nas.FileExists(nasDestPath, info.Size())
-			if err != nil {
-				log.Printf("Failed to check NAS file existence for %s: %v", path, err)
-				// Continue with transfer attempt on error
-			} else if exists {
-				log.Printf("File already exists on NAS: %s (%s, %d bytes)", path, resolution, info.Size())
-				alreadyTransferred++
-
-				// Schedule for cleanup if cleanup is enabled
-				if cfg.Cleanup.AfterTransfer {
-					if err := ts.cleanup.ScheduleCleanup(path); err != nil {
-						log.Printf("Failed to schedule cleanup for already-transferred file %s: %v", path, err)
-					} else {
-						scheduledForCleanup++
-					}
-				}
-				return nil // Skip queuing this file
-			}
-
-			// Create transfer item
-			item := TransferItem{
-				ID:              ts.generateTransferID(),
-				SourcePath:      path,
-				DestinationPath: nasDestPath,
-				Resolution:      resolution,
-				Timestamp:       info.ModTime(),
-				Status:          StatusPending,
-				FileSize:        info.Size(),
-			}
-
-			// Add to queue
-			if err := ts.queue.Add(item); err != nil {
-				log.Printf("Failed to queue file %s: %v", path, err)
-			} else {
-				log.Printf("Queued file: %s (%s, %d bytes)", path, resolution, info.Size())
-				fileCount++
-			}
-		}
-
-		return nil
-	})
-
-	if err != nil {
-		return fmt.Errorf("failed to walk directory: %w", err)
-	}
-
-	log.Printf("File scan completed - Queued: %d, Already transferred: %d, Scheduled for cleanup: %d",
-		fileCount, alreadyTransferred, scheduledForCleanup)
-	return nil
-}
-
-func (ts *TransferService) extractResolutionFromPath(filePath string) string {
-	dir := filepath.Dir(filePath)
-	parts := strings.Split(dir, string(filepath.Separator))
-
-	for _, part := range parts {
-		if strings.HasSuffix(part, "p") {
-			return part
-		}
-	}
-
-	return "unknown"
-}
-
-func (ts *TransferService) generateTransferID() string {
-	return fmt.Sprintf("transfer_existing_%d", time.Now().UnixNano())
-}
@ -44,6 +44,15 @@ func (s TransferStatus) String() string {
	}
}

+type NASConfig struct {
+	Path       string
+	Username   string
+	Password   string
+	Timeout    time.Duration
+	RetryLimit int
+	VerifySize bool
+}
+
type QueueConfig struct {
	WorkerCount     int
	PersistencePath string
@ -23,7 +23,7 @@ type FileWatcher struct {
	mu sync.Mutex
}

-func NewFileWatcher(outputDir string, queue *TransferQueue, settlingDelay time.Duration) (*FileWatcher, error) {
+func NewFileWatcher(outputDir string, queue *TransferQueue) (*FileWatcher, error) {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return nil, err
@ -32,7 +32,7 @@ func NewFileWatcher(outputDir string, queue *TransferQueue, settlingDelay time.D
		outputDir:    outputDir,
		queue:        queue,
		watcher:      watcher,
-		settingDelay: settlingDelay,
+		settingDelay: time.Second,
		pendingFiles: make(map[string]*time.Timer),
	}, nil
}
@ -1,62 +0,0 @@
package utils

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func SafeJoin(base string, elements ...string) string {
	path := filepath.Join(append([]string{base}, elements...)...)
	return filepath.Clean(path)
}

func EnsureDir(path string) error {
	if err := os.MkdirAll(path, 0755); err != nil {
		return fmt.Errorf("failed to create directory %s: %w", path, err)
	}
	return nil
}

func PathExists(path string) bool {
	_, err := os.Stat(path)
	return !os.IsNotExist(err)
}

func IsValidPath(path string) bool {
	if path == "" {
		return false
	}

	return !strings.ContainsAny(path, "<>:\"|?*")
}

func NormalizePath(path string) string {
	return filepath.Clean(strings.ReplaceAll(path, "\\", string(filepath.Separator)))
}

func GetRelativePath(basePath, targetPath string) (string, error) {
	rel, err := filepath.Rel(basePath, targetPath)
	if err != nil {
		return "", fmt.Errorf("failed to get relative path: %w", err)
	}
	return rel, nil
}

func ValidateWritablePath(path string) error {
	dir := filepath.Dir(path)
	if err := EnsureDir(dir); err != nil {
		return err
	}

	testFile := filepath.Join(dir, ".write_test")
	file, err := os.Create(testFile)
	if err != nil {
		return fmt.Errorf("path %s is not writable: %w", dir, err)
	}
	file.Close()
	os.Remove(testFile)

	return nil
}
@ -1,235 +0,0 @@
package utils

import (
	"os"
	"path/filepath"
	"runtime"
	"strings"
	"testing"
)

func TestSafeJoin(t *testing.T) {
	tests := []struct {
		name     string
		base     string
		elements []string
		want     string
	}{
		{
			name:     "basic join",
			base:     "data",
			elements: []string{"events", "test-event"},
			want:     filepath.Join("data", "events", "test-event"),
		},
		{
			name:     "empty elements",
			base:     "data",
			elements: []string{},
			want:     "data",
		},
		{
			name:     "with path separators",
			base:     "data/events",
			elements: []string{"test-event", "1080p"},
			want:     filepath.Join("data", "events", "test-event", "1080p"),
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := SafeJoin(tt.base, tt.elements...)
			if got != tt.want {
				t.Errorf("SafeJoin() = %v, want %v", got, tt.want)
			}
		})
	}
}

func TestEnsureDir(t *testing.T) {
	tempDir, err := os.MkdirTemp("", "utils_test_*")
	if err != nil {
		t.Fatalf("Failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tempDir)

	testPath := filepath.Join(tempDir, "test", "nested", "directory")

	// Test creating nested directories
	err = EnsureDir(testPath)
	if err != nil {
		t.Errorf("EnsureDir() failed: %v", err)
	}

	// Verify directory was created
	if _, err := os.Stat(testPath); os.IsNotExist(err) {
		t.Errorf("Directory was not created: %s", testPath)
	}

	// Test with existing directory (should not fail)
	err = EnsureDir(testPath)
	if err != nil {
		t.Errorf("EnsureDir() failed on existing directory: %v", err)
	}
}

func TestPathExists(t *testing.T) {
	tempDir, err := os.MkdirTemp("", "utils_test_*")
	if err != nil {
		t.Fatalf("Failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tempDir)

	// Test existing path
	if !PathExists(tempDir) {
		t.Errorf("PathExists() should return true for existing path: %s", tempDir)
	}

	// Test non-existing path
	nonExistentPath := filepath.Join(tempDir, "does-not-exist")
	if PathExists(nonExistentPath) {
		t.Errorf("PathExists() should return false for non-existent path: %s", nonExistentPath)
	}

	// Test with file
	testFile := filepath.Join(tempDir, "test.txt")
	f, err := os.Create(testFile)
	if err != nil {
		t.Fatalf("Failed to create test file: %v", err)
	}
	f.Close()

	if !PathExists(testFile) {
		t.Errorf("PathExists() should return true for existing file: %s", testFile)
	}
}

func TestIsValidPath(t *testing.T) {
	tests := []struct {
		name string
		path string
		want bool
	}{
		{"empty path", "", false},
		{"valid path", "data/events/test", true},
		{"path with colon", "data:events", false},
		{"path with pipe", "data|events", false},
		{"path with question mark", "data?events", false},
		{"path with asterisk", "data*events", false},
		{"path with quotes", "data\"events", false},
		{"path with angle brackets", "data<events>", false},
		{"normal windows path", "C:\\data\\events", true}, // Windows path separators are actually OK
		{"unix path", "/home/user/data", true},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := IsValidPath(tt.path)
			if got != tt.want {
				t.Errorf("IsValidPath(%q) = %v, want %v", tt.path, got, tt.want)
			}
		})
	}
}

func TestNormalizePath(t *testing.T) {
	tests := []struct {
		name string
		path string
		want string
	}{
		{
			name: "windows backslashes",
			path: "data\\events\\test",
			want: filepath.Join("data", "events", "test"),
		},
		{
			name: "unix forward slashes",
			path: "data/events/test",
			want: filepath.Join("data", "events", "test"),
		},
		{
			name: "mixed slashes",
			path: "data\\events/test\\file",
			want: filepath.Join("data", "events", "test", "file"),
		},
		{
			name: "redundant separators",
			path: "data//events\\\\test",
			want: filepath.Join("data", "events", "test"),
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := NormalizePath(tt.path)
			if got != tt.want {
				t.Errorf("NormalizePath(%q) = %q, want %q", tt.path, got, tt.want)
			}
		})
	}
}

func TestGetRelativePath(t *testing.T) {
	tempDir, err := os.MkdirTemp("", "utils_test_*")
	if err != nil {
		t.Fatalf("Failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tempDir)

	basePath := tempDir
	targetPath := filepath.Join(tempDir, "subdir", "file.txt")

	rel, err := GetRelativePath(basePath, targetPath)
	if err != nil {
		t.Errorf("GetRelativePath() failed: %v", err)
	}

	expected := filepath.Join("subdir", "file.txt")
	if rel != expected {
		t.Errorf("GetRelativePath() = %q, want %q", rel, expected)
	}

	// Test with invalid paths
	_, err = GetRelativePath("", "")
	if err == nil {
		t.Error("GetRelativePath() should fail with empty paths")
	}
}

func TestValidateWritablePath(t *testing.T) {
	tempDir, err := os.MkdirTemp("", "utils_test_*")
	if err != nil {
		t.Fatalf("Failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tempDir)

	// Test writable path
	writablePath := filepath.Join(tempDir, "test", "file.txt")
	err = ValidateWritablePath(writablePath)
	if err != nil {
		t.Errorf("ValidateWritablePath() failed for writable path: %v", err)
	}

	// Verify directory was created
	dir := filepath.Dir(writablePath)
	if _, err := os.Stat(dir); os.IsNotExist(err) {
		t.Errorf("Directory should have been created: %s", dir)
	}

	// Test with read-only directory (if supported by OS)
	if runtime.GOOS != "windows" { // Skip on Windows as it's more complex
		readOnlyDir := filepath.Join(tempDir, "readonly")
		os.MkdirAll(readOnlyDir, 0755)
		os.Chmod(readOnlyDir, 0444)       // Read-only
		defer os.Chmod(readOnlyDir, 0755) // Restore permissions for cleanup

		readOnlyPath := filepath.Join(readOnlyDir, "file.txt")
		err = ValidateWritablePath(readOnlyPath)
		if err == nil {
			t.Error("ValidateWritablePath() should fail for read-only directory")
		}
		if !strings.Contains(err.Error(), "not writable") {
			t.Errorf("Expected 'not writable' error, got: %v", err)
		}
	}
}
146	test_runner.go
@ -1,146 +0,0 @@
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"time"
)

func main() {
	fmt.Println("🧪 StreamRecorder Test Suite")
	fmt.Println("============================")

	startTime := time.Now()

	// Set test environment to avoid interfering with real data
	originalEnv := setupTestEnvironment()
	defer restoreEnvironment(originalEnv)

	// Create temporary directory for test data
	tempDir, err := os.MkdirTemp("", "streamrecorder_test_*")
	if err != nil {
		fmt.Printf("❌ Failed to create temp directory: %v\n", err)
		os.Exit(1)
	}
	defer func() {
		fmt.Printf("🧹 Cleaning up test directory: %s\n", tempDir)
		os.RemoveAll(tempDir)
	}()

	// Set test-specific environment variables
	os.Setenv("LOCAL_OUTPUT_DIR", filepath.Join(tempDir, "data"))
	os.Setenv("PROCESS_OUTPUT_DIR", filepath.Join(tempDir, "out"))
	os.Setenv("ENABLE_NAS_TRANSFER", "false") // Disable NAS for tests
	os.Setenv("PROCESSING_ENABLED", "false")  // Disable processing that needs FFmpeg

	fmt.Printf("📁 Using temporary directory: %s\n", tempDir)
	fmt.Println()

	// Run tests for each package
	packages := []string{
		"./pkg/config",
		"./pkg/utils",
		"./pkg/constants",
		"./pkg/httpClient",
		"./pkg/media",
		"./pkg/processing",
	}

	var failedPackages []string
	totalTests := 0
	passedTests := 0

	for _, pkg := range packages {
		fmt.Printf("🔍 Testing package: %s\n", pkg)

		cmd := exec.Command("go", "test", "-v", pkg)
		cmd.Dir = "."

		output, err := cmd.CombinedOutput()
		outputStr := string(output)

		// Count tests
		testCount := strings.Count(outputStr, "=== RUN")
		passCount := strings.Count(outputStr, "--- PASS:")

		totalTests += testCount
		passedTests += passCount

		if err != nil {
			fmt.Printf("❌ FAILED: %s (%d/%d tests passed)\n", pkg, passCount, testCount)
			failedPackages = append(failedPackages, pkg)

			// Show failure details
			lines := strings.Split(outputStr, "\n")
			for _, line := range lines {
				if strings.Contains(line, "FAIL:") ||
					strings.Contains(line, "Error:") ||
					strings.Contains(line, "panic:") {
					fmt.Printf("   %s\n", line)
				}
			}
		} else {
			fmt.Printf("✅ PASSED: %s (%d tests)\n", pkg, testCount)
		}
		fmt.Println()
	}

	// Print summary
	duration := time.Since(startTime)
	fmt.Println("📊 Test Summary")
	fmt.Println("===============")
	fmt.Printf("Total packages: %d\n", len(packages))
	fmt.Printf("Passed packages: %d\n", len(packages)-len(failedPackages))
	fmt.Printf("Failed packages: %d\n", len(failedPackages))
	fmt.Printf("Total tests: %d\n", totalTests)
	fmt.Printf("Passed tests: %d\n", passedTests)
	fmt.Printf("Failed tests: %d\n", totalTests-passedTests)
	fmt.Printf("Duration: %v\n", duration.Round(time.Millisecond))

	if len(failedPackages) > 0 {
		fmt.Println()
		fmt.Println("❌ Failed packages:")
		for _, pkg := range failedPackages {
			fmt.Printf("  - %s\n", pkg)
		}
		os.Exit(1)
	} else {
		fmt.Println()
		fmt.Println("🎉 All tests passed!")
	}
}

func setupTestEnvironment() map[string]string {
	// Save original environment variables that we'll modify
	originalEnv := make(map[string]string)

	envVars := []string{
		"LOCAL_OUTPUT_DIR",
		"PROCESS_OUTPUT_DIR",
		"ENABLE_NAS_TRANSFER",
		"PROCESSING_ENABLED",
		"NAS_OUTPUT_PATH",
		"FFMPEG_PATH",
		"WORKER_COUNT",
	}

	for _, envVar := range envVars {
		originalEnv[envVar] = os.Getenv(envVar)
	}

	return originalEnv
}

func restoreEnvironment(originalEnv map[string]string) {
	fmt.Println("🔄 Restoring original environment...")
	for envVar, originalValue := range originalEnv {
		if originalValue == "" {
			os.Unsetenv(envVar)
		} else {
			os.Setenv(envVar, originalValue)
		}
	}
}