vibetunnel/linux/pkg/api/sse.go
Helmut Januschka b90bfd9f46
Add Go implementation of VibeTunnel server (#16)
* Add Linux implementation of VibeTunnel

This commit introduces a complete Linux port of VibeTunnel, providing feature parity with the macOS version. The implementation includes:

- Full Go-based server with identical REST API and WebSocket endpoints
- Terminal session management using PTY (pseudo-terminal) handling
- Asciinema recording format for session playback
- Compatible CLI interface matching the macOS `vt` command
- Support for all VibeTunnel features: password protection, network modes, ngrok integration
- Comprehensive build system with Makefile supporting various installation methods
- Systemd service integration for running as a system daemon

The Linux version maintains 100% compatibility with the existing web UI and can be used as a drop-in replacement for the macOS app on Linux systems.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Add comprehensive ngrok integration to Linux VibeTunnel

Implements full ngrok tunnel support for the Go/Linux version to match
the macOS Swift implementation, enabling secure public access to local
VibeTunnel instances.

- **ngrok Service**: Complete lifecycle management with status tracking
- **HTTP API**: RESTful endpoints matching macOS version
- **CLI Support**: Command-line ngrok flags and integration
- **Auto-forwarding**: Built-in HTTP request forwarding to local server

- `POST /api/ngrok/start` - Start tunnel with auth token
- `POST /api/ngrok/stop` - Stop active tunnel
- `GET /api/ngrok/status` - Get current tunnel status

- Uses `golang.ngrok.com/ngrok` SDK for native Go integration
- Thread-safe service with mutex protection
- Comprehensive error handling and logging
- Real-time status updates (disconnected/connecting/connected/error)
- Proper context cancellation for graceful shutdown

```bash
vibetunnel --serve --ngrok --ngrok-token "your_token"
vibetunnel --serve --port 4030 --ngrok --ngrok-token "your_token"
```
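The mutex-protected status tracking described above can be sketched roughly as follows. This is an illustrative standalone sketch; the type and method names are mine, not the actual ngrok package code:

```go
package main

import (
	"fmt"
	"sync"
)

// TunnelStatus mirrors the four states listed above.
type TunnelStatus string

const (
	StatusDisconnected TunnelStatus = "disconnected"
	StatusConnecting   TunnelStatus = "connecting"
	StatusConnected    TunnelStatus = "connected"
	StatusError        TunnelStatus = "error"
)

// NgrokService guards its status with a mutex so HTTP handlers
// and the tunnel goroutine can read and update it concurrently.
type NgrokService struct {
	mu     sync.RWMutex
	status TunnelStatus
	url    string
}

func NewNgrokService() *NgrokService {
	return &NgrokService{status: StatusDisconnected}
}

func (s *NgrokService) SetStatus(st TunnelStatus, url string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.status = st
	s.url = url
}

func (s *NgrokService) Status() (TunnelStatus, string) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.status, s.url
}

func main() {
	svc := NewNgrokService()
	svc.SetStatus(StatusConnecting, "")
	svc.SetStatus(StatusConnected, "https://example.ngrok.app")
	st, url := svc.Status()
	fmt.Println(st, url)
}
```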

- Added golang.ngrok.com/ngrok v1.13.0
- Updated web packages (security fixes for puppeteer)

Maintains full API compatibility with macOS VibeTunnel for seamless
cross-platform operation and consistent web frontend integration.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* up

* Fix SSE streaming performance with byte-based approach

Addresses @badlogic's review feedback to prevent performance issues
with line-based file reading in processNewContent().

## Changes Made

### Performance Fix
- **Byte-based seeking**: Replace line counting with file position tracking
- **Efficient reads**: Only read new content since last position using file.Seek()
- **Memory optimization**: Avoid reading entire file on each update
- **Incomplete line handling**: Properly handle partial lines at file end

### Technical Details
- Changed `lastLineCount *int` → `seenBytes *int64`
- Use `file.Seek(seenBytes, 0)` to jump to the last read position
- Read only new content: `currentSize - seenBytes` bytes
- Handle incomplete lines by adjusting the seek position
- Maintains same functionality with better performance

### Benefits
- **Scalability**: No longer reads entire file for each update
- **Performance**: O(new_content) instead of O(total_content)
- **Memory**: Constant memory usage regardless of file size
- **Reliability**: Handles concurrent writes and partial lines correctly

This prevents the SSE streaming from exploding in our faces as @badlogic
warned, especially for long-running sessions with large output files.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Optimize streaming performance to reduce 1+ second delays

Implements multiple optimizations to address user-reported 1+ second delay
between typing and stream rendering:

## PTY Reading Optimizations
- **Reduced sleep times**: 100ms → 1ms for EOF checks
- **Faster polling**: 10ms → 1ms for zero-byte reads
- **FIFO optimization**: 1s → 100ms for stdin EOF polling

## UTF-8 Buffering Improvements
- **Timeout-based flushing**: 5ms timer for incomplete UTF-8 sequences
- **Real-time streaming**: Don't wait for complete sequences in interactive mode
- **Smart buffering**: Balance correctness with responsiveness

## File I/O Optimizations
- **Immediate sync**: Call file.Sync() after each write for instant fsnotify
- **Reduced SSE timeout**: 1s → 100ms for session alive checks
- **Better responsiveness**: Ensure file changes trigger immediately

## Technical Changes
- Added StreamWriter.scheduleFlush() with 5ms timeout
- Enhanced writeEvent() with conditional file syncing
- Optimized PTY read/write loop timing
- Improved SSE streaming frequency

These changes target the main bottlenecks identified in the
PTY → file → fsnotify → SSE → browser pipeline.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix critical stdin polling delay causing 1+ second input lag

- Reduced FIFO EOF polling from 100ms to 1ms
- Reduced EAGAIN polling from 1ms to 100µs
- Added immediate continue after successful writes
- This eliminates the major input delay bottleneck

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix critical performance issues causing resource leaks and CPU burns

Performance optimizations based on code review feedback:

1. **Fix SSE goroutine leaks**:
   - Added client disconnect detection to SSE streams
   - Propagate write errors to detect when clients close connections
   - Prevents memory leaks from abandoned streaming goroutines

2. **Fix PTY busy-loop CPU burn**:
   - Increased sleep from 1ms to 10ms in idle scenarios
   - Reduces CPU wake-ups from 1000/s to 100/s (10x improvement)
   - Significantly reduces CPU usage when PTY is idle

3. **Multi-stream disconnect detection**:
   - Added error checking to multi-stream write operations
   - Prevents goroutine leaks in multi-session streaming

These fixes address the "thing of the things" - performance!

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Standardize session creation API response format to match Rust server

Changes:
- Updated Go server session creation response to include success/message/error fields
- Now returns: {"success": true, "message": "Session created successfully", "error": null, "sessionId": "..."}
- Maintains backward compatibility with existing sessionId field
- Go server already supported both input formats (cmdline/command, cwd/workingDir)

This achieves protocol compatibility between Go and Rust implementations.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix delete endpoint to return 200 OK with JSON response

- Changed handleKillSession to return 200 OK instead of 204 No Content
- Added JSON response with success/message fields for consistency
- Fixes benchmark tool compatibility expecting 200 response

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Update Go server API to match Rust format exactly

- Use 'command' array instead of 'cmdline'
- Use 'workingDir' instead of 'cwd'
- Remove compatibility shims for cleaner API
- Better error messages matching Rust server

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Major performance optimizations for Go server

- Remove 100ms artificial delay in session creation (-100ms per session)
- Optimize PTY I/O handling with reduced polling intervals
- Implement persistent stdin pipes to avoid repeated open/close
- Batch file sync operations to reduce I/O overhead (5ms batching)
- Remove blocking status updates from API handlers
- Increase SSE session check interval from 100ms to 1s

Target: Match Rust performance (60ms avg latency, 16+ ops/sec)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix O_NONBLOCK compilation issue

* Add comprehensive TLS/HTTPS support with Caddy integration

Features:
- Optional TLS support via CLI flags (defaults to HTTP like Rust)
- Self-signed certificate generation for localhost development
- Let's Encrypt automatic certificate management for domains
- Custom certificate support for production environments
- HTTP to HTTPS redirect capability
- Maintains 100% backward compatibility with Rust version

Usage examples:
- Default HTTP: ./vibetunnel --serve (same as Rust)
- HTTPS with self-signed: ./vibetunnel --serve --tls
- HTTPS with domain: ./vibetunnel --serve --tls --tls-domain example.com
- HTTPS with custom certs: ./vibetunnel --serve --tls --tls-cert cert.pem --tls-key key.pem

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix terminal sizing issues and implement dynamic resize support

Backend changes:
- Add handleResizeSession API endpoint for dynamic terminal resizing
- Implement Session.Resize() and PTY.Resize() methods with proper validation
- Add session registry in Manager to track running sessions with PTY access
- Fix stdin error handling to prevent session crashes on EAGAIN errors
- Write resize events to asciinema stream for frontend synchronization
- Update default terminal dimensions from 80x24 to 120x30

Frontend changes:
- Add width/height parameters to SessionCreateData interface
- Calculate appropriate terminal dimensions when creating sessions
- Implement automatic resize API calls when terminal dimensions change
- Add terminal-resize event dispatch for backend synchronization
- Ensure resize events bubble properly for session management

Fixes nvim being stuck at 80x24 by implementing proper terminal
dimension management and dynamic resizing capabilities.
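A rough sketch of PTY resizing with validation on Linux: the `winsize` struct and `TIOCSWINSZ` ioctl are standard, but the size limits and function names here are assumptions for illustration, not the actual `Session.Resize` code:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
	"unsafe"
)

// winsize matches the kernel's struct winsize layout.
type winsize struct {
	Rows, Cols, Xpix, Ypix uint16
}

// validateSize applies the kind of bounds checking a resize endpoint needs
// before touching the PTY; the upper limits are illustrative.
func validateSize(cols, rows int) error {
	if cols < 1 || rows < 1 || cols > 10000 || rows > 10000 {
		return fmt.Errorf("invalid terminal size %dx%d", cols, rows)
	}
	return nil
}

// resizePTY issues TIOCSWINSZ on the PTY master file descriptor (Linux).
func resizePTY(f *os.File, cols, rows int) error {
	if err := validateSize(cols, rows); err != nil {
		return err
	}
	ws := winsize{Rows: uint16(rows), Cols: uint16(cols)}
	_, _, errno := syscall.Syscall(syscall.SYS_IOCTL, f.Fd(),
		uintptr(syscall.TIOCSWINSZ), uintptr(unsafe.Pointer(&ws)))
	if errno != 0 {
		return errno
	}
	return nil
}

func main() {
	fmt.Println(validateSize(120, 30)) // <nil>
	fmt.Println(validateSize(0, 30))   // error
}
```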

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Add client-side resize caching and Hack Nerd Font support

- Implement resize request caching to prevent redundant API calls
- Add debouncing to terminal resize events (250ms delay)
- Replace ResizeObserver with window.resize events only to eliminate pixel-level jitter
- Add Hack Nerd Font Mono as primary terminal font with Fira Code fallback
- Update session creation to use conservative 120x30 defaults
- Fix terminal dimension calculation in normal mode

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Add comprehensive XTerm color and rendering enhancements

- Complete 256-color palette support with CSS variables (0-255)
- Enhanced XTerm configuration with proper terminal options
- True xterm-compatible 16-color theme
- Text attribute support: bold, italic, underline, dim, strikethrough, inverse, invisible
- Cursor blinking with CSS animation
- Font rendering optimizations (disabled ligatures, antialiasing)
- Terminal-specific CSS styling for better rendering
- Mac option key as meta, alt-click cursor movement
- Selection colors and inactive selection support

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-06-18 23:32:35 +02:00


package api

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
	"time"

	"github.com/fsnotify/fsnotify"
	"github.com/vibetunnel/linux/pkg/protocol"
	"github.com/vibetunnel/linux/pkg/session"
)
type SSEStreamer struct {
	w       http.ResponseWriter
	session *session.Session
	flusher http.Flusher
}

func NewSSEStreamer(w http.ResponseWriter, session *session.Session) *SSEStreamer {
	flusher, _ := w.(http.Flusher)
	return &SSEStreamer{
		w:       w,
		session: session,
		flusher: flusher,
	}
}
func (s *SSEStreamer) Stream() {
	s.w.Header().Set("Content-Type", "text/event-stream")
	s.w.Header().Set("Cache-Control", "no-cache")
	s.w.Header().Set("Connection", "keep-alive")
	s.w.Header().Set("X-Accel-Buffering", "no")

	streamPath := s.session.StreamOutPath()
	log.Printf("[DEBUG] SSE: Starting live stream for session %s", s.session.ID[:8])

	// Create a file watcher for high-performance event detection
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Printf("[ERROR] SSE: Failed to create file watcher: %v", err)
		s.sendError(fmt.Sprintf("Failed to create watcher: %v", err))
		return
	}
	defer watcher.Close()

	// Add the stream file to the watcher
	if err := watcher.Add(streamPath); err != nil {
		log.Printf("[ERROR] SSE: Failed to watch stream file: %v", err)
		s.sendError(fmt.Sprintf("Failed to watch file: %v", err))
		return
	}

	headerSent := false
	seenBytes := int64(0)

	// Send initial content immediately and check for client disconnect
	if err := s.processNewContent(streamPath, &headerSent, &seenBytes); err != nil {
		log.Printf("[DEBUG] SSE: Client disconnected during initial content: %v", err)
		return
	}

	// Watch for file changes
	for {
		select {
		case event, ok := <-watcher.Events:
			if !ok {
				return
			}
			// Process file writes (new content) and check for client disconnect
			if event.Op&fsnotify.Write == fsnotify.Write {
				if err := s.processNewContent(streamPath, &headerSent, &seenBytes); err != nil {
					log.Printf("[DEBUG] SSE: Client disconnected during content streaming: %v", err)
					return
				}
			}
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Printf("[ERROR] SSE: File watcher error: %v", err)
		case <-time.After(1 * time.Second):
			// Check if the session is still alive, but only once per second for performance
			if !s.session.IsAlive() {
				log.Printf("[DEBUG] SSE: Session %s is dead, ending stream", s.session.ID[:8])
				if err := s.sendEvent(&protocol.StreamEvent{Type: "end"}); err != nil {
					log.Printf("[DEBUG] SSE: Client disconnected during end event: %v", err)
				}
				return
			}
		}
	}
}
func (s *SSEStreamer) processNewContent(streamPath string, headerSent *bool, seenBytes *int64) error {
	// Open the file for reading
	file, err := os.Open(streamPath)
	if err != nil {
		log.Printf("[ERROR] SSE: Failed to open stream file: %v", err)
		return err
	}
	defer file.Close()

	// Get the current file size
	fileInfo, err := file.Stat()
	if err != nil {
		log.Printf("[ERROR] SSE: Failed to stat stream file: %v", err)
		return err
	}
	currentSize := fileInfo.Size()

	// If the file hasn't grown, there is nothing to do
	if currentSize <= *seenBytes {
		return nil
	}

	// Seek to the position we last read
	if _, err := file.Seek(*seenBytes, io.SeekStart); err != nil {
		log.Printf("[ERROR] SSE: Failed to seek to position %d: %v", *seenBytes, err)
		return err
	}

	// Read only the new content. io.ReadFull guards against short reads,
	// which would otherwise desynchronize seenBytes from the data we processed.
	newContent := make([]byte, currentSize-*seenBytes)
	if _, err := io.ReadFull(file, newContent); err != nil {
		log.Printf("[ERROR] SSE: Failed to read new content: %v", err)
		return err
	}

	// Update seen bytes
	*seenBytes = currentSize

	// Process the new content line by line
	content := string(newContent)
	lines := strings.Split(content, "\n")

	// If the content doesn't end with a newline, the last line is incomplete:
	// move the file position back so it is re-read on the next update
	endIndex := len(lines)
	if !strings.HasSuffix(content, "\n") && len(lines) > 0 {
		incompleteLineBytes := int64(len(lines[len(lines)-1]))
		*seenBytes -= incompleteLineBytes
		endIndex = len(lines) - 1
	}

	// Process complete lines
	for i := 0; i < endIndex; i++ {
		line := lines[i]
		if line == "" {
			continue
		}

		// Try to parse as the asciinema header first
		if !*headerSent {
			var header protocol.AsciinemaHeader
			if err := json.Unmarshal([]byte(line), &header); err == nil && header.Version > 0 {
				*headerSent = true
				log.Printf("[DEBUG] SSE: Sending event type=header")
				// Skip sending the header for now; the frontend doesn't need it
				continue
			}
		}

		// Try to parse as an event array: [timestamp, type, data]
		var eventArray []interface{}
		if err := json.Unmarshal([]byte(line), &eventArray); err == nil && len(eventArray) == 3 {
			timestamp, ok1 := eventArray[0].(float64)
			eventType, ok2 := eventArray[1].(string)
			data, ok3 := eventArray[2].(string)
			if ok1 && ok2 && ok3 {
				event := &protocol.StreamEvent{
					Type: "event",
					Event: &protocol.AsciinemaEvent{
						Time: timestamp,
						Type: protocol.EventType(eventType),
						Data: data,
					},
				}
				log.Printf("[DEBUG] SSE: Sending event type=%s", event.Type)
				if err := s.sendRawEvent(event); err != nil {
					log.Printf("[ERROR] SSE: Failed to send event: %v", err)
					return err
				}
			}
		}
	}
	return nil
}
func (s *SSEStreamer) sendEvent(event *protocol.StreamEvent) error {
	data, err := json.Marshal(event)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, line := range lines {
		if _, err := fmt.Fprintf(s.w, "data: %s\n", line); err != nil {
			return err // Client disconnected
		}
	}
	if _, err := fmt.Fprintf(s.w, "\n"); err != nil {
		return err // Client disconnected
	}
	if s.flusher != nil {
		s.flusher.Flush()
	}
	return nil
}

func (s *SSEStreamer) sendRawEvent(event *protocol.StreamEvent) error {
	var data interface{}
	if event.Type == "header" {
		// For header events, we can skip them since the frontend might not expect them
		// Or send them in a compatible format if needed
		return nil
	} else if event.Type == "event" && event.Event != nil {
		// Convert to asciinema format: [timestamp, type, data]
		data = []interface{}{
			event.Event.Time,
			string(event.Event.Type),
			event.Event.Data,
		}
	} else {
		// For other event types, use the original format
		data = event
	}
	jsonData, err := json.Marshal(data)
	if err != nil {
		return err
	}
	lines := strings.Split(string(jsonData), "\n")
	for _, line := range lines {
		if _, err := fmt.Fprintf(s.w, "data: %s\n", line); err != nil {
			return err // Client disconnected
		}
	}
	if _, err := fmt.Fprintf(s.w, "\n"); err != nil {
		return err // Client disconnected
	}
	if s.flusher != nil {
		s.flusher.Flush()
	}
	return nil
}

func (s *SSEStreamer) sendError(message string) error {
	event := &protocol.StreamEvent{
		Type:    "error",
		Message: message,
	}
	return s.sendEvent(event)
}
type SessionSnapshot struct {
	SessionID string                    `json:"session_id"`
	Header    *protocol.AsciinemaHeader `json:"header"`
	Events    []protocol.AsciinemaEvent `json:"events"`
}

func GetSessionSnapshot(sess *session.Session) (*SessionSnapshot, error) {
	streamPath := sess.StreamOutPath()
	file, err := os.Open(streamPath)
	if err != nil {
		return nil, err
	}
	defer file.Close()

	reader := protocol.NewStreamReader(file)
	snapshot := &SessionSnapshot{
		SessionID: sess.ID,
		Events:    make([]protocol.AsciinemaEvent, 0),
	}

	lastClearIndex := -1
	eventIndex := 0
	for {
		event, err := reader.Next()
		if err != nil {
			if err != io.EOF {
				return nil, err
			}
			break
		}
		switch event.Type {
		case "header":
			snapshot.Header = event.Header
		case "event":
			snapshot.Events = append(snapshot.Events, *event.Event)
			if event.Event.Type == protocol.EventOutput && containsClearScreen(event.Event.Data) {
				lastClearIndex = eventIndex
			}
			eventIndex++
		}
	}

	// If a clear-screen was seen, drop everything before it and rebase timestamps
	if lastClearIndex >= 0 && lastClearIndex < len(snapshot.Events)-1 {
		snapshot.Events = snapshot.Events[lastClearIndex:]
		if len(snapshot.Events) > 0 {
			firstTime := snapshot.Events[0].Time
			for i := range snapshot.Events {
				snapshot.Events[i].Time -= firstTime
			}
		}
	}
	return snapshot, nil
}
func containsClearScreen(data string) bool {
	clearSequences := []string{
		"\x1b[H\x1b[2J", // cursor home + clear screen
		"\x1b[2J",       // clear screen
		"\x1b[3J",       // clear scrollback
		"\x1bc",         // full terminal reset (RIS)
	}
	for _, seq := range clearSequences {
		if strings.Contains(data, seq) {
			return true
		}
	}
	return false
}