Task 230: .FLV File Format

1. FLV File Format Specifications

The .FLV (Flash Video) file format is a container format developed by Macromedia (later Adobe) for streaming audio, video, and data over the internet, primarily used with Adobe Flash Player. It supports synchronized audio and video streams, with a maximum of one audio stream and one video stream per file. Data is stored in big-endian byte order (except where noted), and the format is based on the SWF file format but optimized for video delivery. Key features include support for codecs like H.263, VP6, H.264 (AVC), MP3, AAC, and Speex. The format includes a header followed by a series of tags, each with a back-pointer to the previous tag's size. Encryption is optional (via Filter flag). Timestamps are in milliseconds, and playback relies solely on FLV timestamps, ignoring embedded payload timing.

Specifications are documented in Adobe's "Video File Format Specification Version 10" (and Version 10.1), which covers both FLV and F4V formats.

2. List of Properties Intrinsic to the FLV File Format

The properties refer to the structural fields and elements defining the file's layout, including the header, tags, and payload-specific fields. These are parsed sequentially from the binary file. Properties are listed hierarchically, with types (e.g., UI8 = unsigned 8-bit integer, UB[4] = 4-bit unsigned bitfield, UI24 = unsigned 24-bit integer, SI24 = signed 24-bit integer, UI32 = unsigned 32-bit integer). The file starts at byte 0 with the header, followed by PreviousTagSize0 (always 0), then repeating tag + PreviousTagSizeN blocks.

Header Properties (fixed size, typically 9 bytes):

  • Signature: String (3 x UI8, always "FLV", i.e. 0x46 0x4C 0x56)
  • Version: UI8 (typically 1)
  • TypeFlagsReserved1: UB[5] (must be 0)
  • TypeFlagsAudio (HasAudio): UB[1] (1 if audio present, 0 otherwise)
  • TypeFlagsReserved2: UB[1] (must be 0)
  • TypeFlagsVideo (HasVideo): UB[1] (1 if video present, 0 otherwise)
  • DataOffset: UI32 (offset to body, typically 9)
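
The header fields above can be decoded in a few lines. This is a minimal sketch (the function name `parseFLVHeader` is illustrative, not from any library) that reads the 9 fixed bytes with a DataView, which is big-endian by default:

```javascript
// Decode the 9-byte FLV header from an ArrayBuffer.
function parseFLVHeader(buf) {
  const dv = new DataView(buf);
  const signature = String.fromCharCode(dv.getUint8(0), dv.getUint8(1), dv.getUint8(2));
  if (signature !== 'FLV') throw new Error('bad signature');
  const flags = dv.getUint8(4);              // TypeFlags byte
  return {
    signature,                               // always "FLV"
    version: dv.getUint8(3),                 // UI8, typically 1
    hasAudio: (flags & 0x04) !== 0,          // TypeFlagsAudio, bit 2
    hasVideo: (flags & 0x01) !== 0,          // TypeFlagsVideo, bit 0
    dataOffset: dv.getUint32(5)              // UI32 big-endian, typically 9
  };
}

// A standard header for a file carrying both streams:
// parseFLVHeader(new Uint8Array([0x46, 0x4C, 0x56, 1, 0x05, 0, 0, 0, 9]).buffer)
//   → { signature: 'FLV', version: 1, hasAudio: true, hasVideo: true, dataOffset: 9 }
```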

Body Properties (repeating for each tag):

  • PreviousTagSize: UI32 (size of previous tag including header; 0 for the first)

Tag Properties (11 bytes header + data):

  • Reserved: UB[2] (must be 0, for FMS compatibility)
  • Filter: UB[1] (0 = no preprocessing/encryption, 1 = encrypted/preprocessing required)
  • TagType: UB[5] (8 = audio, 9 = video, 18 = script data)
  • DataSize: UI24 (length of Data field)
  • Timestamp: UI24 (milliseconds since first tag)
  • TimestampExtended: UI8 (extends Timestamp to 32 bits)
  • StreamID: UI24 (always 0)
  • If Filter == 1:
  • EncryptionHeader: Variable (see spec for details; includes EncryptionAlgorithm, etc.)
  • FilterParams: Variable (parameters for filter/encryption)
  • Data: Variable (based on TagType)
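
As a sketch of the 11-byte tag header layout above (the helper name `parseTagHeader` is illustrative), note how the UI24 fields are assembled byte-by-byte and how TimestampExtended supplies bits 31..24 of the final timestamp:

```javascript
// Decode an 11-byte FLV tag header starting at `offset`.
function parseTagHeader(dv, offset) {
  const b0 = dv.getUint8(offset);
  const ui24 = o => (dv.getUint8(o) << 16) | (dv.getUint8(o + 1) << 8) | dv.getUint8(o + 2);
  return {
    filter: (b0 >> 5) & 0x01,            // 1 = encrypted/preprocessed
    tagType: b0 & 0x1F,                  // 8 = audio, 9 = video, 18 = script data
    dataSize: ui24(offset + 1),
    // TimestampExtended goes into the upper 8 bits; >>> 0 keeps it unsigned.
    timestamp: ((dv.getUint8(offset + 7) << 24) | ui24(offset + 4)) >>> 0,
    streamId: ui24(offset + 8)           // always 0
  };
}

// Example: a video tag (type 9), 5 data bytes, timestamp 1000 ms.
// parseTagHeader(new DataView(new Uint8Array([0x09, 0, 0, 5, 0, 0x03, 0xE8, 0, 0, 0, 0]).buffer), 0)
//   → { filter: 0, tagType: 9, dataSize: 5, timestamp: 1000, streamId: 0 }
```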

Audio Tag Properties (TagType == 8, in Data field):

  • SoundFormat: UB[4] (0 = Linear PCM platform endian, 1 = ADPCM, 2 = MP3, 3 = Linear PCM little endian, 4 = Nellymoser 16kHz mono, 5 = Nellymoser 8kHz mono, 6 = Nellymoser, 7 = G.711 A-law, 8 = G.711 mu-law, 10 = AAC, 11 = Speex, 14 = MP3 8kHz, 15 = Device-specific)
  • SoundRate: UB[2] (0 = 5.5kHz, 1 = 11kHz, 2 = 22kHz, 3 = 44kHz)
  • SoundSize: UB[1] (0 = 8-bit, 1 = 16-bit)
  • SoundType: UB[1] (0 = mono, 1 = stereo)
  • SoundData: UI8[DataSize - 1] (payload; if SoundFormat == 10 (AAC), includes AACAUDIODATA)
  • If SoundFormat == 10 (AAC):
  • AACPacketType: UI8 (0 = sequence header, 1 = raw)
  • AACData: UI8[remaining] (AudioSpecificConfig or raw frames)
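
The four audio bitfields above all live in a single byte. A quick sketch (the function name `parseAudioFlags` is illustrative):

```javascript
// Split the audio flags byte into SoundFormat/SoundRate/SoundSize/SoundType.
function parseAudioFlags(b) {
  return {
    soundFormat: (b >> 4) & 0x0F,  // e.g. 2 = MP3, 10 = AAC
    soundRate:   (b >> 2) & 0x03,  // 3 = 44 kHz
    soundSize:   (b >> 1) & 0x01,  // 1 = 16-bit
    soundType:    b       & 0x01   // 1 = stereo
  };
}

// 0xAF is the byte seen on most AAC streams:
// parseAudioFlags(0xAF) → { soundFormat: 10, soundRate: 3, soundSize: 1, soundType: 1 }
```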

Video Tag Properties (TagType == 9, in Data field):

  • FrameType: UB[4] (1 = keyframe/seekable, 2 = inter frame/non-seekable, 3 = disposable inter frame, 4 = generated keyframe, 5 = info/command frame)
  • CodecID: UB[4] (2 = Sorenson H.263, 3 = Screen video, 4 = On2 VP6, 5 = On2 VP6 alpha, 6 = Screen video v2, 7 = AVC/H.264)
  • VideoData: UI8[DataSize - 1] (payload; if CodecID == 7 (AVC), includes AVCVIDEOPACKET)
  • If CodecID == 7 (AVC):
  • AVCPacketType: UI8 (0 = sequence header, 1 = NALU, 2 = end of sequence)
  • CompositionTime: SI24 (offset in ms; 0 if not type 1)
  • AVCData: UI8[remaining] (AVCDecoderConfigurationRecord, NALUs, or empty)
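
Similarly, FrameType and CodecID share the first video byte, and the SI24 CompositionTime that follows AVCPacketType needs explicit sign extension. A sketch (the names `parseVideoFlags` and `readSI24` are illustrative):

```javascript
// Split the video flags byte into FrameType and CodecID.
function parseVideoFlags(b) {
  return { frameType: (b >> 4) & 0x0F, codecId: b & 0x0F };
}

// Read a big-endian SI24 and sign-extend bit 23 into a normal JS number.
function readSI24(dv, offset) {
  const v = (dv.getUint8(offset) << 16) | (dv.getUint8(offset + 1) << 8) | dv.getUint8(offset + 2);
  return (v & 0x800000) ? v - 0x1000000 : v;
}

// 0x17 marks an AVC keyframe: parseVideoFlags(0x17) → { frameType: 1, codecId: 7 }
// readSI24 over bytes FF FF FE → -2 (a negative composition offset)
```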

Script Data Tag Properties (TagType == 18, in Data field):

  • ScriptDataObject: AMF0-encoded (Action Message Format 0)
  • Name: STRING (typically "onMetaData")
  • Value: SCRIPTDATAVALUE (mixed type: UI8 type + value; types include 0=Number, 1=Boolean, 2=String, 3=Object, etc.)
  • For objects: Array of SCRIPTDATAVARIABLE (String name + SCRIPTDATAVALUE), ended by SCRIPTDATAOBJECTEND (UI24 = 0x000009)
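
None of the handlers below actually decode the AMF0 payload, so here is a rough sketch of what that involves (the function name `parseOnMetaData` is illustrative, and only Number properties are handled): the script-data body opens with an AMF0 string, usually followed by an ECMA array (type 8) of name/value pairs terminated by SCRIPTDATAOBJECTEND:

```javascript
// Minimal AMF0 reader: leading string name, then Number props of an ECMA array.
function parseOnMetaData(dv, offset) {
  if (dv.getUint8(offset) !== 2) throw new Error('expected AMF0 string marker');
  const readStr = (o, len) => {
    let s = '';
    for (let i = 0; i < len; i++) s += String.fromCharCode(dv.getUint8(o + i));
    return s;
  };
  const nameLen = dv.getUint16(offset + 1);
  const name = readStr(offset + 3, nameLen);        // typically "onMetaData"
  offset += 3 + nameLen;
  const props = {};
  if (dv.getUint8(offset) === 8) {                  // ECMA array marker
    offset += 5;                                    // marker + UI32 approximate length
    while (offset + 3 <= dv.byteLength) {
      const keyLen = dv.getUint16(offset);
      if (keyLen === 0 && dv.getUint8(offset + 2) === 9) break;  // SCRIPTDATAOBJECTEND
      const key = readStr(offset + 2, keyLen);
      offset += 2 + keyLen;
      const type = dv.getUint8(offset++);
      if (type !== 0) break;                        // only Number (type 0) handled here
      props[key] = dv.getFloat64(offset);           // AMF0 Number = IEEE-754 double
      offset += 8;
    }
  }
  return { name, props };
}
```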

These properties define the entire binary structure. File-system attributes such as the MIME type "video/x-flv" or the ".flv" extension are conventions surrounding the file, not part of the binary format itself.

4. Ghost Blog Embedded HTML/JavaScript for Drag-and-Drop .FLV Property Dumper

This is a self-contained HTML snippet with JavaScript that can be embedded in a Ghost blog (or any HTML page). It allows dragging and dropping an .FLV file, parses it using ArrayBuffer and DataView, and dumps all properties to the screen in a <pre> element. It handles basic non-encrypted files; errors are logged to the console.
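
The snippet itself did not survive the plain-text export of this post; below is a minimal reconstruction of its core, not the original code. The element ids (`dropzone`, `output`) and the function name `parseFLV` are assumptions. The drop handler reads the file into an ArrayBuffer, and the parser walks the header and tag chain with a DataView, writing one line per tag into the <pre> element:

```javascript
// Assumed page markup: <div id="dropzone">Drag and drop .FLV file here</div>
//                      <pre id="output"></pre>
function parseFLV(buf) {
  const dv = new DataView(buf);
  const lines = [];
  const flags = dv.getUint8(4);
  lines.push(`Header: version=${dv.getUint8(3)} audio=${!!(flags & 4)} video=${!!(flags & 1)}`);
  let off = dv.getUint32(5);                 // DataOffset, typically 9
  while (off + 15 <= dv.byteLength) {        // PreviousTagSize + 11-byte tag header
    off += 4;                                // skip PreviousTagSizeN
    const b0 = dv.getUint8(off);
    const size = (dv.getUint8(off + 1) << 16) | (dv.getUint8(off + 2) << 8) | dv.getUint8(off + 3);
    const ts = ((dv.getUint8(off + 7) << 24) | (dv.getUint8(off + 4) << 16) |
                (dv.getUint8(off + 5) << 8) | dv.getUint8(off + 6)) >>> 0;
    lines.push(`Tag type=${b0 & 0x1F} size=${size} ts=${ts}ms`);
    off += 11 + size;                        // tag header + Data
  }
  return lines.join('\n');
}

// Browser-only wiring; guarded so the parser above also runs under Node.
if (typeof document !== 'undefined') {
  const zone = document.getElementById('dropzone');
  zone.addEventListener('dragover', e => e.preventDefault());
  zone.addEventListener('drop', e => {
    e.preventDefault();
    e.dataTransfer.files[0].arrayBuffer().then(buf => {
      document.getElementById('output').textContent = parseFLV(buf);
    });
  });
}
```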

5. Python Class for .FLV Handling

This Python class uses built-in modules to open, decode (parse), read/extract properties, print them to console, and write a modified copy (e.g., with timestamp adjusted for demo). It handles basic non-encrypted files.

import struct
import sys

class FLVHandler:
    def __init__(self, filepath):
        self.filepath = filepath
        self.data = None
        self.properties = {}

    def open_and_decode(self):
        with open(self.filepath, 'rb') as f:
            self.data = f.read()
        self._parse()

    def _parse(self):
        offset = 0
        sig = self.data[offset:offset+3].decode('ascii')
        offset += 3
        if sig != 'FLV':
            raise ValueError('Not an FLV file: bad signature %r' % sig)
        version, = struct.unpack('>B', self.data[offset:offset+1])
        offset += 1
        flags, = struct.unpack('>B', self.data[offset:offset+1])
        offset += 1
        has_audio = (flags & 0x04) >> 2
        has_video = flags & 0x01
        data_offset, = struct.unpack('>I', self.data[offset:offset+4])
        offset += 4
        self.properties['header'] = {
            'signature': sig,
            'version': version,
            'has_audio': bool(has_audio),
            'has_video': bool(has_video),
            'data_offset': data_offset
        }

        offset = data_offset
        tags = []
        tag_index = 0
        while offset + 4 <= len(self.data):
            prev_size, = struct.unpack('>I', self.data[offset:offset+4])
            offset += 4
            if offset + 11 > len(self.data):
                break

            tag_byte1, = struct.unpack('>B', self.data[offset:offset+1])
            offset += 1
            filter = (tag_byte1 & 0x20) >> 5
            tag_type = tag_byte1 & 0x1F
            data_size = struct.unpack('>I', b'\x00' + self.data[offset:offset+3])[0]
            offset += 3
            timestamp = struct.unpack('>I', b'\x00' + self.data[offset:offset+3])[0]
            offset += 3
            ts_ext, = struct.unpack('>B', self.data[offset:offset+1])
            offset += 1
            full_ts = (ts_ext << 24) | timestamp
            stream_id = struct.unpack('>I', b'\x00' + self.data[offset:offset+3])[0]
            offset += 3

            tag_props = {
                'prev_size': prev_size,
                'filter': filter,
                'tag_type': tag_type,
                'data_size': data_size,
                'timestamp': full_ts,
                'stream_id': stream_id
            }

            if filter == 1:
                # Skip encryption for simplicity
                offset += data_size
            elif tag_type == 8:  # Audio
                audio_byte, = struct.unpack('>B', self.data[offset:offset+1])
                offset += 1
                sound_format = (audio_byte >> 4) & 0x0F
                sound_rate = (audio_byte >> 2) & 0x03
                sound_size = (audio_byte >> 1) & 0x01
                sound_type = audio_byte & 0x01
                tag_props.update({
                    'sound_format': sound_format,
                    'sound_rate': sound_rate,
                    'sound_size': sound_size,
                    'sound_type': sound_type
                })
                if sound_format == 10:  # AAC
                    aac_type, = struct.unpack('>B', self.data[offset:offset+1])
                    offset += 1
                    tag_props['aac_packet_type'] = aac_type
                    offset += data_size - 2  # Flags byte and AACPacketType already consumed
                else:
                    offset += data_size - 1  # Flags byte already consumed
            elif tag_type == 9:  # Video
                video_byte, = struct.unpack('>B', self.data[offset:offset+1])
                offset += 1
                frame_type = (video_byte >> 4) & 0x0F
                codec_id = video_byte & 0x0F
                tag_props.update({
                    'frame_type': frame_type,
                    'codec_id': codec_id
                })
                if codec_id == 7:  # AVC
                    avc_type, = struct.unpack('>B', self.data[offset:offset+1])
                    offset += 1
                    # SI24: sign-extend the 24-bit big-endian composition time offset
                    comp_time = int.from_bytes(self.data[offset:offset+3], 'big', signed=True)
                    offset += 3
                    tag_props.update({
                        'avc_packet_type': avc_type,
                        'composition_time': comp_time
                    })
                    offset += data_size - 5  # Frame byte, AVCPacketType, CompositionTime consumed
                else:
                    offset += data_size - 1  # Frame byte already consumed
            elif tag_type == 18:  # Script
                # Skip AMF0 parsing for simplicity
                offset += data_size
            else:
                offset += data_size

            tags.append(tag_props)
            tag_index += 1

        self.properties['tags'] = tags

    def print_properties(self):
        print('FLV Properties:')
        print('Header:', self.properties['header'])
        for i, tag in enumerate(self.properties.get('tags', [])):
            print(f'Tag {i}:', tag)

    def write_modified(self, output_path):
        if not self.data:
            return
        # Demo: Adjust first timestamp by +1000 ms (if exists)
        modified_data = bytearray(self.data)
        if self.properties['tags']:
            first_tag_offset = self.properties['header']['data_offset'] + 4  # After prev_size0
            first_tag_offset += 1 + 3  # Skip tag_byte1 and data_size
            new_ts = self.properties['tags'][0]['timestamp'] + 1000
            new_ts_lower = new_ts & 0xFFFFFF
            new_ts_ext = (new_ts >> 24) & 0xFF
            modified_data[first_tag_offset:first_tag_offset+3] = struct.pack('>I', new_ts_lower)[1:]
            modified_data[first_tag_offset+3] = new_ts_ext
        with open(output_path, 'wb') as f:
            f.write(modified_data)
        print(f'Written modified FLV to {output_path}')

# Usage example:
# handler = FLVHandler('input.flv')
# handler.open_and_decode()
# handler.print_properties()
# handler.write_modified('output.flv')

6. Java Class for .FLV Handling

This Java class uses RandomAccessFile and ByteBuffer to open, decode, read, print properties to console, and write a modified copy.

import java.io.*;
import java.nio.*;
import java.util.*;

public class FLVHandler {
    private String filepath;
    private byte[] data;
    private Map<String, Object> properties = new HashMap<>();

    public FLVHandler(String filepath) {
        this.filepath = filepath;
    }

    public void openAndDecode() throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(filepath, "r")) {
            data = new byte[(int) raf.length()];
            raf.readFully(data);
        }
        parse();
    }

    private void parse() {
        ByteBuffer bb = ByteBuffer.wrap(data).order(ByteOrder.BIG_ENDIAN);
        int offset = 0;
        String sig = new String(new byte[]{bb.get(offset++), bb.get(offset++), bb.get(offset++)});
        byte version = bb.get(offset++);
        byte flags = bb.get(offset++);
        boolean hasAudio = ((flags & 0x04) >> 2) != 0;
        boolean hasVideo = (flags & 0x01) != 0;
        int dataOffset = bb.getInt(offset);
        offset += 4;
        Map<String, Object> header = new HashMap<>();
        header.put("signature", sig);
        header.put("version", version);
        header.put("has_audio", hasAudio);
        header.put("has_video", hasVideo);
        header.put("data_offset", dataOffset);
        properties.put("header", header);

        offset = dataOffset;
        List<Map<String, Object>> tags = new ArrayList<>();
        int tagIndex = 0;
        while (offset + 4 <= data.length) {
            int prevSize = bb.getInt(offset);
            offset += 4;
            if (offset + 11 > data.length) break;

            byte tagByte1 = bb.get(offset++);
            int filter = (tagByte1 & 0x20) >> 5;
            int tagType = tagByte1 & 0x1F;
            int dataSize = ((bb.get(offset++) & 0xFF) << 16) | ((bb.get(offset++) & 0xFF) << 8) | (bb.get(offset++) & 0xFF);
            int timestamp = ((bb.get(offset++) & 0xFF) << 16) | ((bb.get(offset++) & 0xFF) << 8) | (bb.get(offset++) & 0xFF);
            byte tsExt = bb.get(offset++);
            long fullTs = ((tsExt & 0xFFL) << 24) | timestamp;
            int streamId = ((bb.get(offset++) & 0xFF) << 16) | ((bb.get(offset++) & 0xFF) << 8) | (bb.get(offset++) & 0xFF);

            Map<String, Object> tagProps = new HashMap<>();
            tagProps.put("prev_size", prevSize);
            tagProps.put("filter", filter);
            tagProps.put("tag_type", tagType);
            tagProps.put("data_size", dataSize);
            tagProps.put("timestamp", fullTs);
            tagProps.put("stream_id", streamId);

            if (filter == 1) {
                // Skip encryption
                offset += dataSize;
            } else if (tagType == 8) { // Audio
                byte audioByte = bb.get(offset++);
                int soundFormat = (audioByte >> 4) & 0x0F;
                int soundRate = (audioByte >> 2) & 0x03;
                int soundSize = (audioByte >> 1) & 0x01;
                int soundType = audioByte & 0x01;
                tagProps.put("sound_format", soundFormat);
                tagProps.put("sound_rate", soundRate);
                tagProps.put("sound_size", soundSize);
                tagProps.put("sound_type", soundType);
                if (soundFormat == 10) {
                    byte aacType = bb.get(offset++);
                    tagProps.put("aac_packet_type", aacType);
                    offset += dataSize - 2; // flags byte + AACPacketType already consumed
                } else {
                    offset += dataSize - 1; // flags byte already consumed
                }
            } else if (tagType == 9) { // Video
                byte videoByte = bb.get(offset++);
                int frameType = (videoByte >> 4) & 0x0F;
                int codecId = videoByte & 0x0F;
                tagProps.put("frame_type", frameType);
                tagProps.put("codec_id", codecId);
                if (codecId == 7) {
                    byte avcType = bb.get(offset++);
                    // SI24: mask each byte, then sign-extend bit 23 exactly once
                    int compTime = ((bb.get(offset++) & 0xFF) << 16) | ((bb.get(offset++) & 0xFF) << 8) | (bb.get(offset++) & 0xFF);
                    if ((compTime & 0x800000) != 0) compTime -= 0x1000000;
                    tagProps.put("avc_packet_type", avcType);
                    tagProps.put("composition_time", compTime);
                    offset += dataSize - 5; // frame byte + AVCPacketType + CompositionTime consumed
                } else {
                    offset += dataSize - 1; // frame byte already consumed
                }
            } else if (tagType == 18) { // Script
                offset += dataSize;
            } else {
                offset += dataSize;
            }

            tags.add(tagProps);
            tagIndex++;
        }
        properties.put("tags", tags);
    }

    public void printProperties() {
        System.out.println("FLV Properties:");
        System.out.println("Header: " + properties.get("header"));
        @SuppressWarnings("unchecked")
        List<Map<String, Object>> tags = (List<Map<String, Object>>) properties.get("tags");
        for (int i = 0; i < tags.size(); i++) {
            System.out.println("Tag " + i + ": " + tags.get(i));
        }
    }

    public void writeModified(String outputPath) throws IOException {
        if (data == null) return;
        ByteBuffer bb = ByteBuffer.wrap(data.clone()).order(ByteOrder.BIG_ENDIAN);
        @SuppressWarnings("unchecked")
        List<Map<String, Object>> tags = (List<Map<String, Object>>) properties.get("tags");
        if (!tags.isEmpty()) {
            int firstTagOffset = (int) ((Map<?, ?>) properties.get("header")).get("data_offset") + 4; // After prev_size0
            firstTagOffset += 1 + 3; // Skip tag_byte1, data_size
            long newTs = (long) tags.get(0).get("timestamp") + 1000;
            int newTsLower = (int) (newTs & 0xFFFFFF);
            byte newTsExt = (byte) ((newTs >> 24) & 0xFF);
            bb.position(firstTagOffset);
            bb.put((byte) ((newTsLower >> 16) & 0xFF));
            bb.put((byte) ((newTsLower >> 8) & 0xFF));
            bb.put((byte) (newTsLower & 0xFF));
            bb.put(newTsExt);
        }
        try (FileOutputStream fos = new FileOutputStream(outputPath)) {
            fos.write(bb.array());
        }
        System.out.println("Written modified FLV to " + outputPath);
    }

    // Usage example:
    // public static void main(String[] args) throws IOException {
    //     FLVHandler handler = new FLVHandler("input.flv");
    //     handler.openAndDecode();
    //     handler.printProperties();
    //     handler.writeModified("output.flv");
    // }
}

7. JavaScript Class for .FLV Handling

This Node.js class uses fs to open, decode, read, print to console, and write. Run with Node.js (e.g., node script.js).

const fs = require('fs');

class FLVHandler {
  constructor(filepath) {
    this.filepath = filepath;
    this.data = null;
    this.properties = {};
  }

  openAndDecode() {
    this.data = fs.readFileSync(this.filepath);
    this.parse();
  }

  parse() {
    let offset = 0;
    const sig = this.data.toString('utf8', offset, offset + 3);
    offset += 3;
    const version = this.data.readUInt8(offset++);
    const flags = this.data.readUInt8(offset++);
    const hasAudio = (flags & 0x04) >> 2;
    const hasVideo = flags & 0x01;
    const dataOffset = this.data.readUInt32BE(offset);
    offset += 4;
    this.properties.header = {
      signature: sig,
      version,
      has_audio: !!hasAudio,
      has_video: !!hasVideo,
      data_offset: dataOffset
    };

    offset = dataOffset;
    const tags = [];
    let tagIndex = 0;
    while (offset + 4 <= this.data.length) {
      const prevSize = this.data.readUInt32BE(offset);
      offset += 4;
      if (offset + 11 > this.data.length) break;

      const tagByte1 = this.data.readUInt8(offset++);
      const filter = (tagByte1 & 0x20) >> 5;
      const tagType = tagByte1 & 0x1F;
      const dataSize = this.data.readUIntBE(offset, 3);
      offset += 3;
      const timestamp = this.data.readUIntBE(offset, 3);
      offset += 3;
      const tsExt = this.data.readUInt8(offset++);
      const fullTs = tsExt * 0x1000000 + timestamp; // avoid signed 32-bit overflow when tsExt >= 0x80
      const streamId = this.data.readUIntBE(offset, 3);
      offset += 3;

      const tagProps = {
        prev_size: prevSize,
        filter,
        tag_type: tagType,
        data_size: dataSize,
        timestamp: fullTs,
        stream_id: streamId
      };

      if (filter === 1) {
        offset += dataSize;
      } else if (tagType === 8) { // Audio
        const audioByte = this.data.readUInt8(offset++);
        const soundFormat = (audioByte >> 4) & 0x0F;
        const soundRate = (audioByte >> 2) & 0x03;
        const soundSize = (audioByte >> 1) & 0x01;
        const soundType = audioByte & 0x01;
        tagProps.sound_format = soundFormat;
        tagProps.sound_rate = soundRate;
        tagProps.sound_size = soundSize;
        tagProps.sound_type = soundType;
        if (soundFormat === 10) {
          const aacType = this.data.readUInt8(offset++);
          tagProps.aac_packet_type = aacType;
          offset += dataSize - 2; // flags byte + AACPacketType already consumed
        } else {
          offset += dataSize - 1; // flags byte already consumed
        }
      } else if (tagType === 9) { // Video
        const videoByte = this.data.readUInt8(offset++);
        const frameType = (videoByte >> 4) & 0x0F;
        const codecId = videoByte & 0x0F;
        tagProps.frame_type = frameType;
        tagProps.codec_id = codecId;
        if (codecId === 7) {
          const avcType = this.data.readUInt8(offset++);
          const compTime = this.data.readIntBE(offset, 3); // SI24; readIntBE handles the sign
          offset += 3;
          tagProps.avc_packet_type = avcType;
          tagProps.composition_time = compTime;
          offset += dataSize - 5; // frame byte + AVCPacketType + CompositionTime consumed
        } else {
          offset += dataSize - 1; // frame byte already consumed
        }
      } else if (tagType === 18) {
        offset += dataSize;
      } else {
        offset += dataSize;
      }

      tags.push(tagProps);
      tagIndex++;
    }
    this.properties.tags = tags;
  }

  printProperties() {
    console.log('FLV Properties:');
    console.log('Header:', this.properties.header);
    this.properties.tags.forEach((tag, i) => {
      console.log(`Tag ${i}:`, tag);
    });
  }

  writeModified(outputPath) {
    if (!this.data) return;
    const modifiedData = Buffer.from(this.data);
    if (this.properties.tags.length > 0) {
      let firstTagOffset = this.properties.header.data_offset + 4;
      firstTagOffset += 1 + 3;
      const newTs = this.properties.tags[0].timestamp + 1000;
      const newTsLower = newTs & 0xFFFFFF;
      const newTsExt = (newTs >> 24) & 0xFF;
      modifiedData.writeUIntBE(newTsLower, firstTagOffset, 3);
      modifiedData.writeUInt8(newTsExt, firstTagOffset + 3);
    }
    fs.writeFileSync(outputPath, modifiedData);
    console.log(`Written modified FLV to ${outputPath}`);
  }
}

// Usage example:
// const handler = new FLVHandler('input.flv');
// handler.openAndDecode();
// handler.printProperties();
// handler.writeModified('output.flv');

8. C++ Class for .FLV Handling

This C++ class uses fstream to open, decode, read, print to console (std::cout), and write. Compile with g++ script.cpp -o flvhandler.

#include <iostream>
#include <fstream>
#include <vector>
#include <map>
#include <string>
#include <cstdint>
#include <cstring>

class FLVHandler {
private:
    std::string filepath;
    std::vector<uint8_t> data;
    std::map<std::string, std::map<std::string, int64_t>> properties; // Simplified, using int64_t for values

public:
    FLVHandler(const std::string& fp) : filepath(fp) {}

    void openAndDecode() {
        std::ifstream file(filepath, std::ios::binary | std::ios::ate);
        if (!file) return;
        auto size = file.tellg();
        data.resize(size);
        file.seekg(0);
        file.read(reinterpret_cast<char*>(data.data()), size);
        parse();
    }

    void parse() {
        size_t offset = 0;
        char sig[4] = {};
        memcpy(sig, &data[offset], 3);
        offset += 3;
        uint8_t version = data[offset++];
        uint8_t flags = data[offset++];
        bool hasAudio = (flags & 0x04) >> 2;
        bool hasVideo = flags & 0x01;
        uint32_t dataOffset = (static_cast<uint32_t>(data[offset]) << 24) | (data[offset+1] << 16) | (data[offset+2] << 8) | data[offset+3];
        offset += 4;
        std::map<std::string, int64_t> header;
        header["version"] = version;
        header["has_audio"] = hasAudio;
        header["has_video"] = hasVideo;
        header["data_offset"] = dataOffset;
        properties["header"] = header;

        offset = dataOffset;
        std::vector<std::map<std::string, int64_t>> tags;
        int tagIndex = 0;
        while (offset + 4 <= data.size()) {
            uint32_t prevSize = (static_cast<uint32_t>(data[offset]) << 24) | (data[offset+1] << 16) | (data[offset+2] << 8) | data[offset+3];
            offset += 4;
            if (offset + 11 > data.size()) break;

            uint8_t tagByte1 = data[offset++];
            int filter = (tagByte1 & 0x20) >> 5;
            int tagType = tagByte1 & 0x1F;
            uint32_t dataSize = (data[offset] << 16) | (data[offset+1] << 8) | data[offset+2];
            offset += 3;
            uint32_t timestamp = (data[offset] << 16) | (data[offset+1] << 8) | data[offset+2];
            offset += 3;
            uint8_t tsExt = data[offset++];
            int64_t fullTs = (static_cast<int64_t>(tsExt) << 24) | timestamp;
            uint32_t streamId = (data[offset] << 16) | (data[offset+1] << 8) | data[offset+2];
            offset += 3;

            std::map<std::string, int64_t> tagProps;
            tagProps["prev_size"] = prevSize;
            tagProps["filter"] = filter;
            tagProps["tag_type"] = tagType;
            tagProps["data_size"] = dataSize;
            tagProps["timestamp"] = fullTs;
            tagProps["stream_id"] = streamId;

            if (filter == 1) {
                offset += dataSize;
            } else if (tagType == 8) { // Audio
                uint8_t audioByte = data[offset++];
                int soundFormat = (audioByte >> 4) & 0x0F;
                int soundRate = (audioByte >> 2) & 0x03;
                int soundSize = (audioByte >> 1) & 0x01;
                int soundType = audioByte & 0x01;
                tagProps["sound_format"] = soundFormat;
                tagProps["sound_rate"] = soundRate;
                tagProps["sound_size"] = soundSize;
                tagProps["sound_type"] = soundType;
                if (soundFormat == 10) {
                    uint8_t aacType = data[offset++];
                    tagProps["aac_packet_type"] = aacType;
                    offset += dataSize - 2; // flags byte + AACPacketType already consumed
                } else {
                    offset += dataSize - 1; // flags byte already consumed
                }
            } else if (tagType == 9) { // Video
                uint8_t videoByte = data[offset++];
                int frameType = (videoByte >> 4) & 0x0F;
                int codecId = videoByte & 0x0F;
                tagProps["frame_type"] = frameType;
                tagProps["codec_id"] = codecId;
                if (codecId == 7) {
                    uint8_t avcType = data[offset++];
                    // SI24: assemble 24 bits, then sign-extend bit 23 exactly once
                    int32_t compTime = (data[offset] << 16) | (data[offset+1] << 8) | data[offset+2];
                    if (compTime & 0x800000) compTime -= 0x1000000;
                    offset += 3;
                    tagProps["avc_packet_type"] = avcType;
                    tagProps["composition_time"] = compTime;
                    offset += dataSize - 5; // frame byte + AVCPacketType + CompositionTime consumed
                } else {
                    offset += dataSize - 1; // frame byte already consumed
                }
            } else if (tagType == 18) {
                offset += dataSize;
            } else {
                offset += dataSize;
            }

            tags.push_back(tagProps);
            tagIndex++;
        }
        // Store tags in properties (simplified, as map of maps)
        for (size_t i = 0; i < tags.size(); ++i) {
            properties["tag_" + std::to_string(i)] = tags[i];
        }
    }

    void printProperties() {
        std::cout << "FLV Properties:" << std::endl;
        auto headerIt = properties.find("header");
        if (headerIt != properties.end()) {
            std::cout << "Header:" << std::endl;
            for (const auto& kv : headerIt->second) {
                std::cout << "  " << kv.first << ": " << kv.second << std::endl;
            }
        }
        int tagIndex = 0;
        while (true) {
            auto tagIt = properties.find("tag_" + std::to_string(tagIndex));
            if (tagIt == properties.end()) break;
            std::cout << "Tag " << tagIndex << ":" << std::endl;
            for (const auto& kv : tagIt->second) {
                std::cout << "  " << kv.first << ": " << kv.second << std::endl;
            }
            tagIndex++;
        }
    }

    void writeModified(const std::string& outputPath) {
        if (data.empty()) return;
        std::vector<uint8_t> modifiedData = data;
        auto headerIt = properties.find("header");
        if (headerIt != properties.end() && properties.find("tag_0") != properties.end()) {
            size_t firstTagOffset = headerIt->second["data_offset"] + 4;
            firstTagOffset += 1 + 3;
            int64_t newTs = properties["tag_0"]["timestamp"] + 1000;
            uint32_t newTsLower = newTs & 0xFFFFFF;
            uint8_t newTsExt = (newTs >> 24) & 0xFF;
            modifiedData[firstTagOffset] = (newTsLower >> 16) & 0xFF;
            modifiedData[firstTagOffset + 1] = (newTsLower >> 8) & 0xFF;
            modifiedData[firstTagOffset + 2] = newTsLower & 0xFF;
            modifiedData[firstTagOffset + 3] = newTsExt;
        }
        std::ofstream outFile(outputPath, std::ios::binary);
        outFile.write(reinterpret_cast<const char*>(modifiedData.data()), modifiedData.size());
        std::cout << "Written modified FLV to " << outputPath << std::endl;
    }
};

// Usage example:
// int main() {
//     FLVHandler handler("input.flv");
//     handler.openAndDecode();
//     handler.printProperties();
//     handler.writeModified("output.flv");
//     return 0;
// }