Task 748: .UNV File Format

1. Intrinsic Properties of the .UNV File Format

The .UNV file format, also known as the Universal File Format (UFF), is an ASCII-based format originally developed by the Structural Dynamics Research Corporation (SDRC) for transferring data between computer-aided design, testing, and engineering applications. A file consists of multiple datasets, each opened and closed by a "-1" marker (right-justified in columns 1-6), with the dataset number on the line immediately after the opening marker. Records within a dataset are fixed-width, formatted with FORTRAN-style field specifications, and typically at most 80 characters per line. The format supports both ASCII and limited binary variants (e.g., dataset 58b). Intrinsic properties include:

  • File Structure: ASCII text with fixed-width records; datasets are self-contained and can appear in any order; no binary header or magic number; optional binary data in specific datasets.
  • Dataset Delimiters: Each dataset starts and ends with "-1"; dataset type is specified immediately after the start marker.
  • Record Formats: Fields are formatted using FORTRAN-style specifications (e.g., I10 for integers, E13.5 for floats, A80 for strings).
  • Supported Datasets and Their Key Fields (based on standard specifications):
      • Dataset 15 (Nodes): Node definitions. Fields: node_nums (int), def_cs (int), disp_cs (int), color (int), x/y/z (float).
      • Dataset 55 (Data at Nodes): Nodal response data. Fields: id1-id5 (string), model_type (int), analysis_type (int), data_ch (int), spec_data_type (int), data_type (int), n_data_per_node (int), load_case (int), mode_n (int), freq (float), modal_m (float), modal_damp_vis (float), modal_damp_his (float), eig (complex), r1-r6 (float/complex), node_nums (int).
      • Dataset 58/58b (Function at Nodal DOF): Function data (e.g., FRFs). Fields: binary (bool), id1-id5 (string), func_type (int), ver_num (int), load_case_id (int), rsp_ent_name (string), rsp_node (int), rsp_dir (int), ref_ent_name (string), ref_node (int), ref_dir (int), ord_data_type (int), num_pts (int), abscissa_spacing (int), abscissa_min/inc (float), z_axis_value (float), abscissa_spec_data_type (int), ordinate_spec_data_type (int), orddenom_spec_data_type (int), z_axis_spec_data_type (int), data (complex array), x (float array), spec_data_type (int).
      • Dataset 82 (Tracelines): Trace line connections. Fields: trace_num (int), n_nodes (int), color (int), id (string), nodes (int array).
      • Dataset 151 (Header): File header. Fields: model_name (string), description (string), db_app (string), date_db_created (string), time_db_created (string), version_db1/db2 (int), file_type (int), date_db_saved (string), time_db_saved (string), program (string), date_db_written (string), time_db_written (string).
      • Dataset 164 (Units): Unit system. Fields: units_code (int), units_description (string), temp_mode (int), length/force/temp/temp_offset (float).
      • Dataset 2411 (Nodes – Double Precision): High-precision nodes. Fields: node_nums (int), def_cs (int), disp_cs (int), color (int), x/y/z (double).
      • Dataset 2412 (Elements): Element connectivity. Fields: element_nums (int), fe_descriptor (int), phys_table (int), mat_table (int), color (int), num_nodes (int), nodes_nums (int array), beam_orientation/foreend_cross/aftend_cross (float, optional for rods).
      • Dataset 2414 (Analysis Data): Results at nodes/elements. Fields: analysis_dataset_label (int), name (string), dataset_location (int), id1-id5 (string), model_type (int), analysis_type (int), data_characteristic (int), result_type (int), data_type (int), number_of_data_values_for_the_data_component (int), design_set_id (int), iteration_number (int), solution_set_id (int), boundary_condition (int), load_set (int), mode_number (int), time_step_number (int), frequency_number (int), creation_option (int), number_retained (int), time/frequency/eigenvalue/modal_mass/viscous_damping/hysteretic_damping (float), real_part_eigenvalue/imaginary_part_eigenvalue (float), real_part_of_modal_A_or_modal_mass/imaginary_part_of_modal_A_or_modal_mass (float), real_part_of_modal_B_or_modal_mass/imaginary_part_of_modal_B_or_modal_mass (float), node_nums (int), d (float array), x/y/z (float, optional).
      • Dataset 2420 (Coordinate Systems): Coordinate system definitions. Fields: Part_UID (int), Part_Name (string), CS_sys_labels/types/colors (int), CS_names (string), CS_matrices (3x3 float matrix).
  • Other Intrinsic Properties: Supports real/complex data; double/single precision; even/uneven abscissa spacing; unit exponents for length/force/temperature; optional binary encoding in 58b; no strict byte order (ASCII-dominant); file extension ".unv"; no MIME type defined.
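To make the delimiter and record conventions above concrete, here is a minimal illustrative file containing a units dataset (164) and a single double-precision node (2411). The values and exact spacing are made up for illustration, following the I10/D25.16-style conventions described above; real files from SDRC-lineage tools may differ in padding and exponent letter (D vs. E):

```
    -1
   164
         1SI: Meter (newton)            2
  1.00000000000000000D+00  1.00000000000000000D+00  1.00000000000000000D+00
  2.73149999999999977D+02
    -1
    -1
  2411
         1         1         1         1
   1.0000000000000000D+00   2.0000000000000000D+00   0.0000000000000000D+00
    -1
```

Each dataset is fully self-contained between its pair of "-1" markers, which is what allows datasets to appear in any order.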

3. Embedded HTML/JavaScript for a Ghost Blog Post: Drag-and-Drop .UNV File Dumping

This section describes a self-contained HTML page with embedded JavaScript that accepts a dropped .UNV file. The page renders a heading, "UNV File Property Dumper", above a drop zone labeled "Drag and drop a .UNV file here". On drop, the script parses the content, identifies datasets, extracts fields according to the specifications above, and displays all properties on screen.
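The embedded markup itself is not reproduced here. As a sketch of the parsing core such a page could embed (the `dumpDatasets` name and the drop-zone wiring are illustrative, not taken from the original page):

```javascript
// Split UNV file text into datasets on the "-1" delimiter lines and
// collect each dataset's number and raw record lines.
// Note: a record that is exactly "-1" would be misread as a delimiter;
// a stricter parser would check right-justification in columns 1-6.
function dumpDatasets(text) {
    const lines = text.split(/\r?\n/);
    const datasets = [];
    let current = null;
    for (const raw of lines) {
        const line = raw.trim();
        if (line === '-1') {
            if (current === null) {
                current = { type: null, records: [] };  // opening delimiter
            } else {
                datasets.push(current);                 // closing delimiter
                current = null;
            }
        } else if (current !== null) {
            if (current.type === null) {
                current.type = parseInt(line, 10);      // dataset number line
            } else {
                current.records.push(raw);
            }
        }
    }
    return datasets;
}

// In the page, a drop zone would feed the dropped file's text in, e.g.:
//   dropZone.addEventListener('drop', (e) => {
//     e.preventDefault();
//     e.dataTransfer.files[0].text().then((text) => {
//       output.textContent = dumpDatasets(text)
//         .map((ds) => `Dataset ${ds.type}: ${ds.records.length} records`)
//         .join('\n');
//     });
//   });
```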

4. Python Class for .UNV File Handling

import re

class UnvFile:
    def __init__(self, filepath):
        self.filepath = filepath
        self.datasets = []
        self.read()

    def read(self):
        with open(self.filepath, 'r') as f:
            lines = f.readlines()
        i = 0
        while i < len(lines):
            if lines[i].strip() == '-1':
                i += 1
                dataset_type = int(lines[i].strip())
                i += 1
                result = self.parse_dataset(dataset_type, lines, i)
                self.datasets.append({'type': dataset_type,
                                      'fields': result['fields']})
                i = result['end_index']
                # Skip any unparsed records up to and including the closing "-1"
                while i < len(lines) and lines[i].strip() != '-1':
                    i += 1
                i += 1
            else:
                i += 1

    def parse_dataset(self, dataset_type, lines, start):
        fields = {}
        i = start
        if dataset_type == 151:  # Header
            fields['model_name'] = lines[i].strip()
            i += 1
            fields['description'] = lines[i].strip()
            i += 1
            # Remaining header records (application, dates) are not parsed here
        elif dataset_type == 164:  # Units
            # Record 1 is fixed-width (I10, 20A1, I10): code, description, temp mode
            rec = lines[i]
            fields['units_code'] = int(rec[:10])
            fields['units_description'] = rec[10:30].strip()
            i += 1
            # Record 2: conversion factors for length, force, temperature;
            # FORTRAN "D" exponents are normalized to "E" before conversion
            parts = re.split(r'\s+', lines[i].strip().replace('D', 'E'))
            fields['length'] = float(parts[0])
            fields['force'] = float(parts[1])
            fields['temp'] = float(parts[2])
            i += 1
        elif dataset_type == 2411:  # Nodes (double precision)
            fields['nodes'] = []
            while i < len(lines) and lines[i].strip() != '-1':
                line1 = re.split(r'\s+', lines[i].strip())
                i += 1
                line2 = re.split(r'\s+', lines[i].strip().replace('D', 'E'))
                i += 1
                fields['nodes'].append({
                    'node_num': int(line1[0]),
                    'x': float(line2[0]),
                    'y': float(line2[1]),
                    'z': float(line2[2])
                })
        # Add parsing for other datasets similarly
        else:
            fields['raw'] = ''
            while i < len(lines) and lines[i].strip() != '-1':
                fields['raw'] += lines[i]
                i += 1
        return {'fields': fields, 'end_index': i}

    def write(self, output_path):
        with open(output_path, 'w') as f:
            for ds in self.datasets:
                f.write('    -1\n')
                f.write(f'{ds["type"]:6}\n')
                # Write fields based on type; implement reverse parsing
                if ds['type'] == 2411:
                    for node in ds['fields']['nodes']:
                        # 4I10 record: node label, def/disp coordinate systems, color
                        f.write(f'{node["node_num"]:10d}{1:10d}{1:10d}{0:10d}\n')
                        f.write(f'{node["x"]:25.16E}{node["y"]:25.16E}{node["z"]:25.16E}\n')
                # Add for other types
                f.write('    -1\n')

    def print_properties(self):
        for ds in self.datasets:
            print(f'Dataset Type: {ds["type"]}')
            print('Fields:')
            print(ds['fields'])
            print('\n')

# Example usage: unv = UnvFile('example.unv'); unv.print_properties(); unv.write('output.unv')

5. Java Class for .UNV File Handling

import java.io.*;
import java.util.*;
import java.util.regex.Pattern;

public class UnvFile {
    private String filepath;
    private List<Map<String, Object>> datasets = new ArrayList<>();

    public UnvFile(String filepath) {
        this.filepath = filepath;
        read();
    }

    private void read() {
        try (BufferedReader reader = new BufferedReader(new FileReader(filepath))) {
            String line;
            int index = 0;
            List<String> lines = new ArrayList<>();
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
            while (index < lines.size()) {
                if (lines.get(index).trim().equals("-1")) {
                    index++;
                    int datasetType = Integer.parseInt(lines.get(index).trim());
                    index++;
                    Map<String, Object> dataset = new HashMap<>();
                    dataset.put("type", datasetType);
                    Map<String, Object> parseResult = parseDataset(datasetType, lines, index);
                    dataset.put("fields", parseResult.get("fields"));
                    datasets.add(dataset);
                    index = (int) parseResult.get("end_index");
                    // Skip any unparsed records up to and including the closing "-1"
                    while (index < lines.size() && !lines.get(index).trim().equals("-1")) {
                        index++;
                    }
                    index++;
                } else {
                    index++;
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private Map<String, Object> parseDataset(int type, List<String> lines, int start) {
        Map<String, Object> fields = new HashMap<>();
        int i = start;
        Pattern splitPattern = Pattern.compile("\\s+");
        if (type == 151) { // Header
            fields.put("model_name", lines.get(i++).trim());
            fields.put("description", lines.get(i++).trim());
            // Add more
        } else if (type == 164) { // Units
            // Record 1 is fixed-width (I10, 20A1, I10): code, description, temp mode
            String rec = lines.get(i++);
            int n = rec.length();
            fields.put("units_code", Integer.parseInt(rec.substring(0, Math.min(n, 10)).trim()));
            fields.put("units_description", n > 10 ? rec.substring(10, Math.min(n, 30)).trim() : "");
        } else if (type == 2411) { // Nodes
            List<Map<String, Object>> nodes = new ArrayList<>();
            while (i < lines.size() && !lines.get(i).trim().equals("-1")) {
                String[] line1 = splitPattern.split(lines.get(i++).trim());
                String[] line2 = splitPattern.split(lines.get(i++).trim());
                Map<String, Object> node = new HashMap<>();
                node.put("node_num", Integer.parseInt(line1[0]));
                node.put("x", Double.parseDouble(line2[0]));
                node.put("y", Double.parseDouble(line2[1]));
                node.put("z", Double.parseDouble(line2[2]));
                nodes.add(node);
            }
            fields.put("nodes", nodes);
        } // Add cases for other datasets
        Map<String, Object> result = new HashMap<>();
        result.put("fields", fields);
        result.put("end_index", i);
        return result;
    }

    public void write(String outputPath) {
        try (PrintWriter writer = new PrintWriter(outputPath)) {
            for (Map<String, Object> ds : datasets) {
                writer.println("    -1");
                writer.printf("%6d%n", ds.get("type"));
                // Write fields; implement per type
                if ((int) ds.get("type") == 2411) {
                    @SuppressWarnings("unchecked")
                    List<Map<String, Object>> nodes = (List<Map<String, Object>>) ((Map<String, Object>) ds.get("fields")).get("nodes");
                    for (Map<String, Object> node : nodes) {
                        // 4I10 record: node label, def/disp coordinate systems, color
                        writer.printf("%10d%10d%10d%10d%n", node.get("node_num"), 1, 1, 0);
                        writer.printf("%25.16E%25.16E%25.16E%n", node.get("x"), node.get("y"), node.get("z"));
                    }
                }
                writer.println("    -1");
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }

    public void printProperties() {
        for (Map<String, Object> ds : datasets) {
            System.out.println("Dataset Type: " + ds.get("type"));
            System.out.println("Fields: " + ds.get("fields"));
            System.out.println();
        }
    }

    // Example usage: UnvFile unv = new UnvFile("example.unv"); unv.printProperties(); unv.write("output.unv");
}

6. JavaScript Class for .UNV File Handling

class UnvFile {
    constructor(content) {
        this.content = content;
        this.datasets = [];
        this.read();
    }

    read() {
        const lines = this.content.split('\n');
        let i = 0;
        while (i < lines.length) {
            if (lines[i].trim() === '-1') {
                i++;
                const datasetType = parseInt(lines[i].trim(), 10);
                i++;
                const { endIndex, ...fields } = this.parseDataset(datasetType, lines, i);
                this.datasets.push({ type: datasetType, fields });
                i = endIndex;
                // Skip any unparsed records up to and including the closing "-1"
                while (i < lines.length && lines[i].trim() !== '-1') i++;
                i++;
            } else {
                i++;
            }
        }
    }

    parseDataset(type, lines, start) {
        const fields = {};
        let i = start;
        switch (type) {
            case 151: // Header
                fields.model_name = lines[i++].trim();
                fields.description = lines[i++].trim();
                // Add more
                break;
            case 164: { // Units
                const parts = lines[i++].trim().split(/\s+/);
                fields.units_code = parseInt(parts[0], 10);
                // Add more
                break;
            }
            case 2411: // Nodes
                fields.nodes = [];
                while (i < lines.length && lines[i].trim() !== '-1') {
                    const line1 = lines[i++].trim().split(/\s+/);
                    const line2 = lines[i++].trim().split(/\s+/);
                    fields.nodes.push({
                        node_num: parseInt(line1[0]),
                        x: parseFloat(line2[0]),
                        y: parseFloat(line2[1]),
                        z: parseFloat(line2[2])
                    });
                }
                break;
            // Add other cases
        }
        return { ...fields, endIndex: i };
    }

    // Format a number like FORTRAN E25.16: uppercase E, two-digit exponent.
    static formatE(value, width) {
        const [mantissa, exp] = value.toExponential(16).split('e');
        const digits = exp.slice(1).padStart(2, '0');
        return `${mantissa}E${exp[0]}${digits}`.padStart(width);
    }

    write() {
        let output = '';
        this.datasets.forEach(ds => {
            output += '    -1\n';
            output += `${ds.type.toString().padStart(6)}\n`;
            // Write fields; implement per type
            if (ds.type === 2411) {
                ds.fields.nodes.forEach(node => {
                    // 4I10 record: node label, def/disp coordinate systems, color
                    output += `${node.node_num.toString().padStart(10)}${'1'.padStart(10)}${'1'.padStart(10)}${'0'.padStart(10)}\n`;
                    output += `${UnvFile.formatE(node.x, 25)}${UnvFile.formatE(node.y, 25)}${UnvFile.formatE(node.z, 25)}\n`;
                });
            }
            output += '    -1\n';
        });
        return output;
    }

    printProperties() {
        this.datasets.forEach(ds => {
            console.log(`Dataset Type: ${ds.type}`);
            console.log('Fields:', ds.fields);
            console.log('');
        });
    }
}

// Example usage: const unv = new UnvFile(fileContent); unv.printProperties(); const written = unv.write();
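One caveat for all of the split-on-whitespace parsers above: FORTRAN fixed-width fields (e.g., E13.5 in dataset 58) can abut with no separating space when values are negative or carry wide exponents. A width-based slice is more robust; the following helper is a sketch (the `parseFixedWidth` name is illustrative):

```javascript
// Split a fixed-width numeric record into fields of the given width.
// Works even when adjacent fields touch with no separating space.
function parseFixedWidth(line, width) {
    const values = [];
    for (let pos = 0; pos < line.length; pos += width) {
        const field = line.slice(pos, pos + width).trim();
        if (field.length > 0) {
            // Normalize a FORTRAN "D" exponent before conversion
            values.push(parseFloat(field.replace(/[dD]/, 'E')));
        }
    }
    return values;
}
```

The same slicing approach translates directly to the Python, Java, and C readers.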

7. C Implementation for .UNV File Handling

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>

#define MAX_LINE 256
#define MAX_DATASETS 100

typedef struct {
    int type;
    void* fields;  // Use union or struct per type for fields
} Dataset;

typedef struct {
    char* filepath;
    Dataset datasets[MAX_DATASETS];
    int num_datasets;
} UnvFile;

UnvFile* unv_create(const char* filepath) {
    UnvFile* unv = malloc(sizeof(UnvFile));
    unv->filepath = strdup(filepath);
    unv->num_datasets = 0;
    return unv;
}

// Trim leading/trailing whitespace in place; returns a pointer into buf.
static char* trim(char* buf) {
    char* p = buf;
    while (isspace((unsigned char)*p)) p++;
    char* end = p + strlen(p);
    while (end > p && isspace((unsigned char)end[-1])) end--;
    *end = '\0';
    return p;
}

void unv_read(UnvFile* unv) {
    FILE* fp = fopen(unv->filepath, "r");
    if (!fp) return;
    char line[MAX_LINE];
    while (fgets(line, MAX_LINE, fp) && unv->num_datasets < MAX_DATASETS) {
        if (strcmp(trim(line), "-1") != 0) continue;  // find opening delimiter
        if (!fgets(line, MAX_LINE, fp)) break;        // dataset number line
        Dataset ds;
        ds.type = atoi(line);
        ds.fields = NULL;  // allocate and parse per type (e.g. 2411 node records)
        unv->datasets[unv->num_datasets++] = ds;
        // Skip records up to and including the closing "-1"
        while (fgets(line, MAX_LINE, fp) && strcmp(trim(line), "-1") != 0)
            ;
    }
    fclose(fp);
}

void unv_write(UnvFile* unv, const char* output_path) {
    FILE* fp = fopen(output_path, "w");
    if (!fp) return;
    for (int j = 0; j < unv->num_datasets; j++) {
        fprintf(fp, "    -1\n");
        fprintf(fp, "%6d\n", unv->datasets[j].type);
        // Write fields based on type
        fprintf(fp, "    -1\n");
    }
    fclose(fp);
}

void unv_print_properties(UnvFile* unv) {
    for (int j = 0; j < unv->num_datasets; j++) {
        printf("Dataset Type: %d\n", unv->datasets[j].type);
        printf("Fields: (implement printing per type)\n\n");
    }
}

void unv_destroy(UnvFile* unv) {
    free(unv->filepath);
    // Free fields
    free(unv);
}

// Example usage: UnvFile* unv = unv_create("example.unv"); unv_read(unv); unv_print_properties(unv); unv_write(unv, "output.unv"); unv_destroy(unv);