
Mirth Connect & OIE Troubleshooting Guide

Find solutions to the most common Mirth Connect and Open Integration Engine (OIE) issues. Whether you are running Mirth Connect or OIE, these troubleshooting steps apply to both platforms — the underlying architecture, channel model, and connector framework are the same.

Before You Begin

Always check the main log file (mirth.log) first. Most issues leave clear error messages that point directly to the root cause. On Linux, this is typically at /opt/mirthconnect/logs/mirth.log or /var/log/mirthconnect/mirth.log.

Database Connection Issues


Database connectivity is critical for Mirth Connect and OIE operations. For in-depth troubleshooting with complete code examples for each database platform, see the dedicated Database Troubleshooting Guide.

Problem: Communications link failure error

Error message
ERROR: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago.

Solution:

  1. Check MySQL server status
  2. Verify connection parameters
  3. Test network connectivity
  4. Increase connection timeout
Test MySQL connection and status
# Test the connection from the command line
mysql -h hostname -u username -p database_name

-- Then, inside the MySQL session, check the connection count
SHOW STATUS LIKE 'Threads_connected';

Problem: Login failed for user error

Solution:

  1. Verify SQL Server authentication mode (mixed mode is required for SQL logins)
  2. Check user permissions
  3. Ensure the SQL Server Browser service is running (required for named instances)
  4. Verify port configuration (default 1433)

Problem: ORA-12154: TNS:could not resolve the connect identifier

Solution:

  1. Check TNS names configuration
  2. Verify Oracle client installation
  3. Test connection with SQL*Plus
  4. Check network connectivity


Memory & Performance Issues


Symptoms:

  • OutOfMemoryError exceptions
  • Slow message processing
  • Server becomes unresponsive

Solutions:

  1. Increase JVM heap size:

    Recommended JVM settings for production
    -Xmx4g -Xms2g -XX:+UseG1GC
  2. Optimize channel configurations — Disable unnecessary logging, reduce message storage retention

  3. Implement message batching — Process multiple messages per transaction where possible

  4. Use connection pooling — Reuse database connections instead of creating new ones per message
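
The pooling idea in step 4 can be sketched in plain JavaScript. This is an illustration of the reuse pattern, not the Mirth API; in a real channel you would typically cache a pooled connection or DataSource in globalMap rather than roll your own pool. The `createPool` helper and its parameters are hypothetical names for this sketch.

```javascript
// Minimal object-pool sketch: hand out idle resources instead of
// creating a new one per message.
function createPool(factory, maxSize) {
  var idle = [];
  var created = 0;
  return {
    acquire: function () {
      if (idle.length > 0) return idle.pop();
      created++;
      return factory();
    },
    release: function (conn) {
      if (idle.length < maxSize) idle.push(conn);
    },
    createdCount: function () { return created; }
  };
}

// Usage: simulate 100 "messages" sharing pooled connections.
var pool = createPool(function () { return { open: true }; }, 5);
for (var i = 0; i < 100; i++) {
  var conn = pool.acquire();
  // ... run the per-message query here ...
  pool.release(conn);
}
// Only one connection was ever created, because each message
// released it back to the pool before the next acquire.
console.log(pool.createdCount()); // 1
```

The same effect is achieved in production by pointing Database Writer destinations at a shared, pooled data source instead of opening a connection per message.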

Best Practices:

  1. Use appropriate connector types for your workload (TCP for low-latency, HTTP for REST APIs)
  2. Implement proper error handling to avoid retry storms
  3. Optimize JavaScript transformations — avoid unnecessary string operations and XML parsing
  4. Use database connection pooling with appropriate pool sizes
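
The "retry storm" point in item 2 is worth an example. A minimal sketch of exponential backoff in JavaScript; `backoffDelayMs`, `baseMs`, and `maxDelayMs` are illustrative names for this sketch, not Mirth settings:

```javascript
// Exponential backoff sketch: compute the delay before retry
// attempt n, doubling each time and capping at maxDelayMs.
function backoffDelayMs(attempt, baseMs, maxDelayMs) {
  var delay = baseMs * Math.pow(2, attempt);
  return Math.min(delay, maxDelayMs);
}

// Early retries back off quickly; later ones are capped so a
// failing endpoint is not hammered (the "retry storm" problem).
console.log(backoffDelayMs(0, 1000, 60000));  // 1000
console.log(backoffDelayMs(3, 1000, 60000));  // 8000
console.log(backoffDelayMs(10, 1000, 60000)); // 60000
```

Adding random jitter to each delay further spreads out retries when many messages fail at once.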

Key Metrics to Monitor:

  • Queue depth — Messages waiting to be processed
  • Processing rate — Messages per second throughput
  • Error rate — Percentage of failed messages
  • Memory usage — JVM heap utilization over time
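
Processing rate and error rate can be derived from two periodic statistics snapshots. A minimal sketch, assuming snapshot objects with illustrative `received` and `errored` counters (the real Dashboard API field names differ):

```javascript
// Derive throughput and error rate from two statistics snapshots
// taken intervalSeconds apart.
function computeMetrics(prev, curr, intervalSeconds) {
  var processed = curr.received - prev.received;
  var errored = curr.errored - prev.errored;
  return {
    processingRate: processed / intervalSeconds,               // msgs/sec
    errorRate: processed > 0 ? (errored / processed) * 100 : 0 // percent
  };
}

var m = computeMetrics(
  { received: 1000, errored: 10 },
  { received: 1512, errored: 26 },
  64
);
console.log(m.processingRate); // 8
console.log(m.errorRate);      // 3.125
```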

Queue Buildup

If message queues grow continuously, your destinations cannot keep up with the inbound message rate. Identify the bottleneck (database writes, external API calls, transformation complexity) and address it before the queue exhausts available memory.
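
One way to locate the bottleneck is to time each suspect step. A minimal sketch; in a real transformer you would log the duration with logger.info rather than return it, and `timeStep` is a hypothetical helper for this illustration:

```javascript
// Time a processing step to find where a destination spends its time.
function timeStep(label, fn) {
  var start = Date.now();
  var result = fn();
  var elapsedMs = Date.now() - start;
  // In a channel script: logger.info(label + ' took ' + elapsedMs + ' ms');
  return { result: result, elapsedMs: elapsedMs };
}

var timed = timeStep('db write', function () {
  // stand-in for a database write or external API call
  return 'ok';
});
console.log(timed.result); // "ok"
```

Comparing the logged durations of database writes, API calls, and transformations shows which step to optimize first.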


JavaScript Errors:

Common mistake: undefined variable
// Throws a ReferenceError (undeclared) or TypeError (declared but undefined)
var result = undefinedVariable.toString();

Solution: check for undefined before access
var result = (typeof undefinedVariable !== 'undefined') ?
    undefinedVariable.toString() : '';

Common Transformation Issues:

  • Null pointer exceptions — Always check for null/undefined before accessing properties
  • Data type mismatches — Cast values explicitly with toString(), parseInt(), parseFloat()
  • Encoding problems — Specify UTF-8 encoding when reading/writing files or HTTP responses
  • XML parsing errors — Validate XML structure before processing; use try-catch around XML operations
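
The null-check and explicit-casting advice above can be captured in one helper. A sketch in plain JavaScript: `safeGet` is a hypothetical helper, and the `msg` object here is a plain-object stand-in for Mirth's E4X message:

```javascript
// Defensive field access: walk a path of keys, returning a default
// instead of throwing when any segment is missing.
function safeGet(obj, path, defaultValue) {
  var current = obj;
  for (var i = 0; i < path.length; i++) {
    if (current === null || current === undefined) return defaultValue;
    current = current[path[i]];
  }
  return (current === null || current === undefined) ? defaultValue : current;
}

// Illustrative message shape; real Mirth messages are E4X XML objects.
var msg = { PID: { 'PID.3': { 'PID.3.1': '12345' } } };

console.log(safeGet(msg, ['PID', 'PID.3', 'PID.3.1'], ''));   // "12345"
console.log(safeGet(msg, ['PID', 'PID.99', 'PID.99.1'], '')); // ""

// Cast explicitly before arithmetic to avoid type mismatches:
console.log(parseInt(safeGet(msg, ['PID', 'PID.3', 'PID.3.1'], '0'), 10)); // 12345
```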
Debugging Tips:

  1. Use Logger Statements:

    Strategic logging for debugging
    logger.info('Processing message ID: ' + connectorMessage.getMessageId());
    logger.info('Patient ID: ' + msg['PID']['PID.3']['PID.3.1'].toString());
    logger.error('Transform failed: ' + error.message);
  2. Enable Channel Logging:

    • Set log level to DEBUG in the channel settings
    • Monitor channel statistics in the Dashboard
    • Review error logs for stack traces
  3. Test with Sample Data:

    • Use message templates in the channel editor
    • Test edge cases (empty fields, special characters, oversized messages)
    • Validate transformations step by step using the transformer test panel

SSL/TLS Certificate Issues

Common Errors:

  • PKIX path building failed — Certificate not trusted by Java keystore
  • Certificate not trusted — Self-signed or expired certificate
  • Hostname verification failed — Certificate CN does not match the server hostname

Solutions:

  1. Import certificates into the Java keystore:

    Import a certificate into the Java truststore
    keytool -import -alias mycert -file certificate.crt \
    -keystore $JAVA_HOME/lib/security/cacerts \
    -storepass changeit

    Keystore Password

    The default Java truststore password is changeit. If your organization has changed this, use the correct password. Incorrect passwords will produce a misleading “tampered or incorrect password” error.

  2. Verify certificate chain completeness — Ensure intermediate certificates are included

  3. Check certificate expiration:

    Check certificate expiration date
    keytool -list -alias mycert -keystore cacerts -storepass changeit -v | grep "Valid"
  4. Use proper certificate CN — The certificate Common Name must match the hostname used in the connection URL

Server Startup Issues


Checklist:

  1. Java installation and version — Verify with java -version

    Verify Java installation
    java -version
    # Expected: openjdk version "11.x.x" or higher
  2. Database connectivity — Can the server reach the configuration database?

    Test database connectivity
    # MySQL
    mysql -h db-host -u mirth_user -p mirth_db -e "SELECT 1;"
    # PostgreSQL
    psql -h db-host -U mirth_user -d mirth_db -c "SELECT 1;"
  3. Port availability — Is port 8080 (HTTP) or 8443 (HTTPS) already in use?

    Check port availability
    netstat -tlnp | grep -E '8080|8443'
    # or
    ss -tlnp | grep -E '8080|8443'
  4. File permissions — Does the service user have read/write access to the installation directory?

  5. Log files — Check mirth.log for specific error messages

Common Issues:

  • Invalid XML in configuration files — A corrupted mirth.properties or channel XML can prevent startup
  • Database schema version mismatch — Occurs when upgrading without running the database migration
  • Missing dependencies — Custom libraries or JDBC drivers not found in the custom-lib directory
  • Incorrect file paths — Paths in mirth.properties that reference nonexistent directories

Schema Migration

When upgrading Mirth Connect or OIE, always back up your database before starting the new version. The first startup after an upgrade will attempt to migrate the database schema. If this fails, you will need the backup to recover.

Monitoring & Maintenance


Regular monitoring should include:

Check                    Frequency          Tool
Server status            Every 5 minutes    Health check endpoint or monitoring agent
Channel status           Every 5 minutes    Dashboard API or monitoring script
Database connections     Every 15 minutes   Connection pool metrics
Memory usage             Every 5 minutes    JVM metrics (JMX or dashboard)
Disk space               Every hour         OS-level monitoring
Log file sizes           Daily              Log rotation policy

Maintenance Best Practices:
  1. Regular Backups:

    • Configuration backup with MirthSync
    • Database backup (daily for production)
    • Log rotation to prevent disk exhaustion
  2. Performance Monitoring:

    • Set up alerts for queue depth, error rates, and memory usage
    • Monitor trends over time to identify degradation
    • Capacity planning based on message volume growth
  3. Security Updates:

    • Keep Mirth Connect or OIE updated to the latest patch release
    • Update the Java runtime regularly
    • Review and rotate credentials periodically

Key log files to examine when troubleshooting:

Log File                 Contents
mirth.log                Main application log — startup, errors, warnings
database.log             Database operations and query errors
Channel-specific logs    Per-channel message processing details

Contact professional support when:

  • Critical production issues — Message processing stopped, data loss risk
  • Complex integration requirements — Multi-system HL7/FHIR workflows
  • Performance optimization — Throughput below requirements despite tuning
  • Security configuration — HIPAA compliance, TLS hardening, access control
  • OIE migration — Moving from commercial Mirth Connect to Open Integration Engine

Our team has deep experience with both Mirth Connect and OIE across AWS, Azure, and GCP deployments.

Contact Saga IT for expert support | Mirth Connect services | OIE services