FAQ Overview

Frequently Asked Questions

I already have telematics - why do I need cameras?

While telematics can offer a certain level of insight into incidents (such as location, speed, G-force etc.), cameras can offer a full 360-degree view around a vehicle in order to provide accurate context around incidents and driver behavior. Benefits from a camera solution include:

  • Improved driver behavior
  • Enhanced duty of care
  • Improved road safety and saving lives
  • Reduced insurance costs
  • More control over claims management
  • Defense against fraudulent claims
  • Lower operational risk
  • Increased fleet efficiency
  • Lower vehicle wear and tear
  • Increased brand protection


What is the value of a connected camera solution?

A connected camera solution allows you to:

  • Receive footage from events (and request ad-hoc footage from vehicles) in near-real-time, without having to visit the vehicle to retrieve SD cards etc.
  • Stream footage from the vehicle in real time (‘Livestream’)
  • Monitor the health of the camera remotely, 24/7


What is the value of edge-based AI?

Edge-based AI is AI logic that sits on the camera. It is designed to work in real-time to alert the driver to:

  • Advanced Driver Assistance System (ADAS) events such as Forward Collision Warnings, Following Distance Warnings, Lane Departure Warnings and Pedestrian Detection
  • Driver State Monitoring (DSM) events such as Fatigue, Distraction, Phone Usage, Smoking, No Seatbelt.


What is the value of cloud-based AI?

Cloud-based AI is AI logic that sits on the platform, rather than on the camera. This approach brings the following benefits:

  • More suitable for in-depth and longer-term analysis
  • Provides a far more scalable solution
  • Gives VisionTrack more control over features and timelines
  • Gives VisionTrack more control over Support
  • Allows the solution to be more device-agnostic, as the logic can be applied to any connected camera footage.


Which Dashcams support AI?

Edge-based AI is supported on the VT3000-AI (C6D-AI), the VT3500-AI (ADPLUS) and the VT650-AI. The future VT3600-AI will also support AI.


How accurate is edge-based AI?

Using a dedicated ADAS camera, typical accuracies are as follows:

  • Lane Departure Warning  95%
  • Forward Collision Warning  95%
  • Pedestrian Detection Warning  90%

Factors affecting ADAS accuracy include:

  • Placement and calibration of camera
  • Sensitivity settings in configuration
  • Very high speeds
  • Extreme weather
  • Poor light conditions
  • The need for the preceding vehicle to be in the same 'recognized lane'
  • Human-shaped standing signs or roadside dummies (however, portrait patterns on the preceding vehicle or on a roadside billboard rarely trigger)
  • Accurate inputs from the Left and Right indicators
  • Water stains, cement joints, or other line-like marks on the roadway.


Using a dedicated DSM camera, typical accuracies are as follows:

  • Fatigue  95%
  • Distraction  95%
  • Phone  95%
  • Smoking  95%
  • Seat belt  90%

Factors affecting DSM accuracy include:

  • Placement and calibration of camera
  • Sensitivity settings in configuration
  • Frequently blinking
  • Deep sunken eyes or long eyelashes
  • Eye glass reflection
  • Sunglasses and glasses that block infrared
  • Masks, thick beards and other objects around the mouth
  • Strong backlighting
  • Seatbelt same color and texture as driver’s clothing
  • Similar shaped and sized objects to a cigarette, e.g. pens
  • Hand and cigarette must be in view at the same time
  • Hand must not cover more than 80% of the cigarette
  • Cigarette is directly facing the DSM camera (no discernible features)
  • Cigarette is held horizontally
  • Phone is held side-on to the camera (either vertically or horizontally)
  • At least 20% of the phone must be visible
  • Phone color same as background
  • Similar shaped and sized objects to a phone, e.g. walkie-talkies and e-cigarettes
  • Hands free calling not detected

In many of the above cases, the AI system will typically trigger an alarm, preferring not to miss a genuine event even though it may be a false positive.


Please also note that the DSM features on an AI Dashcam are largely dependent on the manufacturer firmware. While dashcams such as the VT3500-AI broadly support all main DSM features, they detect Phone Use and Seatbelt events most reliably. All other DSM features are better supported by a dedicated DSM camera.


How accurate is cloud-based AI?

Cloud-based AI is being constantly trained and updated by the AI team. NARA is currently running at around 98.5 to 99% accuracy. NARA will present those videos that it cannot accurately determine as ‘Needs Reviewing’ in the Dashboard.


What exactly is Computer Vision?

Computer vision is a field of Artificial Intelligence that focuses on enabling computers to identify and understand objects and people in images and videos. Like other types of AI, computer vision seeks to perform and automate tasks that replicate human capabilities.


What is NARA and how does it work?

NARA (Notification, Analysis and Risk Assessment) is an automatic footage review service. It uses a ground-breaking cloud-based computer vision model to automatically review footage and identify collisions, near misses and false positives (such as speed bumps).

  • A shock event, accompanied by vehicle collision footage, is classified as ‘Critical’ by NARA
  • A harsh maneuver event (e.g. Turn) could be classified as ‘Requires Intervention’ by NARA
  • A harsh maneuver event, accompanied by near miss footage, could be classified as ‘Requires Intervention’ with ‘Near Miss’ by NARA
  • If none of the above occur, the event is typically classified as ‘Dismissed’ by NARA.
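The rules above can be summarised in a simplified decision function. This is an illustration only: the real NARA classification is driven by a computer-vision model, and the boolean flags and labels below are assumptions for the sketch.

```python
def classify_event(shock: bool, harsh_maneuver: bool,
                   collision_footage: bool, near_miss_footage: bool) -> str:
    """Simplified sketch of the NARA classification rules in this FAQ."""
    if shock and collision_footage:
        # Shock event with vehicle collision footage
        return "Critical"
    if harsh_maneuver and near_miss_footage:
        # Harsh maneuver accompanied by near miss footage
        return "Requires Intervention (Near Miss)"
    if harsh_maneuver:
        # Harsh maneuver (e.g. Turn) on its own
        return "Requires Intervention"
    # None of the above
    return "Dismissed"
```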


What is the difference between NARA and OSR?

The OSR (Occupant Safety Rating) uses complex algorithms across three-dimensional G-force axes to determine the probability of driver injury. It is a subset of NARA and also works when there is no vehicle collision footage (e.g. hitting a tree or a wall). When the probability of injury is high (~80%), the event is classified as ‘Critical’ by the OSR calculation.


Does the camera/DVR send all the video it records to the platform?

No. The camera/DVR records all the time the vehicle is in use (and also for a configurable ‘power delay’ time after ignition OFF) but only sends video back to the platform when:

  • An event – that has been configured for automatic footage requests – occurs (e.g. harsh driving, shock, speeding, panic, ADAS, DSM)
  • A user (or API) requests footage from the device.


What is the maximum storage capacity on the camera/DVR?

Local storage capacity depends on the size of the SD/hard drive and number of attached cameras. For example:

  • A forward-facing camera with 128GB SD can store approximately 47 hours of main-stream driving footage (assuming standard configuration of 1080P / 15 frames per second)
  • A forward-facing camera (with either a driver-facing or rear camera, for example) and 128GB SD can store approximately 26 hours of main-stream driving footage (assuming standard configuration of 1080P / 15 FPS for main channel and 720P / 15 FPS for secondary channel)
  • A VT5500 DVR (with 1 TB hard drive and 5 connected cameras) can store approximately 84 hours of main-stream driving footage and 135 hours of sub-stream driving footage (assuming standard configuration of 720P / 15 FPS for main-stream and D1 / 15 FPS for sub-stream)
  • A VT6.3 DVR (with 4 TB hard drive and 8 connected cameras) can store approximately 200 hours of main-stream driving footage and 560 hours of sub-stream driving footage (assuming standard configuration of 720P / 15 FPS for main-stream and CIF+D1 / 15 FPS for sub-stream).
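The figures above follow from a simple capacity-over-bitrate calculation. The sketch below assumes roughly 6 Mbps for a 1080P/15FPS main stream; that bitrate is an assumption chosen to be consistent with the 47-hour figure above, not a published device specification.

```python
def estimated_hours(capacity_gb: float, total_bitrate_mbps: float) -> float:
    """Rough recording time: storage capacity divided by combined stream bitrate.

    capacity_gb        -- SD card / hard drive size in decimal gigabytes
    total_bitrate_mbps -- sum of all recorded channels' bitrates in Mbps
                          (assumed values; actual bitrates vary with scene
                          complexity and encoder settings)
    """
    total_bits = capacity_gb * 8e9                    # GB -> bits
    seconds = total_bits / (total_bitrate_mbps * 1e6) # bits / (bits per second)
    return seconds / 3600

# A single 1080P/15FPS channel at an assumed ~6 Mbps on a 128GB card
# gives roughly 47 hours, in line with the first example above.
```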


Does the camera/DVR record sound?

Certain cameras (such as the VT3000, VT3500, VT3600 and VT-C20-IPC & -Mini) can be configured to record sound. The VT2500 and VT4500 support sound recording but not playback on the VisionTrack platform. Audio needs to be enabled as an organization (account) license and as an option against the relevant user(s).


Does the camera/DVR support Live Streaming?

Yes. All main cameras/DVRs (except for the VT2500 & VT4500) support Live Streaming and Playback. These features need to be enabled as an organization (account) license.

How many channels can I have on a DVR?

A VT5500-C DVR supports 5 channels (4 x AHD, 1 x IPC) as well as additional inputs for both a monitor and an R-Watch, for example (may need an additional converter).

A VT6.3 DVR supports 8 channels (6 x AHD, 2 x IPC) as well as additional inputs for both a monitor and an R-Watch, for example (may need an additional converter).


How many channels can I have on a Dashcam?

  • VT3000 and VT3000-AI support forward-facing (built-in) plus 1 additional AHD channel (e.g. rear, driver, load – 720P maximum) and 1 additional IPC channel (e.g. rear, load).
  • VT3500-AI supports forward-facing and driver-facing (built-in) plus 1 additional AHD channel (e.g. rear, driver, load – 1080P maximum) and 1 additional IPC channel (e.g. rear, load) and Monitor.


Which Dashcams support OBD plug-and-play?

The VT3500-AI currently supports OBD plug-and-play.


What is the difference between firmware and configuration?

Firmware is software that provides low-level control for a device's hardware and allows the hardware to function and communicate with other software running on the device. Firmware is provided by the hardware manufacturer and is then vetted and Validated or Blacklisted by VisionTrack.  Configuration is fully controlled by VisionTrack and is a set of behavioral switches/thresholds applied to that firmware. An example would be:

  • Firmware that allows the device to detect and alert when a driver has closed their eyes
  • Configuration that determines how long the driver’s eyes need to be closed before alerting
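The firmware/configuration split can be illustrated with a small sketch in which the firmware supplies the detection and the configuration supplies the threshold. The names and values below are illustrative assumptions, not VisionTrack's actual settings.

```python
# Hypothetical configuration value applied on top of manufacturer firmware.
EYES_CLOSED_ALERT_SECONDS = 2.0

def should_alert(eyes_closed_duration_s: float,
                 threshold_s: float = EYES_CLOSED_ALERT_SECONDS) -> bool:
    """Firmware detects that the driver's eyes are closed and reports the
    duration; configuration decides how long is long enough to alert."""
    return eyes_closed_duration_s >= threshold_s
```

Changing the threshold changes behavior without touching the firmware, which is exactly the division of responsibility described above.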


Can we utilize an existing monitor in the vehicle?

Yes, subject to an understanding of the monitor specification, inputs etc.


Can our monitor run other applications (e.g. routing)?

No; we typically cannot interface with a third-party system unless it is only displaying images and is not interactive.


Can the camera/DVR always be on?

Yes, we can run a camera/DVR in ‘always on’ mode, although there are considerations around power drain and general hardware shelf-life/warranty if the device never gets a sleep mode. (In an ‘always-on’ mode, we will split the journey after 24 hours).


Is it possible to wake up the camera/DVR?

It is not currently possible to wake up the Dashcam.

The VT5500-C and VT6.3 DVRs can currently be woken up by VisionTrack support via SMS.


Does the camera support privacy?

Only the VT3500-AI (running firmware version ADPLUS_V355_T230103.71 and beyond) supports privacy.  By pressing the panic button for 6 seconds, the camera will stop recording (although will still track distance, location etc.) and will then start recording when the vehicle exceeds a configurable speed threshold.  This is designed to stop recording if the driver is on a break, eating, getting changed etc. However, please note that the platform doesn’t fully support this yet – in Privacy mode a device will show on the Health Check report as ‘Video Loss’ – meaning it will be difficult for customers or support to differentiate between a faulty device and one in privacy. Please contact Professional Services if the Privacy feature is required.


What would happen to the camera in the event of a catastrophic collision/incident?

Most devices have super capacitors which are designed to ensure continuity of power in the event of a catastrophic incident. The device file management system records in one second blocks, so in the worst-case scenario we occasionally lose the last second of a recording in a catastrophic incident.

Can footage be retrieved directly from the DVR?

This is largely redundant with a connected camera, but it can be done via USB or by removing the SD card. Please contact the MiFleet Support Team.


Where is the cloud data stored?

All data for the VisionTrack solution is hosted by Microsoft Azure (https://azure.microsoft.com), offering reliable, secure and scalable cloud computing services in an ISO27001:2013 certified hosted environment. Data is fully covered by the General Data Protection Regulation (GDPR).


What do we mean by ‘device agnostic’ when referring to the VisionTrack platform?

‘Device-agnostic’ means that the platform can take footage from a range of third-party devices (dashcams and DVRs). More third-party devices can be added, but would typically involve a range of tests (including protocols, ability to send footage, livestream etc.)


What are the data retention periods on the platform? Can they be changed?

By default, any event that is identified as an ‘Incident’ (whether manually or via NARA) – and any associated telemetry – is kept on the platform for 7 years. All other footage and telemetry data is kept for 1 year. These storage periods can be shortened (request to Support). If storage periods need extending beyond these limits, these need to be discussed with MiFleet Support.
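The default retention periods can be sketched as follows. This is a simplified illustration of the rule stated above, not platform code.

```python
from datetime import datetime, timedelta

# Default retention periods from this FAQ (illustrative approximation:
# 7 years and 1 year expressed as whole days, ignoring leap days).
INCIDENT_RETENTION = timedelta(days=365 * 7)  # events flagged as 'Incident'
DEFAULT_RETENTION = timedelta(days=365)       # all other footage and telemetry

def expires_at(recorded_at: datetime, is_incident: bool) -> datetime:
    """Return the date on which stored data becomes eligible for deletion."""
    retention = INCIDENT_RETENTION if is_incident else DEFAULT_RETENTION
    return recorded_at + retention
```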


What is the role access capability on the platform?

As of May 2023, the platform offers the following key RBAC (Role-Based Access Control) roles:

  • Report Viewer
  • Event Viewer
  • Standard
  • (Fleet) Manager
  • Administrator
  • Tracking
  • Driver

If multiple roles are assigned, the user will get the widest access. VisionTrack therefore recommends assigning only one role per user.

Certain roles, functions and reports are dependent on account type, licenses and user access, and therefore may not be visible on all accounts.

The platform also offers the following non-role-specific controls that can be assigned to individual users:

  • Media Audio – to control whether the user can access Audio for Media, Playback and Streaming
  • Media Download – to control whether the user can download video (not available to Report Viewer role)
  • Share Event – to control whether the user can share events (not available to Report Viewer role).
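The "widest access wins" rule for multiple assigned roles can be sketched by ranking the roles. The ordering below is an assumption for illustration; this FAQ does not define a formal ranking.

```python
# Assumed ranking from narrowest to widest access (illustrative only).
ROLE_RANK = {
    "Report Viewer": 0,
    "Event Viewer": 1,
    "Tracking": 2,
    "Driver": 3,
    "Standard": 4,
    "(Fleet) Manager": 5,
    "Administrator": 6,
}

def effective_role(assigned_roles: list[str]) -> str:
    """With multiple roles assigned, the user gets the widest access —
    modelled here as the highest-ranked role in the list."""
    return max(assigned_roles, key=ROLE_RANK.__getitem__)
```

This also shows why assigning a single role per user is the cleaner practice: the effective access of a multi-role user is simply that of the widest role.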


How can VisionTrack integrate with other systems?

The VisionTrack platform offers the following main integration capabilities:

  • Application Programming Interfaces (APIs)/ Web services (RESTful API). These provide a bi-directional flow of data between VisionTrack and 3rd party systems through a fully documented developer’s portal.  These services allow partner and other third-party applications to extend their capabilities by integrating location, status, events and video footage into their processing on a real-time customer request basis.
    For more information on VisionTrack’s open APIs, visit: https://api.autonomise.ai/docs/index.html.
  • A range of webhooks / service hooks that automatically post/‘push’ data to an end point URL. This provides a real-time transmission of data such as Alerts, Events, Journeys, Vehicle Position and Media Footage.
  • Single Sign On is also supported.

The Application Engineering and PreSales teams can assist in scoping and setting up these APIs.
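The webhook "push" model above can be sketched as a minimal HTTP endpoint that receives posted data. This is an illustration only: the payload field names (`eventType`, `vehicleId`) are assumptions, not VisionTrack's documented webhook schema.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_event(body: bytes) -> dict:
    """Extract the fields we care about from a pushed payload.
    Field names here are hypothetical, not VisionTrack's schema."""
    payload = json.loads(body or b"{}")
    return {"type": payload.get("eventType", "unknown"),
            "vehicle": payload.get("vehicleId")}

class WebhookHandler(BaseHTTPRequestHandler):
    """Minimal endpoint registered as the webhook's destination URL."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = parse_event(self.rfile.read(length))
        print("received:", event)
        self.send_response(200)  # acknowledge promptly so the sender
        self.end_headers()       # does not retry the delivery

# To run the receiver:
# HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```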


Do we offer an on-premise solution?

No, VisionTrack are not currently offering an on-premise solution. Our current platform is built on Microsoft Azure and uses many Microsoft-provided software services, because of the shared operational and security model benefits that doing this provides. There would also be considerations around having to provide ongoing support, upgrades and feature enhancements outside of our automated systems. Other customers have also moved away from this idea because of the additional cost and the lack of real benefits it provides.


Where does VisionTrack get its posted speed data?

VisionTrack gets posted speed data from HERE Maps (www.here.com).

Coverage information can be found here:


How can we report speed limit changes?

Speed limit changes can be reported to HERE Maps via MiFleet Support.


Do we support vehicle-specific limits by type (e.g. truck)?

Yes. The HERE data supports vehicle-specific road speeds (such as HGVs on motorways) based on ‘Vehicle Type’ set up against the vehicle.


What are the top right alerts in the VisionTrack UI, and where can they be set?

The three top right alerts are:

  • Media that have been received today
  • Events that have been received today. These event counts (red, critical etc.) are defined in User > Organization > Settings.
  • Speeding alerts that have been received today.  These are enabled via User > Profile > Notifications per fleet.

Each alert allows the user to filter on other parameters (e.g. specific fleets/vehicles/event types/dates).


Which user guides are available?

The Event User Guide can be downloaded via the User > User Guide menu option. In addition, the following guides are available:

In which languages is the system available?

Users can change their system language (via User > Profile), to:

  • Czech
  • Danish
  • Dutch
  • English (UK)
  • English (US)
  • French
  • German
  • Italian
  • Polish
  • Portuguese
  • Spanish
  • Swedish

However, there is still some Dev work in progress to fully complete all localisations (date TBC).


How is a driver associated to a vehicle?

A driver can be associated to a vehicle in various ways:

  • Via Driver Management

Is it possible to make a historic association of driver to vehicle?

No, there is no way to update historic journeys and events, as our infrastructure is not designed for updating the data once it’s been written.  Data cannot be migrated between organizations for the same reason.


Do I need to repeat email recipients for notifications for each fleet, or is there a way to apply to all?

Yes, notifications need to be created for each fleet (on the assumption that fleet managers would only want notifications for their fleets). Please note that when setting up notifications for Driver Behavior events (Brake, Accelerate, Shock, Turn), the platform only alerts on Red and Critical events.


Can I reset a user’s password?

No. For security reasons, password reset is self-service which the user can initiate via the Recover Account option on the login screen.


Can I change the speed unit of measurement in the video overlay?

Yes, this can be configured via User > Organization > Settings.


Can I restrict users requesting excessive amounts of footage in one go?

Yes, the Default Video Length, Maximum SD Video Length and Maximum HD Video Length (as well as the ability for users to change the Video Length) can be configured via User > Organization > Settings.


Why can’t I see the ADAS/DSM filter options in the Event page?

At least one of the devices on the account needs to have ADAS/DSM defined in the device accessories in order for the ADAS/DSM filter to be visible.


Why do DSM reports show when I don’t have DSM on my fleet?

The report list is currently locked down by role and organization license, neither of which currently include/exclude DSM.


Why does the Live Stream and Playback footage not play in sync?

Live Stream and Playback are largely dependent on the network coverage of the unit at the time of the request and the internet speed of the user. When footage from multiple cameras does not play in sync, this would be due to the capacity of the DVR to process and send footage, not the ability of the platform to process and display it.


How often does the device send back location/position?

Streamax devices typically send one telemetry point to the platform every 5 seconds. DTEG devices typically send one telemetry point every 30 seconds. These are used, for example, to determine the live position on the map.

The platform processes the journey ~30 minutes after the device has stopped sending telemetry.
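The journey-close rule above can be sketched as a simple timeout check against the last received telemetry point. The ~30-minute figure comes from this FAQ; the function itself is an illustration, not platform code.

```python
from datetime import datetime, timedelta

# From this FAQ: a journey is processed ~30 minutes after the device
# stops sending telemetry.
JOURNEY_CLOSE_GAP = timedelta(minutes=30)

def journey_complete(last_point: datetime, now: datetime) -> bool:
    """Return True once the device has been silent long enough for the
    platform to treat the journey as finished and process it."""
    return now - last_point >= JOURNEY_CLOSE_GAP
```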


What is the typical ROI we can expect from the solution?

Return on Investment is largely dependent upon the scope of hardware and software deployed, and upon the driver/manager engagement/adoption process put in place by the customer. However, we have seen the following examples of ROI among our customer base:

  • 40% cut in claims costs
  • 24% reduction in claims frequency
  • 40% decrease in at-fault collisions
  • 50% reduction in vehicle damage
  • 100% increase in annual insurance rebate
  • 50% drop in road collisions
  • 80% cut in risky driver behaviour
  • (Particularly with NARA): 99% reduction in Red Event review and categorisation (Incident/Dismissed etc.)
  • (Particularly with NARA): 99% reduction in collision/incident alerting time
  • (Particularly with NARA): 95% reduction in claims processing time (some fault claims settlements concluded within 72 hours)
  • (Particularly with NARA): Claims savings on average of £2,000 for each collision detected (UK Supermarket based upon 7,000 vehicles).


Will we get a discount from our insurers for having video telematics?

While the VisionTrack solution makes it far easier to reduce insurance costs, better manage claims and better defend against fraudulent claims, any insurance discount would be dependent on discussions between the customer and their insurer. However, we have seen instances of 100% increases in annual insurance rebates among our customer base.



Can I have access to Geofences without being an Admin user?

No; only Admin users can set up and monitor geofences, as these are set up at organization level, not fleet level.


Can you remove power delay time from idle reports?

Not currently, but there is Development work in progress to support this feature.


Can I add a Note or Intervention Form against a video that is not event-based (i.e. that was requested by a user)?

No, you can only add notes against event-based video, not user-requested video.
