While telematics can offer a certain level of insight into incidents (such as location, speed and G-force), cameras provide a full 360-degree view around a vehicle, giving accurate context around incidents and driver behavior. Benefits of a camera solution include:
A connected camera solution allows you to:
Edge-based AI is AI logic that sits on the camera itself. It is designed to work in real time to alert the driver to:
Cloud-based AI is AI logic that sits on the platform, rather than on the camera. This approach brings the following benefits:
· More suitable for in-depth and longer-term analysis
· Provides a far more scalable solution
· Gives VisionTrack more control over features and timelines
· Gives VisionTrack more control over Support
· Allows the solution to be more device-agnostic, as the logic can be applied to any connected camera footage.
Edge-based AI is supported on the VT3000-AI (C6D-AI), the VT3500-AI (ADPLUS) and the VT650-AI. The forthcoming VT3600-AI will also support edge-based AI.
Using a dedicated ADAS camera, typical accuracies are as follows:
Factors affecting ADAS accuracy include:
Using a dedicated DSM camera, typical accuracies are as follows:
Factors affecting DSM accuracy include:
In many of the above cases, the AI system will typically trigger an alarm, erring on the side of not missing a genuine event even at the risk of a false positive.
Please also note that the DSM features on an AI Dashcam are largely dependent on the manufacturer firmware. While dashcams such as the VT3500-AI broadly support all main DSM features, their strongest detections are in the areas of Phone Use and Seatbelts. All other DSM features are better supported with a dedicated DSM camera.
Cloud-based AI is being constantly trained and updated by the AI team. NARA is currently running at around 98.5 to 99% accuracy. NARA will present those videos that it cannot accurately determine as ‘Needs Reviewing’ in the Dashboard.
Computer vision is a field of Artificial Intelligence that focuses on enabling computers to identify and understand objects and people in images and videos. Like other types of AI, computer vision seeks to perform and automate tasks that replicate human capabilities.
NARA (Notification, Analysis and Risk Assessment) is an automatic footage review service. It uses a ground-breaking cloud-based computer vision model to automatically review footage and identify collisions, near misses and false positives (such as speed bumps).
The OSR (Occupant Safety Rating) uses complex algorithms across the three G-force axes to determine the probability of driver injury. It is a subset of NARA and also works for collisions that do not involve another vehicle (e.g. hitting a tree or a wall). When the probability of injury is high (~80%), the event is classified as ‘Critical’ by the OSR calculation.
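As an illustrative sketch only (the actual OSR algorithm is proprietary; the simple resultant-magnitude combination and the toy probability mapping below are assumptions, not the real calculation), combining the three G-force axes might look like:

```python
import math

def resultant_g(gx: float, gy: float, gz: float) -> float:
    """Combine the three G-force axes into a single resultant magnitude."""
    return math.sqrt(gx**2 + gy**2 + gz**2)

def classify_event(gx: float, gy: float, gz: float,
                   injury_probability_threshold: float = 0.8) -> str:
    """Map a resultant G-force onto a toy injury probability and classify.

    The linear mapping below is purely illustrative; the real OSR uses
    far more sophisticated, validated models.
    """
    g = resultant_g(gx, gy, gz)
    # Toy mapping: injury probability rises with resultant G, capped at 1.0.
    probability = min(g / 10.0, 1.0)
    if probability >= injury_probability_threshold:
        return "Critical"
    return "Non-critical"

print(classify_event(7.0, 4.0, 2.0))  # high resultant G -> "Critical"
```

The key point the sketch captures is that the classification works on the combined three-axis G-force signature, so it needs no footage of another vehicle to flag a severe single-vehicle impact.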
No. The camera/DVR records all the time the vehicle is in use (and also for a configurable ‘power delay’ time after ignition OFF) but only sends video back to the platform when:
Local storage capacity depends on the size of the SD/hard drive and number of attached cameras. For example:
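As a rough back-of-the-envelope illustration (the bitrate and capacity figures below are assumptions for the sake of the arithmetic, not device specifications), the trade-off between storage size and camera count can be estimated as:

```python
def recording_hours(storage_gb: float, cameras: int,
                    bitrate_mbps_per_camera: float = 2.0) -> float:
    """Estimate how many hours of footage fit on local storage.

    Assumes a constant per-camera bitrate (an illustrative value only)
    and that all of the storage is available for recording.
    """
    total_mbps = cameras * bitrate_mbps_per_camera
    gb_per_hour = total_mbps * 3600 / 8 / 1000  # Mbit/s -> GB per hour
    return storage_gb / gb_per_hour

# e.g. a 256 GB SD card with 4 attached cameras at ~2 Mbit/s each:
print(round(recording_hours(256, 4), 1))  # -> 71.1 hours
```

Doubling the number of attached cameras halves the retention window, which is why DVR installations with many channels typically use larger hard drives rather than SD cards.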
Certain cameras (such as the VT3000, VT3500, VT3600 and VT-C20-IPC & -Mini) can be configured to record sound. The VT2500 & VT4500 do support sound recording but not playback on the VisionTrack platform. Audio needs to be enabled as an organization (account) license and as an option against the relevant user(s).
Yes. All main cameras/DVRs (except for the VT2500 & VT4500) support Live Streaming and Playback. These features need to be enabled as an organization (account) license.
A VT5500-C DVR supports 5 channels (4 x AHD, 1 x IPC) as well as additional inputs for both a monitor and an R-Watch, for example (may need an additional converter).
A VT6.3 DVR supports 8 channels (6 x AHD, 2 x IPC) as well as additional inputs for both a monitor and an R-Watch, for example (may need an additional converter).
The VT3500-AI currently supports OBD plug-and-play.
Firmware is software that provides low-level control for a device's hardware and allows the hardware to function and communicate with other software running on the device. Firmware is provided by the hardware manufacturer and is then vetted and Validated or Blacklisted by VisionTrack. Configuration is fully controlled by VisionTrack and is a set of behavioral switches/thresholds applied to that firmware. An example would be:
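As a hypothetical illustration of the split (none of the key names below are real VisionTrack or manufacturer settings), the firmware provides the raw capability and the configuration supplies the behavioral switches/thresholds applied to it:

```python
# Hypothetical example: firmware exposes the raw capability (e.g. a shock
# sensor); configuration sets the behavioral thresholds applied to it.
# None of these key names are actual VisionTrack settings.
firmware_capabilities = {"shock_sensor": True, "max_g_force_range": 8.0}

configuration = {
    "shock_alert_threshold_g": 2.5,   # trigger an event above 2.5 G
    "video_pre_event_seconds": 10,    # footage kept before the event
    "video_post_event_seconds": 5,    # footage kept after the event
}

def should_alert(measured_g: float) -> bool:
    """Return True when the configured shock threshold is exceeded."""
    return (firmware_capabilities["shock_sensor"]
            and measured_g >= configuration["shock_alert_threshold_g"])

print(should_alert(3.1))  # a 3.1 G shock exceeds the 2.5 G threshold -> True
```

The same firmware can therefore behave quite differently on two accounts simply because VisionTrack has applied different configuration thresholds to it.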
Yes, subject to an understanding of the monitor specification, inputs etc.
No, we typically cannot interface with a third-party system unless it is simply to view images (i.e. a non-interactive integration).
Yes, we can run a camera/DVR in ‘always-on’ mode, although there are considerations around power drain and general hardware shelf-life/warranty if the device never enters a sleep mode. (In ‘always-on’ mode, we will split the journey after 24 hours.)
It is not currently possible to wake up the Dashcam.
The VT5500-C and VT6.3 DVRs can currently be woken up by VisionTrack support via SMS.
Only the VT3500-AI (running firmware version ADPLUS_V355_T230103.71 and beyond) supports a privacy mode. By pressing the panic button for 6 seconds, the camera will stop recording (although it will still track distance, location, etc.) and will then resume recording when the vehicle exceeds a configurable speed threshold. This is designed to stop recording while the driver is on a break, eating, getting changed, etc. However, please note that the platform doesn’t fully support this yet – in Privacy mode a device will show on the Health Check report as ‘Video Loss’, meaning it will be difficult for customers or support to differentiate between a faulty device and one in privacy mode. Please contact Professional Services if the Privacy feature is required.
Most devices have super capacitors which are designed to ensure continuity of power in the event of a catastrophic incident. The device file management system records in one-second blocks, so in the worst-case scenario we occasionally lose the last second of a recording in a catastrophic incident.
This is largely redundant with a connected camera, but it can be done via USB or by removing the SD card. Please contact the MiFleet Support Team.
All data for the VisionTrack solution is hosted by Microsoft Azure (https://azure.microsoft.com), offering reliable, secure and scalable cloud computing services in an ISO27001:2013 certified hosted environment. Data is fully covered by the General Data Protection Regulation (GDPR).
‘Device-agnostic’ means that the platform can take footage from a range of third-party devices (dashcams and DVRs). More third-party devices can be added, but would typically involve a range of tests (including protocols, ability to send footage, livestream etc.)
By default, any event that is identified as an ‘Incident’ (whether manually or via NARA) – and any associated telemetry – is kept on the platform for 7 years. All other footage and telemetry data is kept for 1 year. These storage periods can be shortened (request to Support). If storage periods need extending beyond these limits, these need to be discussed with MiFleet Support.
As of May 2023, the platform offers the following key RBAC (Role-Based Access Control) roles:
If multiple roles are assigned, the user will get the widest access. VisionTrack therefore recommends assigning only one role per user.
Certain roles, functions and reports are dependent on account type, licenses and user access, and therefore may not be visible on all accounts.
The platform also offers the following non-role-specific controls that can be assigned to individual users:
The VisionTrack platform offers two main integration capabilities:
The Application Engineering and PreSales teams can assist in scoping and setting up these APIs.
No, VisionTrack are not currently offering an on-premise solution. Our current platform is built on Microsoft Azure and uses many Microsoft-provided software services, because of the shared operational and security model benefits that doing this provides. There would also be considerations around having to provide ongoing support, upgrades and feature enhancements outside of our automated systems. Other customers have also moved away from this idea because of the additional cost and the lack of real benefits it provides.
VisionTrack gets posted speed data from HERE Maps (www.here.com).
Coverage information can be found here:
Speed limit changes can be reported to HERE Maps via MiFleet Support.
Yes. The HERE data supports vehicle-specific road speeds (such as HGVs on motorways) based on ‘Vehicle Type’ set up against the vehicle.
The three top right alerts are:
Each alert allows the user to filter on other parameters (e.g. specific fleets/vehicles/event types/dates).
The Event User Guide can be downloaded via the User > Download User Guide menu option. In addition, the following guides are available:
Users can change their system language (via User > Profile), to:
However, there is still some Dev work in progress to fully complete all localisations (date TBC).
A driver can be associated to a vehicle in various ways:
No, there is no way to update historic journeys and events, as our infrastructure is not designed for updating the data once it’s been written. Data cannot be migrated between organizations for the same reason.
Yes, notifications need to be created for each fleet (on the assumption that fleet managers would only want notifications for their fleets). Please note that when setting up notifications for Driver Behavior events (Brake, Accelerate, Shock, Turn), the platform only alerts on Red and Critical events.
No. For security reasons, password reset is self-service which the user can initiate via the Recover Account option on the login screen.
Yes, this can be configured via User > Organization > Settings.
Yes, the Default Video Length, Maximum SD Video Length and Maximum HD Video Length (as well as the ability for users to change the Video Length) can be configured via User > Organization > Settings.
At least one of the devices on the account needs to have ADAS/DSM defined in the device accessories in order for the ADAS/DSM filter to be visible.
The report list is currently locked down by role and organization license, neither of which currently include/exclude DSM.
Live Stream and Playback are largely dependent on the network coverage of the unit at the time of the request and the internet speed of the user. When footage from multiple cameras does not play in sync, this would be due to the capacity of the DVR to process and send footage, not the ability of the platform to process and display it.
Streamax devices typically send a single telemetry point to the platform every 5 seconds; DTEG devices typically send one every 30 seconds. These are used, for example, to determine the live position on the map.
The platform processes the journey ~30 minutes after the device has stopped sending telemetry.
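As a hedged sketch of that journey-end logic (the function and structure below are illustrative, not the platform's actual implementation), a journey is considered complete once the device has been silent for roughly 30 minutes:

```python
from datetime import datetime, timedelta

# Illustrative constant: the platform waits ~30 minutes of telemetry
# silence before processing the journey.
JOURNEY_END_GAP = timedelta(minutes=30)

def journey_complete(last_telemetry_at: datetime, now: datetime) -> bool:
    """Treat the journey as ended once the device has been silent ~30 min."""
    return now - last_telemetry_at >= JOURNEY_END_GAP

last_point = datetime(2023, 5, 1, 14, 0)
print(journey_complete(last_point, datetime(2023, 5, 1, 14, 35)))  # True
print(journey_complete(last_point, datetime(2023, 5, 1, 14, 10)))  # False
```

This explains why a journey does not appear on the platform immediately after the vehicle stops: the gap is needed to distinguish a genuine journey end from a short pause in telemetry.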
Return on Investment is largely dependent upon the scope of hardware and software deployed, and upon the driver/manager engagement/adoption process put in place by the customer. However, we have seen the following examples of ROI among our customer base:
While the VisionTrack solution makes it far easier to reduce insurance costs, better manage claims and better defend against fraudulent claims, any insurance discount would be dependent on discussions between the customer and their insurer. However, we have seen instances of 100% increases in annual insurance rebates among our customer base.
No, only Admin users can set up and monitor geofences, as these are set up at organization level, not fleet level.
Not currently, but there is Development work in progress to support this feature.