WiseIR

Hanwha Vision’s WiseIR technology combines advanced circuitry and image processing to deliver clearer, more efficient images in low-light and nighttime conditions.

Intelligent IR for low-light conditions

Hanwha Vision’s WiseIR simultaneously uses three methods of controlling the camera’s IR LED output to reduce overexposure and halo effects in low- and no-light environments.

Adaptive IR

In varifocal and zoom models, Adaptive IR selectively activates wide-angle IR LEDs for short focal lengths and narrow-angle LEDs for longer distances, optimizing coverage and eliminating dark edge areas.

For PTRZ cameras with adjustable lens positions, independent external IR LED banks activate only those aligned with the lens direction, ensuring efficient long-range IR illumination.

Dynamic Current Control

As objects approach the camera, reflected IR light can cause overexposure; WiseIR prevents image saturation by dimming the IR LEDs through frequency-adjusted current control. In varifocal and zoom models, current is reallocated between narrow- and wide-angle LEDs as the focal length shifts. In PTRZ models, current is redistributed dynamically between IR banks as the lens moves, ensuring consistent, effective illumination in the viewed direction.

AE (Automatic Exposure)

When current control proves insufficient during object proximity changes, AE maintains target brightness by automatically adjusting exposure parameters—shutter speed, iris, and analog/digital gain—based on frame brightness comparisons.
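As a rough illustration, the AE behavior described above can be modeled as a simple proportional feedback loop on frame brightness. The gain, limits, and toy sensor response below are illustrative assumptions, not Hanwha Vision’s actual AE tuning.

```python
# Minimal sketch of an automatic-exposure (AE) feedback loop.
# All parameter names and values here are illustrative assumptions.

def ae_step(mean_brightness, target, exposure, k=0.5, min_exp=0.001, max_exp=1.0):
    """Nudge exposure toward the target frame brightness (0-255 scale)."""
    error = target - mean_brightness
    # Proportional update: brighten when the frame is too dark, and vice versa.
    exposure *= 1.0 + k * (error / 255.0)
    return max(min_exp, min(max_exp, exposure))

# Converge over successive frames (brightness modeled as proportional to exposure).
exposure = 0.01
for _ in range(50):
    brightness = min(255.0, exposure * 2550.0)  # toy linear sensor response
    exposure = ae_step(brightness, target=128.0, exposure=exposure)
```

In a real camera the same error signal would be split across shutter speed, iris, and gain rather than a single exposure scalar.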

Related Resource

White Paper

High efficiency IR performance through its WiseIR technology
Learn more

Video Material

The new X AI camera_Enhanced IR performance
Learn more

Video Material

2Megapixel 55x IR PTZ Dome Camera Demo_XNP-6550RH
Learn more

Image Stabilization

Hanwha Vision’s image stabilization technology corrects video shake caused by environmental movement, providing clear and stable images.

Keep every frame steady with Smart Image Stabilization

Hanwha Vision cameras create clear and steady images by using Digital Image Stabilization (DIS) and Optical Image Stabilization (OIS) to minimize motion blur caused by wind and vibration.

DIS (Digital Image Stabilization)

Hanwha Vision’s DIS technology analyzes captured video in software to provide stable images. Because the footage is cropped during analysis, the field of view may narrow slightly. DIS is especially effective for cameras installed at height, where minor pole vibrations occur, or for telephoto lenses used in long-range detection, where small camera movements significantly degrade video quality. Product specifications list DIS and DIS with Gyro to identify applicable models.

DIS without built-in gyroscope sensor

Employs pure digital analysis of video frames to track and correct motion, achieving clear and stable images without a gyroscope.

DIS with built-in gyroscope sensor

The gyroscope detects angular velocity and vibration in real time, feeding data to the DIS algorithm for precise corrections, particularly in windy or vibrating environments.

OIS (Optical Image Stabilization)

Hanwha Vision’s OIS technology mechanically adjusts the lens optical path to compensate for shake. A gyroscope senses camera movement and shifts the lens or sensor in the opposite direction to stabilize the image. This method excels in low-light conditions with minimal image quality degradation.


Related Resource

White Paper

Powerful image stabilization with OIS + DIS combination
Learn more

Feature Article

The AI Secret to Crystal-Clear Security Footage
Learn more

Technology

Innovating with excellence Wisenet7
Learn more

Similarity Search

Connect the Dots Across Cameras

Similarity Search makes it easy to find the same person or vehicle across different cameras, locations, and timeframes.
Even when a subject moves out of view or appears again in a new scene, the system helps operators connect related footage — transforming scattered video into a continuous trail.

The Challenge of Continuity

In real-world environments, people and vehicles naturally move across entrances, corridors, buildings, and camera zones.

While cameras and analytics can capture key moments, those moments remain fragmented across different views and timelines.

Without a way to connect these fragments, operators must manually search footage camera by camera, making it difficult to reconstruct events or follow movement across a site.

SIMILARITY SEARCH

Restore Continuity Across Cameras

By connecting fragmented moments into a continuous trail, Similarity Search simplifies how operators investigate and review multi-camera events. Instead of starting from raw footage, users can begin with meaningful reference points and quickly expand their search across the entire system.

Simple cross-camera search

Use key object snapshots—such as BestShot images—as reliable reference points to search related footage across multiple cameras

Clear event reconstruction

Connect critical moments from different camera views to understand how an incident unfolds from start to finish

Less manual review, faster results

Reduce review time by starting searches from meaningful highlights instead of scrubbing full video timelines

When a person appears on camera, AI analytics extract attribute data, such as the color of upper and lower clothing, and send it as search-ready metadata.

Cameras that support Similarity Search provide this attribute data to a compatible VMS, where information from multiple cameras is analyzed together to identify and track people with similar characteristics.

Operators can select attribute-based references and receive a ranked list of related results across cameras and time periods, making cross-camera review and investigation far more efficient.
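The ranked-results idea above can be sketched as attribute matching against a reference object. The attribute fields below are illustrative assumptions; real systems also compare appearance embeddings and confidence scores.

```python
# Toy sketch of attribute-based similarity ranking across cameras.
# Attribute names and the scoring rule are illustrative assumptions.

def similarity(ref, cand):
    """Fraction of reference attributes the candidate matches."""
    keys = ref.keys()
    return sum(ref[k] == cand.get(k) for k in keys) / len(keys)

def rank(ref, candidates):
    """Return candidates best-first by attribute similarity."""
    return sorted(candidates, key=lambda c: similarity(ref, c), reverse=True)

ref = {"type": "person", "top_color": "red", "bottom_color": "black"}
candidates = [
    {"id": 1, "type": "person", "top_color": "blue", "bottom_color": "black"},
    {"id": 2, "type": "person", "top_color": "red", "bottom_color": "black"},
    {"id": 3, "type": "vehicle", "top_color": None, "bottom_color": None},
]
ordered = rank(ref, candidates)  # exact match first, unrelated objects last
```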

BestShot

Faster, More Accurate Review

Hanwha Vision cameras equipped with BestShot automatically capture and crop the highest-quality image of each detected object — whether a person or vehicle.
These optimized snapshots, along with detailed metadata, help operators review events quickly and perform fast, targeted forensic searches.

Critical Moments Lost in Endless Footage

Most surveillance footage includes long stretches where important objects appear too small, unclear, or surrounded by unnecessary background.
Operators often rewind, replay, and manually scrub timelines just to locate a single clear frame.

When time matters, finding the right moment shouldn’t feel like searching for a needle in a haystack.

BESTSHOT

Clear Evidence, Ready for Search

By automatically capturing the clearest key images, BestShot helps operators streamline review, search, and reconstruct events with greater accuracy.

Instant highlights of each object

Automatically extracts the clearest frame of each person or vehicle, cropped tightly for quick recognition

Faster search and replay

No need to scrub through entire recordings — BestShots act as visual shortcuts that jump directly to key moments

Smarter metadata for precise filtering

Each BestShot includes object type and attributes, allowing operators to filter results and replay events instantly in the VMS/NVR

AI-based Object Detection technology continuously analyzes video to identify people and vehicles, including classifications such as car, bus, truck, motorcycle, and bicycle.
Once an object is tracked, BestShot evaluates multiple frames and automatically selects the clearest, most representative moment, cropping the image tightly around the object.

Alongside the image, cameras generate rich metadata describing:

  • Object type (person/vehicle categories)
  • Vehicle color
  • Clothing colors (top/bottom)
  • Additional attribute data for VMS search

These BestShots and metadata are sent to the VMS/NVR, enabling fast retrieval, filtering, and forensic review without manual frame-by-frame playback.
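The frame-selection step can be sketched as scoring candidate crops by sharpness and keeping the best one. The gradient-energy heuristic below is an illustrative assumption; BestShot’s actual selection criteria are not described here.

```python
# Toy sketch of BestShot-style frame selection: among the frames in which an
# object was tracked, keep the crop with the highest sharpness score.
# The scoring heuristic (gradient energy) is an illustrative assumption.

def sharpness(frame):
    """Sum of absolute horizontal gradients over a 2-D list of pixel values."""
    return sum(abs(row[i + 1] - row[i]) for row in frame for i in range(len(row) - 1))

def best_shot(frames):
    """Return (index, frame) of the sharpest candidate crop."""
    scores = [sharpness(f) for f in frames]
    i = scores.index(max(scores))
    return i, frames[i]

blurry = [[10, 11, 12], [10, 11, 12]]   # low contrast -> low score
sharp = [[0, 200, 0], [200, 0, 200]]    # strong edges -> high score
idx, _ = best_shot([blurry, sharp])
```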

Related Resource

Feature Article

Forget the Passive CCTV of the Past
Learn more

White Paper

Edge AI Camera
Learn more

Object Detection & Classification

From Clear Images to Accurate Identification

Hanwha Vision’s Object Detection and Classification technology accurately detects and classifies a wide range of objects, including people, vehicles, faces, and license plates, from clean, high-quality video in real time. Built on clear image inputs and advanced AI recognition, it not only identifies objects precisely but also extracts rich attribute data such as clothing color, vehicle type, and plate information, transforming video into structured, intelligent data.

When Objects Stay Anonymous

Surveillance cameras capture countless people and vehicles every day. But without intelligent classification, they remain just moving shapes, even when image quality is high.

Operators can see activity, yet struggle to determine who is involved, what type of vehicle passed by, or which attributes matter for investigation. Without object-level understanding, identifying suspects, tracing vehicles, or filtering events by appearance becomes slow, manual, and unreliable.

OBJECT DETECTION AND CLASSIFICATION

Smarter Detection, Faster Decisions

Object Detection & Classification adds true intelligence to video by turning clear visual input into precise object-level understanding and delivering that information as searchable data in the VMS.

Accurate detection from clean images

Reliably identify people, vehicles, faces, and license plates from high-quality video

Attribute-level classification

Automatically extract details such as clothing color and vehicle color/type

Faster search, smarter investigation

Use object and attribute metadata to locate events instantly without manual footage review

AI-based video analytics continuously analyzes incoming video frames to detect people, vehicles, faces, and license plates in real time. Once detected, each object is further processed through advanced classification models that extract detailed attributes such as top and bottom clothing color, vehicle color and type, and plate information.

These detected objects and their attributes are automatically converted into searchable metadata and delivered to the VMS. This enables fast filtering, event tracking, and forensic investigation — without the need for time-consuming manual review.

By combining real-time detection, precise classification, and metadata-based search, Object Detection & Classification transforms raw video into structured, actionable intelligence.

Related Resource

Feature Article

Forget the Passive CCTV of the Past
Learn more

White Paper

Edge AI Camera
Learn more

Sound Classification

A New Way to See

For decades, security systems have relied on sight alone. Hanwha Vision now introduces a new dimension of protection — security that listens.
By detecting critical sounds and understanding where they come from, this solution expands surveillance beyond what cameras alone can see.

The Blind Spots Cameras Can’t Cover

Many security incidents begin with sound — a scream, a sudden impact, or breaking glass. But these early warning signs often occur outside the camera’s field of view or in privacy-sensitive areas where cameras cannot be installed.

Without audio intelligence, these signals go unnoticed, delaying response and increasing risk.

SOUND CLASSIFICATION

Proactive Security Powered by Sound Detection

Hanwha Vision’s AI Audio Solution adds a powerful new layer of awareness to video surveillance. By detecting urgent audio cues before a situation escalates, it enables earlier intervention, faster response, and broader coverage beyond visual surveillance.

Early threat detection

Detect critical sounds before situations escalate

Direction-aware situational awareness

Pinpoint where a sound originates for faster, more accurate response

Privacy-friendly coverage

Enhance safety in locker rooms, restrooms, and changing areas — without cameras

Hanwha Vision’s solutions use AI-based Sound Classification to identify critical sounds such as screams and breaking glass in real time, even in environments with background noise.

At the same time, a multi-microphone TDoA (Time Difference of Arrival) algorithm analyzes the difference in sound arrival times across microphones to determine the direction of the sound source. This allows operators to know not only what happened, but also where it happened, instantly.
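For a single microphone pair, the TDoA-to-direction mapping is simple geometry: under a far-field assumption, the delay maps to an angle via asin(c · Δt / d). The values below are illustrative; a real array combines several pairs and estimates the delay itself by cross-correlation.

```python
import math

# Far-field TDoA sketch for one two-microphone pair.
# Mic spacing and angles are illustrative assumptions.

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def doa_angle(dt, mic_spacing):
    """Direction of arrival in degrees from broadside, given delay dt (s)."""
    ratio = max(-1.0, min(1.0, SPEED_OF_SOUND * dt / mic_spacing))
    return math.degrees(math.asin(ratio))

# A sound arriving 30 degrees off broadside, mics 10 cm apart:
expected_dt = 0.10 * math.sin(math.radians(30.0)) / SPEED_OF_SOUND
angle = doa_angle(expected_dt, mic_spacing=0.10)
```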

All processing is performed through edge-based analytics, eliminating the need for a separate server while enabling real-time detection and alerts. For camera-restricted spaces, Audio Beacon (SPS-A100M) delivers the same intelligent audio detection, providing a powerful security alternative without violating privacy.

By integrating sound into surveillance, these solutions elevate security from a purely visual system to a comprehensive “see and hear” approach for a more complete understanding of every event.

Related Resource

Feature Article

A New Way to See
Learn more

White Paper

AI Sound Classification and Sound Direction Detection
Learn more

AI-based Analytics

Turn Video into Actionable Intelligence

Hanwha Vision’s AI-based Analytics transforms video into actionable data by detecting, classifying, and understanding objects and behavior in real time.
From people and vehicles to movement patterns and crowd behavior, it enables proactive monitoring, smarter operations, and data-driven decision-making.

Too Much Video, Not Enough Insight

Traditional surveillance captures massive amounts of footage, but most of it remains unused. Operators are left scanning screens manually, reacting after incidents occur, and struggling to extract meaningful insights from raw video streams. Without intelligent analytics, security systems stay reactive instead of proactive, and operational data remains hidden in plain sight.

AI-BASED ANALYTICS

Valuable Insights from Every Scene

Hanwha Vision’s AI-based Analytics converts visual information into real-time intelligence. By understanding objects, movement, behavior, and density, it delivers precise alerts, accurate statistics, and operational insights that go far beyond conventional motion detection.

Proactive detection & Faster response

Identify risks and events in real time before they escalate

Smarter operations with real-world data

Turn foot traffic, queues, and flow into actionable operational insights

Accurate, reliable analytics at scale

AI-based object recognition minimizes false data and ensures consistent performance across environments

People Counting

People Counting performs real-time counting based on a virtual line, accurately detecting multiple people simultaneously as they pass through. The system automatically generates counting reports and can transmit data via SUNAPI, enabling seamless integration with external systems for analysis and operations.
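The virtual-line idea can be sketched with a side-of-line sign test: a tracked position is counted when it moves from one side of the line to the other. The tracker producing the positions is assumed; this shows only the crossing geometry.

```python
# Virtual-line counting sketch. Geometry only; the object tracker that
# produces the (x, y) positions is an assumed input.

def side(line, p):
    """Sign of the cross product: which side of line (a, b) point p is on."""
    (ax, ay), (bx, by) = line
    px, py = p
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def count_crossings(line, track):
    """Count sign changes along a track of (x, y) positions."""
    crossings = 0
    for prev, cur in zip(track, track[1:]):
        if side(line, prev) * side(line, cur) < 0:
            crossings += 1
    return crossings

line = ((0, 0), (0, 10))                     # vertical counting line
track = [(-3, 5), (-1, 5), (2, 5), (4, 6)]   # person walks left to right
n = count_crossings(line, track)             # one crossing
```

A directional count (in vs. out) would additionally look at the sign of the transition rather than only detecting it.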

Crowd Counting

Crowd Counting analyzes people counting data per defined zone in real time to monitor crowd density and distribution.
The system accurately counts up to 500 people per zone, providing reliable insights into congestion levels and crowd buildup.
This enables operators to identify overcrowding risks early and take proactive measures for safety and flow control.

Vehicle Counting

Vehicle Counting detects and classifies vehicles as they pass through defined zones. By tracking traffic flow in real time, it delivers accurate vehicle volume data for parking facilities, city traffic management, logistics hubs, and toll systems.
This supports congestion analysis, infrastructure planning, and operational optimization.

Queue Management

Queue Management monitors up to three defined queue zones simultaneously, displaying real-time people counting data directly on the web viewer.
The system triggers alarms based on the number of people per zone, enabling immediate response to congestion. It also supports event action settings, allowing automated system responses based on predefined queue conditions.

Slip and Fall Detection

Slip & Fall Detection tracks a person whose full body has been visible in the detection area for more than three seconds.
When the system detects a sudden change from a walking posture to a fallen position, it verifies the event and triggers an alarm in approximately six seconds, ensuring fast response while minimizing false alerts.
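The timing logic above can be sketched as a small state machine over a tracked bounding box: upright posture must be stable before a sudden tall-to-wide change starts a confirmation timer. The 3-second visibility and roughly 6-second confirmation follow the text; the aspect-ratio posture test and the one-sample-per-second box stream are illustrative assumptions.

```python
# Slip-and-fall sketch: flag a fall when a tracked person's bounding box
# flips from upright (tall) to fallen (wide) and stays that way long enough.
# boxes: assumed tracker output, one (width, height) sample per second.

def detect_fall(boxes, visible_secs=3, confirm_secs=6, ratio=1.0):
    """Return the second at which the alarm fires, or None."""
    upright_for = 0
    fallen_since = None
    for t, (w, h) in enumerate(boxes):
        if h > w * ratio:                  # upright posture
            upright_for += 1
            fallen_since = None
        elif upright_for >= visible_secs:  # sudden change after stable tracking
            if fallen_since is None:
                fallen_since = t
            if t - fallen_since + 1 >= confirm_secs:
                return t                   # alarm after ~6 s of confirmed fall
    return None

walking = [(40, 120)] * 4   # 4 s visibly upright
fallen = [(120, 40)] * 8    # then lying down
alarm_at = detect_fall(walking + fallen)
```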

Line Crossing / IVA Area

AI-based Line Crossing and IVA Area Detection monitor virtual lines and defined zones in real time. The system identifies people and vehicles, triggering alerts only when valid objects cross a line or enter a restricted area.
Unlike pixel-based detection, it ignores shadows, reflections, or random motion — ensuring precise intrusion and perimeter monitoring for facilities, borders, and restricted zones.

Face Mask Detection

Face Mask Detection identifies whether a person is wearing a mask using facial feature analysis and classification algorithms.
It allows facilities to automatically monitor compliance in healthcare sites, transportation hubs, or controlled-access areas. Alerts and statistical data can be generated without requiring manual enforcement.

Social Distancing Detection

Social Distancing Detection monitors the proximity between multiple people simultaneously in real time, identifying when predefined distance rules are violated. Once distances are calibrated through pre-setting, the system continuously measures spacing and triggers real-time detection events when thresholds are breached. All processing is performed through edge-based AI analytics, eliminating the need for a separate server while ensuring fast, reliable response.
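After calibration, the core check reduces to pairwise distances on the ground plane. The positions below are assumed detector outputs in meters; the 2-meter threshold is an illustrative setting.

```python
import itertools
import math

# Social-distancing sketch: report every pair of people closer than the
# configured threshold. Positions are assumed calibrated ground-plane
# coordinates (meters) from the camera's person detector.

def violations(positions, min_dist=2.0):
    """Return index pairs whose Euclidean distance is below min_dist."""
    return [
        (i, j)
        for (i, a), (j, b) in itertools.combinations(enumerate(positions), 2)
        if math.dist(a, b) < min_dist
    ]

people = [(0.0, 0.0), (1.0, 0.5), (5.0, 5.0)]
pairs = violations(people)  # only the first two people are too close
```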

Dynamic Privacy Masking

Where Precision Meets Privacy

Dynamic Privacy Masking (DPM) offers advanced privacy protection by intelligently applying a colored or mosaic overlay to moving objects, such as people and vehicles, in real time.

This ensures privacy during live viewing while maintaining the integrity of the original, unmasked video.

Balancing Visibility and Privacy

Surveillance often involves personal data — faces, license plates, identities. Static masks can’t follow motion, and manual setup increases the risk of error or exposure. As privacy regulations grow stricter, operators need a way to protect individuals while still preserving essential evidence when incidents occur.

DYNAMIC PRIVACY MASKING

Privacy Protection That Tracks You

Hanwha Vision’s AI-based Dynamic Privacy Masking detects and tracks sensitive subjects across every camera movement — automatically applying and adjusting masks in real time. It ensures precise coverage frame by frame, maintaining privacy compliance while keeping nonsensitive areas clear and visible. When incidents occur, original footage remains securely available to authorized personnel for forensic review.

Automatic real-time masking

AI identifies and conceals sensitive information instantly and accurately

Dual-stream privacy management

Records both masked and original video securely, accessible only with proper authorization

Regulatory compliance

Stay compliant with privacy laws without losing access to critical forensic evidence

Traditional privacy masking depends on fixed zones that can’t adapt to dynamic scenes.
Hanwha Vision’s AI-based Dynamic Privacy Masking goes further. It continuously detects and tracks moving objects, applying masks automatically as both the camera and subjects move.

For investigative or forensic review, the original footage can still be securely accessed by authorized personnel, maintaining full transparency while safeguarding data during live monitoring.

Users can choose between two masking modes:

  • Mosaic mode for pixelated privacy overlays
  • Color opaque mode for solid, colored masking

Each mode can be easily customized in size and color to match operational needs or regulatory requirements. This flexibility allows operators to balance visibility and privacy with precision — ensuring consistent, real-time protection across every scene.
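The mosaic mode can be sketched as block averaging over a detected bounding box: each tile inside the box is replaced by its mean value, so the region stays visibly present but unidentifiable. The grayscale frame and block size below are illustrative assumptions.

```python
# Mosaic-mode sketch: pixelate a detected object's bounding box in place.
# frame: 2-D list of grayscale pixels; box: assumed detector output.

def mosaic(frame, box, block=2):
    """Average `block`-sized tiles inside box = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    for by in range(y0, y1, block):
        for bx in range(x0, x1, block):
            tile = [
                frame[y][x]
                for y in range(by, min(by + block, y1))
                for x in range(bx, min(bx + block, x1))
            ]
            avg = sum(tile) // len(tile)
            for y in range(by, min(by + block, y1)):
                for x in range(bx, min(bx + block, x1)):
                    frame[y][x] = avg
    return frame

frame = [[0, 10, 20, 30] for _ in range(4)]
mosaic(frame, box=(0, 0, 2, 2))   # mask only the top-left 2x2 region
```

Color-opaque mode would simply write a constant value over the box instead of tile averages.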

WiseStream

Intelligent Compression, Optimized Performance

WiseStream is an intelligent video compression technology that integrates Hanwha Vision’s proprietary AI into the video encoding process.

When Data Overload Threatens Performance

As surveillance systems expand, raw video data can quickly overwhelm bandwidth and storage capacity. Without optimization, even the best cameras risk lag, dropped frames, and excessive costs. Managing video shouldn’t mean compromising quality – and that’s where WiseStream steps in.

WISESTREAM

Adaptive Compression for Every Scene

WiseStream automatically adjusts compression based on scene motion and detail, ensuring both clarity and efficiency in every frame.

Reliable forensic clarity

Preserve fine details essential for accurate forensic search and post-incident analysis

Operational efficiency

Reduce bandwidth and storage load with optimized, noise-free video data

Enhanced analytics accuracy

Deliver cleaner inputs for AI algorithms, improving detection precision and event reliability

The AI-powered Difference

Conventional data reduction techniques typically adjust compression based on general motion across the entire frame, often dedicating high-quality encoding to background motion that is irrelevant to the actual subjects of interest.

WiseStream leverages advanced AI analytics to intelligently distinguish between regions of interest and static backgrounds. By recognizing specific objects and motion patterns, it dynamically tunes bitrate and compression ratios to focus quality exactly where it matters most.

Here’s how WiseStream’s core mechanism works:

  • AI Object Detection: An AI model precisely identifies user-defined key objects, such as people and vehicles, within the video stream.
  • Differential Quality Application: The Region of Interest (ROI), where objects are detected, is encoded at high quality to preserve detail. In contrast, the non-ROI is subjected to a higher compression rate to reduce data volume.
  • Efficient Static Area Compression: The system’s AI-based analysis minimizes unnecessary data generation in static areas, ensuring efficient compression of the entire scene.
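The differential-quality step can be sketched as a per-block quantization map handed to the encoder: blocks overlapping a detected object get a low QP (high quality), background blocks get a high QP (strong compression). The block size and QP values below are illustrative assumptions, not WiseStream’s actual parameters.

```python
# Sketch of ROI-based differential quality: build a per-block QP grid.
# Block size and QP values are illustrative assumptions.

def qp_map(frame_w, frame_h, rois, block=16, qp_roi=24, qp_bg=38):
    """Return a per-block quantization-parameter grid for the encoder."""
    cols, rows = frame_w // block, frame_h // block
    grid = [[qp_bg] * cols for _ in range(rows)]       # background default
    for (x0, y0, x1, y1) in rois:                      # ROI boxes in pixels
        for r in range(y0 // block, min(rows, -(-y1 // block))):
            for c in range(x0 // block, min(cols, -(-x1 // block))):
                grid[r][c] = qp_roi                    # preserve detail here
    return grid

# One detected person in a 64x64 frame:
grid = qp_map(64, 64, rois=[(16, 16, 40, 40)])
```

In H.264/H.265 a lower QP means finer quantization, which is why ROI blocks get the smaller value.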

It works seamlessly with H.264 and H.265 to maintain compatibility and maximize efficiency across all Hanwha Vision devices.

The result is crisp, efficient video that reduces bandwidth strain, cuts storage costs, and keeps your system performing at its best.

Related Resource

Feature Article

More Data Efficiency, Uncompromised Quality
Learn more

Why Hanwha Vision

Less Data, Greater Possibilities
Learn more

White Paper

WiseStream – Advanced Video Data Reduction Technology
Learn more

AI-based Image Enhancement

Smarter Monitoring Starts with Clarity

Hanwha Vision’s AI-based Image Enhancement technologies ensure crystal-clear visibility – from backlit scenes to dimly lit environments.

When Lighting Conditions Get in the Way

Video surveillance doesn’t always happen in ideal conditions. Backlit entrances wash out faces. Dim corridors hide movement. Traditional noise reduction often sacrifices sharpness, while excessive WDR tuning creates unnatural contrast.

The result? Missed details, unreliable analytics, and wasted storage on unusable footage.

AI-BASED IMAGE ENHANCEMENT

AI Performance That Delivers Results

Hanwha Vision’s AI-based Image Enhancement technologies analyze every frame in real time, optimizing exposure, reducing noise, and balancing contrast with intelligence that adapts to the scene.

Reliable forensic clarity

Preserve fine details essential for accurate forensic search and post-incident analysis

Operational efficiency

Reduce bandwidth and storage load with optimized, noise-free video data

Enhanced analytics accuracy

Deliver cleaner inputs for AI algorithms, improving detection precision and event reliability

Dual NPU

At the heart of AI-based Image Enhancement lies the Dual NPU architecture of the Wisenet 9 SoC. This powerful design separates image processing from analytics, allowing each neural processor to operate independently for maximum efficiency.
With one NPU dedicated to visual enhancement, it precisely controls exposure, noise reduction, and shutter optimization — delivering sharper, clearer, and more dynamic images under any lighting condition.

AI-based Noise Reduction (AI-NR)

Conventional noise reduction techniques often blur details or leave ghosting effects. Our AI-NR takes it a step further — the AI model is meticulously trained on the unique noise characteristics of each specific image sensor, enabling it to distinguish real image data from noise at a pixel-by-pixel level.

Rather than simply smoothing over imperfections, it selectively removes noise while preserving fine textures, sharp edges, and natural colors, even in extremely low-light environments. By eliminating unnecessary noise data at the source, the AI-NR technology not only delivers remarkably clear and detailed video, but also reduces bitrate for more efficient storage and network performance.

extremeWDR

Hanwha Vision cameras excel at handling scenes with extreme lighting variations by employing AI-based WDR (Wide Dynamic Range). Unlike traditional WDR, which can make the entire image look unnatural, extremeWDR intelligently analyzes each scene to identify and adjust overly bright and dark areas. This precise process determines the optimal exposure and corrects motion artifacts, contrast, and tone through an advanced AI algorithm.

The result is a perfectly balanced and natural-looking image where every part of the scene is captured with exceptional clarity and detail, regardless of the challenging backlighting conditions.

AI-based Prefer Shutter

Unlike conventional systems that mainly rely on high-speed shutters to reduce blur, AI-based Prefer Shutter technology uses the latest AI to identify object appearance and movement. This allows it to automatically readjust the shutter speed to an optimal level, reducing noise while minimizing motion blur.

In scenes without movement, the shutter slows down to ensure low noise and bright, clear images. Conversely, when significant object movement is detected, the shutter speeds up to reduce blur and deliver crisp visual clarity.
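The trade-off above can be sketched as a mapping from estimated object motion to exposure time: slow shutter in still scenes for low noise, fast shutter under motion to avoid blur. The mapping and limits below are illustrative assumptions, not the actual Prefer Shutter tuning.

```python
# Motion-adaptive shutter sketch. Thresholds and shutter limits are
# illustrative assumptions.

def prefer_shutter(motion_px_per_frame, slow=1 / 30, fast=1 / 500):
    """Pick an exposure time (s) from estimated object motion (pixels/frame)."""
    if motion_px_per_frame <= 1.0:
        return slow                          # static scene: maximize light
    # Scale exposure down as motion grows, clamped to the fast limit.
    return max(fast, slow / motion_px_per_frame)

still = prefer_shutter(0.2)    # near-static: slow shutter, low noise
moving = prefer_shutter(8.0)   # fast motion: shorter exposure, less blur
```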

Related Resource

Feature Article

The AI Secret to Crystal-Clear Security Footage
Learn more

White Paper

AI Image Enhancements of Wisenet 9
Learn more

AI

Hanwha Vision’s AI technology transforms security cameras into intelligent solutions that deliver real value.

AI That Optimizes Every Step of Surveillance

Hanwha Vision’s AI technologies work together to transform how video is captured, analyzed, understood, and used.

From enhancing image quality and protecting privacy to detecting events, identifying objects, and accelerating investigations,
our AI optimizes every step of the surveillance workflow — enabling smarter decisions, faster responses, and more reliable security outcomes.

AI AT THE EDGE

Why AI Matters

AI is no longer an optional enhancement — it is the foundation of modern video security.
With larger environments, higher resolutions, and an ever-growing volume of video data, operators need systems that can process information intelligently, reduce noise, surface what matters, and automate complex tasks.

Hanwha Vision’s AI brings clarity, speed, and insight to an industry where every second counts.

Leading the Way in Global AI Standards

Hanwha Vision has achieved certification for ISO/IEC 42001, the world’s first international standard for AI Management Systems (AIMS). By adopting this globally recognized framework, we ensure that our AI technologies are not only high-performing but also fully compliant with the highest benchmarks for safety, transparency, and ethical accountability.

AI TECHNOLOGIES

Unified Intelligence

Hanwha Vision’s AI technologies work as a unified intelligence — enhancing clarity, structuring data, highlighting key moments, and connecting information across cameras.
Together, they transform raw video into actionable business intelligence that drives smarter decisions and operational excellence.

Capture

AI improves visual clarity at the input stage, delivering higher-quality frames for analysis

Detect

Real-time recognition of people, vehicles, and key attributes for precise situational awareness

Search

Surface related scenes and movements across cameras through intelligent attribute-based search

Analyze

Transform events, counts, and behavioral patterns into actionable business insights

From Fundamental to Advanced AI

Build expertise with Hanwha Vision’s eLearning

Related Resource

White Papers

Our experts provide insights and perspectives on the latest tech

Learn more
eLearning

Start a course today and take your skills to the next level

Learn more
News Hub

Read in-depth feature articles on technology trends

Learn more

Powering the Next Generation: Wisenet 9

Experience the future of Edge AI

Precise, Intelligent, Expandable, and Secure.

  • Dual NPU
  • 3x AI Performance
  • 50% Reduced Bandwidth

Dual NPU

The AI Revolution at the Edge

The new Wisenet 9 SoC (System on a Chip) features a Dual NPU design that dedicates independent resources to video quality and analytics. This guarantees exceptional, unimpaired performance in both, preserving optimal video quality and noise reduction even under demanding analytics workloads.

Watch the Dual NPU video

What Makes a Camera See Everything

In shadows, glare, and everything in between?
WDR and low-light technologies are key. Wisenet 9 has upgraded WDR and low-light capabilities to a whole new level with its advanced AI-based image enhancement technologies.

AI-based Image Enhancement

Unveiling Hidden Details

The newly integrated AI-based Noise Reduction (NR) processing works with conventional NR to effectively eliminate noise from challenging low-light environments. It achieves this using a dedicated denoising network, independent of other NPU resources, yielding denoised images with preserved detail and reduced motion artifacts. The Image Signal Processor (ISP) then leverages these enhanced images to apply sophisticated enhancement technologies, optimizing detail, texture, color, and tone mapping for a finely tuned video output.

REVEAL EVERY DETAIL

Low-light Performance

Wisenet 9 delivers clearer, more balanced video in challenging lighting conditions—helping security teams identify critical details like faces, license plates, and objects in both bright sunlight and dark shadows, so they can respond quickly and accurately. 

ENHANCED DYNAMIC CLARITY

extremeWDR

Multi-frame extremeWDR (Wide Dynamic Range) technology optimizes tone and contrast to improve the detail and clarity of images in adverse lighting conditions.

AI-based Bandwidth Reduction

WiseStream, an AI-based compression technology, uses object detection powered by AI algorithms. When combined with H.265 compression, it reduces bandwidth usage without compromising video quality, enabling efficient video management.

Expandable AI

Acknowledging the unique security requirements across various industries, Wisenet 9 is engineered for exceptional AI customization. This empowers users to adapt their security systems with specialized functionalities, transcending standard surveillance to deliver targeted, intelligent solutions.

Object Classification & Analytics

Wisenet 9’s AI offers detailed attribute analysis (color, bag, face mask, etc.) and advanced analytics (people/vehicle counting, slip/fall, crowd, etc.) for faster, informed decisions.

WiseAI

Wisenet 9 cameras offer a suite of specialized AI features—such as blocked exit detection, loitering detection, tailgating, and stopped vehicle/pedestrian detection. This flexibility ensures an accurate detection of critical risks and tailored automatic responses to any environment.

Dynamic Privacy Masking

Dynamic Privacy Masking (DPM) masks objects with a color or mosaic overlay to protect privacy, based on user preferences.

Enabling VMS Similarity Search

Wisenet 9-equipped cameras deliver rich attribute data from the edge, immediately enabling VMS Similarity Search. This allows operators to track a person of interest across all cameras.

CYBERSECURITY

Is your security an afterthought or “Secure by Design”?

What are you really buying when you invest in a surveillance system? It’s not just cameras—it’s trust. We’ve peeled back the layers of Wisenet 9 to show you how we build a Cybersecurity Fortress from silicon to software. Watch how we transform security from an afterthought into a proactive foundation for your critical data and operations.

White Papers

AI Image Enhancements of Wisenet 9
Cybersecurity Enhancement of Wisenet 9 Products
More Data Efficiency, Uncompromised Quality

Hanwha Vision Open Platform

Hanwha Vision offers an open platform (HVOP) that empowers third-party partners to develop secure, efficient, and scalable applications.

This enables them to analyze video, transmit files to the cloud, and leverage product information to create innovative solutions.

View detail

Stay up to date on Wisenet 9 with our exclusive newsletter!

By entering your email address, you agree to our Privacy Policy and to receiving marketing communications.

Hanwha Vision is the leader in global video surveillance, combining the world’s best optical design and manufacturing technology with advanced image processing, and has focused on the video surveillance business for more than 30 years, since 1990.