securitylinkindia


The Convergence of Physical & Logical Access

For many security professionals, recent high-profile data breaches have shifted attention to external cyber threats. Despite this newfound focus, the Institute for Critical Infrastructure Technology reports that more than half of all cybersecurity incidents can be traced to insiders with legitimate access to corporate facilities and networks. Another survey, from the Ponemon Institute, reveals that the majority of respondents are more concerned by outside threats than by those that originate internally. While external threats are very real, confronting internal vulnerabilities can prevent incidents from happening in the first place. By addressing both physical and logical access in a more unified approach, organizations can reduce their risk of a costly breach while also improving user experience and operational efficiency.

This idea is frequently referred to by the industry buzzword ‘convergence.’ From a technical standpoint, convergence is defined as “the merging of distinct technologies, industries, or devices into a unified whole.” In terms of access control, convergence can be viewed as “the merging of physical and logical access control technologies to provide a more unified and simplified approach to identity management.”

“Convergence means a simplified approach,” said Sheila Loy, Director of Healthcare Industry, Identity and Access Management at HID Global. “That can mean many different things, but it’s essentially making it easier for the user to get both digital access and door access. That usually comes in the form of a card or a mobile device – something that can do both.”

While the notion of convergence is nothing new, this approach to security is becoming an increasingly viable way to mitigate threats. To explore this further, ASIS International recently partnered with HID Global to survey security professionals regarding their experience with, and plans for, convergence projects.
The data in this paper is based on the responses of 745 ASIS International members who have direct responsibilities in physical and/or information security.

The benefits of convergence: Improved user experience, operational efficiency and security

Security administrators are looking for solutions that are easy, convenient and fast. By introducing solutions that better blend physical access control (PACS) with logical access control (LACS), organizations of all types will enjoy three key benefits: 1) a positive user experience, 2) an enhanced administrative experience, and 3) improved security.

Positive user experience

Oftentimes, the weakest link in even the strongest of security systems lies with the end user. If interactions with security technologies are confusing or cumbersome, employees will take shortcuts that introduce unnecessary vulnerabilities. Converged PACS and LACS solutions help reduce this risk by boosting convenience, particularly by requiring employees to carry only one card or mobile device. This type of solution also eliminates the need to constantly refresh passwords. In today’s world, most end users wear an ID badge to access facilities, a form factor they are accustomed to using. Moreover, many employees use either a user name and password or a one-time password fob or token to access networks. While this approach may provide an additional layer of security, it is prohibitive in terms of convenience. Alternatively, providing a single form factor for both physical and logical access creates a more streamlined user experience, which ultimately increases user adoption of desired security policies.

“Building occupants who have entitlements to both physical areas and logical applications will see an enhancement in their experience,” said Brandon Arcement, Director of Product Marketing at HID Global. “Convergence results in greater employee efficiency and a more pleasant work environment for building occupants.
It’s easier for employees to carry one card or one mobile device to access both systems, rather than having to carry a card for the door as well as a fob for the computer, or having to remember passwords.”

In terms of logical or network access, one major pain point for end users is the need to remember and frequently reset their passwords. When ASIS International members were asked how access to networks and logical applications is handled today, a resounding 85% of respondents indicated that they use a user name and password. 85% of respondents also indicated that they have an organizational policy on the creation of passwords, such as requiring numbers or special characters. Not only is this inconvenient for users and administrators, it presents another common security risk – employees writing their passwords on notes left visible on their desks.

Enhanced administrative experience

Converged access control solutions provide an improved administrative experience. When survey respondents were asked to rank a series of benefits of PACS and LACS convergence, the top response was ‘easier to manage employee credentials,’ followed by ‘one card for multiple applications.’ These top responses reflect two key angles of an improved administrative experience. First, many applications used to manage credentials are now web-based, with secure, simple access for administrators. This allows security teams to issue, modify, or revoke credentials away from the office or during off-hours. The second angle is the ability to deploy a converged ‘high value’ form factor that supports multiple applications. For example, using one card for multiple purposes reduces the cost of additional or replacement cards, as well as the time required to produce multiple credentials for individual applications.
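Password complexity policies of the kind reported by respondents (minimum length, a digit, a special character) are usually enforced with a simple validation check. The sketch below is illustrative only; the specific rules and the function name are assumptions, not any surveyed organization's actual policy.

```python
import re

def meets_policy(password: str, min_length: int = 8) -> bool:
    """Illustrative password-policy check: require a minimum length,
    at least one digit, and at least one special character."""
    if len(password) < min_length:
        return False
    if not re.search(r"\d", password):          # at least one digit
        return False
    if not re.search(r"[^A-Za-z0-9]", password):  # at least one special char
        return False
    return True
```

A policy like this improves password strength on paper, but as the survey suggests, the added friction is exactly what pushes users towards writing passwords down.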
According to survey data, the value of leveraging smart cards for applications beyond physical access is more than theoretical – 73% of respondents agree that they have an interest in using smart cards for applications beyond traditional physical access control. Finally, converged access control solutions provide security administrators with greater visibility into audit data. This makes achieving compliance easier, reducing the potential for associated fines and damaged reputations.

Improved security

The most important benefit of any technology is improved security. Innovative technologies for physical access include contact and contactless cards with encryption that adds additional layers of security upon entering doors, elevators or parking garages. Meanwhile, digital certificates loaded onto that same smart card can ensure trusted login to networks and applications, as well as encrypt e-mails and digitally sign documents. Converged solutions improve security in three key areas: Increased adoption rate of converged…


Security Considerations for Code Signing

Recent security-related incidents indicate the need for a secure software supply chain to protect software products (also referred to as code) during the development, build, distribution, and maintenance phases. Of particular concern is provisioning and updating software that plays a critical role in platform security. A wide range of software products, including firmware, operating systems, mobile applications, and application container images, must be distributed and updated in a secure and automatic way to prevent forgery and tampering. An effective and common method of protecting software is to apply a digital signature to the code. Digitally signing code provides both data integrity, to prove that the code was not modified, and source authentication, to identify who was in control of the code at the time it was signed. When the recipient verifies the signature, they are assured that the code came from the source that signed it, and that it has not been modified in transit.

This white paper targets software developers and product vendors who are implementing a code signing system or reviewing the security of an existing system, with the goal of achieving improved security and customer confidence in code authenticity and integrity. System integrators and administrators who are concerned about the trustworthiness of the applications that are installed and run on their systems will learn the properties they should expect from a code signing solution to protect their software supply chain. This white paper describes features and architectural relationships of typical code signing solutions that are widely deployed today.
It defines code signing use cases and identifies some security problems that can arise when applying code signing solutions to those use cases. Finally, it provides recommendations for avoiding those problems, and resources for more information. Properly applied, these recommendations will help ensure that the software supply chain is resistant to attack. NIST plans to develop further guidance to help organizations evaluate, deploy, or manage code signing systems. The high-level recommendations described in this document are expected to form the basis for more detailed recommended practices for code signing.

The basics of code signing

This section provides high-level technical details about how the process works. There are multiple roles in the process: developer, signer and verifier.

Developer

The developer is the entity responsible for writing, building, and/or submitting the code that will be signed. This entity maintains a secure development environment, including the source code repository, and will submit code to the signer after it has completed the organization’s software development and testing processes.

Signer

The signer is the entity responsible for managing the keys used to sign software. This role may be performed by the same organization that developed or built the software, or by an independent party in a position to vouch for the source of the code. The signer generates the code signing private/public key pair on a device that is sufficiently protected, as the security of this process relies upon the protection of the private key. In many cases, the signer then provides the public key to a certification authority (CA) through a certificate signing request. The CA confirms the signer’s identity and provides a signed certificate that ties the signer to the provided public key. Anyone can use the public key associated with this certificate to validate the authenticity and integrity of code signed with this key pair.
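The hash-then-sign flow described above can be sketched in a few lines. Real code signing uses an asymmetric key pair (e.g., RSA or ECDSA): the signer holds the private key, and verifiers need only the public key or certificate. To keep this sketch dependency-free, a shared HMAC key stands in for the asymmetric signing operation; the structure of the flow is the same, but this substitution is an assumption of the example, not how deployed systems work.

```python
import hashlib
import hmac

def sign_code(code: bytes, signing_key: bytes) -> bytes:
    """Signer: hash the code, then sign the digest.
    (HMAC stands in here for an asymmetric signature.)"""
    digest = hashlib.sha256(code).digest()
    return hmac.new(signing_key, digest, hashlib.sha256).digest()

def verify_code(code: bytes, signature: bytes, key: bytes) -> bool:
    """Verifier: recompute the digest and check the signature.
    Any modification of the code changes the digest and fails the check."""
    expected = sign_code(code, key)
    return hmac.compare_digest(expected, signature)
```

Signing the digest rather than the full image keeps signature generation cheap even for large firmware files, which is the structure most deployed code signing formats share.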
If no CA is used, the public key must instead be distributed using a trusted, out-of-band mechanism. The signer ensures through technical and procedural controls that only authorized code is signed. When code is submitted by developers for signing, the signer verifies their identities and their authority to request a signature. The signer may also take additional steps to verify that the code is trustworthy. Ultimately, two or more trusted agents of the code signing system may be needed to approve the request and generate a digital signature. In some cases, the signed code may also be provided to a time stamp authority to indicate when the code was signed.

Verifier

The verifier is responsible for validating signatures on signed code. The verifier may be a software component provided by the same developer as the signed code (e.g., for a signed firmware update), or it may be a shared component provided by the platform (e.g., the operating system).

Architectural components

The code signing architecture is composed of a set of logical components that are responsible for different aspects of the code signing process. The code signing/verifying architecture represented in Figure 1 potentially has four distinct components: the code signing system (CSS), the certification authority (CA), the time stamp authority (TSA), and the verifier(s).

Code signing system (CSS)

The first component, the CSS, receives code submitted for signing, authenticates and authorizes the submitter, and generates the signature. To generate these signatures the CSS has one or more private signing keys, which need to be carefully protected from extraction or unauthorized use.

Certification authority (CA)

Typically, a CSS utilizes a CA to enable authenticating the identities of signers.
CAs issue certificates to signers in accordance with certificate policies, which specify the security controls and practices the CA follows when issuing certificates, and impose requirements on the subjects of the certificates. NIST Interagency Report 7924 is a reference certificate policy that specifies most of the requirements for a CA that issues code signing certificates. There are also industry groups, such as the CA/Browser Forum and the CA Security Council, that have published requirements documents for the issuance of code signing certificates.

Time stamp authority (TSA)

Some code signing architectures use a TSA to demonstrate when a particular piece of code was signed. When a TSA is used, signatures are sent to the TSA, which applies its own signature and signing time to the package….


A Guide to Connected Lighting for IP Video Surveillance

Connectivity is the heartbeat of smart technology. Connectivity between devices improves the quality of decision making from each device and the level of service that each device can provide, and magnifies the value of the overall system. A truly smart device can make relevant decisions, whether that decision is a recommendation on Netflix or Spotify, automatic windscreen wipers that operate when rain is detected, or smart heating that turns on 30 minutes before we arrive home from work. To be smart, a device has to be connected to other sensors, whether in-built or external, and connecting to them must be quick and easy. Consider a modern car: it is packed with internal sensors that monitor speed, tyre pressure, temperature, rainfall, lane departure, parking distances and seat-belt status, and it also integrates with external devices such as GPS and modern smartphones to allow hands-free calling, email and SMS message reading. But how does this trend for connectivity affect professional security systems and surveillance lighting, and what new trends are being seen? Whenever smarter devices are demanded, they are expected to be able to communicate with one another, with easy central management in one place.

What does connectivity mean for surveillance lighting?

Most modern surveillance systems are now IP based. Not only has this been a definite trend in the security industry for a number of years, it has become an expectation that our surveillance systems will connect us to the site at any given time and allow us to respond quickly to alarms or events. This need for connectivity within video surveillance systems, together with the need for good night-time pictures, has fuelled the development of IP lighting and the latest network illuminators, such as Raytec’s VARIO2 IP, which allow security professionals to provide lighting on demand, delivering the appropriate lighting response to any event.
With full IP addressability, security professionals are always connected to their network illuminators, and with a simple click can instantly provide real-time responses to security events, even before personnel can be alerted or arrive on site. Within larger systems and unmanned sites, automated network lighting provides a dynamic response to on-site activity. Smart and programmable, IP lighting can be configured to meet the exact needs of any security system. It can be fully integrated with, and triggered via, your VMS/BMS systems, IP cameras, detection systems, and all other network devices using simple HTTP commands or an API, or even externally via telemetry input. Network lighting can also trigger and control other devices on the network, for example switching the camera into night mode when darkness is detected. The possibilities for configuring IP lighting are endless. IP lighting not only allows users to stay completely connected to their illuminators, but also enables deeper integration with other devices, connects them to all activity on site, and increases safety and security all round.

Easy set-up and commissioning

Installation, set-up and commissioning are made significantly easier for security professionals through the use of IP addressable network lighting. Gone are the days when illuminator settings were adjusted in situ, often up a ladder or lift. With network lighting, installers now have quick access to configure all lighting settings via an integrated web interface from any remote network location. Adjusting the illumination had often been a time-consuming part of any security installation and often involved a two-person trip to site at night to achieve the best results. With network lighting, set-up becomes a one-person job done safely from ground level anywhere on the network. Lighting levels can be adjusted remotely and CCTV images fine-tuned in real time, side by side with the camera viewer.
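Trigger commands of the kind described above are typically plain HTTP requests against the illuminator's web interface. The sketch below only builds such a request URL; the endpoint path and the parameter names (`power`, `duration`) are hypothetical placeholders for illustration, not Raytec's or any vendor's actual API, which the product documentation would define.

```python
from urllib.parse import urlencode

def build_lighting_command(host: str, power_pct: int, duration_s: int) -> str:
    """Build a hypothetical HTTP trigger URL for a network illuminator.
    The path and parameter names are illustrative, not a real API."""
    if not 0 <= power_pct <= 100:
        raise ValueError("power_pct must be between 0 and 100")
    query = urlencode({"power": power_pct, "duration": duration_s})
    return f"http://{host}/api/trigger?{query}"
```

A VMS integration or detection system would issue such a request on an alarm event, e.g. `build_lighting_command("192.168.1.50", 80, 30)` to run the illuminator at 80% output for 30 seconds.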
This significantly reduces visits to site and minimises labour time and costs. Illuminators can also be set up individually or in groups for quick and easy operation of large sites. Network lighting raises the bar for night-time performance. We no longer have to settle for images that could be improved, or wait for an engineer to go to site to make operational changes. For example, operators can remotely alter the photocell sensitivity to change the time at which the lighting turns on and off. They can also change the grouping of illuminators at any time, alter the way in which the lighting is triggered, and use timer settings to configure how long lighting stays active on alarm – all done remotely via the illuminator’s web interface. On the rare occasion that an issue occurs with an illuminator, remote diagnostics can be carried out proactively over the network to troubleshoot the problem instantly – again significantly reducing time on site. The development of network lighting completely changes the way we think about lighting and plan installations. It provides the platform to achieve a much higher-performing system with improved images, and increased security and safety 24/7.

Take live control

All cameras need light to see during the hours of darkness. But with IP-based security systems on the rise, it is network lighting which not only helps generate good night-time images, but which works with the entire IP system to help you stay connected to the site at all times. Being IP addressable, network lighting is designed to deliver the right amount of lighting exactly when and where it is needed. Whether controlling it via its web interface, the VMS/BMS, or another platform integration, operators have the ability to take control of the lighting at any time to respond to live events there and then. For example, the detection system may identify movement out of hours and raise an alarm.
Before taking action, such as deploying a guard or even a police response, it is critical to investigate who or what is there. This is where high-quality, dynamic lighting is essential. Knowing the camera location, operators can take control of the adjacent infra-red lighting to review the situation. They can increase lighting levels to generate more detailed picture information and support cameras zooming into the scene, or decrease the lighting levels to avoid overexposure…


Building a Future-Proof Data Processing Solution with Intelligent IoT Gateways

To date, most discussions about the Industrial Internet of Things (IoT) have been about connecting new devices and rapidly bringing them online. In this white paper we look at the implications of bringing such a large number of devices online, namely the need for efficient methods to collect information from these devices, and discuss how to handle the large amount of data collected from them. Industrial IoT solutions are judged on their ability to adapt to various data acquisition needs and on how well they can transform the data collected from devices into useful business insights for decision makers. What makes an Industrial IoT solution truly stand out is the flexible data handling that it can provide. However, providing efficient, reliable and maintainable data handling as part of an Industrial IoT solution presents significant challenges because the data-management solutions that exist today are designed mainly for information technology (IT) applications. System integrators looking to deploy IT solutions in the Industrial IoT world are faced with complicated requirement specifications and have to spend a lot of time and money customizing these applications to suit specific industrial automation (IA) requirements.

Key challenges

In the following sections we discuss some of the key challenges of converging IT solutions with the Industrial IoT.

Customizing applications for the Industrial IoT

Most solution integrators are not familiar with the various fieldbus protocols used by field devices in industrial automation. They typically end up deploying standard data management solutions that cater well to IT applications but cannot support the data-acquisition, storage, processing, transmission, and data-analytics needs of industrial automation solutions. Furthermore, the solution integrators might not have the necessary skills to customize these data management solutions for the needs of the Industrial IoT.
A customized solution that can fill the gap between IT and IA applications is required. IT experts, who are oriented more towards the needs of business applications, need to be trained on the critical requirements of industrial applications so that they can build data solutions that are a good fit for Industrial IoT solutions.

Customized Industrial IoT solutions

It is common knowledge that any customization of controllers, data loggers, and routers requires huge investments of time and money. Embedded computers are highly customizable, but you have to build what you need from scratch. An intermediate solution that combines the capabilities of a controller, data logger, router, and customized software will significantly shorten the time to market. Such a ready-made solution allows you to focus on your core competencies rather than build a customized solution.

Easy-to-use GUI for data acquisition

A configurable, easy-to-use GUI (graphical user interface) for data acquisition, which allows an IT expert to handle the Modbus protocols popular in industrial automation applications without any additional programming, will take the pressure off the field engineers, who can then focus on the tasks they are good at.

Achieving seamless integration of data

Edge devices are deployed and used every day to fill the information gap in the field. These devices operate based on different Modbus protocols or may sometimes use proprietary protocols. The deployment of these edge devices is widening the boundaries of the traditional enterprise into spaces that were never before imagined. Centralized data management systems must be able to integrate disparate data types from these devices and add the relevant contextual dimension to the data to create a unified view of operations for effective system management. The volume of data generated by edge devices is growing exponentially.
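Since Modbus comes up repeatedly here, it is worth seeing how simple the wire format actually is. The sketch below builds a standard Modbus TCP ‘Read Holding Registers’ (function code 0x03) request frame as defined in the published Modbus specification; it is a framing illustration only, not any particular gateway product's API.

```python
import struct

def read_holding_registers_request(transaction_id: int, unit_id: int,
                                   start_addr: int, count: int) -> bytes:
    """Build a Modbus TCP 'Read Holding Registers' request frame.
    MBAP header: transaction id, protocol id (always 0), byte count of
    the rest of the frame, unit id; then the PDU: function code 0x03,
    starting register address, and number of registers to read."""
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu
```

For example, `read_holding_registers_request(1, 1, 0, 10)` yields the 12-byte frame that asks unit 1 for ten registers starting at address 0; a gateway sends this over TCP port 502 and parses the device's response the same way.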
Industrial IoT solutions must include a strategy to handle such large volumes of data. Industrial IoT applications should have the built-in ability to respond locally to a field alert and take corrective action at the device end, enabling faster response times than transmitting data to a centralized data management system. The process of transmitting data to a centralized system for processing can take up to a few minutes, which may be too late if the data relates to critical industrial processes.

Local intelligence and edge computing

One strategy for achieving faster response times is to deploy Big Data solutions, but these are expensive and require skilled personnel. Alternatively, you can process data locally, either in the IoT gateway or at the device end, and make decisions locally, which is much faster. Then only critical data needs to be sent to the central system after local processing. Solutions that support such local intelligence also help reduce the data load on an industrial network.

On-demand communication

An Industrial IoT gateway is a critical component of an effective Industrial IoT solution. The gateway is used to mass-deploy devices at the field site, acquire data from these devices, and route this data on demand to the central system, to other devices, or to a remote site. However, the complexity of routing heterogeneous data in a network and the stability of the network link might hinder the progress of your solution deployment and create bottlenecks for remote data transmission. An IoT solution that can simplify data acquisition from devices that use the most popular Modbus protocols, and route this data using built-in 4G LTE communication capability, will enable efficient, on-demand transmission of data and faster response times.
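The local-intelligence pattern described above reduces to two small behaviours: act on out-of-range readings immediately at the edge, and forward only those critical readings upstream. The thresholds and function names below are illustrative assumptions, not part of any specific gateway product.

```python
def filter_critical(readings, low, high):
    """Edge-side filter: keep only out-of-range readings, so that just
    critical data is forwarded to the central system."""
    return [r for r in readings if r < low or r > high]

def handle_reading(value, low, high, actuate):
    """Respond locally to a field alert: if the value is out of range,
    take corrective action at the device end without a round trip to
    the central system. Returns True when local action was taken."""
    if value < low or value > high:
        actuate(value)  # e.g. trip a relay, throttle a pump
        return True
    return False
```

Dropping in-range readings at the gateway is also what cuts the data load on the industrial network: only the handful of critical values cross the (possibly cellular) uplink.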
Developing a future-proof solution

Keeping in mind how rapidly the Industrial IoT field is changing, a scalable solution that can adopt and implement new technology will give you good returns on your investment. For example, if your service provider decides to upgrade to a newer technology like 4G LTE, substantial changes in your network infrastructure might be required to maintain basic connectivity. Older communication standards such as 3G may no longer be supported, and you will have to upgrade to the new technology. As more and more service providers jump on the 4G LTE bandwagon, the sooner you adopt this new technology, the better off your business will be. Rather than wait for the change to swing into action, a better solution…


Maximising the Benefits of C3 (Command & Control Centre)

A command and control centre (CCC), by definition, centralises the monitoring, control and command of an organisation’s overall operations. It is most often associated with crisis or disaster management in the context of a city or state government body, police or even military agencies. It is also used by universities, transportation departments, utility companies, and any other organisations that need to manage distributed operations. Command centres have been a critical element in the successful management of operations and/or security, and have been transforming with advancements in technology. With the introduction of rapidly evolving new technologies, and new organisational challenges and threats, command centre design and construction have become more complex and challenging than ever before. Today, CCCs need to be modular and should be equipped with correlation rules, process flows, rich algorithms, analytics, reporting, a geospatial platform, an Internet of Things (IoT) platform, and other open platform systems. Since each organisation has its own specific needs and purposes for establishing such infrastructure, the command centre should be highly configurable, scalable and operator friendly. In the case of a city governance or safety body, a social media platform covering voice, text, video, and mobile apps for citizens to interface with the CCC can take it to an entirely new level and provide better, more efficient services to citizens.

Challenges faced by organisations in the planning and operations of CCCs

Perception of command centres today

A CCC is a centre for information collection, analysis, decision making and management. Its primary purpose is to gather and process all the information required to plan and respond – quickly and effectively – to potential emergency incidents. Fig. 1 (next page) depicts the building blocks of a command and control solution, which primarily comprises field sensors as data collection points, database systems as information repositories, and communication systems as means for information dissemination, along with the key modules that empower information analysis and the presentation of outcomes in a command centre application. The following are a few examples of factors driving the need for CCCs:

- Increasing technology dependence, leading to the need for an integrated and efficient control and management platform.
- Efficient data handling needs for big data, data mining, analytics, IoT etc.
- An integrated view to address social, residential, commercial and national security needs.
- The need for reliable, flexible, sustainable, real-time and scalable systems to provide an integrated view of all sensors, compatible with proprietary networks and legacy systems.
- The need for a collaborative work environment across teams working in silos at different locations.
- Disparate systems impacting the operational efficiencies of businesses and driving up costs.
- A structured methodology for incident handling, ensuring effective decision making and response.
- The transition from manual processes to system-defined automated or hybrid processes.

Evolution of command centres

The concept of a command centre can be traced back to the 19th century and has continued to evolve since then. In conjunction with technological advancements, new varieties of threats have also arisen. However, each incident has fuelled innovations in counter-response, resulting in further advancements in technology. Fig. 2 (next page) represents the advancements in threats and counter-response systems over the last three centuries.
Establishment of a command centre

Often, command centres are conceptualised at later stages of establishing the technology components and infrastructure, and in most of these cases they end up as inadequate or unsuitable control rooms that are not able to achieve the organisational goals. The first and foremost step is to ensure that the functional goals and measurable key performance indicators (KPIs) are clearly defined at the pre-design stage itself. The selection of the right technologies and service level agreement (SLA) requirements is essential, as it directly impacts the end results and the budget required for setting up such infrastructure. Once the functional requirements are documented, the requirements in terms of equipment specifications and other IT and non-IT needs can be finalised. One of the key parameters for efficient operation of a command centre is defining an incident and its severity. This primarily helps in identifying the associated stakeholders, operational processes, sensors and systems, and in finalising the steps to be followed as part of the standard operating procedures (SOPs). Too often, the emphasis is placed on the digital aspects alone, while the importance of physical infrastructure design to the operation of a command centre is forgotten. For a monitoring and command centre operating 24×7, 365 days a year, the physical infrastructure should be designed after considering vital parameters such as ergonomics and a seating layout ordered by operational needs, for better collaboration and secure, resilient operations. Once the functional requirements and the physical infrastructure design are clear, the next step is to build capacity within the organisation for operating such technologically advanced systems, in line with the defined goals and KPIs for operations.
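Defining incident severity is, in practice, a lookup: each severity level fixes the stakeholders to involve and the SOP to follow. The table below is a hypothetical severity model for illustration; real levels, stakeholders and SOP identifiers come from each organisation's own incident-classification exercise.

```python
# Hypothetical severity model: level -> stakeholders + SOP identifier.
SEVERITY_SOP = {
    "low":      {"stakeholders": ["operator"], "sop": "SOP-L1"},
    "medium":   {"stakeholders": ["operator", "shift supervisor"], "sop": "SOP-M1"},
    "high":     {"stakeholders": ["operator", "shift supervisor",
                                  "response team"], "sop": "SOP-H1"},
    "critical": {"stakeholders": ["operator", "shift supervisor",
                                  "response team", "management"], "sop": "SOP-C1"},
}

def dispatch(severity: str) -> dict:
    """Look up the stakeholders to notify and the SOP to follow
    for an incident of the given severity."""
    try:
        return SEVERITY_SOP[severity]
    except KeyError:
        raise ValueError(f"unknown severity level: {severity}")
```

Encoding the mapping explicitly is what turns a manual process into the system-defined, automated or hybrid process mentioned above: the command centre application can notify stakeholders and open the right SOP checklist the moment an incident is classified.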
A regular performance assessment and feedback process ensures continuous improvement in the operating efficiency of the command centre by addressing feedback for optimisation in relation to people, processes or systems. The final step is to devise a framework with periodic reporting of well-defined SLAs for measuring the KPIs through performance evaluation.

Key challenges in today’s CCCs

Presence of manual integrations

A key indicator of a wrongly designed command centre is when analysts must manually integrate multiple information feeds to provide operators with the tools they need. This can lead to inefficient utilisation of resources and time. Training operators and improving their efficiency in deriving information from the feeds puts them in a position to respond to events in a more timely manner, and potentially adds additional value to the organisation.

Inconsistent information

In many cases, there is a gap in the exchange of information between command centre operators and field personnel. This results in the loss of crucial time and a loosely prepared response.

Information overload

Many command centres get information from various sources, but…


Honeywell Cyber Security & IP Video Surveillance 2017

Analog video solutions rely on outdated technology. These systems have made way for more secure, IP-based video surveillance systems that provide reliable and cost-efficient solutions in today's information-rich, digital world. Modern IP technology enables effective and manageable video surveillance to protect people, their information and their property, and helps ensure continuous operation. It can also enhance the safety and security of society by preventing costly security incidents. However, the cyber security of IP technology has been challenged by the pace of technology transition and development, creating potential safety and economic risks. Cyber-attacks at the local and global scale are on the rise; according to a 2016 report published by Grant Thornton, the total global financial loss associated with cyber security attacks is estimated at U.S. $315 billion each year. One example of a major cyber-attack occurred in the U.S. in October 2016, when access to many major websites, including Twitter, The Guardian and CNN, was denied. This attack, the largest of its kind at that time, was conducted by the 'Mirai' botnet, built from infected Internet Protocol (IP) video devices on the internet.
Threat and vulnerability
The importance of cyber security in the IP environment is widely recognized. It requires protecting devices, networks, programs, and data from being copied, changed, or destroyed by unintended or unauthorized access. Since video surveillance products such as IP cameras, network video recorders (NVRs), and video management software (VMS) are IP-enabled, they can be accessed from a remote location over the internet, which means they have the same vulnerabilities as other devices and systems in the open IP world. The U.S.
National Strategy to Secure Cyberspace is a report that outlines a five-level threat and vulnerability model covering home/small business, large enterprise, sector/infrastructure, national, and global categories. In the report, the U.S. government expresses concern about network devices being used to attack critical infrastructure; about large-scale enterprises being increasingly targeted by malicious cyber actors, both for the data and the power they possess; and about the fact that cyber vulnerabilities could directly affect the operations of a whole sector or infrastructure. Not only has cybercrime caused significant interruptions for businesses and negatively impacted infrastructure in recent years, it has also led to large-scale data breaches. According to PwC's Global Economic Crime Survey 2016, cybercrime was the second most reported type of economic crime, affecting 32% of organizations in 2016. Furthermore, the average cost of a data breach to organizations is $4 million, up from $3.8 million in 2015. Many countries and international organizations have been working on data-protection legislation, national standards, and sector regulations. These regulatory initiatives will help reduce vulnerabilities and clarify questions of liability.
Business interruption
Business interruption is a type of cybercrime usually launched by inserting malicious code onto a company or infrastructure network, which limits the network's ability to provide service and inhibits the company's ability to conduct business. Malicious code, or 'malware' – viruses, worms, botnets and the like – can be injected into IP devices through their weak points, propagates itself to seek more victims on the network, and steals sensitive information for economic benefit. A botnet, short for 'robot network,' is an aggregation of computers compromised by bots (automated machines or robots).
These bots are controlled by malicious cyber actors to launch Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks against targeted critical infrastructures or enterprises. DoS and DDoS pose a serious threat to business services. In June 2015, hackers grounded ten planes belonging to a Polish airline by launching a DoS attack that blocked flight plans from being sent to the planes. The Mirai attack mentioned earlier is also an example of a DDoS attack.
Data breach
The video system is the core of a security system and contains critical information including system data, deployment, event and alarm information. When this data is compromised, it is called a data breach, and this crime can cause significant security and safety risks. Video surveillance in private and public applications may capture and record video images of people not relevant to security and safety incidents. Many countries are working toward privacy-protection legislation to prevent privacy breaches by intruders and inside employees. For example, in the U.S., 47 states have breach-notification laws in effect, and in Ireland it is illegal to post video surveillance footage on the internet.
Compliance and liability
With cyber legislation, national standards and sector regulations in place, regulatory compliance becomes a rigid entrance requirement for IP systems, including video surveillance. It shapes the framework for product design, sales, industry entrance, system integration, and user operation. Meanwhile, there is also a market trend of increased cyber insurance sales, spurred by awareness of broader cyber risks. A vulnerable system will be forced to be upgraded or replaced for regulatory compliance, or the customer will have to pay a much higher premium every year to cover the liability. This is why Honeywell is committed to providing a forward-looking, cyber-secure video solution for its partners and customers.
Honeywell cyber security solution
Many businesses haven't conducted a cyber-threat analysis and don't know how vulnerable they are to cyber threats. Honeywell can help by analyzing customers' problems, then implementing best practices to execute optimal product and system design. Honeywell has also developed cyber-security management processes and released vulnerability-reporting policies to help its customers face a growing cyber-security challenge.
Rigorous system hardening
At the product and system design and development phases, Honeywell uses in-house and third-party testing tools to evaluate product vulnerabilities and fix issues to harden the system. To mitigate the risks associated with malicious code, data privacy breaches and system misconfiguration, Honeywell employs the Information and Communication Technology (ICT) industry's security guidelines, which address specific video surveillance requirements. Since IP video surveillance can be installed on both private and public networks, the cyber threats it is exposed to vary accordingly. It is necessary to target system hardening according to…


A New Alternative to Video Transmission Over Ethernet For Industrial Security Applications

By Bruce M. Berman – ComNet Vice President of New Business Development
The industrial security market has been witnessing a gradual transition to video, audio, and data transmission over Ethernet since the beginning of this decade. This change has impacted numerous other markets as well, including the transportation, factory automation/industrial control, and utility/electric power transmission and distribution markets. Prior to the introduction of video over IP (Internet Protocol), a separate network of analog or digitally encoded video was typically used to haul video from the edge of the network back to the monitoring location. Audio for telephony or a communications intercom system, and RS-232, RS-422, or RS-485 serial data – commonly used for CCTV camera pan-tilt-zoom (PTZ) control or the card-access element of the system – were transmitted from the field devices back to the control center on other dedicated, parallel networks (see figure 1). The transmission medium of choice was usually optical fiber, for reasons of robustness and bandwidth. These technologies and system design approaches are still very viable solutions for hauling high-quality full-motion video, audio, and data, and when optical fiber is employed as the communications medium, extremely long transmission distances and electrically noisy environments are easily accommodated. The difficulty of installing and maintaining two or more parallel and technically diverse networks – one for video, one for audio, and another for serial or other data – has motivated many users to consider Ethernet as their preferred communications networking system. The relative ease of integrating the key components of the system onto a common platform has largely made Ethernet the networking solution of choice in many markets, including industrial security.
With the advent of Ethernet, it became practical and cost-effective to consolidate the video, audio and data elements of a security communications subsystem onto a single network (see figure 2). Although in theory this should be the ideal platform for the typical local or wide area communications network used for industrial security and other surveillance applications, in practice several recurring issues are frequently encountered by the systems integrator and end-user responsible for the installation, maintenance, and operation of the system. When analog video is to be deployed onto the network, a video encoder is required to convert the camera's video output into an electrical signal compatible with transmission over an Ethernet-based network. These encoders employ signal compression technology to reduce the bandwidth occupied by the video, so as to increase the number of video, audio, or data signals that may share the finite bandwidth available on the network. Present video compression standards include MPEG-2, MPEG-4 and H.264, with MPEG-4 currently the most widely used. The H.264 standard is newer and offers enhanced video quality with the benefit of reduced bandwidth. MPEG-2 was originally developed for the commercial television broadcast industry, and although capable of superb video quality, its bandwidth requirements are large; as such, it has not been widely accepted for use within communications networks employing Ethernet. Regardless of the compression standard used, hardware decoders or decoding software compatible with the encoded video are required for viewing. One major issue is the relative lack of MPEG-4 or H.264 video encoders that are environmentally hardened for installation in an out-of-plant operating environment.
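The bandwidth arithmetic behind this trade-off can be illustrated with a rough budgeting sketch that estimates how many compressed streams fit on a shared link. The per-codec bitrates below are assumed, typical figures for illustration only, not vendor specifications.

```python
# Illustrative bandwidth budgeting for compressed video streams on a
# shared Ethernet link. Per-stream bitrates are assumed ballpark values.
TYPICAL_STREAM_MBPS = {
    "MPEG-2": 15.0,   # broadcast-quality, high bandwidth
    "MPEG-4": 4.0,    # assumed mid-range figure
    "H.264": 2.0,     # newer codec, better quality per bit
}

def max_streams(link_mbps, codec, utilisation=0.7):
    """Streams that fit on a link, keeping headroom for other traffic.

    utilisation caps the share of raw link capacity given to video,
    since a saturated Ethernet segment degrades badly.
    """
    usable = link_mbps * utilisation
    return int(usable // TYPICAL_STREAM_MBPS[codec])

for codec in ("MPEG-2", "MPEG-4", "H.264"):
    print(codec, max_streams(100, codec), "streams on a 100 Mbps link")
```

Under these assumptions, a single 100 Mbps segment carries only a handful of MPEG-2 streams but dozens of H.264 streams, which is why codec choice dominates channel counts on Ethernet.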
In this kind of environment, issues such as ambient operating temperature, voltage transient protection, vibration, mechanical shock, and humidity with condensation must be considered to ensure that the video encoders or other field equipment are capable of providing long-term reliability and stable performance. The market is full of suppliers that build quality encoders designed for deployment in benign, conditioned operating environments such as when the equipment is fielded in an adequately heated and cooled communications equipment room. However, those manufacturers that build hardware capable of withstanding the extended operating temperature range, humidity with condensation, and electrical voltage transients and noise encountered in an outdoor or out-of-plant environment are few and far between, and the equipment is costly as a result. The MPEG-4 and H.264 video compression standards are suitable for transmission over Ethernet. As these standards rely upon video compression, the video in these standards is not transmitted in real time, and exhibits a certain amount of latency depending upon the compression standard utilized. Some users may encounter potential legal issues with video transmission systems that are not real-time. Other users may have operator issues with the time lag or delay between executing a pan-tilt-zoom command, and the actual execution of the command as viewed on the CCTV monitor. Full-motion 30 frames per second true broadcast-quality video with zero latency is not achievable considering the current state of Ethernet-based systems, and significant system bandwidth is required to achieve acceptable video quality. The high system bandwidth requirement imposed by the video ultimately limits the total number of video channels and other signal sources that may be inserted onto the Ethernet platform. 
Many end-users have been disappointed with the video quality of their video-over-Ethernet system, especially when the video is viewed on large, highly revealing wall monitors. In addition, some video surveillance or monitoring applications mandate the use of high-resolution cameras, and much of the resolution provided by these cameras may be lost when the video is compressed to MPEG-4 or H.264 and inserted onto the network. Although Ethernet is based upon the industry-accepted IEEE 802.3 standard, and in theory any manufacturer's Ethernet equipment should be completely interoperable with any other manufacturer's, in practice this is very frequently not the case. Interoperability issues require a trained IT professional to resolve, and in some cases resolution is not possible. Trained IT or technical personnel are required for the initial installation, setup, and long-term maintenance of the system; the long-term costs associated with this are obvious and frequently not within the budget of many users. They must be considered as part of the overall life-cycle cost of owning and operating the…


Enhanced Ethernet Technology (ePoE)

The rapidly developing surveillance industry has given rise to a significant number of solutions and technologies that can be divided into two main areas: analog technology and network surveillance technology. Analog surveillance systems adopt a point-to-point connection to transmit data directly from one device to another – an analog camera connected by coaxial cable to the port of a DVR allows the camera feed to be viewed, thus achieving its surveillance functionality. If the camera feed is to be viewed remotely, a remote network connection must be established through the DVR (figure 1). A network video monitoring system adopts modern Ethernet technology and uses a LAN connection between camera devices and an NVR. All front-end network cameras and back-end network storage devices are connected to the same Ethernet network, which is then used for communication between the devices. Any node on the network may access any device on the network as long as it obtains authorization from the accessed device (figure 2).
Pros and cons
The main advantages of an analog video system are convenient plug-and-play connections and no transmission delay. Its disadvantages are that it is difficult to improve resolution and hard to achieve unified management in large-scale systems. To overcome these disadvantages, a new generation of analog monitoring technologies has emerged, including CVI, AHD and TVI, which provide HD resolutions for analog systems. As for unified system management, hybrid DVRs have been created alongside other unified management platform solutions. In a network video monitoring system, the advantages lie in easy unified management, flexible upgrades to higher resolutions, and remote PoE power supply. The disadvantages include long video feed delays and network bottlenecks caused by multiple devices sharing the same bandwidth.
As with analog devices, network video monitoring technologies are also continuously being upgraded as the industry develops. For example, the new generation of H.265 encoding technology has greatly reduced network bandwidth usage, and stronger encoding processors have reduced video codec delay. Network camera video delay is currently down to just 150 ms, unnoticeable when viewing video feeds. Overall, along with the differentiation of the industry and technological development, the two systems are steadily merging, taking the best of both technologies. From a macro perspective, there are two main demands driving the development of the security industry: higher resolutions and greater networking. However, there are still a large number of analog monitoring systems on the market that use standard-resolution basic coax wiring. Additionally, the cost of labour for transforming such a system is continuously increasing while device costs are decreasing, so full rewiring during a system upgrade has become less viable, giving rise to a third requirement: coaxial upgrade. So far, the new generation of HD analog technologies such as CVI, AHD and TVI has only satisfied the first and third requirements (HD and coaxial upgrade). In line with the future development of the Internet of Things (IoT), connecting devices over LAN is becoming an irresistible trend. Therefore, it is necessary that analog monitoring is transformed into network management in order to improve the operability of centralized management and dispatch – the second requirement, networking. Current analog technology can hardly meet this requirement; equally, network technology can hardly achieve the third requirement of coaxial upgrade.
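The practical effect of a codec's bitrate reduction is easiest to see in recorder storage sizing. The 4 Mbps H.264 stream and the roughly 50% H.265 saving used below are assumed ballpark values for illustration, not measured or vendor-quoted figures.

```python
# Rough storage sizing for continuous NVR recording, and the effect of
# a codec's bitrate reduction on it. All bitrates here are assumptions.
def storage_gb(bitrate_mbps, days, cameras=1):
    """Continuous-recording storage in decimal gigabytes."""
    seconds = days * 24 * 3600
    bits = bitrate_mbps * 1e6 * seconds * cameras
    return bits / 8 / 1e9  # bits -> bytes -> GB

h264 = storage_gb(bitrate_mbps=4.0, days=30, cameras=16)
h265 = storage_gb(bitrate_mbps=2.0, days=30, cameras=16)  # assumed ~50% of H.264
print(f"H.264: {h264:.0f} GB, H.265: {h265:.0f} GB for 16 cameras over 30 days")
```

Under these assumptions the bitrate saving translates directly into halved disk capacity for the same retention period, or doubled retention on the same disks.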
Coaxial Ethernet technology that converts network signals onto coax does already exist on the market, but it requires multiple components at a high cost, making it unviable as a universal technology. The most urgent need in the current market is therefore a technology that provides network-over-coax functionality at a low cost.
Dahua's ePoE
Dahua's patented enhanced Ethernet (ePoE) consists of two core technologies. The first adopts advanced physical-layer 2D-PAM3 coding modulation, and can achieve full-duplex transmission over 800 meters at 10 Mbps, or 100 Mbps at shorter distances, via twisted-pair or coaxial cable. It also supports PoE and PoC power supply over both transmission media, greatly simplifying construction and wiring. The second adopts a Dahua-patented application-layer synchronous negotiation mechanism that guarantees self-adaptive compatibility of enhanced Ethernet by first implementing time-base synchronisation and then mode negotiation. When connecting to matching devices via any medium, it automatically determines the current mode of use, whether enhanced mode or general network mode. The system is thus fully self-sustaining and requires no manual configuration, achieving true plug and play.
Application scenarios
Enhanced Ethernet technology (ePoE) is automatically compatible with three connection modes operating over the same network at the same time: traditional network, long-distance network, and coaxial network. As shown in figure 3, a traditional HD upgrade project can reduce wiring and construction by reusing the original analog coaxial wiring, equipped with HD analog cameras and an HD DVR, and connecting to back-end services via a network switch. Meanwhile, additional network cameras (IPCs) are connected to the back end directly via the network switch, forming a centralized surveillance system.
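The distance/speed trade-off described above can be sketched as a simple selection rule. This is a hypothetical illustration using the figures quoted in the text (100 Mbps on short runs, 10 Mbps out to 800 m, with the 100 m threshold taken from standard Ethernet reach); the actual ePoE negotiation mechanism is proprietary and not reproduced here.

```python
# Hypothetical sketch of the distance/speed trade-off described above.
# Thresholds mirror the figures quoted in the text; this is NOT the
# real, proprietary ePoE negotiation protocol.
def select_link_mode(cable_length_m):
    """Pick a link mode for a given cable run, per the quoted limits."""
    if cable_length_m <= 100:
        return "standard Ethernet, 100 Mbps"   # normal 802.3 reach
    if cable_length_m <= 800:
        return "extended mode, 10 Mbps"        # ePoE long-distance mode
    return "out of range - add a repeater or use fibre"

print(select_link_mode(90))
print(select_link_mode(500))
```

The point of the real mechanism is that this choice happens automatically at the application layer, so an installer never has to configure the mode by hand.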
This makes mixed monitoring of both analog and digital possible, although centralized management of configurations is quite complex. The new generation of Dahua enhanced Ethernet technology (ePoE) is compatible with traditional Ethernet networks, so it can reuse original switches, IPCs and other network devices, requiring no large-scale device upgrades. In addition, ePoE converts network signals to and from coax through a passive converter, which allows the reuse of existing coaxial wiring and greatly reduces upgrade costs. Power over Coax (PoC) functionality eliminates the need for renovations to add power wiring for new cameras. Finally, future surveillance area expansion can be performed without adding repeaters – simply use additional cables to add new monitoring devices to the 800 m-capable network. ePoE makes…


Taking the Pulse of City Traffic

Smart sensors provide insight into city traffic dynamics and the basis for informed decisions and traffic improvements. The problem of traffic congestion in cities is not likely to go away in the next decades. To get a grip on the problem, city authorities around the world are increasingly making use of smart technologies to gain real-time insight into their traffic situation. By collecting traffic data, they hope to make informed decisions that improve the quality of life of anyone living, working and travelling in the city. Economic development and quality of life have always been a difficult marriage for city authorities and urban planners. Traffic congestion is considered a necessary by-product of economic growth, but at the same time, idling times and traffic jams are very costly to the economy and thus pose a serious threat to further economic growth. In an estimate by the U.S. Environmental Protection Agency (EPA), long-duration idling consumes over one billion gallons of diesel fuel annually, at a cost of over $5 billion. In addition, congestion frustrates traffic users and results in more emissions, more pollution and increased health risks.
Enabling smart cities
Next to promoting public transportation, carpooling and ride sharing, city authorities increasingly turn to smart technologies to make city traffic run more smoothly and efficiently. One measure that has already proven very effective in many cities is the installation of traffic sensors to control the operation of traffic signals. By monitoring traffic at intersections, traffic signal schemes can be adapted according to the amount of traffic, which can reduce vehicle idling times and relieve city traffic.
As an example, the UK's national Automobile Association stated that cutting queuing time by just one minute per day on three major roads leading into a city could save more CO2 than switching off 2,000 streetlights. The use of road sensors and software systems fits into the broader idea of smart cities, a vision that aims at better managing the city's assets – transportation, law enforcement, power plants, hospitals, and many more – and as a result improving the quality of life in the city. By integrating smart technologies into the city infrastructure, real-time traffic data can be collected to provide a basis for smarter decisions on traffic management, parking management, urban planning, energy management and more.
Smart city sensors
FLIR Systems has been providing smart sensors to traffic authorities for many years to collect real-time data at intersections and arterial roads, which are typical traffic bottlenecks. Smart sensors from FLIR are based on visual CCTV and thermal imaging technology and can be used to measure a variety of parameters and provide valuable insight into traffic flows. FLIR video detection sensors are a highly reliable and accurate alternative to loops and other detection technologies for signalized intersection control and management. FLIR's integrated visual sensors provide information on the presence of vehicles approaching or waiting at an intersection, information that can be used to control the traffic lights more efficiently. Pedestrian presence detectors give pedestrians the appropriate green time and visibility, so the mobility and safety of both motorists and vulnerable road users are guaranteed. Thermal imaging sensors use the heat energy emitted by vehicles and bicyclists to distinguish between the two, making it possible to adapt green times according to the specific road user type (bike or other vehicle).
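As a toy illustration of how such detector data could drive signal timing, green time can be split among approaches in proportion to detected volume. The cycle length and min/max bounds below are assumed values; real signal controllers use far more sophisticated schemes.

```python
# Illustrative green-time allocation proportional to detected volume.
# Cycle length and bounds are assumed values, not a real controller's.
def allocate_green(volumes, cycle_s=90, min_green=10, max_green=60):
    """Split a fixed cycle among approaches by detected traffic volume."""
    total = sum(volumes.values())
    greens = {}
    for approach, v in volumes.items():
        # Proportional share; fall back to an even split if no traffic.
        share = cycle_s * v / total if total else cycle_s / len(volumes)
        greens[approach] = max(min_green, min(max_green, round(share)))
    return greens

counts = {"north-south": 120, "east-west": 60}  # vehicles detected per cycle
print(allocate_green(counts))
```

Even this crude proportional rule shows the principle: the busier approach automatically receives more of the cycle, bounded so that no approach is starved entirely.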
Traffic sensors can also be used to collect a variety of relevant traffic data, such as volume, average speed and occupancy.
Better insight, better decisions
The collection of various types of traffic data ultimately allows traffic authorities and urban planners to make more informed decisions on a wide variety of topics.
Intersection traffic control
Traffic data can be used to better manage traffic lights and provide dedicated signal schemes according to traffic volume. Traffic authorities can use real-time data to keep traffic moving and impose variable speed limits or dynamic green waves. Small adaptations of signal times at intersections based on historical data can already have a large impact on vehicle idling times. City authorities could, for example, choose to adapt signal times according to the time of day, giving more green time to vehicles in one particular direction during rush hour. Real-time traffic data can be communicated on electronic message signs or traveller apps and help traffic users make informed decisions about their travel options. Signal schemes could also be adapted in favour of pedestrians, for example near schools or sports arenas. Thermal imaging sensors can even distinguish between vehicles and bicyclists, allowing traffic authorities to provide a dedicated signal scheme that lets bicyclists cross the intersection safely.
Priority for public transportation or emergency vehicles
Measures to improve the quality of life in the city often coincide with the promotion of public transportation. Smart technology can further support this vision by giving public transportation vehicles priority rights in traffic and enabling them to make movements that general traffic is not allowed to make. This could include priority bus lanes or dedicated signal schemes for buses. Priority can also be given to reduce delays for emergency vehicles in operation.
When activated, an emergency vehicle priority scheme can adapt traffic signals ahead of an emergency vehicle's arrival to provide a green wave, allowing the emergency crew to arrive at the destination in the shortest possible time while also reducing the need to cross intersections against a red light.
Energy management
Lighting accounts for 19% of the world's energy use, and 6% of global greenhouse emissions derive from this energy. Light pollution is a global problem caused by inefficient, intrusive and unnecessary use of artificial light. Smart lighting is an efficient way to save energy and reduce the amount of light along our roads. By using smart occupancy sensors, public street lighting can be…


‘POWERS’ The Lighting Standard For Video Surveillance

The security industry has long suffered from a lack of accepted standards for the measurement of illumination distances. Published illuminator distances have been left to the subjective interpretation of individual manufacturers, resulting in varying claims. This has made it virtually impossible for consultants, installers and end-users to specify surveillance lighting with confidence, reliably compare products, and achieve a consistent level of performance. After facing this challenge for the last 25 years, Raytec have tackled the problem head on by publishing the industry's first open and transparent standard for surveillance lighting: POWERS.
The need for a lighting standard
In conjunction with your choice of camera and lens, the right lighting is the most powerful tool in achieving outstanding night-time performance from any professional video surveillance system. But with so many different lighting solutions available on the market today, it can be difficult to accurately compare the performance of one illuminator with the next. At one level, this may seem fairly straightforward: just look at the claimed distance performance from the manufacturer. However, until now there has been no standardized way to evaluate different illuminators and their performance claims – particularly for infra-red lighting – and, crucially, there have been no standardized testing methods for manufacturers to adhere to. Traditional lighting industries (street, urban and commercial lighting, etc.) have long-established, standardized testing methods allowing all illuminators to be accurately compared. Sadly, this is simply not the case with lighting for surveillance and security.
Why is it so difficult to compare illuminators?
Manufacturers currently publish very limited technical information on the performance of their surveillance lighting – most quote only a maximum distance.
Without standardization, methods for calculating performance, especially distance, have always been left open to interpretation, leading to varying claims. It is not uncommon for two illuminators with a similar light output to be quoted with wildly different maximum distances, because each manufacturer has a different opinion on what counts as acceptable image quality. One manufacturer may take a realistic view and quote 150 m (492 ft), while the other takes an overly optimistic view and rates the same product for 250 m (820 ft). Since both products actually deliver exactly the same light output, it is obvious that the illuminator from the realistic manufacturer is going to provide a much better picture at its quoted distance. But where does the agreed, acceptable performance benchmark lie? Let's look at this another way: same light output, different distance – which is correct?
1st problem: most manufacturers don't provide calculation data to support their distance claims. Consider images produced with two different infra-red illuminators (subject at 70 m). Both could claim to deliver 70 m+. Clearly one significantly outperforms the other, with more clarity and detail, ultimately leading to better subject identification and analytics. But it's all subjective – without calculation data, we don't know how much light each illuminator is actually delivering at its maximum distance.
2nd problem: most manufacturers don't state the camera specification used. This is especially important if you are comparing one illuminator against another that promises almost identical image quality at a similar distance. How do you know that both illuminators are truly equal? In reality, one illuminator may have been tested with a much higher-performance camera, which in effect boosts its capability. Beware – this is often how a lower-performance, smaller illuminator can 'appear' to outperform a higher-performance illuminator on paper.
To claim better distances, a lighting manufacturer may have used a much more expensive and sensitive camera, a much more expensive and higher-performing lens, or a more highly reflective surface, or accepted a lower-quality picture, to quantify the published distance.
Illuminator performance – the important information
Distance is only one area of lighting performance to consider. Other criteria include angle, flexibility and adaptability, integration capability, consumption, environmental impact, reliability, lifetime, warranty, customer support, and lighting partner credentials. In short, when looking at an illuminator for video surveillance, people want to know: How far does the illuminator shine? How wide does it shine? Is it efficient and reliable? What features does it have? Will the manufacturer support me with warranty, certification and technical support? And so the POWERS standard was born – to answer all of these questions and more.
Introducing the POWERS standard
As the world leader in LED lighting, Raytec has taken the lead and published the industry's first lighting standard for video surveillance: POWERS. The standard is based on decades of practical industry experience and sets a lighting performance benchmark backed by consistent, scientific testing methods. It highlights the relevant criteria to consider, enabling security professionals to more easily specify and reliably compare the performance of different illuminators. It also helps them understand how their chosen lighting product will perform.
'P' for Peak Power
The first 'P' of POWERS stands for Peak Power. It is the measurement that should be used by all manufacturers to scientifically calculate their quoted performance distances, and is therefore one of the most important areas of the standard. The Peak Power of an illuminator is measured at the centre point of the beam using an appropriate light or power meter.
At Raytec, we measure all our units at 3 metres from the light source.
Why Peak Power?
The peak power of your illuminator and the lighting power you wish to achieve on scene, together with the principles of the inverse square law (a law at the heart of scientific lighting calculations), are used to calculate the true usable distance of your illuminator. Calculating maximum distances should be based on adhering to a consistent level of lighting power on scene at your required distance – a vitally important part of the POWERS standard. This ensures that you achieve the same quality of image from your illuminator, every time. But what lighting levels should you adhere to? Raytec recommend a minimum lighting power on scene of 0.35 μW/cm²…
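The inverse square law calculation described here can be made concrete with a short worked example. The 3 m reference point and the 0.35 μW/cm² on-scene target come from the text; the sample peak-power reading is illustrative, not a figure for any real Raytec product.

```python
import math

# Worked example of the inverse square law calculation described above:
# irradiance falls off with the square of distance from the source, so
# a peak reading at 3 m determines the level at any other distance.
# The 700 uW/cm^2 sample value is illustrative, not a product figure.
def on_scene_power(peak_at_3m_uw_cm2, distance_m):
    """Irradiance at a given distance, from a 3 m reference reading."""
    return peak_at_3m_uw_cm2 * (3.0 / distance_m) ** 2

def max_distance(peak_at_3m_uw_cm2, required_uw_cm2=0.35):
    """Furthest distance that still meets the required on-scene level."""
    return 3.0 * math.sqrt(peak_at_3m_uw_cm2 / required_uw_cm2)

peak = 700.0  # assumed reading at 3 m, in uW/cm^2
print(f"usable distance: {max_distance(peak):.0f} m")
print(f"level at 150 m: {on_scene_power(peak, 150):.2f} uW/cm^2")
```

With these assumed numbers the usable distance works out to roughly 134 m; stretching the claim to 150 m would drop the on-scene level below the 0.35 μW/cm² target, which is exactly the kind of over-optimistic rating the standard is designed to expose.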
