Blogs

As enterprise Kubernetes deployments continue to scale, aligning infrastructure limits across compute, storage, and orchestration layers becomes critical. In environments combining Red Hat OpenShift Container Platform (RHOCP), Cisco Unified Compute System (UCS), and Hitachi Virtual Storage Platform (VSP) with Hitachi Container Storage Interface (CSI), Logical Unit Number (LUN) scalability and pod density are tightly coupled variables that directly impact performance, stability, and operational predictability. This blog outlines key configuration limits and design considerations to help ensure a balanced, scalable deployment. Cisco UCS Virtual Interface ...
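The coupling between LUN scalability and pod density described above can be sketched as back-of-envelope arithmetic: several independent per-node ceilings apply at once, and the effective pods-with-persistent-volumes density is the tightest of them. The limit values below are illustrative placeholders, not the documented RHOCP, UCS VIC, or VSP/CSI maximums; substitute the figures for your versions.

```python
# Back-of-envelope sizing: how many pods with one persistent volume each
# can a worker node host before it exhausts a per-node volume limit?
# All numbers here are ILLUSTRATIVE placeholders -- look up the documented
# maximums for your UCS VIC, Hitachi CSI, and RHOCP releases.

def max_pv_pods_per_node(csi_volumes_per_node: int,
                         luns_per_vhba: int,
                         vhbas_per_node: int,
                         system_reserved_luns: int = 0) -> int:
    """The effective ceiling is the tightest of the per-node limits."""
    fabric_limit = luns_per_vhba * vhbas_per_node - system_reserved_luns
    return min(csi_volumes_per_node, fabric_limit)

# Hypothetical numbers for illustration only:
ceiling = max_pv_pods_per_node(csi_volumes_per_node=256,
                               luns_per_vhba=255,
                               vhbas_per_node=2,
                               system_reserved_luns=10)
print(ceiling)  # here the CSI per-node volume limit (256) is the binding constraint
```

The useful habit is the `min()` across layers: raising one limit (say, adding vHBAs) changes nothing unless it was the binding constraint.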
Overview & Objectives: This blog offers a comprehensive, step-by-step guide for provisioning Fibre Channel Non-Volatile Memory Express (FC-NVMe) storage disks, specifically Non-DRS volumes, from the Hitachi Virtual Storage Platform One Block 20 (VSP One B20) family to IBM POWER servers running the IBM AIX operating system, using Hitachi RAID Manager (CCI). Beyond basic provisioning, this guide explores essential configuration tasks, including:
· Setting up multipathing for high availability
· Modifying default device parameters in the Hitachi AIX Object Data Manager (ODM)
· ...
Author: @Bhakta Sahoo Introduction This blog walks through the phases of CyberSense policy execution. CyberSense Overview CyberSense helps organizations recover from ransomware quickly and reliably by ensuring trusted data integrity. It’s the only cyber resilience solution that uses AI trained on real ransomware to detect ransomware-induced data corruption with 99.99% accuracy, so you can identify clean recovery points, restore clean data, and minimize the impact of a cyber-attack. Policy Job Phases and Actions The CyberSense workflow follows a sophisticated, multi-phase process that begins with system authentication ...
Author: @Bhakta Sahoo Introduction This blog provides a detailed overview of the installation, configuration, and volume scanning process in Index Engines’ CyberSense. CyberSense is a cybersecurity and data integrity software solution developed by Index Engines. It is designed to detect ransomware attacks, data corruption, and other malicious activities in block data by analyzing Thin Image Advanced safe snaps. Environment Figure 1: Index Engines environment ...
Introduction Universal Replicator (UR) operations involve two storage systems. One system is located at the primary site, called the main control unit, and the other is located at the secondary site, called the remote control unit. The primary storage system controls the P-VOL and the following operations:
· Host I/Os to the P-VOL
· P-VOL data copy to the master journal
The secondary storage system controls the S-VOL and the following operations:
· Initial copy and update copy between the P-VOL and the restore journal
· Journal commands to the primary storage system
· Journal data copy from the master journal to the restore ...
What is a DRS no-ADR Volume? A DRS no-ADR volume is a specialized storage volume in Hitachi VSP One BHE systems that supports Thin Image Advanced (TIA) pair operations (along with Shadow Image and remote replication products) but does not apply capacity saving features such as compression or deduplication. A DRS no-ADR volume enables its control data (metadata) to track the data management of the Redirect-on-Write (ROW) snapshot function. When working with DRS no-ADR volumes, a temporary increase in pool usage capacity is expected, which should be considered during capacity planning. Managing DRS no-ADR Volume ...
In today’s digital enterprises, software delivery speed defines competitiveness. Organizations are modernizing applications, moving to microservices, and embracing hybrid cloud models to gain flexibility and scale. Yet one persistent challenge slows DevOps teams everywhere: storage. Storage is the backbone of the CI/CD pipeline, powering Git operations, builds, tests, deployment artifacts, and stateful application data. But in hybrid cloud environments, storage often becomes the bottleneck. Public-cloud block storage can deliver inconsistent performance, on-prem storage can be difficult to integrate with cloud-native workflows, and replication across sites ...
As enterprises accelerate their shift toward hybrid cloud, one requirement has become non‑negotiable: a storage platform that delivers enterprise‑grade consistency, performance, and resilience—no matter where workloads run. Red Hat OpenShift has established itself as the leading enterprise Kubernetes platform for deploying modern, containerized, and stateful applications. But as organizations run more databases, analytics engines, and event‑streaming workloads on OpenShift, the choice of persistent storage becomes critical. While Google Cloud Hyperdisk is a strong native option, many businesses quickly run into its architectural limitations when running ...
Automating SAP HANA Full-Stack Installation by Hitachi Vantara If you've ever been part of an SAP HANA deployment project, you already know the drill. The storage team carves out LUNs. The Linux team configures kernel parameters. The SAP Basis team downloads gigabytes of installation media, manually runs the installer, and hopes nothing breaks. It's a multi-day, multi-team effort, and somewhere in the middle, someone always misses a step. What if you could hit one command and watch the whole thing deploy itself? That's exactly what we built with the SAP HANA Full-Stack Installation Ansible Playbook Pipeline , a fully automated, ...
Why We Built This Enterprise storage does not live in isolation. Servers, networks, applications, and security systems are already monitored inside SIEM platforms, and Splunk is the most widely deployed among them. But storage? Storage has always been the missing piece. Teams managing Hitachi VSP One Block storage arrays had rich, granular metrics available inside VSP 360, but that data never made it into the same operational dashboards where the rest of the infrastructure was visible. We believed this gap was worth closing for every customer running Hitachi VSP One Block storage arrays in a Splunk-centric environment. Storage observability should ...
This blog describes how Windows Server Failover Clustering (WSFC) on Windows Server 2025 can use Storage Spaces Direct (S2D) volumes and external SAN volumes on Hitachi VSP One Block side-by-side within the same cluster. The intent is to reflect a joint Hitachi Vantara + Microsoft approach that preserves customer choice: Microsoft provides the virtualization and clustering platform primitives, while Hitachi Vantara provides enterprise SAN storage and associated operational patterns for performance, resiliency, and data services. Key idea: This mixed topology is a supported, flexible architectural pattern (described as “Hyperconverged ...
Introduction Hitachi Ansible modules for VSP One Block and VSP One Object provide a simplified and automated way to configure and manage Hitachi storage systems, enabling seamless integration with Ansible playbooks and workflows. These modules are used to perform specific tasks within playbooks. Below are some best practices for creating playbooks with Hitachi’s Ansible modules:
· Encrypt sensitive data
· Simplify tasks with Ansible roles
· Organize tasks and handlers
· Use dynamic inventory
· Documentation
· Test and validate playbooks
The sections below will dive into each of these best practices. Encrypt Sensitive Data ...
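As an illustration of the first and last practices above (encrypt sensitive data; test and validate playbooks), here is a minimal sketch of a pre-commit-style check that flags plaintext secrets in playbook text. This is a hypothetical helper, not part of Ansible or the Hitachi modules; in practice, values it flags would be stored with `ansible-vault encrypt_string` instead.

```python
import re

# Illustrative lint (an assumption, not an Ansible feature): flag playbook
# lines that appear to assign a plaintext secret. Lines whose value is an
# `!vault` tag are already encrypted and are skipped.
SECRET_KEYS = re.compile(
    r'^\s*(\w*(password|secret|token|api_key)\w*)\s*:\s*(?!!vault)\S.*',
    re.IGNORECASE | re.MULTILINE,
)

def find_plaintext_secrets(playbook_text: str) -> list[str]:
    """Return the offending lines so CI can fail with a useful message."""
    return [m.group(0).strip() for m in SECRET_KEYS.finditer(playbook_text)]

playbook = """
- hosts: storage
  vars:
    storage_user: admin
    storage_password: hunter2   # plaintext -- should be vaulted
    api_token: !vault |
      $ANSIBLE_VAULT;1.1;AES256
"""
print(find_plaintext_secrets(playbook))  # flags only the plaintext password line
```

Wiring a check like this (or `ansible-lint`) into CI catches the mistake before a playbook with embedded credentials ever reaches version control.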
Introduction Every cloud migration project seems to start with the same document: how do we get to the cloud? It’s usually detailed and step-by-step. The document that’s often missing is just as important: how do we get back out if we need to? Hitachi Data Protection Suite (HDPS) is typically viewed as a backup and recovery platform, but it can also be used as a practical migration tool. If migration is the act of bringing workloads up somewhere else with confidence, then a platform built for reliable restore operations can become the backbone of that move. This approach has two big advantages: you can rehearse the migration before cutover, and you can treat “exit” as a planned ...
0 comments
2 people like this.
Hitachi Data Protection Suite (HDPS) VMware Backup Transport Modes: NBD vs HotAdd vs SAN A practical guide to choosing the right data path for faster, safer VMware backups and restores with Hitachi Data Protection Suite. When a VMware backup “feels slow,” the bottleneck is often not your backup target or your dedupe ratio; it is the path your backup infrastructure uses to read virtual disk blocks out of vSphere. In VMware image-level protection, that path is controlled by the transport mode. The same HDPS storage policy can perform very differently depending on whether the data is being pulled over the LAN (NBD), read locally inside the cluster (HotAdd), ...
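The trade-offs above can be condensed into a rough decision helper. This is a sketch of the general VADP transport-mode guidance only, not HDPS's actual selection logic; the function name and rules are assumptions, and the authoritative support matrix (for example, SAN mode versus NFS or vSAN datastores) lives in the HDPS documentation.

```python
# Illustrative decision helper for VMware (VADP) transport-mode selection.
# These rules encode common guidance only; they are NOT the HDPS algorithm.

def pick_transport_mode(proxy_is_vm: bool,
                        proxy_has_san_access: bool,
                        datastore_type: str) -> str:
    """Return the usually-preferred transport mode for a backup proxy."""
    # SAN: a physical proxy reads VMFS LUNs directly over FC/iSCSI,
    # bypassing the ESXi host's management network entirely.
    if proxy_has_san_access and not proxy_is_vm and datastore_type == "vmfs":
        return "SAN"
    # HotAdd: a proxy VM hot-attaches the target VM's disk snapshots to
    # itself, so reads stay inside the cluster instead of crossing the LAN.
    if proxy_is_vm:
        return "HotAdd"
    # NBD: universal fallback -- blocks travel over the management LAN.
    return "NBD"

print(pick_transport_mode(proxy_is_vm=False, proxy_has_san_access=True,
                          datastore_type="vmfs"))  # SAN
print(pick_transport_mode(proxy_is_vm=True, proxy_has_san_access=False,
                          datastore_type="nfs"))   # HotAdd
```

The point of the sketch is the ordering: prefer paths that avoid the management LAN when the topology allows it, and treat NBD as the always-works fallback.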
Organizations running scale-up or scale-out SAP HANA TDI often provision storage manually through storage GUIs or CLI tools. Different storage models require different tools and workflows, which is time-consuming and prone to human error. In scale-out SAP HANA deployments, this challenge grows further because storage must be configured individually for multiple HANA nodes, leading to significant manual work and extended setup time. With SAP HANA tailored data center integration (TDI), each installation is customized by assembling hardware, operating system, and hypervisor (optional) from SAP-certified components. SAP HANA Tailored Data Center Integration: Overview ...
Introduction Modern web applications need secure, smooth, and user-friendly login systems. Building authentication from scratch is risky, complex, and time-consuming. This is why most production-grade applications rely on SSO (Single Sign-On) solutions like Okta. This article explains how Okta SSO works in a React application, using simple language, practical examples, diagrams, and minimal code. The goal is to help new and mid-level frontend developers understand the real production flow and start implementing Okta authentication with confidence. By the end of this blog, you will understand: What SSO and Okta are ...
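To make the flow concrete outside the browser, here is a minimal Python sketch of the first leg of the OIDC authorization-code flow with PKCE that an Okta-backed SPA (or a library such as @okta/okta-react) performs under the hood. The Okta domain, client ID, and redirect URI are placeholders; the PKCE derivation follows RFC 7636.

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode

def make_pkce_pair() -> tuple[str, str]:
    """The code_verifier stays client-side; the code_challenge goes in the URL."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def build_authorize_url(okta_domain: str, client_id: str,
                        redirect_uri: str, challenge: str) -> str:
    """Build the redirect that sends the user to Okta's hosted login page."""
    params = {
        "client_id": client_id,
        "response_type": "code",          # authorization-code flow
        "scope": "openid profile email",
        "redirect_uri": redirect_uri,
        "state": secrets.token_urlsafe(16),
        "code_challenge": challenge,
        "code_challenge_method": "S256",  # SHA-256 challenge, per RFC 7636
    }
    return f"https://{okta_domain}/oauth2/default/v1/authorize?" + urlencode(params)

# Placeholder domain/client_id for illustration only:
verifier, challenge = make_pkce_pair()
url = build_authorize_url("dev-123456.okta.com", "0oa...example",
                          "http://localhost:3000/login/callback", challenge)
print(url.split("?")[0])
```

After Okta redirects back with a `code`, the app exchanges it (plus the saved `code_verifier`) at the token endpoint for ID and access tokens; in a React app the SDK handles both legs for you.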
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think, learn, and make decisions. AI enables computers and systems to perform tasks that typically require human intelligence, such as problem-solving, understanding language, recognizing images, and making predictions. AI is broadly categorized into two types: Generative AI and Agentic AI. Generative AI is a type of artificial intelligence that creates new content such as text, images, music, or code based on patterns learned from data. Agentic AI refers to AI systems that can act independently to achieve goals by making decisions, planning steps, ...
VSP 360 REST API: Load-balancing MPBs on VSP One Block and VSP One Block High End. When LDEVs are created, they are assigned in round-robin fashion across all the MPBs (microprocessor blades). Over time, as LDEVs are created and deleted, the number of LDEVs on each MPB may drift apart, and as load shifts, the MPBs may become unbalanced. Another scenario is upgrading a 2-node Block High End array to a 4-node array. VSP 360 shows this on the dashboard: select "Processor Load Balance". The example below shows an imbalance in IOPS across two B28 MPBs. Note that this is an example only, as the MPBs are actually balanced in spite of the IOPS ...
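A small sketch of the idea behind that dashboard check: balance is judged on processor utilization, not raw IOPS, so two MPBs with different IOPS can still be balanced. The function and its 20-point threshold are illustrative assumptions, not VSP 360's logic or API.

```python
# Illustrative check (an assumption, not a VSP 360 API call): decide whether
# MPB load looks imbalanced from per-blade busy rates. VSP 360 surfaces these
# figures on its "Processor Load Balance" dashboard.

def mpbs_imbalanced(busy_rates: dict[str, float],
                    max_spread_pct: float = 20.0) -> bool:
    """True when the busiest and idlest MPBs differ by more than the threshold."""
    rates = list(busy_rates.values())
    return (max(rates) - min(rates)) > max_spread_pct

# Raw IOPS can differ while utilization stays balanced, as in the example above:
print(mpbs_imbalanced({"MPB-1": 34.0, "MPB-2": 29.5}))  # False: within 20 points
print(mpbs_imbalanced({"MPB-1": 71.0, "MPB-2": 18.0}))  # True: rebalance candidate
```

In a real workflow you would feed this kind of check from the performance metrics the VSP 360 REST API returns, then trigger the load-balance operation only when the spread is sustained, not from a single sample.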
Introduction In today’s rapidly evolving IT landscape, organizations are increasingly seeking ways to modernize their infrastructure and consolidate workloads for efficiency and scalability. Migrating virtual machines (VMs) across different platforms is a critical step in this journey, but it often presents unique challenges, especially when dealing with clustered environments and the need for high data integrity and minimal downtime. This blog provides a comprehensive guide to streamlining the migration of clustered Windows Server 2025 VMs from VMware to Hyper-V, leveraging Microsoft System Center Virtual Machine Manager (SCVMM) and Hitachi storage solutions ...