Monday, 6 March 2023

Azure Stack Edge

What is Azure Stack Edge?

Azure Stack Edge is a cloud-managed, hardware-as-a-service solution shipped by Microsoft. It brings the Azure cloud’s power to a local and ruggedized server that can be deployed virtually anywhere local AI and advanced computing tasks need to be performed.

Capable of running without a consistent internet connection, it can be used to process video for local security systems and speech for call centers in real time, or to run advanced machine learning models when it is not technically or economically viable to do so directly in the cloud.

This solution allows companies to deploy containers running Azure technologies directly onto the local server, decreasing the latency of a round trip to the cloud and reducing project risks caused by network issues.




Enabling the intelligent edge

Edge computing enables organizations to leverage the cloud without latency, bandwidth, or connectivity constraints that can impact business operations or impair the customer experience.

Neal Analytics builds intelligent edge solutions powered by Azure Stack Edge so you can leverage Microsoft Azure services even when a direct, cloud-based option isn't possible. This means virtually any solution, including AI-driven ones, can function with or without internet connectivity, and organizations can process data locally, avoiding the excessive time and bandwidth required to stream and process data in the cloud.

Storage Sync Services

What is the Azure File Sync service?

Azure File Sync is a service that helps us share files and data between an on-premises Windows Server or a cloud VM and Azure storage, in either direction.

Suppose we have an on-premises Windows Storage Server or Windows FTP Server, and everyone saves or uploads files and documents to a particular folder on this server. Naturally, only users on the same network can use this on-premises server. To extend it, we first need to create a Storage Account in the Azure portal; after creating the Storage Account, we create a link between Azure storage and the on-premises server.

Azure file shares can be used in two ways:

  • Directly mounting these serverless Azure file shares (SMB); see the sketch below.
  • Caching Azure file shares on-premises using Azure File Sync.
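
Mounting an SMB share is an OS-level operation (for example, mapping a drive in Windows), but the same serverless share can also be reached programmatically. Below is a minimal Python sketch using the azure-storage-file-share SDK; the connection string, share name, and file name are placeholders, not values from this article.

```python
# Access a serverless Azure file share over the REST API
# (pip install azure-storage-file-share).
from azure.storage.fileshare import ShareClient

conn_str = (
    "DefaultEndpointsProtocol=https;"
    "AccountName=<storage-account>;"
    "AccountKey=<account-key>;"
    "EndpointSuffix=core.windows.net"
)
share = ShareClient.from_connection_string(conn_str, share_name="companyfiles")

# Upload a document to the share...
file_client = share.get_file_client("q1-summary.docx")
with open("q1-summary.docx", "rb") as data:
    file_client.upload_file(data)

# ...and read it back from any machine that has the connection string.
downloaded = file_client.download_file().readall()
```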

Advantages of Azure File Sync

  • Cloud backup of your on-premises data
  • Disaster recovery
  • Easy and frequent access
  • Multi-server access and sync
  • Transition between on-premises and cloud
  • Cost-effectiveness
  • Lift and shift migrations

Configuration

  • Create a Storage Account in your Azure portal (the first two steps are scripted in the sketch after this list)
  • Configure a File Share in your Storage Account
  • Create the Azure File Sync service
  • Configure the Server Endpoints
  • Create a Windows Server 2012 VM locally
  • Create a folder and give it share permissions
  • Install the Storage Sync Agent on the Windows Server 2012 machine
  • Create a Sync group
  • Wrap up
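
The first two steps can also be scripted. Here is a hedged sketch using the azure-identity and azure-mgmt-storage packages; the subscription ID, resource group, and resource names are placeholders.

```python
# Create the storage account and file share programmatically
# (pip install azure-identity azure-mgmt-storage).
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Step 1: create the storage account.
poller = client.storage_accounts.begin_create(
    "my-resource-group",
    "myfilesyncstorage",
    {
        "location": "eastus",
        "kind": "StorageV2",
        "sku": {"name": "Standard_LRS"},
    },
)
account = poller.result()

# Step 2: create the file share the sync group will use.
client.file_shares.create("my-resource-group", "myfilesyncstorage", "companyfiles", {})
```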

Create a sync group and a cloud endpoint

A sync group defines the sync topology for a set of files. Endpoints within a sync group are kept in sync with each other. A sync group must contain one cloud endpoint, which represents an Azure file share, and one or more server endpoints. A server endpoint represents a path on a registered server. A server can have server endpoints in multiple sync groups. You can create as many sync groups as you need to appropriately describe your desired sync topology.

A cloud endpoint is a pointer to an Azure file share. All server endpoints will sync with a cloud endpoint, making the cloud endpoint the hub. The storage account for the Azure file share must be located in the same region as the Storage Sync Service. The entirety of the Azure file share will be synced, with one exception: A special folder, comparable to the hidden “System Volume Information” folder on an NTFS volume, will be provisioned. This directory is called “SystemShareInformation”. It contains important sync metadata that will not sync to other endpoints. Do not use or delete it!

In the Azure portal, go to your Storage Sync Service, and then select + Sync group (an equivalent REST call is sketched after the list below).

  • Sync group name: The name of the sync group to be created. This name must be unique within the Storage Sync Service, but can be any name that is logical for you.
  • Subscription: The subscription where you deployed the Storage Sync Service in Deploy the Storage Sync Service.
  • Storage account: If you choose Select storage account, another pane appears in which you can pick the storage account that contains the Azure file share you want to sync with.
  • Azure file share: The name of the Azure file share with which you want to sync.
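
The portal steps above map to a single ARM call. The sketch below uses the bare REST API with a token from azure-identity; the resource path follows standard ARM conventions for the Microsoft.StorageSync provider, but treat the api-version (and the empty properties body) as assumptions to verify against the current REST reference.

```python
# Create a sync group via the Azure Resource Manager REST API
# (pip install azure-identity requests).
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")

url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/my-resource-group"
    "/providers/Microsoft.StorageSync/storageSyncServices/my-sync-service"
    "/syncGroups/my-sync-group"
    "?api-version=2020-09-01"  # assumed version; check the REST reference
)

resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {token.token}"},
    json={"properties": {}},
)
resp.raise_for_status()
```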

StorSimple Data Managers

Hybrid cloud storage deployments are becoming the norm rather than the exception in enterprise IT strategy. For organizations with large data estates, hybrid cloud management brings the best of both worlds: they can benefit from the scale and agility of the cloud while leveraging it as an on-demand extension of on-premises resources.

For years, Azure made this kind of hybrid deployment possible through Azure StorSimple, but Microsoft has announced that the Azure StorSimple Virtual Array will reach end of life in December 2020 and the Azure StorSimple Physical Array will be phased out by December 2022.

What is your plan for hybrid cloud storage without Azure StorSimple?

Data migration, replication, and management are some of the main areas of concern when implementing hybrid cloud storage architectures. With Microsoft no longer offering solutions for these challenges through Azure StorSimple, enterprises looking for efficient integration between on-premises storage systems and the Azure cloud can turn to NetApp Cloud Volumes ONTAP.

In this blog we’ll explore how users dealing with Azure StorSimple EOL can turn to NetApp Cloud Volumes ONTAP to support their Azure hybrid architectures. Cloud Volumes ONTAP can replace all the functionalities of StorSimple, plus provide additional benefits.

Azure StorSimple End of Life: Migrate to Cloud Volumes ONTAP

NetApp Cloud Volumes ONTAP for Azure is an alternative to hybrid deployment with StorSimple that can be leveraged by enterprises in hybrid architectures. It caters to workloads deployed in Azure that need access to cloud storage over NAS and SAN protocols.

Cloud Volumes ONTAP is deployed and managed from NetApp Cloud Manager, providing a single-pane view to manage ONTAP systems on-premises as well as in Azure and the other major public clouds. Cloud Volumes ONTAP also offers the benefits of a data fabric architecture, which provides the same data management experience across multi-cloud and hybrid environments, along with additional features and benefits.

Data protection and recovery
Using NetApp Snapshot™ technology, Cloud Volumes ONTAP provides data protection and point-in-time data recovery:

  • Instant snapshot creation.
  • Schedulable creation process.
  • Incremental updates.
  • Storage-efficient, saving on storage costs.

Storage economy
Cloud Volumes ONTAP storage efficiency features reduce cloud data storage costs:

  • Thin provisioning
  • Data compression
  • Data compaction
  • Data deduplication
  • Plus, space-efficient snapshots and zero-capacity data cloning via FlexClone®. 

Data tiering to lower-cost object storage
Cloud Volumes ONTAP can automatically tier infrequently used data between Azure disk storage and Azure Blob, which gives users more flexibility and cost savings than the tiering offered by StorSimple:

  • Cost-effective storage for active data that isn’t in frequent use.
  • Cost-effective storage for disaster recovery, backup and archive data copies.
  • Seamless, automatic tiering between block storage and object storage, a capability that isn't offered natively in Azure.

Business continuity and DR
Cloud Volumes ONTAP makes DR failover and failback processes seamless and automatic via SnapMirror data replication, a central part of a disaster recovery strategy.

Manageability
Cloud Volumes ONTAP uses NetApp Cloud Manager, which provides users a single-pane view to manage storage volumes on-premises as well as in Azure, or any other cloud.

  • Access all your ONTAP storage systems, on-prem or in any cloud.
  • All functions are available through an API, providing infrastructure-as-code (IaC) capabilities for developers.
  • Schedule all of your storage operations, such as snapshots or data cloning.
  • Data fabric architecture that provides the same data management experience across multi-cloud and hybrid environments.

Additional benefits
With Azure StorSimple end of life coming soon, switching to Cloud Volumes ONTAP now can provide hybrid environments with additional benefits that were never available on StorSimple:

  • Dual-node high availability architecture on Azure, ensuring the lowest possible RTO (RTO<60 sec) and RPO (RPO=0).
  • Rapid environment data cloning for testing and development through NetApp FlexClone® technology.
  • Kubernetes and containers platform integration and automatic persistent volume provisioning via NetApp Trident.
  • SnapMirror® data replication helps with seamless lift-and-shift data migration and replication between NetApp on-premises environments and the cloud.
  • Data encryption at rest as well as in transit is available out of the box to ensure data security.

    In a previous version of this blog we provided additional, in-depth information about StorSimple. Read on if you are planning to continue using StorSimple despite the EOL announcement, or want to understand it in more detail.

Learn More About StorSimple

The Enterprise Use Case for StorSimple

Azure StorSimple is designed to simplify hybrid storage, integrating a SAN storage device seamlessly with cloud services. It offers a single interface for administrators to manage both cloud and on-premises storage. StorSimple fits almost all enterprise use cases, whether providing high-performance storage for file services, application hosting, or database hosting, or cost-effective cloud storage for backup, logs, archives, and more.

Let’s explore some of these use cases in detail and see how StorSimple helps customers get the most out of their hybrid storage investment.

Storage tiering for efficient data management:  Effective storage management of inactive data is one of the major challenges faced by organizations. Though they aren’t accessed on a daily basis, backup data, monitoring logs, old archives, etc. are still important for BC/DR purposes or to meet regulatory and compliance norms.

StorSimple has an automated storage tiering feature that keeps the active data in high performance on-prem SSD drives, while data that is accessed less frequently is tiered to HDDs and archival data is tiered to Azure Cloud object storage. The data is moved based on the usage pattern and does not need any user intervention. When data stored in the cloud is accessed, it is moved back to local storage, without affecting the user experience.
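
To make the idea concrete, here is a toy Python sketch of usage-based tiering: blocks are ranked by last access time and assigned to SSD, HDD, or cloud tiers as capacity allows. This illustrates the concept only; it is not StorSimple's actual algorithm.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Block:
    block_id: str
    last_access: float = field(default_factory=time.time)

def assign_tiers(blocks, ssd_slots, hdd_slots):
    """Hottest blocks go to SSD, the next-hottest to HDD, the rest to cloud."""
    ranked = sorted(blocks, key=lambda b: b.last_access, reverse=True)
    tiers = {}
    for i, block in enumerate(ranked):
        if i < ssd_slots:
            tiers[block.block_id] = "ssd"
        elif i < ssd_slots + hdd_slots:
            tiers[block.block_id] = "hdd"
        else:
            tiers[block.block_id] = "cloud"
    return tiers

blocks = [Block("a", 100), Block("b", 300), Block("c", 200), Block("d", 50)]
print(assign_tiers(blocks, ssd_slots=1, hdd_slots=2))
# {'b': 'ssd', 'c': 'hdd', 'a': 'hdd', 'd': 'cloud'}
```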

Data protection through snapshots: StorSimple devices support scheduled and on-demand backups through VSS-integrated snapshots. These snapshots help to recover critical applications in the event of data corruption or accidental data loss. As the snapshot technology is natively integrated with VSS, the resulting backups are application consistent. 

Since these snapshots are incremental, they are very cost effective. The time required for the backup is minimal and only the data that has changed since the last backup is considered for subsequent backups, which minimizes storage usage and costs.
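
The following toy sketch shows why incremental snapshots are cheap: only blocks whose content hash changed since the previous snapshot need to be stored again. It mimics the idea, not StorSimple's on-disk format.

```python
import hashlib

def snapshot(data: bytes, previous: dict, block_size: int = 4096) -> dict:
    """Hash fixed-size blocks and report which ones changed since `previous`."""
    current, changed = {}, []
    for i in range(0, len(data), block_size):
        index = i // block_size
        current[index] = hashlib.sha256(data[i:i + block_size]).hexdigest()
        if previous.get(index) != current[index]:
            changed.append(index)
    print(f"{len(changed)} of {len(current)} blocks need uploading")
    return current

base = snapshot(b"A" * 8192, previous={})           # 2 of 2 blocks need uploading
snapshot(b"A" * 4096 + b"B" * 4096, previous=base)  # 1 of 2 blocks need uploading
```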

Achieve storage economy: StorSimple uses thin provisioning technology, where only the storage resources needed to meet data requirements are allocated, as opposed to allocating the full size of a volume in advance. Cloud storage is leveraged to increase capacity when required and the available storage would appear to be greater than the physical resources available.

StorSimple also uses deduplication and compression technologies to reduce the storage footprint. Duplicate copies of data are not saved, and the system is intelligent enough to identify and store only the data that has changed. Data is stored in a compressed format to reduce storage use. When used for applications that handle extremely large data sets, thin provisioning, deduplication, and compression collectively help achieve storage economy.

Business continuity and DR: StorSimple offers cloud snapshots, where a copy of the data is stored in the cloud and can be used for setting up a secondary site in DR scenarios. Cloud snapshots help organizations eliminate tape-based backups and other tools used to implement an operational BC/DR process. The on-prem environment can also be recovered in Azure, thus eliminating the need to invest in a separate physical DR environment.

CAPEX to OPEX conversion and agility: StorSimple helps organizations be more agile in the IT deployment process, as they can easily spin up environments from snapshots. The data available in Azure StorSimple cloud appliances can be easily accessed by other VMs in Azure, so users are not restricted by the location of the data. StorSimple also eliminates huge upfront investments to increase storage capacity: it can effectively use cloud storage as an extension of on-premises storage, thereby converting CAPEX to OPEX.

StorSimple Deployment Options

Azure StorSimple devices are available in three deployment options that can be leveraged by customers based on the architecture required: StorSimple Physical Array, Virtual Array, and Cloud Appliance. All devices are managed centrally from the Azure cloud portal through the StorSimple Device manager service, thus providing a single pane of management for on-premises as well as cloud storage.

StorSimple Physical Array (will reach EOL in 2022): Microsoft Azure StorSimple 8000 series physical array comprises both SSDs and HDDs and is deployed in customer data centers. It can be accessed over iSCSI protocol by supported devices. StorSimple Physical Array is shipped with built-in dual storage controllers configured in active-passive mode to ensure high availability for the device. The processor and storage are shared among the controllers for easy takeovers during failovers. The device is also integrated seamlessly with Azure cloud storage for tiering cold data as well as for cloud snapshots.

The two models in the 8000 series are the 8100 and the 8600. The 8100 is a 2U device with a local usable capacity of ~15 TB and total storage capacity scalable to 200 TB including cloud storage. The 8600 is a 4U device that includes an extended bunch of disks (EBOD) enclosure, providing a local capacity of ~38 TB; total capacity can be increased to 500 TB by leveraging cloud storage.

With EOL approaching, Physical Array users face the choice of either replacing their on-prem components or moving entirely to the cloud.

StorSimple Virtual Array (support ends in December 2020): Organizations that do not have the infrastructure to deploy a physical device can deploy the StorSimple Virtual array in supported virtualized platforms, i.e., Hyper-V and VMware. It is best suited for small-scale environments such as a data room in a branch office. Virtual Array supports a maximum local capacity of 6.4 TB, which is expandable to 64 TB by leveraging Cloud Storage. Volumes can be exposed to other Microsoft Azure Services as iSCSI or SMB targets.

StorSimple Virtual Array offers all the features of the physical device, such as data tiering, snapshots, backup, and DR, without the additional costs of procuring and deploying a physical storage device. However, considering its lower storage capacity compared to the physical devices, it is best suited for backup and archival use cases, where the majority of the data is tiered to cloud storage.

StorSimple Cloud Appliance (as part of the 8000 series, support ends 2022): The StorSimple cloud appliance is an add-on capability that is available for customers who have purchased StorSimple Physical devices. As the name indicates, it is deployed in Azure in a single-node VM and serves the purpose of backup and DR in the cloud.

If the physical device is not available during a DR scenario, the data can be failed over to the StorSimple Cloud Appliance and accessed by VMs in Azure through iSCSI connections. The cloud appliance is available in two models, the 8010 and the 8020; the 8020 supports Premium disk storage and hence can be used for high-performance workloads.

A New Way to Extend the Hybrid Cloud Storage Landscape

Enterprise data management, migration, and synchronization are the most important features for a hybrid cloud storage solution, along with other capabilities like BC/DR and storage economy. For a while, Azure StorSimple offered a way to address them, but EOL means users will have to find different solutions.

NetApp Cloud Volumes ONTAP for Azure cloud is an alternative to hybrid deployment with StorSimple that can be leveraged by enterprises in hybrid architectures. It caters to workloads deployed in Azure that need access to cloud storage over NAS and SAN protocols.

Cloud Volumes ONTAP not only replaces essential hybrid deployment features offered by StorSimple, but enhances them with additional capabilities, without compromising on data security, availability and integrity.

Data Lake Storage Gen1

What is Azure Data Lake Storage?

Azure Data Lake Storage (ADLS) is a secure, scalable data lake for high-performance analytics workloads. It is also known as Azure Data Lake Store. It offers a single storage platform to integrate large volumes of organizational data, is cost-effective, and provides tiered storage and policy management. ADLS also offers single sign-on and access controls, and exposes data through the Hadoop Distributed File System (HDFS), so any tool that supports HDFS can be used with it.

Benefits of Azure Data Lake

The Data Lake in Azure solution is designed for organizations that want to take advantage of Big Data. It provides a data platform that can help Developers, Data Scientists, and Analysts store data of any size and format and perform all types of processing and analytics across multiple platforms using various programming languages. It can also work with existing solutions, such as identity management and security solutions. Moreover, it integrates with other data warehouses and cloud environments. It can be useful for organizations that need the following:

  • Azure Active Directory:

Azure Active Directory (AAD) provides identity and Role-Based Access Control (RBAC) within the solution. Applications can authenticate either with a service principal, where the principal's credentials must be stored and managed, or with a managed identity, which is tied directly to the service so there is no credential storage to manage.

  • Multi-protocol SDK:

It's a newer version of the Blob Storage SDK used with Azure Data Lake to handle reading and writing data from ADLS, retrying automatically when a transient failure occurs (see the sketch after this list). However, there are some limitations: it cannot perform atomic manipulations or control access.

  • Low-cost Storage: 

Azure Storage has emerged as a cost-effective solution for data storage with various capabilities, such as migrating data from hot storage to colder tiers, life-cycle management, and archive storage.

  • Reliability:

Azure Storage allows users to make copies of their data to prepare for data center failure or a natural disaster. Also, the advanced threat detection system integrates with the data storage and detects malicious programs or software that might damage the data or compromise your privacy.

  • Scalability:

Azure Storage is massively scalable; historically, account capacity was limited to 2 petabytes in the US and Europe and 500 terabytes in other regions. It offers both horizontal and vertical scaling.
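
As a concrete illustration of the first two bullets above, here is a minimal sketch combining an Azure AD credential with the multi-protocol Blob SDK, which retries transient failures automatically. The account URL and container name are placeholders (pip install azure-identity azure-storage-blob).

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# DefaultAzureCredential picks up a managed identity, service principal,
# or developer login, depending on the environment.
service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)

container = service.get_container_client("raw-data")
container.upload_blob("events/demo.json", b'{"event": "demo"}', overwrite=True)
data = container.download_blob("events/demo.json").readall()
```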

How Azure Data Lake Works

Azure Data Lake is built on Azure Blob storage, Microsoft's object storage solution for the cloud. The solution features low-cost, tiered storage and high-availability/disaster-recovery capabilities. It integrates with other Azure services, including Azure Data Factory, a tool used for creating and running extract, transform, and load (ETL) and extract, load, and transform (ELT) processes.

The solution is based on the Apache Hadoop YARN (Yet Another Resource Negotiator) cluster management platform. It can scale dynamically across servers within the data lake, as well as servers in Azure SQL Database and Azure SQL Data Warehouse.

To start using Azure Data Lake, you need to create a free account on the Microsoft Azure portal. From the portal, you will be able to access all of the Azure services.

Azure Data Lake Store Security

When implementing a Big Data solution, security shouldn’t be optional. To conform to security standards and limit sensitive information visibility, data must be secured in transit and at rest. ADLS provides rich security capabilities so users can have peace of mind when storing their assets in the ADLS infrastructure. Users can monitor performance, audit usage, and access control through the integrated Azure Active Directory.

Auditing

ADLS creates audit logs for all operations performed in it. These logs can be analyzed with U-SQL scripts.

Access Control

ADLS provides access control through support for POSIX-compliant access control lists (ACLs) on files and folders stored in its infrastructure. It also manages authentication through AAD integration, based on OAuth tokens from supported identity providers. Tokens carry the user's security group data, and this information is passed through all the ADLS microservices.

Data Encryption

ADLS encrypts data in transit and at rest, providing server-side encryption of data with the help of keys, including customer-managed keys in the Azure Key Vault.

Data Encryption Key Types

ADLS uses a Master Encryption Key (MEK), stored in Azure Key Vault, to encrypt and decrypt data. Users have the option to manage this key themselves, but there is always a risk of not being able to decrypt the data if the key is lost. ADLS also includes the following keys (illustrated in the sketch after this list):

  • Block Encryption Key (BEK): These are keys generated for each block of data
  • Data Encryption Key (DEK): These keys are encrypted by the MEK and are responsible for generating BEKs to encrypt data blocks
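
The MEK/DEK relationship is an instance of envelope encryption. The conceptual sketch below, using the `cryptography` package, illustrates the key hierarchy only (it collapses the per-block BEK step); it is not ADLS's implementation.

```python
# Envelope encryption: a master key wraps a data key (pip install cryptography).
from cryptography.fernet import Fernet

mek = Fernet(Fernet.generate_key())   # Master Encryption Key (lives in Key Vault)

dek_key = Fernet.generate_key()       # Data Encryption Key
wrapped_dek = mek.encrypt(dek_key)    # only the MEK-wrapped DEK is persisted

ciphertext = Fernet(dek_key).encrypt(b"block of data")

# To decrypt: unwrap the DEK with the MEK, then decrypt the data.
plaintext = Fernet(mek.decrypt(wrapped_dek)).decrypt(ciphertext)
assert plaintext == b"block of data"
```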

Azure Data Lake Store Pricing

Data Lake Store is currently available in the East US 2 region and offers preview pricing rates (excluding outbound data transfer):

  Usage                    Cost
  Data Stored              US$0.04 per GB per month
  Data Lake Transactions   US$0.07 per million transactions

In the next section of this Azure Data Lake Tutorial, you will learn to get started with Analytics.

How do I get started?

Getting started with Azure Data Lake Analytics is extremely easy. Here’s what you’ll need:

  • An Azure subscription — grab a free trial if you don’t have one.
  • An Azure Data Lake Analytics account — create one in your Azure subscription
    • You’ll also have to create a Store account during this step.
  • Some data to play with — start with text or images.

You don’t need to install anything on your personal computer to use it. You can write and submit the necessary jobs in your browser.


Components of Azure Data Lake

The full solution consists of three components that provide storage, analytics service, and cluster capabilities.

Azure Data Lake Storage is a massively scalable and secure data lake for high-performance analytics workloads. Azure Data Lake Storage was formerly known, and is sometimes still referred to, as the Azure Data Lake Store. Designed to eliminate data silos, it provides a single storage platform that organizations can use to integrate their data.

The storage can help optimize costs with tiered storage and policy management. It also provides role-based access controls and single sign-on capabilities through Azure Active Directory. Users can manage and access data within the storage using the Hadoop Distributed File System (HDFS). Therefore, any HDFS-based tool that you are using will work with ADLS.

Azure Data Lake Analytics is an on-demand analytics platform for Big Data. Users can develop and run parallel data transformation and processing programs in U-SQL, R, Python, and .NET over petabytes of data. U-SQL is a Big Data query language created by Microsoft for the Azure Data Lake Analytics service. With Azure Data Lake Analytics, users pay per job to process data on demand in an analytics-as-a-service environment, which makes it cost-effective: you pay only for the processing power that you use.

Azure HDInsight is a cluster management solution that offers easy, fast, and cost-effective ways to process massive amounts of data. It's a cloud deployment of Apache Hadoop that enables users to take advantage of optimized open-source analytic clusters for Apache Spark, Hive, MapReduce, HBase, Storm, Kafka, and R Server. With these frameworks, you can support a broad range of functions, such as ETL, data warehousing, machine learning, and IoT. Azure HDInsight also integrates with Azure Active Directory for role-based access controls and single sign-on capabilities.


The Need for Azure Data Lake

The Azure Data Lake offers the following benefits and facilities:

  • Data warehousing: Since the solution supports any type of data, you can use it to integrate all of your enterprise data into a single data warehouse.
  • Internet of Things (IoT) capabilities: The Azure platform provides tools for processing streaming data in real-time from multiple types of devices.
  • Support for hybrid cloud environments: You can use the Azure HDInsight component to extend an existing on-premises Big Data infrastructure to the Azure cloud.
  • Enterprise features: The environment is managed and supported by Microsoft and includes enterprise features for security, encryption, and governance. You can also extend your on-premises security solutions and controls to the Azure cloud environment.
  • Speed to deployment: It’s easy to get up and running with the Azure Data Lake solution. All of the components are available through the portal and there are no servers to install or infrastructure to manage.

Sunday, 5 March 2023

StorSimple Device Manager

Azure StorSimple is considered a hybrid cloud storage solution because it combines primary, backup, snapshot, archive, and offsite storage into one cohesive solution. StorSimple is delivered through SSD- and HDD-based storage arrays that provide better performance and IOPS than tape storage.

How is Azure StorSimple used?

It is used to provide a data storage and backup solution for both cloud and on-premises IT infrastructure. It is primarily used to provide storage resources for:

  • Backing up or offloading inactive or infrequently used data
  • All in one / central location for storing primary, backup, archive, and offsite data
  • Providing snapshots, volumes, or data to virtual machines for development, testing, or disaster recovery

Azure StorSimple also provides a StorSimple virtual appliance that can automatically create and deploy Azure virtual machines from VM snapshots or preconfigured, stored volumes.

Azure StorSimple uses inline data deduplication and compression techniques to curb storage growth, and an SSD (solid-state drive) based storage infrastructure to lower storage input/output costs. It can be managed and controlled from the Azure Management Portal.



Recovery Services Vault

What is Azure Recovery Services vault?

Azure Recovery Services vault consists of Azure Backup and Azure Site Recovery, two services that give you the ability to protect your data outside your own data center. Azure Backup is a backup solution that replicates your backup data to the Azure cloud, so your data is safe outside your own data center without the need for backup tapes. Azure Site Recovery provides a failover solution for your server infrastructure, without the need to set up a second data center or rent rack space.

Azure Site Recovery

The second service in the Azure Recovery Services vault is Azure Site Recovery. With Site Recovery you can replicate servers outside your own data center to Azure; it does not matter whether these servers are physical or virtual, running on Hyper-V (with or without System Center VMM) or on VMware. It also allows you to replicate SC VMM- or VMware-virtualized servers to a recovery site running SC VMM or VMware.

As an example, you can use Site Recovery as a recovery site for your Hyper-V infrastructure without setting up a second data center yourself (with all the maintenance involved). Just install the Azure Site Recovery Provider agent on the Hyper-V host and you can replicate the VMs to Azure. You can choose the replication interval, the recovery point retention, and the app-consistent snapshot frequency. Site Recovery supports replication of servers running different sorts of applications, such as Active Directory, file servers, or SharePoint (see this site for a recent list of supported applications). After replication is enabled for your servers and their status is protected, you can perform a test failover (to test your replication strategy), a planned failover (for maintenance, for example), or an unplanned failover (if the original server is down). You can fail over a single server, but you can also set up a recovery plan that contains several servers. If, for example, you have a website running with a database on the back end, you want the database server to be available before the website is brought back online; you can specify which server is brought back online first, which second, and so on.

Replicating a Domain Controller from on-premises to Azure is supported with ASR, but consider a setup in which you have a site-to-site VPN from your data center to Azure and deploy a Domain Controller with DNS in Azure that holds all your AD information. That way you have no data loss, and you can deploy RDS or Citrix servers so your users can still access their data even when the complete site goes down and the VPN connection is unavailable. Just perform a failover of the protected servers, turn on the RDS servers (turn them off when not in use to save some money), and you're back in business.

As you can imagine, there are a lot of different scenarios possible with Azure Site Recovery, and they are pretty easy to set up.

Azure Site Recovery is billed per protected instance ($25) plus the amount of storage you use; you can find more details on pricing on this site. If you want to give it a try, every protected instance is free for the first 31 days!

Storage Account

Introduction to Azure Storage Account

Azure Storage Account is Microsoft's cloud storage solution for modern data storage scenarios. Azure Storage provides a massively scalable object store for data objects. In addition, Azure Storage provides a file system service for the cloud, a messaging store for reliable messaging, and a NoSQL store.

Azure Storage is

  • Durable: Redundancy ensures that your data is safe in the event of transient hardware failures. You can also opt to replicate data across data centers or geographical regions for additional protection from local catastrophe or natural disaster.
  • Highly available: Data replicated in this way remains highly available in the event of an unexpected outage.
  • Secure: All data written to Azure Storage is encrypted by the service. Azure Storage gives you fine-grained control over who has access to your data.
  • Scalable: Azure Storage is designed to be massively scalable to meet the data storage and performance needs of today's applications.
  • Managed: Microsoft Azure handles hardware maintenance, updates, and critical issues for you.
  • Accessible: Data in Azure Storage is accessible from anywhere in the world over HTTP or HTTPS. Microsoft provides SDKs for Azure Storage in a variety of languages.

An Azure Storage Account includes the following services:

  1. Azure Blobs:

    Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data, such as text or binary data.

  2. Azure Files:

    Azure Files enables you to set up highly available network file shares that can be accessed by using the standard Server Message Block (SMB) protocol. That means many virtual machines can share the same files with both read and write access. These files can also be read using the storage client libraries or the REST interface.

  3. Azure Queues:

    The Azure Queue service is used to store and retrieve messages. Queue messages can be up to 64 KB in size, and a queue can contain millions of messages. Queues are generally used to store lists of messages to be processed asynchronously (see the sketch after this list).

  4. Azure Table Storage:

    Azure Table storage is now an element of Azure Cosmos DB. Along with the Azure Table storage service, there is also a new Azure Cosmos DB Table API which provides throughput-optimized tables, global distribution, and automatic secondary indexes.
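
As a sketch of the queue service from item 3, assuming a placeholder connection string and queue name (pip install azure-storage-queue):

```python
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string("<connection-string>", "orders")
queue.create_queue()  # first run only

# Producer side: enqueue work (each message can be up to 64 KB).
queue.send_message('{"order_id": 42}')

# Consumer side: receive, process, then delete.
for msg in queue.receive_messages():
    print("processing", msg.content)
    queue.delete_message(msg)
```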

Accessing blobs, files, and queues

Each storage account has two keys, either of which can be used for any operation. Having two keys lets you roll them over occasionally to improve security. It is critical that these keys be kept secure because, together with the account name, they allow unlimited access to all data in the storage account.
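
A hedged sketch of that rollover with azure-mgmt-storage: regenerate key1 while clients authenticate with key2, then swap roles next time. Resource names and the subscription ID are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Invalidate key1; clients should already have been switched over to key2.
keys = client.storage_accounts.regenerate_key(
    "my-resource-group", "mystorageaccount", {"key_name": "key1"}
)
for key in keys.keys:
    print(key.key_name, "rotated" if key.key_name == "key1" else "unchanged")
```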

Replication options for an Azure Storage Account include

  1. Locally Redundant Storage (LRS): A simple, low-cost replication strategy. Data is replicated within a single storage scale unit.
  2. Zone Redundant Storage (ZRS): Replication for high availability and durability. Data is replicated synchronously across three availability zones.
  3. Geo Redundant Storage (GRS): Cross-regional replication to protect against region-wide unavailability.
  4. Read Access Geo Redundant Storage (RA-GRS): Cross-regional replication with read access to the replica.

Features of Azure Storage Account

  1. The total capacity of an Azure storage account: 500 TB.
  2. Number of tables in an Azure storage account: limited only by the capacity of the storage account.
  3. Number of partitions in a table: limited only by the capacity of the storage account.
  4. Number of entities in a partition: limited only by the capacity of the storage account.
  5. Size of an individual entity: up to 1 MB, with a maximum of 255 properties (including the PartitionKey, RowKey, and Timestamp).
  6. Size of the PartitionKey: a string up to 1 KB in size.
  7. Size of the RowKey: a string up to 1 KB in size.
  8. Size of an Entity Group Transaction: a transaction can include at most 100 entities and the payload must be less than 4 MB in size (see the sketch below).
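
The entity and transaction limits in items 5 through 8 can be seen in code. This minimal sketch uses the azure-data-tables package with a placeholder connection string; note that all entities in one transaction must share a PartitionKey.

```python
from azure.data.tables import TableClient

table = TableClient.from_connection_string("<connection-string>", "inventory")
table.create_table()  # first run only

# Every entity needs PartitionKey and RowKey; the 255-property cap
# includes these plus the system Timestamp.
operations = [
    ("upsert", {"PartitionKey": "warehouse-1", "RowKey": f"item-{i}", "qty": i})
    for i in range(100)  # 100 entities: the maximum for a single transaction
]
table.submit_transaction(operations)
```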