DB2 Write Suspend

When taking a snapshot from a storage array on a server with a running DB2 instance, there is no guarantee that the snapshot contains a consistent copy of the database.

To take a snapshot and ensure a consistent copy, DB2 allows the database to be put into "write suspend" mode: write I/O to disk is suspended and changes are held in buffer pool memory. Queries continue to run, but writes are performed only in memory.
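A minimal sketch of the sequence (MYDB is a placeholder database name; the snapshot is taken between the two commands):

> db2 connect to MYDB
> db2 set write suspend for database

(take the storage array snapshot here)

> db2 set write resume for database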

How to find the DB2 connection port

There may be other methods, but this short article shows a simple way to find the port on which a DB2 server is listening.

We get the TCP/IP service name:

> db2 get dbm cfg | grep SVCENAME

Capture the result:

TCP/IP Service name (SVCENAME) = db2TRP

Look at /etc/services:

> cat /etc/services | grep db2TRP

db2TRP 5912/tcp # DB2 Communication Port


The listening port is 5912!
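The two steps can also be combined into a single command. A sketch, assuming a Unix-like shell with awk available; the awk pattern just extracts the value after (SVCENAME):

> grep -w "$(db2 get dbm cfg | awk '/\(SVCENAME\)/ {print $NF}')" /etc/services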

Average disk access time for reads/writes in DB2

Through DB2 we can get the average disk access time, in milliseconds, that the instance is experiencing. These times are crucial for detecting an I/O problem with a DB2 instance.

As a rule of thumb, a value around 2-3 ms is good, while more than 10 ms can indicate problems.

Avg ms/write:

select trunc(decimal(sum(pool_write_time)) /
       decimal(sum(pool_data_writes) + sum(pool_index_writes)), 3)
from sysibmadm.snaptbsp


Avg ms/read:
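By analogy with the write query, a sketch assuming the standard snapshot monitor elements pool_read_time, pool_data_p_reads, and pool_index_p_reads in sysibmadm.snaptbsp:

select trunc(decimal(sum(pool_read_time)) /
       decimal(sum(pool_data_p_reads) + sum(pool_index_p_reads)), 3)
from sysibmadm.snaptbsp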

Using Machine Learning/AI to Boost the Supply Chain: 5 Use Cases

This article will discuss how supply chains are being improved through the use of innovative technologies before highlighting five uses of artificial intelligence and machine learning in supply chains.

When you finish reading, you’ll understand why many industry analysts have described A.I. technologies as disruptive innovations that have the potential to alter and improve operations across entire supply chains.

Open Source for Big Data: An Overview

This article will describe the relevance of open source software to big data before presenting five interesting and useful open source big data tools and projects.

Big data workloads are those that involve the processing, storage, and analysis of large amounts of unstructured data to derive business value from that data. Traditional computing approaches and data processing software weren’t powerful enough to cope with big data, which typically inundates organizational IT systems on a daily basis.

The widespread adoption of Big Data analytics workloads over the past few years has been driven, in part, by the open source model, which has made frameworks, database programs, and other tools available to use and modify for those who want to delve into these workloads.

What is Storage Tiering and How Can it Reduce Storage Costs?

Tiered storage is a way of managing data by assigning it to different types of storage devices/media depending on the current value that the underlying information provides. The efficient management of data recognizes that all information provides an intrinsic value from the time it’s created to the time it becomes obsolete and that this value changes over the information lifecycle.

The typical factor determining the value of information is how frequently you access it; however, policy-based rules can factor in a number of other issues to determine information value. For example, old bank transactions, which might have a low value, could suddenly shift in value depending on special circumstances, such as a tax audit. This article discusses some pros, cons, and best practices for tiered storage.

The Importance of Big Data Disaster Recovery

Disaster recovery is a set of processes, techniques, and tools used to swiftly and smoothly recover vital IT infrastructure and data when an unforeseen event causes an outage.

The statistics tell the best story about the importance of disaster recovery: 98 percent of organizations reported that a single hour of downtime costs over $100,000, while 81 percent indicated that an hour of downtime costs their business over $300,000.

The Story of Big Data on AWS

Amazon Web Services (AWS) is a subsidiary of Amazon that provides cloud computing services, accessible to both individuals and companies.

While newer cloud providers like Microsoft Azure and Google grow at a faster rate, AWS still holds a commanding position at the top of the cloud provider market.

What is Big Data Marketing and Does it Help Lead Generation?

Big Data marketing refers to the use of high-velocity, voluminous, and variable data to improve a company’s marketing efforts. When people think of the term Big Data, they often make the erroneous assumption that it’s just about the size of the datasets.

However, Big Data also refers to data expanding on two other fronts: velocity and variety.