Wednesday, August 30, 2017

November 23, 2017 – csDUG Conference

The Czech Republic and Slovakia DB2 Users Group (csDUG) is a Regional User Group (RUG) that was created 5 years ago.

On 23 November 2017, we will be hosting the 5th csDUG event, with 2 tracks!
The event is sponsored by IBM and CA Technologies.



This is a full-day FREE conference with top industry speakers - a great opportunity for DB2 users to meet industry experts!

Agenda of the conference



Location


IBM lab, building 4
V Parku 4,
148 00 Prague 4, Chodov
Czech Republic


Are you a qualified DB2 specialist? Do you want to discover what DB2 is, or what it can do for you? Are you an experienced DBA in environments with other databases who needs support in DB2 for a new project? Are you interested in what role DB2 can play in the architecture of your business?
     --> Join the November 23 csDUG conference to listen to industry experts and network with other people interested in DB2!


Register for this FREE conference:

  • Send an email to Register.csDUG.23.NOV@gmail.com (please specify your name, country, job title, company and language(s) spoken)
  • If you register before September 15th, have never attended an IDUG conference, and are interested in receiving a FREE pass to IDUG (Lisbon, October 1-5), please mention it in the email.
  • Become a member of the virtual csDUG group at http://www.worldofdb2.com/group/csdug (this helps us to calculate the budget for the complimentary lunch). If you are not already a member of the WorldOfDB2 website, you will need to register there first.



Friday, August 11, 2017

What’s new in CA RC/Query?

CA RC/Query for DB2 for z/OS has recently been enhanced with various features.

You may review the 8-minute video that summarizes these enhancements.


DSN (DSNAME) Line command

  • Provides the underlying dataset name, plus the following information:
  • Volume serial number on which the dataset resides
  • Space allocation type (CYLS/TRKS/BLKS)
  • Primary space allocation
  • Secondary space allocation
  • Number of extents
  • Creation date of the underlying dataset

Applicable to Database, Tablespace, Table, Index, and individual partitions of a Tablespace or Index.

More catalog columns in frequently used RC/Query reports

  • RC/Query now displays all the required columns pertaining to its frequently used reports.
  • Users no longer need to switch to other reports to get more information about an object.
  • The reports include Table List, Tablespace List, Database List, Index List, Package List, Plan List, Trigger List, View List, and System List.


Defaulting “space L” to list higher-level objects

  • RC/Query now defaults the “space L” functionality, primarily aimed at users coming from BMC tools.
  • From now on, users do not need to type “space L” to display another attribute.
  • For example: from the Table List screen, users used to type “DB L” to get the database name; now typing just DB is enough.
  • The same applies to the other “space L” commands in other reports.
  • The existing functionality is NOT affected, so users can still get the database name with “DB L” as well as with DB.


Shortening START, STOP and DISPLAY commands

  • RC/Query now supports shortened forms of the START, STOP and DISPLAY commands.
  • From now on, users can use STA, STO and DIS alongside START, STOP and DISPLAY to start, stop and display the status of object(s).


Enabling Case sensitivity in RC/Query header

  • For example: table creators can be in upper and lower case. What happens when a user wants to fetch only the records whose creator field is lower case?
  • RC/Query has been enhanced to honor case-sensitive values in all report header fields (previously this was limited to the Item Name field; it now applies to all fields of RC/Query reports).
  • With this, users can fetch the data matching the exact case, without any compromise.


Reset Header command to clean up the header fields in RC/Query reports

  • Current challenge:

o   What if I want to remove the value of the “Object name” field?
o   What if I want to remove the values of all the header fields of an RC/Query report?
  • Solution:

o   RC/Query provides a new command to reset the header fields of all of its reports.
o   The reset can target just the “Object name” field, or all fields of the RC/Query header.
o   Use RESHDR to reset the value of the “Object name” field.
o   Use RESHDR ALL to reset the values of all the fields of the current report header.

DB2 Analytics Accelerator (IDAA) Line commands

  • The following commands are included to support the DB2 Analytics Accelerator:

o   PING
Verifies whether the IP connection between DB2 for z/OS and the IBM DB2 Analytics Accelerator is available.
o   ACCALT
Alters the distribution keys and organization keys of one or more DB2 tables residing on the Accelerator, according to your specifications.
o   ALOAD
Loads data from one or more source tables in DB2 to the Accelerator.
o   ENARPL
Enables replication updates for one or more tables on the Accelerator.
o   DISRPL
Disables replication updates for one or more tables on the Accelerator.
o   DACCELF
Forcefully removes a table from the DB2 Analytics Accelerator.
o   RESARCH
Restores archived table partition data from the Accelerator to its original location, according to your specifications.


Intelligent Use of Command Utility

  • RC/Query now intelligently identifies the object name on which the utility needs to be executed.

  • Utilities include:

o   REORG
o   COPY
o   RECOVER
o   RUNSTATS

  • These commands apply when you execute the utilities on:

o   Storage Group
o   Database
o   Tablespace
o   Table
o   View

Thursday, August 10, 2017

Demonstrate Data Compliance !


Enterprise data are subject to various regulations, depending on their geographical location and the type of business. An increased effort is expected, and often mandated, to respect those rules, which are typically meant to better secure and protect the accuracy and privacy of enterprise data. Various regulations also require you to actually demonstrate compliance, which is not a piece of cake.
In addition, most people think that external threats (such as a hacker trying to access corporate data from outside) are the most common data security issue. In reality, various studies have shown that internal threats account for 80% of all security threats. In other words, companies should also make sure to protect their corporate data against their own employees.

Examples of regulations


Sarbanes-Oxley Act (SOX): The goal of SOX is to regulate corporations in order to reduce fraud and conflicts of interest, to improve disclosure and financial reporting, and to strengthen confidence in public accounting. Specifically, Section 404 of this act, the one giving IT shops fits, specifies that the CFO must do more than simply vow that the company’s finances are accurate; he or she must guarantee the processes used to add up the numbers. Those processes are typically computer programs that access data in a database, and DBAs create and manage that data as well as many of those processes.

Health Insurance Portability and Accountability Act (HIPAA): This legislation contains language specifying that health care providers must protect individuals’ health care information, even going so far as to state that the provider must be able to document everyone who so much as looked at that information. In other words: can a DBA produce a list of everyone who looked at a specific row or set of rows in any database?

Payment Card Industry Data Security Standard (PCI DSS): This well-known standard was developed by the major credit card companies to help prevent credit card fraud, hacking and other security issues. A company processing, storing, or transmitting credit card numbers must be PCI DSS compliant, or it risks losing the ability to process credit card payments. Given the availability and volume requirements of payment card transactions, this information is typically stored in an enterprise database.

General Data Protection Regulation (GDPR): This new regulation applies to organizations that do business in the European Union, and will become effective in May 2018. It is meant to strengthen and unify data protection for individuals within the European Union, but it also covers the export of data (or even access to the data) outside the EU. The stated objective of GDPR is to return control of personal data to the individual. This includes data retention requirements, data privacy rules, and huge penalties for being out of compliance.

Personal Information Protection and Electronic Documents Act (PIPEDA): This Canadian regulation specifies the rules governing the collection, use, and disclosure of personal information, recognizing individuals’ right to privacy with respect to their personal information. It also specifies the rules organizations must follow to collect, use, and disclose personal information.

Demonstrate Compliance!


It’s (almost) as simple as a 1-2-3 process!

Step 1 to Data Compliance: Define Data Compliance for your business

Depending on the type of corporate data you own, the type of business you are in, and the geography you do business with, the regulations you want to comply with will be different. And the definition of Personal Information to protect will be different!
As a typical example, the format of social security numbers differs from one country to another. If you do business in the Czech Republic (for example), social security numbers (Rodné číslo) have a specific format:
  [0-9]{2}[0,1,5][0-9][0-9]{2}/?[0-9]{4}
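As a quick sanity check, the pattern above can be exercised with a short script. This is a minimal sketch (the function name is my own; the pattern is copied verbatim from the text, including the literal comma that the character class [0,1,5] also happens to match):

```python
import re

# Czech birth-number pattern, copied verbatim from the text above.
# Note: inside a character class, [0,1,5] matches 0, 1, 5 or a literal comma.
RODNE_CISLO = re.compile(r"[0-9]{2}[0,1,5][0-9][0-9]{2}/?[0-9]{4}")

def contains_rodne_cislo(text):
    """Return True if the text contains something shaped like a Rodné číslo."""
    return RODNE_CISLO.search(text) is not None
```

For example, contains_rodne_cislo("customer 855231/1234") returns True.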
 



Step 2 to Data Compliance: Locate the sensitive personal data

While most companies understand the need to comply with regulations, a typical challenge is to determine where all the sensitive personal data are actually located within the corporate data.
Once you have defined what kind of data you are going after (Step 1), the challenge is to make sure you know where those data are stored: where are those “Rodné číslo” in the corporate data?
You may think you know where all of these are stored, but … are you sure? Remember: the goal is to demonstrate compliance, so you had better be sure you know exactly where all those “Rodné číslo” are stored.



Step 3 to Data Compliance: Secure, protect, and demonstrate compliance

When you know what personal data you are going after, and where they are located, the game is to make sure that authorizations and security settings are defined properly, so that only the individuals who must have access to the data… have access to the data.
In other words, you need to produce a report that clearly states what personal data are stored where, and who has access to them.
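As a rough illustration of Step 3, such a report could start from the DB2 catalog. The sketch below assumes DB2 for z/OS, where table privileges are recorded in SYSIBM.SYSTABAUTH; the exact columns and predicates you need will depend on your environment, and the helper function is purely illustrative:

```python
# Illustrative catalog query: who holds the SELECT privilege on which table.
# A possible starting point for a "who has access to what" compliance report.
WHO_CAN_READ = """
SELECT GRANTEE, TCREATOR, TTNAME
  FROM SYSIBM.SYSTABAUTH
 WHERE SELECTAUTH <> ' '
 ORDER BY TCREATOR, TTNAME, GRANTEE
"""

def report_lines(rows):
    """Format (grantee, creator, table) tuples into human-readable report lines."""
    return ["{0} can read {1}.{2}".format(grantee, creator, table)
            for (grantee, creator, table) in rows]
```

Running the query through your favorite DB2 client and feeding the rows to report_lines would produce lines such as "ALICE can read PROD.CUSTOMER".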

Find and control regulated mainframe data, and classify it for compliance, with CA Data Content Discovery (DCD)

Compliance and adherence to regulations are critical to help prevent data breaches.

CA Data Content Discovery helps you identify data exposure risks on z Systems™ by scanning through the mainframe data infrastructure.
By discovering where the data is located, classifying the data to determine sensitivity level and providing comprehensive reporting on the scan results, mission essential data can be protected and exposure risks can be mitigated.



CA Data Content Discovery (DCD) comes with a number of pre-defined classifiers out-of-the-box, to comply with various well-known regulations.
In addition, CA Data Content Discovery (DCD) can be configured to look for sensitive industry-specific or country-specific data in your corporate data; that is, you can create custom classifiers, such as one for the “Rodné číslo” discussed above:

[0-9]{2}[0,1,5][0-9][0-9]{2}/?[0-9]{4}



SQL Adria - June 2017 - Summary


SQL Adria is a DB2 Regional User Group for Croatia and Slovenia, founded more than 20 years ago. This non-profit organization organizes conferences and seminars as a means to continuously provide technical education, share knowledge, and exchange ideas and experience among users and vendors.

Those events are regularly attended by dozens of DB2 Users, both DB2 Administrators and DB2 Application Developers.

The SQL Adria 2017 summer event took place in Šibenik, Croatia, from 11 June to 15 June 2017.

Sessions during the SQL Adria Seminar


Tracking Guide to #db2 Galaxy by Denis Tronin @trode05

Steve Thomas @Steve_db2 is talking about Locks, Latches, Drains & Claims 

Protecting the Crown Jewels - your #data by Andy Ward

Eberhard Hechler from #IBM speaks about #MachineLearning and Data Lakes

Jane Man at #SQLAdria discusses how to create #DB2 mobile applications

Full room for Zeljen Stanic @staze01 SLA presentation

IDAA News from IBM Development by @Hrle1 (Namik Hrle)

Mainframe Operations Intelligence Solutions by Tom Juhl @tomjuhl


Reference


https://www.sqladria.net/en/seminar/european-sqladria-seminar-%E2%80%93-%C5%A1ibenik-2017

https://twitter.com/SQL_Adria

https://twitter.com/DB2forZ

If you attended the conference, feel free to leave a comment below to indicate, for example, which session or presenter you enjoyed the most!

Friday, June 9, 2017

csDUG 2017 - The call for presentations is now open!


Dear fans of quality databases!

One year has passed and here we are again, planning the next edition of your favorite conference, csDUG!

But what would this event be without presenters and without all the discussions on a given topic? Let’s again create a unique atmosphere, the same as in previous years, and let’s get carried away by the interesting topics that DB2 offers. This year, the conference will take place on November 23rd, 2017 (Thursday) in The Park, Prague 4 - Chodov.

We are opening a call for presentations for this event. Contact me to sign up; it is enough to state the title of your presentation and a short description of its main objectives. Presentations should not be longer than 45 minutes, and each is usually followed by a short discussion. The call for presentations will be open until 15th August 2017.

The whole csDUG team is looking forward to our mutual cooperation!

The csDUG team

csDUG 2016 (Prague) group picture

csDUG 2015 (Ostrava) group picture

Friday, June 2, 2017

Summary of IDUG NA 2017 Anaheim





IDUG hosted its annual 2017 NA conference in Anaheim (California) a few weeks ago.

The event was a real success, with 500+ attendees.


Terry Johnson kicks off the event with Mickey Mouse gloves

Top 5 sessions (from my point of view):

  • B02: DB2 Monitoring From Zero to Hero (Mariusz Koczar)
  • V02: Validating DB2 Recovery Time Objective (RTO) on a Live Production System (Bjarne Nelson)
  • A11: Tear Down the Wall – Breaking the Partition Limits in DB2 12 (Emil Kotrc)
  • G08: The DB2 12 Catalog – What Happened Since DB2 11 (Steen Rasmussen)
  • E16: Using REXX to Build Your Own DB2 Tools (David Simpson)

Emil Kotrc presenting Partition Limits in DB2 12
Troy Coleman speaks about DB2 Security

Steve Thomas enjoys explaining DB2 locks

The CA Technologies team at IDUG

The next IDUG North America DB2 Conference will take place on April 29 - May 3, 2018 in Philadelphia, Pennsylvania.

Interested in attending? Act now! The call for presentations for the next IDUG NA is open! (If you are selected as a presenter, the conference fee is waived.)

It's not always only about DB2 ...



Thursday, June 1, 2017

Use Getpage Sampling to improve performance of CA Subsystem Analyzer

Reducing Collection Overhead

When it comes to DB2 performance products, customers often demand a reduction in the associated overhead. Available in CA DB2 Tools 19, CA Subsystem Analyzer now allows you to activate the Getpage Sampling feature.

Sampling getpage requests reduces collection overhead significantly. Instead of capturing all getpages for databases, tablespaces, tables, indexes, datasets, and dataset extents, the percentage that you select is sampled. With sampling enabled, getpage count values are approximated. The sampling process is based on proven sample size and correction for finite population calculations using a confidence level of 95 percent and a confidence interval of 1 percent. Therefore, the approximated values are within 1 percent of the actual values 95 percent of the time when sufficient getpage activity occurs during an interval.
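For the curious, the sample-size mathematics behind such an approach can be sketched as follows. This is a generic illustration of the classic sample-size formula with finite-population correction, not CA's actual implementation; z ≈ 1.96 corresponds to a 95 percent confidence level, and the interval is expressed as a fraction (0.01 for 1 percent):

```python
import math

def required_sample_size(population, z=1.96, interval=0.01, p=0.5):
    """Sample size for estimating a proportion, with finite-population correction.

    z        -- z-score for the confidence level (1.96 for 95 percent)
    interval -- desired confidence interval as a fraction (0.01 for 1 percent)
    p        -- assumed proportion (0.5 is the worst case)
    """
    n0 = (z ** 2) * p * (1 - p) / (interval ** 2)   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)            # finite-population correction
    return math.ceil(n)
```

With these defaults, roughly 9,600 observations suffice no matter how large the getpage population grows, which is why sampling pays off most on busy objects.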

Watch the video

This feature is explained in a video, available on YouTube:



Recommended Sampling Rate

A 3 percent sampling rate results in the highest reduction of collection overhead. However, accuracy must be considered when choosing a sampling rate. The table below shows the recommended sampling rate for the number of actual getpage requests per object that you expect per interval. Using the recommended sampling rate ensures accurate getpage counts. For example, if you expect 30,000 or more getpages for each object, specify 25: one out of every four getpages is sampled.

Recommended Sampling Rate    Getpages Per Object Expected
3%                           300,000 or more
6%                           150,000 or more
12%                          70,000 or more
25%                          30,000 or more
50%                          10,000 or more
100% (*)                     0 (no minimum)
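The table above can also be expressed as a small lookup helper (a sketch; the thresholds and rates are simply the rows of the table, and the function name is my own):

```python
# (minimum expected getpages per object, recommended sampling rate in percent),
# taken from the rows of the table above, highest threshold first.
RECOMMENDED_RATES = [
    (300_000, 3),
    (150_000, 6),
    (70_000, 12),
    (30_000, 25),
    (10_000, 50),
    (0, 100),       # (*) 100 percent means every getpage is captured
]

def recommended_rate(expected_getpages):
    """Return the recommended sampling rate (%) for an expected getpage volume."""
    for threshold, rate in RECOMMENDED_RATES:
        if expected_getpages >= threshold:
            return rate
    return 100
```

For example, recommended_rate(30000) returns 25, matching the "one out of every four getpages" example above.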


References

User documentation: https://docops.ca.com/

Friday, May 26, 2017

Why start your IT career on Mainframe technology?

If you just finished university with a degree in Computer Science, you are probably looking for a job… and if you are reading this article, you are probably still searching. Although you might not even know what a Mainframe is, you might want to consider a career on the “big iron”; here is why.

The mainframe is a 40+ year old platform, and most software is written in low-level languages such as Assembler or COBOL. Granted. As unattractive as it may appear, it is a real opportunity: while most of the world's data and processing resides on Mainframes, Mainframe professionals (so-called “Mainframers”) are close to retirement. The equation is simple: IT talents with such knowledge will be a rarity in the very near future, and the biggest Fortune companies will crave them.


But … do not think the Mainframe is solely legacy. In fact, lots of new projects exist on the Mainframe, most of which use Java, C, or C++. The current trend of “virtualization” is a notion that has existed for decades in the Mainframe world. If you think about it, Mainframe systems are nothing other than a private cloud: Mainframe means an enormous amount of data, incredible processing capabilities, and very high security (whoever heard of a virus on a Mainframe?). Mainframe also rhymes with green computing: because one Mainframe can support a workload equivalent to thousands of distributed servers, it uses much less energy than other platforms.

In a few words, it is cool to work on Mainframe!

If you are looking for a Mainframe job in central Europe, check out this link:
www.proudly.cz/catechnologies