Friday, June 9, 2017

csDUG 2017 - The call for presentations is now open!


Dear fans of quality databases!

One year has passed and here we are again, planning the next edition of your favorite conference, csDUG!

But what would this event be without presenters and without all the discussions on a given topic? Let’s again create a unique atmosphere, the same as in previous years, and let’s get carried away by the interesting topics that DB2 offers. This year, the conference will take place on November 23rd, 2017 (Thursday) in The Park, Prague 4 - Chodov.

We are opening a call for presentations for this event. Contact me to sign up; it is enough to state the title of your presentation and a short description of its main objectives. Presentations should not be longer than 45 minutes and are usually followed by a short discussion. The call for presentations will remain open until August 15th, 2017.

The whole csDUG team is looking forward to our mutual cooperation!

The csDUG team

csDUG 2016 (Prague) group picture

csDUG 2015 (Ostrava) group picture

Friday, June 2, 2017

Summary of IDUG NA 2017 Anaheim

IDUG hosted its annual 2017 NA conference in Anaheim, California, a few weeks ago.

The event was a real success, with 500+ attendees.


Terry Johnson kicks off the event with Mickey Mouse gloves

Top 5 sessions (from my point of view):

  • B02: DB2 Monitoring From Zero to Hero (Mariusz Koczar)
  • V02: Validating DB2 Recovery Time Objective (RTO) on a Live Production System (Bjarne Nelson)
  • A11: Tear Down the Wall – Breaking the Partition Limits in DB2 12 (Emil Kotrc)
  • G08: The DB2 12 Catalog – What Happened Since DB2 11 (Steen Rasmussen)
  • E16: Using REXX to Build Your Own DB2 Tools (David Simpson)

Emil Kotrc presenting Partition Limits in DB2 12
Troy Coleman speaks about DB2 Security

Steve Thomas enjoys explaining DB2 locks

The CA Technologies team at IDUG

The next IDUG North America DB2 Conference will take place on April 29 - May 3, 2018, in Philadelphia, Pennsylvania.

Interested in attending? Act now! The call for presentations for the next IDUG NA is open! (If you are selected as a presenter, the conference fee is waived.)

It's not always only about DB2 ...



Thursday, June 1, 2017

Use Getpage Sampling to improve performance of CA Subsystem Analyzer

Reducing Collection Overhead

When it comes to DB2 performance products, customers often demand a reduction in the associated overhead. Available in CA DB2 Tools 19, CA Subsystem Analyzer now allows you to activate the Getpage Sampling feature.

Sampling getpage requests reduces collection overhead significantly. Instead of capturing all getpages for databases, tablespaces, tables, indexes, datasets, and dataset extents, only the percentage that you select is sampled. With sampling enabled, getpage count values are approximated. The sampling process is based on proven sample-size and finite-population-correction calculations, using a confidence level of 95 percent and a confidence interval of 1 percent. Therefore, the approximated values are within 1 percent of the actual values 95 percent of the time, provided sufficient getpage activity occurs during an interval.
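As an aside, here is a minimal sketch in Python of the classic sample-size formula with finite population correction, at a 95 percent confidence level (z ≈ 1.96) and a 1 percent confidence interval. It only illustrates the statistics described above; it is not the actual CA Subsystem Analyzer implementation.

# Illustration only: classic sample-size formula with finite population
# correction, using a 95% confidence level (z ~= 1.96) and a 1% confidence
# interval, as described in the text. Not the product's actual code.
import math

def required_sample_size(population, z=1.96, margin=0.01, p=0.5):
    """Getpages to sample out of `population` getpages to stay within
    `margin` of the true proportion at the given confidence level."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)          # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # finite population correction

# Example: out of 30,000 getpages, roughly 7,300 (about 25%) need to be
# sampled, which lines up with the 25% recommendation in the table below.
print(required_sample_size(30_000))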

Watch the video

This feature is explained in a video, available on YouTube:



Recommended Sampling Rate

A 3 percent sampling rate results in the highest reduction of collection overhead. However, accuracy must also be considered when choosing a sampling rate. The table below shows the recommended sampling rate for the number of actual getpage requests per object that you expect to occur per interval. Using the recommended sampling rate ensures accurate getpage counts. For example, if you expect 30,000 or more getpages for each object, specify 25: one out of every four getpages is sampled (see the sketch after the table).

Recommended Sampling Rate    Getpages Per Object Expected
3%                           300,000 or more
6%                           150,000 or more
12%                          70,000 or more
25%                          30,000 or more
50%                          10,000 or more
100% (*)                     0 (no minimum)
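As a rough illustration of how the table can be applied, the hypothetical helpers below (not part of the product) look up the recommended rate for an expected workload and extrapolate an approximate getpage count from a sampled count:

# Hypothetical helpers illustrating the table above -- not a product API.
# Pairs of (expected getpages per object per interval, recommended rate in %).
RECOMMENDED_RATES = [
    (300_000, 3),
    (150_000, 6),
    (70_000, 12),
    (30_000, 25),
    (10_000, 50),
    (0, 100),  # below 10,000 getpages there is no minimum: sample everything
]

def recommended_rate(expected_getpages):
    """Return the recommended sampling rate (%) for the expected workload."""
    for threshold, rate in RECOMMENDED_RATES:
        if expected_getpages >= threshold:
            return rate
    return 100  # defensive fallback

def approximate_total(sampled_count, rate_percent):
    """Extrapolate an approximate getpage count from a sampled count."""
    return round(sampled_count * 100 / rate_percent)

print(recommended_rate(30_000))      # 25 -> one out of every four getpages is sampled
print(approximate_total(7_600, 25))  # 30400 approximated getpages

The extrapolation simply scales the sampled count by the inverse of the sampling rate, which is why accurate approximations depend on sufficient getpage activity during an interval.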


References

User documentation: https://docops.ca.com/

Friday, May 26, 2017

Why start your IT career on Mainframe technology?

If you just finished university with a degree in Computer Science, you are probably looking for a job… and if you are reading this article, you are probably still searching. Although you might not even know what a Mainframe is, you might want to consider a career on the “big iron”; and here is why.

The mainframe is a 40+ year old platform, and most of its software is written in low-level languages such as Assembler or COBOL. Granted. As unattractive as that may appear, it is a real opportunity: while most of the world’s data and processing resides on Mainframes, Mainframe professionals (so-called “Mainframers”) are close to retirement. The equation is simple: IT talent with such knowledge will be a rarity in the very near future, and the biggest Fortune companies will crave it.


But… do not think the Mainframe is solely legacy. In fact, lots of new projects exist on the Mainframe, most of which use Java, C, or C++. The new trend of “virtualization” is a notion that has existed for decades in the Mainframe world. If you think about it, Mainframe systems are nothing other than a private cloud: Mainframe means an enormous amount of data, incredible processing capabilities, and very high security (who ever heard of a virus on a Mainframe?). Mainframe also rhymes with green computing: it uses much less energy than other platforms, because one Mainframe can support a workload equivalent to thousands of distributed servers.

In a few words, it is cool to work on Mainframe!

If you are looking for a Mainframe job in central Europe, check out this link:
www.proudly.cz/catechnologies