Monday, 31 July 2017

Introduction to Azure Cosmos DB

Azure Cosmos DB is one of the latest cloud-based service offerings from Microsoft, and a superset data service of DocumentDB. Earlier, Microsoft offered DocumentDB as Data as a Service (DaaS), which supported a limited set of features and functionality. Microsoft’s engineers shared the challenges they faced running the company’s cloud-based services, such as Bing, Azure and Office 365, on DocumentDB. Microsoft understood those challenges and saw the opportunity to take DocumentDB to the next level. As a result, Microsoft came up with Azure Cosmos DB - a globally distributed, multi-model database.

Saturday, 29 July 2017

Managing SQL Server Backup and Restore History Information

Every time you back up or restore a database, SQL Server keeps history information about the backup or restore operation. That information is captured and stored in the msdb database and, if left unchecked, can take up more and more space over time. To minimize msdb storage space, you need to consider removing the backup and restore history records when they are no longer useful.
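As a quick sketch of the cleanup, msdb ships with the sp_delete_backuphistory stored procedure, which removes all backup and restore history older than a given date. The 90-day retention window below is an illustrative assumption, not a recommendation:

```sql
-- Remove backup/restore history older than 90 days (retention window is illustrative)
USE msdb;
GO
DECLARE @OldestDate datetime = DATEADD(DAY, -90, GETDATE());
EXEC dbo.sp_delete_backuphistory @oldest_date = @OldestDate;
GO
-- To drop history for one database only, there is also:
-- EXEC msdb.dbo.sp_delete_database_backuphistory @database_name = N'YourDatabase';
```

Scheduling a call like this in a regular maintenance job keeps msdb growth in check.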

Friday, 28 July 2017

Backing Up SQL Server Databases Hosted on Azure VMs

In this article, we will provide a more comprehensive overview of the different backup options applicable to Azure IaaS SQL Server workloads.

In general, you have the following options for backing up databases residing on SQL Server instances that are hosted by Azure virtual machines:

Database Performance Advice for DBAs

As DBAs, we can get mired in the depths of performance tuning parameters and scripts, sometimes getting lost in all the details. It is a good idea to always have a set of goals and philosophies that you can lean on to keep you focused and working on the appropriate things. That is what I want to talk about in this month’s DBA Corner column. Here are six high-level rules of thumb for achieving your DBMS-related performance goals and maintaining your sanity.

Thursday, 27 July 2017

Planning Database Recovery Options

When a failure occurs, the DBA must ascertain whether recovery is required. It is possible, though not very likely for active databases, that a failure does not impact the integrity of your data. Assuming that recovery is required, the DBA will need to determine what resources (backup copies and log files) are available and how best to perform the needed database recovery. Several questions must be answered to figure out the type and extent of failure. The answers will dictate the steps you take to recover the system.

Tuesday, 25 July 2017

Enhance Your MySQL XML Import Procedures using Prepared Statements

In my Importing XML Data into MySQL Tables Using a Stored Procedure article, I outlined some ways to work around MySQL's restrictions on stored procedures to import XML data into your MySQL database tables. That article covered the basics of formatting your XML files as well as writing your proc to process the data. In today's article, we'll be enhancing our proc to use a Prepared Statement, include error handling and validation, as well as handling additional XML formats.
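As a small taste of the technique, MySQL's PREPARE / EXECUTE / DEALLOCATE statements let a stored procedure build its INSERT dynamically from a table name passed in at call time. The procedure, table, and column names below are illustrative placeholders, not the ones from the article:

```sql
-- Minimal sketch of a prepared statement inside a stored procedure.
-- Names (import_row, val) are illustrative placeholders.
DELIMITER //
CREATE PROCEDURE import_row(IN tbl VARCHAR(64), IN col_value VARCHAR(255))
BEGIN
  -- Build the statement text dynamically, then prepare and run it
  SET @sql = CONCAT('INSERT INTO ', tbl, ' (val) VALUES (?)');
  PREPARE stmt FROM @sql;
  SET @v = col_value;
  EXECUTE stmt USING @v;
  DEALLOCATE PREPARE stmt;
END //
DELIMITER ;
```

Because the table name is concatenated into the statement text rather than bound as a parameter, validating it before use is part of the error handling the article discusses.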

Monday, 24 July 2017

Importing XML Data into MySQL Tables Using a Stored Procedure

In theory, it would seem that a stored procedure would be the perfect vehicle for importing XML data from a file. In practice, there are several complicating factors to doing so.  For starters, the LOAD XML INFILE and LOAD DATA INFILE statements have not been permitted to run within a stored procedure since MySQL v5.0.7.  On top of that, there are serious limitations on how dynamic you can make your procs; too many to make them as flexible as you'd need them to be in order to handle a variety of input files.  Finally, there is currently no means of mapping XML data to table structures, so you really have to know what's coming in.  But it's not all bad news. If you're willing to target an XML file with a rigid and known structure per proc, you can still get a lot of use out of one call.
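For reference, the LOAD XML statement itself works fine as a standalone statement, just not inside a proc. A minimal sketch, assuming a file whose <row> elements carry attributes or child elements matching the table's columns (file path and table name are illustrative):

```sql
-- Runs as a standalone statement, but not within a stored procedure.
-- File path and table name are illustrative.
LOAD XML LOCAL INFILE '/tmp/people.xml'
INTO TABLE people
ROWS IDENTIFIED BY '<row>';
```

It is this restriction that pushes the article toward reading the file another way and processing the rows inside the proc.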

Saturday, 22 July 2017

Automating Data Warehouse Support

Big data is here, and fits nicely into an organization's data warehouse infrastructure. Most large IT enterprises have installed one or more big data applications. These applications provide fast access to large stores of data, usually customer or sales data. The technical roles that support these applications, and the systems that analyze and consume the data, didn’t exist ten years ago. In addition, new technologies and new business models using advanced analytics require installation, configuration, and performance tuning.

Speeding up MySQL Data Imports

A Two-pronged Approach

There are two parts to importing data:
  1. Reading the data
  2. Inserting it into tables
We’ll address each of these separately as they are quite distinct from one another.
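A common sketch of the second part, insertion: use a bulk-load statement rather than row-by-row INSERTs, and wrap the load in a single transaction so the server commits once instead of once per row. File path, table, and CSV layout below are illustrative assumptions:

```sql
-- One transaction around a bulk load commits once instead of per row.
-- File path, table name, and CSV layout are illustrative.
SET autocommit = 0;
LOAD DATA LOCAL INFILE '/tmp/orders.csv'
INTO TABLE orders
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;          -- skip the CSV header row
COMMIT;
SET autocommit = 1;
```

The same single-commit idea applies if you generate multi-row INSERT statements instead of using LOAD DATA.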

Friday, 21 July 2017

Database as a Service: Performance Tuning

Many businesses want to get their latest applications in front of customers as quickly as possible. In order to shorten total development time, they have turned to providers of database as a service (DBaaS) to manage database creation and maintenance. Delegating these resource-intensive tasks to outside experts relieves the business from hardware acquisition and installation, software selection and licensing costs, and staffing for all these functions; however, after the application is up and running what happens as the database gets larger, the number of customers grows, and the transaction load on the system begins to strain resources? Who is responsible for performance tuning, and what pieces should be managed first?

Thursday, 20 July 2017

The Rise of the Renaissance Database Administrator


What do these market-defining trends have in common?

· Analytics for all
· Analytics as competitive differentiator
· Internet of Things
· Artificial intelligence/Machine learning/Cognitive computing
· Real-time analytics/event management

Monday, 17 July 2017

Querying Multiple MySQL Tables

It’s been said that one of the drawbacks of normalization to third normal form (3NF) is more cumbersome data extraction due to the greater number of tables.  These tables require careful linking via JOIN clauses.  Improper table joining can easily result in erroneous results or even in the dreaded Cartesian Product. In today’s article, we’ll explore how table joins are achieved in MySQL.
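As a preview, an INNER JOIN links rows through a shared key, and it is precisely the missing join condition that produces the Cartesian product mentioned above. Table and column names here are illustrative:

```sql
-- The explicit ON condition is what prevents a Cartesian product.
-- Table and column names are illustrative.
SELECT c.name, o.order_date, o.total
FROM customers AS c
INNER JOIN orders AS o
        ON o.customer_id = c.customer_id
WHERE o.order_date >= '2017-01-01';
```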

Saturday, 15 July 2017

Big Data Quality Assurance

With promises of incredibly fast queries, many IT shops implemented one or more big data applications in combination with high-performance hardware and software suites. Results have been generally good: many installations report incredible decreases in query elapsed times, sometimes by factors of 100 or more. Such improvements translated to faster results from analytical queries as well as a better understanding of corporate data.

The Big Data DBA

Many IT enterprises are starting pilot projects to implement big data solutions. As a DBA, are you ready to support these efforts, and integrate them into your current architecture, processes, and standards?


Thursday, 13 July 2017

Top 6 MySQL DBA Mistakes

To err is human, or so the saying goes.  Unfortunately, in the world of IT, innocuous mistakes made early on can lead to really daunting problems down the road. While you can never eliminate human error or bad decisions, there are steps that you can take to minimize the likelihood of finding yourself in a pickle due to a hasty decision arrived at on the spur of the moment or a mistake made early on in design.  In today’s article, we’ll go over a few of the most common DBA mistakes to avoid.  Some of these relate specifically to MySQL, while others may be applicable to any RDBMS.

Transactions in MySQL

What is a transaction?

If you are asking this question, you are probably used to website databases, where most often it does not matter in which order you run transactions, and if one query fails, it has no impact on others. If you are updating some sort of content, often you will not care when the update is performed, as long as the reads are being taken care of quickly. Similarly, if an update fails, the reads can still carry on reading the old data in the meantime. However, there are times when it is vitally important in which order queries run, and that all queries in a group run, or none at all. The classic example is from the banking environment. An amount of money is taken from one person's account, and put into another, for example as follows, a 500-unit transaction:
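That 500-unit transfer can be sketched as follows (the accounts table and ids are illustrative); either both UPDATEs take effect together, or a ROLLBACK undoes both:

```sql
-- Both updates succeed together or are undone together.
-- Table and ids are illustrative.
START TRANSACTION;
UPDATE accounts SET balance = balance - 500 WHERE account_id = 1;
UPDATE accounts SET balance = balance + 500 WHERE account_id = 2;
COMMIT;   -- or ROLLBACK; to undo both updates
```

Note that this requires a transactional storage engine such as InnoDB; MyISAM tables ignore transaction boundaries.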

Tuesday, 11 July 2017

Basic DB2 System Tuning Strategies

While the performance of a mission-critical application sometimes takes center stage, IT staff are always aware that the performance of the overall system affects all applications.  Database management systems, operating system software, physical data storage and retrieval, and backup and recovery operations all take part in providing a solid infrastructure for applications. For many customer-facing applications such as on-line transaction processing, the database management system is the key component.

Monday, 10 July 2017

DB2 and BigData 2.0

Big data applications start big and keep growing. As the masses of big data being analyzed grow both in size and complexity (think XML data, binary large objects or BLOBs, and URLs of internet page visits), the hardware and software communities have responded with huge storage offerings and massively parallel storage and retrieval mechanisms. The logical next step is for the IT enterprise to take advantage of these technological innovations for things other than classical big data processing.

Sunday, 9 July 2017

Database as a Service: Outlook Cloudy

For most companies, IT-related hardware and software maintenance is costly, time-consuming and requires hiring and retaining a support staff of operating system and database management system specialists. Delegating these responsibilities to an outside firm allows a new application to be developed and implemented more quickly. Enterprises also consider this possibility as a way to purchase only the resources that they need at initial implementation. As the application data and number of users grows, additional IT resources can be purchased as needed.

Saturday, 8 July 2017

Database as a Service: Your First Application

For the entrepreneur who wants to implement a significant digitally-based service or sell products to customers on-line, the need to get the application written and implemented quickly is a strong one. The thought of creating and maintaining an information technology infrastructure is daunting. Purchasing hardware, finding a space with appropriate power requirements, installing software, managing leases and licensing agreements, backup and recovery ... the list is long, and getting it all done will be time-consuming.

Thursday, 6 July 2017

How to Find Out Which Database Object Got Deleted

In my last tip, I showed you how to use a transaction log backup file to identify who deleted an object from a database.   In this tip, I will show you how to find out the actual table name for the object deleted.  

I ran the following code to identify who deleted tables from my database in my last tip:

SELECT [Current LSN], [Operation], [Transaction Name], [Transaction ID], SUSER_SNAME ([Transaction SID]) AS DBUser

How to Read a Transaction Log Backup

The transaction log backup is stored in SQL Server proprietary format, just like the transaction log itself. Even though the transaction log backup is in a special format, Microsoft has provided us with the fn_dump_dblog() function to read transaction log backups.   This function is undocumented, so you should use care when using this function in a production environment.
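A hedged sketch of calling it: fn_dump_dblog takes the backup device type, file number, and path, followed by a long tail of arguments that must all be passed as DEFAULT (63 of them in current SQL Server builds). The backup path below is illustrative:

```sql
-- Read a transaction log backup file (path is illustrative).
-- Undocumented function: the 63 trailing parameters must all be DEFAULT.
SELECT [Current LSN], [Operation], [Transaction Name]
FROM fn_dump_dblog(
    NULL, NULL, N'DISK', 1, N'C:\Backup\MyDb_log.trn',
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT);
```

Because the function is undocumented, this parameter list could change between builds, which is one more reason to keep it out of production code.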

Tuesday, 4 July 2017

An Introduction to JSON Support in the Oracle Database

This article provides an overview of the JSON functionality available when using an Oracle database, along with links to relevant articles.

  • What's the point?

In today's development world relational databases are considered a legacy technology by many. PL/SQL is also considered a legacy language. With that in mind, how do we as Oracle DBAs and PL/SQL developers stay relevant? One way is to make sure everything we do is easily accessible.
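A small taste of that accessibility (table and JSON path are illustrative): an IS JSON check constraint validates documents on insert, and JSON_VALUE extracts scalar fields in plain SQL:

```sql
-- Store validated JSON and query it relationally (names are illustrative).
CREATE TABLE orders (
  id   NUMBER PRIMARY KEY,
  doc  CLOB CHECK (doc IS JSON)   -- reject anything that isn't well-formed JSON
);

INSERT INTO orders VALUES (1, '{"customer":"Acme","total":250}');

SELECT JSON_VALUE(doc, '$.customer') AS customer,
       JSON_VALUE(doc, '$.total')    AS total
FROM   orders;
```

This lets a JSON-first application and a PL/SQL-based back end work against the same data.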

Sunday, 2 July 2017

Getting Started with Azure SQL Data Warehouse - Part 6


Azure SQL Data Warehouse is Microsoft’s new enterprise-class, elastic, petabyte-scale data warehouse service that can scale according to organizational demands in just a few minutes.  In my last article of the series, I discussed the importance of statistics and how to create them for better performance, along with workload management and performance optimization guidelines in SQL Data Warehouse. This article will cover the different ways of loading data into SQL Data Warehouse and how they work and perform.
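One of those ways, sketched with illustrative names: PolyBase exposes files in blob storage as an external table, and CREATE TABLE AS SELECT (CTAS) loads them in parallel into a distributed table. The external table ext.SalesStage below is an assumption, not an object from the series:

```sql
-- Load from a PolyBase external table into a hash-distributed table.
-- ext.SalesStage is assumed to already exist over files in blob storage.
CREATE TABLE dbo.Sales
WITH (DISTRIBUTION = HASH(CustomerId))
AS
SELECT CustomerId, SaleDate, Amount
FROM ext.SalesStage;
```

Choosing the distribution column in the WITH clause is where the distribution-key discussion from Part 5 comes back into play.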

Saturday, 1 July 2017

Getting Started with Azure SQL Data Warehouse - Part 5

Azure SQL Data Warehouse is Microsoft’s new enterprise-class, elastic, petabyte-scale data warehouse service that can scale according to organizational demands in just a few minutes. In my last article of the series, I discussed round-robin and distributed tables, how partitioning works in SQL Data Warehouse, and the impact of choosing the right distribution key. I also talked about PolyBase and how you can leverage it to import data into or export data out of SQL Data Warehouse very easily and quickly. In this article, I am going to cover performance optimization in SQL Data Warehouse and the different aspects that impact performance.