How to disable autocommit option in SQL Server Management Studio

Have you ever wondered how to disable the autocommit option in SQL Server Management Studio?

The answer is quick: simply check the option SET IMPLICIT_TRANSACTIONS. To do it, follow this path in the menu:

Tools > Options > Query Execution > SQL Server > ANSI, then check the option SET IMPLICIT_TRANSACTIONS.
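
Functionally, checking that option is equivalent to starting each session with SET IMPLICIT_TRANSACTIONS ON. A minimal T-SQL sketch (the table dbo.Clientes is hypothetical):

SET IMPLICIT_TRANSACTIONS ON;  -- autocommit is now disabled for this session

UPDATE dbo.Clientes            -- the statement implicitly opens a transaction
SET Ciudad = 'Barcelona'
WHERE ClienteID = 1;

COMMIT;                        -- nothing is permanent until you COMMIT (or ROLLBACK)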


It seems rather simple, but it is a common question when you start using SQL Server Management Studio. By sharing this tip I hope to save time for anyone who wants to disable the autocommit option in SQL Server Management Studio.


Database Express editions. One way to start with the big ones.

In a previous blog entry (OpenSource Databases: why did we choose MySQL for our project?), we talked about open source databases as an interesting and reliable option for developing business intelligence projects. We looked at several different products and compared them.

But there are other alternatives (with limitations in most cases) that let us start working with the "big ones" for free. They are called Express versions. These versions are designed for small systems, development work, training, prototyping or evaluation, and they let us get started with the major relational database managers. In most cases the product is offered fully functional but with limits (maximum database size, amount of RAM or number of processors the machine can use, etc.). There may also be features that are not enabled in these versions (such as partitioning in the Oracle Express edition). Here are the links to the manufacturers' web sites where you can download these versions:

SQL08: Database synchronization of Microsoft Dynamics AX 2009 on SQL Server 2008

For those database administrators who have to deal with the likes of Dynamics AX 2009 and its entourage (developers, consultants, etc.), I leave here a couple of things you should know when combining AX 2009 and SQL Server 2008. Sometimes the database gets pointed to as the source of a problem, but that is not always the case. Among the requirements to consider for installing AX 2009: the user that will access the system must be a domain user and, in SQL Server, must be a member of the securityadmin and dbcreator server roles in order to create the new database from the AX installer. Once installed (or during the installation process), the database problems we may run into include:
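
As an aside on the role requirement mentioned above, a minimal sketch of granting those memberships in SQL Server 2008 (the domain account name is hypothetical):

-- Assumed example: CONTOSO\axinstaller stands in for the real installer account.
CREATE LOGIN [CONTOSO\axinstaller] FROM WINDOWS;
EXEC sp_addsrvrolemember @loginame = N'CONTOSO\axinstaller', @rolename = N'securityadmin';
EXEC sp_addsrvrolemember @loginame = N'CONTOSO\axinstaller', @rolename = N'dbcreator';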

The cost of having high availability systems with Oracle

[quote]What's the cost of downtime to your business? $100,000 per hour, $1,000,000 or more? The recent volcanic ash that has grounded European flights is estimated to be costing the airlines $200M a day. In the IT world, High Availability (HA) architectures allow for disaster recovery as well as uninterrupted business continuity during system failure...[/quote]

https://bigdatamatters.com/bigdatamatters/2010/04/high-availability-wit…

Defragment to optimize MySQL tables

 

In MySQL, when you delete records from a table, the space is not reclaimed automatically: it is left as empty space inside the table and will be reused by new inserts.

The problem is that if many DELETE operations are performed on a table, its physical space becomes increasingly fragmented and performance degrades.

For the MyISAM and InnoDB engines, the OPTIMIZE TABLE command is available to optimize any table; among other things, it automatically defragments the table.

It is highly recommended to run this command regularly, especially on the tables that receive the most record deletions.

As a precaution, keep in mind that, of course, the table is locked while the command runs. Remember this when you plan to use it on large, busy tables.

The syntax is super simple:

 

OPTIMIZE [NO_WRITE_TO_BINLOG | LOCAL] TABLE mi_tabla1 [, mi_tabla2] ...;

To optimize the fragmented tables, you can select the ones that have free space, probably as a result of DELETE statements:
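
A sketch of the kind of query that can identify those tables (assuming the information_schema.TABLES view and leaving out the system schemas):

-- Assumed example: list tables with unused space, candidates for OPTIMIZE TABLE.
SELECT table_schema, table_name, data_free
FROM information_schema.TABLES
WHERE data_free > 0
  AND table_schema NOT IN ('information_schema', 'mysql', 'performance_schema');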

Update SQL Server table statistics dynamically throughout a database

In Oracle databases there is a view that lists all the tables in the database ('dba_tables'), and we can use 'dba_tables' to create maintenance scripts dynamically. In SQL Server we can also create table maintenance scripts by querying the system table [database].dbo.sysobjects.

In the example below we use a T-SQL script to update the statistics of all tables in a SQL Server database by dynamically querying the data dictionary (using dbo.sysobjects). This T-SQL code can be encapsulated in a stored procedure or in a job executed by the SQL Server Agent to automatically keep statistics up to date on all tables of the dbo schema in a SQL Server database.
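
A minimal sketch of that approach (an assumed illustration, looping over dbo.sysobjects for the user tables of the dbo schema):

-- Assumed example: build and run UPDATE STATISTICS for every user table in dbo.
DECLARE @tableName sysname;
DECLARE @sql nvarchar(400);

DECLARE table_cursor CURSOR FOR
    SELECT name
    FROM dbo.sysobjects
    WHERE xtype = 'U'                  -- user tables only
      AND uid = USER_ID('dbo');        -- restrict to the dbo schema

OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @tableName;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'UPDATE STATISTICS dbo.' + QUOTENAME(@tableName) + N';';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM table_cursor INTO @tableName;
END
CLOSE table_cursor;
DEALLOCATE table_cursor;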

Oracle10g: Change the character set of the database

It may happen that, after installing or configuring a new Oracle database, we realize that the character set chosen during installation is not the right one. The first thing that comes to mind in cases like this is to delete the database and reconfigure it, or worse... But there is no need. We can change the character set by stopping the database, starting it up in restricted mode, changing the setting and restarting the database. Howto:
 

- First we connect to the database 

$ sqlplus sys/pwd@prod as sysdba
 

- We stop the database 

SQL>SHUTDOWN IMMEDIATE;

 

- We start the database in restricted mode 

SQL>STARTUP MOUNT;
SQL>ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL>ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL>ALTER DATABASE OPEN;

- Change the character set 

SQL>ALTER DATABASE CHARACTER SET <new character set>;

- Restart the database and that's it 
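
As a sketch of that last step (the query to check the result is an assumed extra, not part of the original list):

SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP;
SQL>SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';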

Oracle10g: Manual standby database (initial approach)

An Oracle standby database is an exact copy of an operational database kept on a remote server, used as a backup, a reference copy, for disaster recovery, etc.

A database in standby mode is more than a normal backup because, after a disaster, it can be put into production in less time than it would take to restore a copy (whether from a simple RMAN backup or from an export). Restoring a copy from a file takes time, and during that period the system is unavailable. With an additional database in standby mode there is nothing (or almost nothing) to restore in case of disaster: the switchover can be made within minutes, preserving continuity of service. It may not offer the performance advantages of a cluster or the safety of a mirror, but the cost/benefit ratio and the recovery time seem right to me.

From a global perspective:

Oracle 10g: Possible optimization for massive data loads

In batch processes that perform a massive data load into the same table with one INSERT or UPDATE per record inside a loop, the process can be optimized with the use of parameters (if the client supports it) or, if we use ODBC, with bind variables.
Recall the steps Oracle takes to process a query:
1) Syntactic validation 
2) Semantic validation
3) Optimization 
4) Generation of the QEP (Query Execution Plan)
5) Execution of the QEP (Query Execution Plan)
Statements can receive their parameters by value (where salary > 1000) or, once the statement has been compiled, through bind variables (where salary > :b1). The advantage of the second option is that Oracle compiles the statement only once and reuses the compiled code for each set of parameter values.
But we must be careful because, in the latter case, Oracle cannot calculate the selectivity of the query and instead applies a default selectivity (associated with each type of operation), which can lead to wrong decisions.
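
For illustration, a small PL/SQL sketch (an assumed example; the emp table and its columns are hypothetical) contrasting the two approaches:

-- Assumed example, not from the original entry.
BEGIN
  -- Parameter by value: a different literal on every iteration, so Oracle
  -- hard-parses a new statement each time.
  FOR i IN 1 .. 1000 LOOP
    EXECUTE IMMEDIATE
      'UPDATE emp SET salary = salary * 1.01 WHERE emp_id = ' || TO_CHAR(i);
  END LOOP;

  -- Bind variable: the statement is parsed once and the compiled code is
  -- reused for every value passed through :b1.
  FOR i IN 1 .. 1000 LOOP
    EXECUTE IMMEDIATE
      'UPDATE emp SET salary = salary * 1.01 WHERE emp_id = :b1' USING i;
  END LOOP;
END;
/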

Easily export data from Oracle to flat file

A simple way to export the data of a query, a table, etc. from an Oracle database to a flat file is to use the SPOOL command in SQL*Plus. This way we do not depend on graphical tools, which are not always available or do not always work the way we want. We can also use Oracle's formatting functions in the SELECT statement itself so that the data is generated already in the format we need.

If, for example, we want to retrieve some data from all the records in a customers table, sorted by sign-up date, we simply open a SQL*Plus session and run this series of commands: 

SQL> SET HEADING OFF
SQL> SET FEEDBACK OFF
SQL> SPOOL C:\datos_de_clientes.txt
SQL> SELECT 'Cliente ' || CLI_NOMBRE || ', ' || CLI_NIF || '. Fecha alta: ' || TO_CHAR(CLI_FECHAALTA,'YYYY-MM-DD')
FROM TABLA_CLIENTES
ORDER BY CLI_FECHAALTA DESC;
SQL> SPOOL OFF;
SQL> SET FEEDBACK ON
SQL> SET HEADING ON

The first lines hide the column headers and the row-count feedback, which do not concern us because we only want the data. SPOOL redirects the query output to the file 'datos_de_clientes.txt' on the C drive of the local machine.
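
Depending on the volume of data, a few additional SQL*Plus settings are usually helpful (an optional sketch, not part of the original series of commands): PAGESIZE 0 suppresses page breaks and repeated titles, LINESIZE avoids wrapping long concatenated lines, and TRIMSPOOL ON strips trailing blanks from each spooled line.

SQL> SET PAGESIZE 0
SQL> SET LINESIZE 500
SQL> SET TRIMSPOOL ON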