Asked to Not Use Transactions and to Use A Workaround to Simulate One


I've been developing T-SQL for several years and am always digging in further, continuing to learn all I can about the language. I recently started working at a new company and have received what I think is an odd suggestion regarding transactions: don't ever use them; instead, use a workaround that simulates one. The suggestion comes from our DBA, who works in one database with a lot of transactions and, consequently, a lot of blocking. The database I primarily work in does not suffer from this issue, and I can see that transactions have been used in it before.



I understand that blocking is expected with transactions, since it's in their nature, and if you can get away without using one, by all means do so. But I have many occasions where every statement MUST execute successfully: if one fails, they all must fail to commit.



I've always kept the scope of my transactions as narrow as possible, always used them in conjunction with SET XACT_ABORT ON, and always wrapped them in a TRY/CATCH.



Example:



CREATE SCHEMA someschema;
GO


CREATE TABLE someschema.tableA
(id INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
ColA VARCHAR(10) NOT NULL
);
GO

CREATE TABLE someschema.tableB
(id INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
ColB VARCHAR(10) NOT NULL
);
GO


CREATE PROCEDURE someschema.ProcedureName
    @ColA VARCHAR(10),
    @ColB VARCHAR(10)
AS
SET NOCOUNT, XACT_ABORT ON;
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO someschema.tableA(ColA)
        VALUES(@ColA);

        INSERT INTO someschema.tableB(ColB)
        VALUES(@ColB);

        --Implement error
        SELECT 1/0;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
        BEGIN
            ROLLBACK TRANSACTION;
        END;
        THROW;
        RETURN;
    END CATCH;
END;
GO



Here is what they suggested that I do.



GO



CREATE PROCEDURE someschema.ProcedureNameNoTransaction
    @ColA VARCHAR(10),
    @ColB VARCHAR(10)
AS
SET NOCOUNT ON;
BEGIN
    BEGIN TRY
        DECLARE @tableAid INT;
        DECLARE @tableBid INT;

        INSERT INTO someschema.tableA(ColA)
        VALUES(@ColA);
        SET @tableAid = SCOPE_IDENTITY();

        INSERT INTO someschema.tableB(ColB)
        VALUES(@ColB);
        SET @tableBid = SCOPE_IDENTITY();

        --Implement error
        SELECT 1/0;
    END TRY
    BEGIN CATCH
        DELETE FROM someschema.tableA
        WHERE id = @tableAid;

        DELETE FROM someschema.tableB
        WHERE id = @tableBid;

        THROW;
        RETURN;
    END CATCH;
END;
GO


My question to the community is this: does it make sense as a viable workaround for transactions?



From what I know about transactions and what the solution proposes, my opinion is no, this isn't viable: it introduces many points of failure.



In the suggested workaround I see four implicit transactions: the two inserts in the TRY, then two more for the deletes in the CATCH. It does “undo” the inserts, but without a ROLLBACK, so nothing is actually rolled back.
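To make that concrete, here is a sketch (my own annotations, using the tables from the example above) of how the engine treats the workaround under autocommit; each statement is its own transaction, so nothing ties the four together:

```sql
-- Under autocommit, each statement below is a separate transaction
-- that commits (or rolls back) entirely on its own.
INSERT INTO someschema.tableA(ColA) VALUES(@ColA);   -- transaction 1: committed
INSERT INTO someschema.tableB(ColB) VALUES(@ColB);   -- transaction 2: committed
-- ...error raised, control passes to CATCH...
DELETE FROM someschema.tableA WHERE id = @tableAid;  -- transaction 3: committed
DELETE FROM someschema.tableB WHERE id = @tableBid;  -- transaction 4: committed
-- If the session dies between transactions 2 and 3, the inserted rows
-- stay committed and the "rollback" never happens.
```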



This is a very basic example to demonstrate the concept being suggested. Some of the actual stored procedures where I've been doing this become exhaustively long and difficult to manage, because “rolling back” multiple result sets, versus the two parameter values in this example, gets quite complicated, as you can imagine. And since the “rolling back” is now done by hand, the opportunity to miss something becomes real.



Another issue I see concerns timeouts and severed connections: do those still get rolled back? This is my understanding of why SET XACT_ABORT ON should be used, so that in those cases the transaction rolls back.
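For reference, this is the pattern I rely on. As I understand it (and assuming the client cancels the batch on timeout), XACT_ABORT ON dooms the open transaction so the server rolls it back even though no CATCH code gets to run:

```sql
SET XACT_ABORT ON;
BEGIN TRANSACTION;

INSERT INTO someschema.tableA(ColA) VALUES('x');
-- If the client times out and cancels the batch here, no CATCH block runs,
-- but with XACT_ABORT ON the open transaction is rolled back instead of
-- being left open holding locks.
INSERT INTO someschema.tableB(ColB) VALUES('y');

COMMIT TRANSACTION;
```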



Thanks for your feedback in advance!










  • The workaround that was suggested to you makes it possible to violate (at least) the "A" of ACID. For example, if the SP is being executed by a remote client and the connection breaks, a partial "commit"/"rollback" may happen, because the server can terminate the session between two insertions/deletions (and abort SP execution before it reaches its end).

    – i-one
    6 hours ago








  • No, the CATCH block never gets executed in the case of a query timeout, because the client API cancelled the batch. Without a transaction, SET XACT_ABORT ON cannot roll back anything other than the current statement.

    – Dan Guzman
    3 hours ago




















sql-server t-sql transaction

asked 9 hours ago by Forrest
2 Answers
You cannot not use transactions in SQL Server (nor, probably, in any other proper RDBMS). In the absence of explicit transaction boundaries (BEGIN TRANSACTION ... COMMIT), each SQL statement starts a new transaction, which is implicitly committed (or rolled back) as soon as the statement completes (or fails).
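A minimal sketch of that difference, assuming a throwaway table dbo.t(v INT):

```sql
-- Autocommit: two independent transactions. The first insert stays
-- committed even though the second one fails.
INSERT INTO dbo.t(v) VALUES (1);
INSERT INTO dbo.t(v) VALUES (1/0);   -- divide-by-zero error; row 1 survives

-- Explicit boundaries: one transaction. The same error leaves nothing behind.
SET XACT_ABORT ON;
BEGIN TRANSACTION;
INSERT INTO dbo.t(v) VALUES (1);
INSERT INTO dbo.t(v) VALUES (1/0);   -- error dooms the whole transaction
COMMIT TRANSACTION;                  -- never reached; everything rolls back
```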



The transaction simulation suggested by the person presenting themselves as your "DBA" fails to ensure three out of the four requisite properties of transaction processing, because it addresses only "soft" errors and is not capable of dealing with "hard" errors such as network disconnects, power outages, disk failures, and so on.




  • Atomicity: fail. If a "hard" error occurs somewhere in the middle of your pseudo-transaction, the change will be non-atomic.


  • Consistency: fail. It follows from the above that your data will be in an inconsistent state following a "hard" error.


  • Isolation: fail. It is possible that a concurrent pseudo-transaction changes some of the data modified by your pseudo-transaction before yours completes.


  • Durability: success. Changes you make will be durable, the database server will ensure that; this is the only thing your colleague's approach cannot screw up.



Locks are a widely used and empirically successful method of ensuring the ACIDity of transactions in all sorts of RDBMSes (this site being an example). I find it very unlikely that a random DBA can find a better solution to the concurrency problem than the hundreds, possibly thousands, of computer scientists and engineers who have been building database systems over the last, what, 50? 60 years? (I realise this is somewhat fallacious as an "appeal to authority" argument, but I'll stick to it regardless.)



In conclusion, ignore your "DBA"'s advice if you can, fight it if you have the spirit, and come back here with specific concurrency problems if they arise.






    If transactions are blocking each other, the above isn't a good fix for the problem.



    Here's a way to help alleviate the blocking: https://www.sqlservercentral.com/articles/using-indexes-to-reduce-blocking-in-concurrent-transactions



    There could be inefficient code within the transactions. Are there parts that could be moved outside of the transaction? SELECT statements shouldn't be inside transactions; only the actual inserts/updates/deletes should be in there.
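As a sketch of that restructuring (the tables, columns, and variables here are invented for illustration): gather what you need with reads first, then keep only the writes inside the transaction so locks are held as briefly as possible:

```sql
-- Read outside the transaction: no transaction-scoped locks held yet.
DECLARE @price money;

SELECT @price = ListPrice
FROM dbo.Products
WHERE ProductId = @ProductId;

-- Keep the transaction down to just the writes.
SET XACT_ABORT ON;
BEGIN TRANSACTION;

INSERT INTO dbo.OrderLines(OrderId, ProductId, Price)
VALUES (@OrderId, @ProductId, @price);

UPDATE dbo.Orders
SET Total = Total + @price
WHERE OrderId = @OrderId;

COMMIT TRANSACTION;
```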






    share|improve this answer




























      Your Answer








      StackExchange.ready(function() {
      var channelOptions = {
      tags: "".split(" "),
      id: "182"
      };
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function() {
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled) {
      StackExchange.using("snippets", function() {
      createEditor();
      });
      }
      else {
      createEditor();
      }
      });

      function createEditor() {
      StackExchange.prepareEditor({
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: false,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      imageUploader: {
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/4.0/"u003ecc by-sa 4.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      },
      onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      });


      }
      });







      Forrest is a new contributor. Be nice, and check out our Code of Conduct.










      draft saved

      draft discarded
















      StackExchange.ready(
      function () {
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdba.stackexchange.com%2fquestions%2f248677%2fasked-to-not-use-transactions-and-to-use-a-workaround-to-simulate-one%23new-answer', 'question_page');
      }
      );

      Post as a guest















      Required, but never shown

























      2 Answers
      2






      active

      oldest

      votes








      2 Answers
      2






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      6
















      You can not not use transactions in SQL Server (and probably any other proper RDBMS). In the absence of explicit transaction boundaries (begin transaction ... commit) each SQL statement starts a new transaction, which is implicitly committed (or rolled back) after the statement completes (or fails).



      Transaction simulation suggested by the person who presents him- or herself as your "DBA" fails to ensure three out of four requisite properties of transaction processing, because it addresses only "soft" errors and is not capable of dealing with "hard" errors, such as network disconnects, power outages, disk failures, and so on.




      • Atomicity: fail. If a "hard" error occurs somewhere in the middle of your pseudo-transaction, the change will be non-atomic.


      • Consistency: fail. It follows from the above that your data will be in an inconsistent state following a "hard" error.


      • Isolation: fail. It is possible that a concurrent pseudo-transaction changes some of the data modified by your pseudo-transaction before yours completes.


      • Durability: success. Changes you make will be durable, the database server will ensure that; this is the only thing your colleague's approach cannot screw up.



      Locks are a widely used and empirically successful method to ensure ACIDity of transactions in all sorts or RDBMSes (this site being an example). I find it very unlikely that a random DBA can find a better solution to the concurrency problem than hundreds, possibly thousands of computer scientists and engineers who have been building some interesting database systems over the last, what, 50? 60 years? (I realise this is somewhat fallacious as an "appeal to authority" argument, but I'll stick to it regardless.)



      In conclusion, ignore your "DBA"'s advice if you can, fight it if you have the spirit, and come back here with specific concurrency problems if they arise.






      share|improve this answer






























        6
















        You can not not use transactions in SQL Server (and probably any other proper RDBMS). In the absence of explicit transaction boundaries (begin transaction ... commit) each SQL statement starts a new transaction, which is implicitly committed (or rolled back) after the statement completes (or fails).



        Transaction simulation suggested by the person who presents him- or herself as your "DBA" fails to ensure three out of four requisite properties of transaction processing, because it addresses only "soft" errors and is not capable of dealing with "hard" errors, such as network disconnects, power outages, disk failures, and so on.




        • Atomicity: fail. If a "hard" error occurs somewhere in the middle of your pseudo-transaction, the change will be non-atomic.


        • Consistency: fail. It follows from the above that your data will be in an inconsistent state following a "hard" error.


        • Isolation: fail. It is possible that a concurrent pseudo-transaction changes some of the data modified by your pseudo-transaction before yours completes.


        • Durability: success. Changes you make will be durable, the database server will ensure that; this is the only thing your colleague's approach cannot screw up.



        Locks are a widely used and empirically successful method to ensure ACIDity of transactions in all sorts or RDBMSes (this site being an example). I find it very unlikely that a random DBA can find a better solution to the concurrency problem than hundreds, possibly thousands of computer scientists and engineers who have been building some interesting database systems over the last, what, 50? 60 years? (I realise this is somewhat fallacious as an "appeal to authority" argument, but I'll stick to it regardless.)



        In conclusion, ignore your "DBA"'s advice if you can, fight it if you have the spirit, and come back here with specific concurrency problems if they arise.






        share|improve this answer




























          6














          6










          6









          answered 4 hours ago









          mustaccio

          11.9k • 9 gold badges • 29 silver badges • 45 bronze badges




              If transactions are blocking each other, the workaround above isn't a good fix for the problem.



              Here's a way to help alleviate the blocking: https://www.sqlservercentral.com/articles/using-indexes-to-reduce-blocking-in-concurrent-transactions



              There could be inefficient code within the transactions. Are there parts that could be moved outside of the transaction? Select statements generally shouldn't be inside transactions; only the actual inserts/updates/deletes should be in there.
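A minimal sketch of the "keep only the writes in the transaction" pattern this answer recommends, again using Python's stdlib sqlite3 as a stand-in (table names are illustrative, not from the question). The read-only work runs first, outside any explicit transaction, so write locks are held only for the brief write at the end:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.execute("CREATE TABLE audit (order_id INTEGER, note TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 10)")
conn.commit()

# Read-only work done up front, outside the transaction.
row = conn.execute("SELECT qty FROM orders WHERE id = 1").fetchone()
new_qty = row[0] + 5  # whatever computation the SELECT feeds

# Transaction scoped down to the actual writes.
with conn:
    conn.execute("UPDATE orders SET qty = ? WHERE id = 1", (new_qty,))
    conn.execute("INSERT INTO audit VALUES (1, 'qty bumped')")

final_qty = conn.execute("SELECT qty FROM orders WHERE id = 1").fetchone()[0]
print(final_qty)  # 15
```

Note the trade-off: reading outside the transaction means another session can change the row between the SELECT and the UPDATE, so in a concurrent system the write needs a guard (e.g. a WHERE clause re-checking the value read, or an optimistic version column) to avoid lost updates.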






                  answered 7 hours ago









                  user238855

                  49 • 1 silver badge • 4 bronze badges

























