Hello Dear Reader! I recently wrote a blog for the PFE blog, "How Many Tables Can I Have In SQL Azure DB & SQL Server". It was a fun post that I wrote during a very long layover when traveling home. It covers the maximum number of objects that you can create in SQL Server.
In that blog I wrote up a demo to create the maximum number of tables you can in a database. I also explained that, based on table creation rates, it would take around 64 days for me to reach that limit. As a point of interest I tried creating In-Memory OLTP/Hekaton tables vs. native T-SQL tables, and there I noticed something very interesting. Creating In-Memory OLTP tables was slower than creating T-SQL tables. A lot slower.
To be crystal clear, this is specifically about the creation of tables, nothing more. This has nothing to do with the insert, deletion, or reading of data. My blog was about how many OBJECTS a database could hold, and I had a tight loop creating tables over and over again. I explored different options to get as many tables created as quickly as possible.
"So Balls", you say, "You said this was slow. How slow?"
Great point Dear Reader, back to the question!
I was able to create 388 native T-SQL tables per second and only 3 In-Memory OLTP tables per second. On the book of faces someone asked about the results very quickly.
I reached out to my colleague Robert Dorr (Blog) from the SQL Team. He and Bob Ward, from the SQL Tiger Team, (@BobWardMS | Blog) are running a great blog called Bob SQL, https://blogs.msdn.microsoft.com/bobsql/ , you should check this out.
He was kind enough to write up a reply that covered how things work under the covers. To read his blog click here.
QUICK SUMMARY
Each In-Memory table is managed, compiled code. Making In-Memory objects compiled saves CPU cycles and allows us to skip some of the traditional overhead of native T-SQL objects.
When we issue the DDL to create an In-Memory table we follow the same process as for a native T-SQL table. We also do two additional steps: we create the access DDL code, and we compile the access DDL code.
These steps give us a one-time hit on the creation of an object that gives us long-term benefits as we execute against our In-Memory OLTP/Hekaton tables.
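To make the comparison concrete, here is a minimal sketch of the two kinds of DDL being timed. The table names and bucket count are my own illustration, not the exact script from the original demo, and a memory-optimized table also requires the database to already have a MEMORY_OPTIMIZED_DATA filegroup.

```sql
-- Native T-SQL (disk-based) table: just metadata, very fast to create
CREATE TABLE dbo.DiskTable1
(
    id INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
    payload CHAR(100) NOT NULL DEFAULT 'x'
);
GO

-- In-Memory OLTP table: the same metadata steps PLUS generating and
-- compiling the access DDL into a DLL, which is the one-time creation cost
CREATE TABLE dbo.InMemTable1
(
    id INT IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    payload CHAR(100) NOT NULL DEFAULT 'x'
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
```

The second statement is where the extra compile time goes, which is why the creation rate drops from hundreds per second to a handful.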
WRAP UP
There are nothing but great questions out there Dear Reader. The worst kind of question is the one that is not asked. In this case the question pushed me to look for an answer that I already wanted to know.
In this case a demo that I almost didn't write and a question from a place where I normally do not look for questions were all we needed to get an answer. My thanks again to Robert Dorr for taking the time to give us the answer.
I love it when a plan comes together. Until next time Thanks for stopping by.
Thanks,
Bradley Ball
Monday, August 29, 2016
Friday, October 18, 2013
PASS Summit 2013 Deck and Demos Live
Ready to rock some Internals.
I did something different this time that I hope you will enjoy. Internals are great but why do they matter? In the world of performance tuning and troubleshooting an understanding of Internals is essential.
Understanding what's going on under the hood often tells us where we need to be looking when we are problem solving.
"So Balls", you say, "You said you were doing something new?"
Thank You for keeping me on point Dear Reader! I'll be mixing a whole list of Why Do Internals Matter with this presentation and I'd love your feedback.
Hope to see you there! As always Thanks for stopping by!
Thanks,
Brad
Wednesday, October 2, 2013
DevConnections Deck & Demos Live!
Hello Dear Reader! Just a quick blog today. The deck and demos for my presentations today at SQL DevConnections are live!
I just want to say a big Thank You to the organizers for having me and putting on a great conference. I also want to thank the people who attended the presentations today. We had almost a packed house, with standing room only for the Inside the Query Optimizer session, and lots of great questions all around!
To view the deck and demos you can click on my Resource Page, or Click Here for the Inside the Query Optimizer Deck & Demos and Click Here for the SQL Internals Deep Dive Deck & Demos.
Thanks again for a great day of SQL learning here in Vegas!
Thanks,
Brad
Tuesday, September 24, 2013
Can You Compress a Temp Table?
Hello Dear Reader! We are finishing up the final day of the Performance Tuning Workshop here in Atlanta and I got an interesting question on Compression from Tim Radney (@tradney | Blog).
The question: Can you compress a temp table? Just a quick blog to get the answer out there while Gareth Swanepoel (@GarethSwan | Blog) teaches the class about Extended Events.
My guess was yes. Temp tables can have statistics and Clustered and Non-Clustered Indexes, and while they only exist in the session where they are created, I would bet they could be compressed. Whether you would actually want to compress them is a different discussion, but let's prove this out.
DEMO
Here's a quick demo to show you can do this. First up we will create our temp table #myTable1, specifying WITH (DATA_COMPRESSION=ROW), and then insert 5,000 rows.
if exists(select name from tempdb.sys.tables where name like '#myTable1%')
begin
drop table #mytable1
end
go
create table #myTable1(
myid int identity(1,1) primary key clustered
,mychar1 char(500) default 'a'
,mychar2 char(3000) default 'b'
) with (data_compression=row)
go
declare @i int
set @i=0
while(@i<5000)
begin
set @i=@i+1
insert into #myTable1
default values
end
Now let's use DBCC IND to view the pages associated with our table, and DBCC PAGE to validate that our data is compressed.
dbcc ind(tempdb, '#myTable1', 1)
go
dbcc traceon(3604)
go
dbcc page('tempdb', 1, 376,3)
go
Looking at the output of DBCC PAGE I can see that the CD array for my compressed data is present near the header. Row compression is indeed on.
Now let's rebuild the table with page compression, using sp_spaceused to measure the size of the table before and after.
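The rebuild step itself wasn't shown above, so here is a minimal sketch of what it looks like. The syntax is standard, but treat these exact statements as my reconstruction rather than the original demo script.

```sql
-- measure the table before the rebuild
EXEC tempdb..sp_spaceused '#myTable1';
GO

-- rebuild the temp table with page compression
ALTER TABLE #myTable1 REBUILD WITH (DATA_COMPRESSION = PAGE);
GO

-- measure again; page compression should shrink the data size
EXEC tempdb..sp_spaceused '#myTable1';
GO
```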
And it is now Page Compressed. Thanks for the question Tim! And as always Dear Reader, thank you for stopping by.
Thanks,
Brad
Friday, August 16, 2013
24 Hours of PASS Q & A: Why are Bits Flipped?
http://bit.ly/16UyEPH
Why are bytes sometimes swapped, and why do you have to flip bits in order to decode a bitmap?
At the time I took a WAG at the question, a Wild @$$ Guess. I said that they were flipped because of a need to protect proprietary data. I couldn't have been more wrong.
Fortunately for me my friend and co-worker Roger Wolter (@RWolter50 | Blog) was watching and was quick to let me know that the source of the byte swapping was not proprietary. It was because of Endians.
"So Balls," you say, "It was because of Endi-a-Whats?"
Don't worry Dear Reader, I was right there with you. This was a new term to me, though some of you clever former Computer Science majors are already seeing a light bulb form. For my sake humor me and pretend you haven't already figured this out. Some of you former Literature Majors, or general readers of classic tales, are wondering what Gulliver's Travels has to do with SQL Internals.
BIG AND LITTLE ENDIANS
In Jonathan Swift's satirical novel Gulliver's Travels, Gulliver ends up in a land called Lilliput. Lilliput has hostilities with their neighbor Blefuscu. You see, Lilliput likes to eat their hard boiled eggs by cracking the little end, whereas Blefuscu likes to eat their eggs by cracking the big end. Therefore the Lilliputians are known as Little-Endians and the Blefuscudians as Big-Endians. Side stepping Swift's satirical play on societal issues, the term was adopted by Computer Science over 200 years later.
So what does this mean for Computers? It is how we read data out of memory.
We will be covering memory at a very high level, even though we are pretty deep. Memory is one big array waiting to hold our data.
What does that array hold? Bytes. How do we find where we are storing the bytes in our array? We give it an address and we look up that memory address.
An address is not an Index, but for the purpose of looking up memory data it is comparable to the way we store data on a Clustered Index page and then look that data up by its unique key. So to make this an easy comparison for my SQL Family, let's just say that an Address is our Index and how we will look up data in our array/table.
When we read data into memory there is a memory address assigned to the byte or bytes, depending on the chipset of the machine. Big Endian processors read the data from left to right, storing the most significant byte at the smallest address. Motorola and quite a few others use Big Endian. x86 and x64 processors use Little Endian. Since SQL Server runs on x86 and x64 hardware we will focus mainly on that.
For example, take the value XSWB. If we translate each letter to the hex pair we would place into memory, we get X=58, S=53, W=57, B=42, or 58535742. Each hex pair is the binary encoding of one ASCII character, which becomes the regular letter we see. How would we store that in memory? The ASCII example below is for 8 bit access.
*We will disprove the flipping of ASCII bytes here in a moment using a 64 bit access, but what I want you to get from this is the concept. More in a moment. Also, here's a really nice graphic from the Wikipedia entry on Endianness, well worth the read.
BIG ENDIAN

Address | Value
------- | -----
1000    | 58=X
1001    | 53=S
1002    | 57=W
1003    | 42=B

LITTLE ENDIAN

Address | Value
------- | -----
1000    | 42=B
1001    | 57=W
1002    | 53=S
1003    | 58=X
This behavior is left over from when 8 bit processors had 16 bit memory registers and it was more efficient to load the lower byte first. If it was only an 8 bit operation the top byte could then be skipped. Thanks to Roger for all the technical explanations; more on that to come.
ASCII characters actually show up internally a little nicer than this: each letter is a single byte, written as a two-digit hex pair, so there is no need for swapping to decode them. But when we get to large numbers we can really see this at work within SQL Server. For example, let's use the following statement.
use master
go
if exists(select name from sys.databases where name='demoInternals')
begin
drop database demoInternals
end
go
Create database demoInternals
go
use demoInternals
go
if exists(select name from sys.tables where name='brad1')
begin
drop table brad1
end
go
create table brad1(mychar char(4) primary key clustered, myint int)
go
insert into brad1(mychar, myint)
values('XSWB', 12345678)
go
We'll create the value we just looked at, XSWB, and an integer value of 12345678. Now let's do a DBCC IND, get our page number, and look at the page dump.
dbcc ind('demointernals', 'brad1', 1)
go
dbcc page('demointernals', 1, 278, 3)
The ASCII doesn't look byte swapped, but the integers obviously are. This led to another question that I asked Roger. Ridiculously smart man that he is, he told me that ASCII characters do not need to load the registers in the arithmetic processors. For that reason we do not have to swap their bytes.
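As a sketch of what to expect in the dump (my own illustration, not output captured from the post's demo): the integer 12345678 is 0x00BC614E in hex, and on a Little Endian page those four bytes are laid out in reverse.

```sql
-- 12345678 shown most significant byte first, the way we read it;
-- SQL Server displays binary conversions in this logical order
SELECT CONVERT(VARBINARY(4), 12345678) AS logical_bytes;  -- 0x00BC614E

-- On an x86/x64 (Little Endian) data page the same integer is stored
-- least significant byte first: 4E 61 BC 00, which is the byte-swapped
-- pattern you see for the myint column in the DBCC PAGE dump.
```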
Thanks Roger for all the great info. This was a lot of fun to learn. Thank you Dear Reader for stopping by.
Thanks,
Brad
Wednesday, July 31, 2013
24 Hours of PASS Deck and Demo's Live!
Hello Dear Reader! Another very quick blog. Thank you to all of the people that tuned in to see me present on SQL Data Internals for the 24 Hours of PASS tonight. I truly appreciate you spending your hard earned time with me.
My deck and demos are now live on the resources page. I've added a list of all the presentations that I used as references. Any scripts not in the deck you can find at the following links.
Click Here for the Slide Deck and Click Here for the Demos. Now the links to all the other material.
Paul Randal, MCM Video Series: Data Structures: http://technet.microsoft.com/en-us/sqlserver/gg313756.aspx
Bradley Ball, SQL Internals Reading Data Records Part 1: Tag Bytes: http://bidn.com/blogs/SQLBalls/ssas/2776/sql-internals-reading-data-records-part-1-tag-bytes
Bradley Ball, SQL Internals Reading Data Records Part 2: Null Bitmap Offset: http://bidn.com/blogs/SQLBalls/ssas/2781/sql-internals-reading-data-records-part-2-null-bitmap-offset
Bradley Ball, SQL Internals Reading Data Records Part 3: Fixed Length Columns: http://bidn.com/blogs/SQLBalls/ssas/2785/sql-internals-reading-data-records-part-3-fixed-length-columns
Bradley Ball, SQL Internals Reading Data Records Part 4: Null Bitmap: http://bidn.com/blogs/SQLBalls/ssas/2789/sql-internals-reading-data-records-part-4-null-bitmap
Bradley Ball, SQL Internals Reading Data Records Part 5: Variable Offset Array: http://bidn.com/blogs/SQLBalls/ssas/2791/sql-internals-reading-data-records-part-5-variable-offset-array
Bradley Ball, Differences in the Null Bitmap between SQL 2005 and SQL 2012: http://www.sqlballs.com/2012/07/differences-in-null-bitmap-between-sql.html
Bradley Ball, SQL Internals Reading Data Records Part 6: Variable Length Data: http://www.sqlballs.com/2012/07/sql-internals-reading-data-records-part.html
As always Thanks for stopping by!
Thanks,
Brad
I'm Presenting Today for 24 Hours of PASS!!
Hello Dear Reader! This is just a very quick blog to let you know that the 24 Hours of PASS is currently going on. Brent Ozar (@BrentO | Blog) started things off strong and Allan Hirt (@SQLHA | Blog) is working his magic as we speak.
This year I was extremely honored to get to introduce Joseph D'Antoni (@Jdanton | Blog) in the 6 am slot tomorrow. It also worked out that I could be a backup presenter as well. So tonight at 6 PM East Coast Time, following the one and only Brian Knight (@BrianKnight | Blog), it's SQLBalls presenting on Data Internals.
This is a mere glimpse into the 90 minute Spotlight Session I'll have at the Summit this year. So come join me for some spelunking around the Data Internals tonight! Here's the link to sign up and Here is the Abstract.
SQL Internals
Duration: 60 minutes
Track: Enterprise Database Administration & Deployment
Often as DBAs, we learn backwards. You get a DBA job, and you have databases that have tables, which have data in them. Nobody tells you that there are records, stored on pages, grouped in extents, assigned by allocation bitmaps, with IAM pages and allocation units that span partitions and filegroups. Today is your lucky day! This 24 Hours of PASS session will cover byte swapping and bitmaps and dive a little into decoding Hex and binary, working to give you an understanding of database internals and how you use them every day!
Alright Dear Reader, I'm off to practice! I'll see you at 6 pm.
Thanks,
Brad
Wednesday, May 22, 2013
Speaking At the PASS Summit!
I’m Speaking at The PASS Summit 2013!
Hello Dear Reader!
Big news. I've been selected as a speaker for the SQL PASS Summit 2013 in beautiful Charlotte NC! I graduated High School and College in North Carolina. My roommate and best friend lived there and I made the trip with him to his family's house on holidays. I've not been back for several years, and I'm really looking forward to being in Charlotte for more than just an airplane stop.
Getting selected for the Summit is a huge honor. My initial reaction…
Excited doesn't begin to describe it, Dear Reader. My session this year will be different for the first time since I started submitting to PASS. The past two years I did a Deep Dive on Page & Row Compression. I love this presentation, and it is a lot of fun to give. This year I got selected for something different.
"So Balls", you say, "What did you get selected for, why so excited?"
Excellent question Dear Reader! The excitement? This session is not just a session. It's a Spotlight Session. That means 90 minutes and a little more pressure to do a really good job. The topic: Data Internals Deep Dive. Here's the abstract:
Data Internals Deep Dive
As a DBA we learn backwards. Most of the time you get a DBA job and you have Databases, that have tables, and in those tables you have data. Nobody tells you that we have records, stored on Pages, grouped in Extents, assigned by Allocation Bitmaps, with IAM Pages and Allocation Units that span Partitions and Filegroups. Today is your lucky day, we will start at the bottom and build our way up. We'll cover byte swapping, bitmaps, and dive a little into decoding Hex & binary, working to give you a full understanding of database internals, and how you use them every day!
As much as I love data internals, the really cool part? We have no idea when SQL V-Next is going to land. If it is before the Summit, then expect we'll cover some of those internal components as well! Guess we'll just have to see.
As always Thanks for stopping by!
Thanks,
Brad
Saturday, April 27, 2013
SQL Saturday 215 Deck and Demos Live
Hello Dear Reader! I'm up in Jacksonville for SQL Saturday 215. Today I'm presenting SQL Internals, Backups, and Recovery Models! OH MY!
I'm also going to be the MC for the SQL Iron Chef BI presentation today. Before the festivities begin I wanted to upload my deck and demos, with a few tweaks from the last presentation I did up in Boston. One of the big differences: more data for the partitioning & Piecemeal Restore demo. I got an SSD this week, and it's time to have some fun with it!
I'll be in the DBA Auditorium from 1:15 - 2:15 today. If you'd like a copy of the deck Click Here, if you'd like a copy of the demos Click Here.
Hope to see you there!
Early morning outside the main hall, SQL Saturday 215 Jacksonville
Saturday, April 6, 2013
SQL Saturday 203 Boston, Deck and Demos Live!
Hello Dear Reader!
A quick note I'm presenting today at 1:30 pm for SQL Saturday Boston #203.
I'll be giving my presentation on the 4th Floor! The topic today is SQL Internals, Backups, and Recovery Models! OH MY!
Here's a quick link to the deck and the demo's, all live on the Resources page.
Also I'll be giving away a seat or two to the Pragmatic Works Performance Tuning Workshop here in Boston in June this year! Make sure to stop by!
Hope to see you there!
Thanks,
Brad
Monday, August 13, 2012
Database Corruption, Transparent Data Encryption, and Trace Flag 5004
This one comes straight from the email bag. A friend recently had a problem: they were placing TDE on a database and the encryption scan had stopped at state 2, percent_complete 0. I'm bouncing around the Charlotte NC airport facing some plane delays, and I thought what better time than to play around with a little database corruption.
"So Balls", you say, "What does TDE stuck in an encryption scan have to do with corruption?"
Great question Dear Reader! The default Page_Verify setting in SQL Server is Checksum. This means when a page is read into memory and written back to disk, a Checksum is calculated based off the page's contents and stored with the page on disk. When the page is read again the Checksum is used as a validation. If the Checksum fails then it tosses an error reporting the page as a Suspect Page.
Think of this like going through the TSA Checkpoint: you've got your ticket and your identification. If your ticket says 'Serenity', but your ID says 'Zachary', you will probably get flagged by the system as Suspect. In both cases that's where the probing begins.
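For reference, here is a minimal sketch of how to check and set the page verification option; the statements are standard, though running them against CorruptAdventure here is just for illustration.

```sql
-- see which page_verify option each database is using
SELECT name, page_verify_option_desc
FROM sys.databases;

-- CHECKSUM is the default on modern versions; to set it explicitly:
ALTER DATABASE CorruptAdventure SET PAGE_VERIFY CHECKSUM;
```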
WE’VE GOT A PROBLEM
For this example I'm going to use a database that I've corrupted, called CorruptAdventure, taken from a corrupted version of AdventureWorksDW2008R2. Horrible name for a database; it was just asking for corruption. We'll start out assuming everything is fine. The powers that be want TDE, Transparent Data Encryption, enabled on the database, and we will do that. First we'll create our Master Key and a Database Certificate to use in the encryption.
/*
Create Master Key
and Certificate
*/
USE master
GO
Create Master Key Encryption By Password='MasterKeyPass1'
GO
Create Certificate DatabaseCertificate With
Subject='Dont Put Anything Important in the Subject'
GO
Now we'll point to CorruptAdventure, create a Database Encryption Key, and set encryption on. Transparent Data Encryption will read each page into memory. If a page doesn't have a checksum, one will get written. Our page has a checksum, but its contents have been corrupted. When SQL calculates a checksum to validate the current one, it will fail, and the page will get logged to the MSDB.dbo.Suspect_Pages table.
use CorruptAdventure
go
create database encryption key
with algorithm = aes_256
encryption by
server certificate
DatabaseCertificate
go
Alter Database CorruptAdventure
Set Encryption on
go
It looks like it is encrypting!
Whoa! We hit our error.
Let's query our Suspect_Pages table. Just like I thought, we've got our database ID and our page ID. The error_type column is equal to 2; this means our page was flagged suspect during a Checksum operation.
/*
It just stalled out.
Why would this happen?
A page checksum occurs on all pages
when the TDE scan runs.
*/
select * from msdb.dbo.suspect_pages
Now let's run DBCC CHECKDB and verify whether we really have something wrong with our database.
DBCC CHECKDB(CorruptAdventure) WITH NO_INFOMSGS
Msg 8928, Level 16, State 1, Line 1
Object ID 325576198, index ID 5,
partition ID 72057594043498496, alloc unit ID 72057594044940288 (type In-row
data): Page (1:3874) could not be processed.
See other errors for details.
Msg 8939, Level 16, State 98, Line 1
Table error: Object ID 325576198, index
ID 5, partition ID 72057594043498496, alloc unit ID 72057594044940288 (type
In-row data), page (1:3874). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed.
Values are 12716041 and -4.
Msg 8976, Level 16, State 1, Line 1
Table error: Object ID 325576198, index
ID 5, partition ID 72057594043498496, alloc unit ID 72057594044940288 (type
In-row data). Page (1:3874) was not seen in the scan although its parent
(1:3888) and previous (1:3873) refer to it. Check any previous errors.
Msg 8978, Level 16, State 1, Line 1
Table error: Object ID 325576198, index
ID 5, partition ID 72057594043498496, alloc unit ID 72057594044940288 (type
In-row data). Page (1:3875) is missing a reference from previous page (1:3874).
Possible chain linkage problem.
CHECKDB found 0 allocation errors and 4 consistency
errors in table 'FactInternetSales' (object ID 325576198).
CHECKDB found 0 allocation errors and 4 consistency
errors in database 'CorruptAdventure'.
repair_allow_data_loss is the minimum repair level
for the errors found by DBCC CHECKDB (CorruptAdventure).
Just as I suspected: corruption. We got the Database ID and page number from Suspect_Pages, and DBCC CHECKDB just verified that the page is indeed corrupt. Now we can find exactly what type of data is corrupted, which will determine our strategy for handling it. We have the Object ID and Index ID from the DBCC CHECKDB scan.
We can do a query against sys.indexes joined to sys.objects using the Index ID, 5, and Object ID, 325576198, provided. We will get the table name, index name, and index type.
select
o.name as TableName
,i.name as IndexName
,i.type_desc
from
sys.indexes i
left join sys.objects o
on i.object_id=o.object_id
where
o.object_id=325576198
and i.index_id=5
Our corruption is on a non-clustered index. If you ever get corruption this is one of the easiest types to fix. We drop our non-clustered index and re-create it, and that should fix everything.
USE CorruptAdventure
GO
DROP INDEX IX_FactInternetSales_OrderDateKey ON dbo.FactInternetSales
GO
CREATE NONCLUSTERED INDEX IX_FactInternetSales_OrderDateKey
ON dbo.FactInternetSales
(OrderDateKey ASC)
GO
Now let’s Run DBCC CHECKDB to get a clean bill of health.
DBCC CHECKDB(CorruptAdventure) WITH NO_INFOMSGS
Excellent. Looking at our TDE status, it still hasn't moved.
The TDE encryption scan should have paused when the Checksum error occurred. In case it didn't, you can manually pause the encryption scan and reset it with Trace Flag 5004. Turning Trace Flag 5004 on will stop the encryption scan right where it is. You then need to turn Trace Flag 5004 off so you can re-issue the encryption command and watch it commence. You might not need to use Trace Flag 5004, but I like to play this one on the safe side.
DBCC TRACEON(5004)
GO
DBCC TRACEOFF(5004)
GO
ALTER DATABASE CorruptAdventure
SET ENCRYPTION ON
Let’s check our encryption status.
We are progressing again, and it looks like now we've completed! Excellent; not only did we get our database encrypted, but we were able to fix corruption that we were not previously aware of. One last peek at our TDE scan and we see it is complete and our encryption_state is now 3, no longer stuck at 2.
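The status check itself isn't shown above; the usual way to watch the scan is the sys.dm_database_encryption_keys DMV, sketched here as a reference.

```sql
-- encryption_state 2 = encryption in progress, 3 = encrypted
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,
       percent_complete
FROM sys.dm_database_encryption_keys;
```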
Well my plane just arrived, so that’s all for now Dear
Reader, as always Thanks for stopping by.
Thanks,
Brad