Now let’s move to sharding native replication, which is Raft-based, meaning it uses a reliable, fault-tolerant consensus protocol and typically provides sub-second failover with zero data loss. So generally, what is native replication in sharding?
This is a completely transparent, built-in Oracle sharding replication that duplicates data across the different shards. Data is organized into chunks, and the chunks are replicated across three or five shards, depending on the level of fault tolerance required.
This is provided entirely by the Oracle sharded database and does not require any other components, such as GoldenGate or Data Guard. If you remember when we talked about the architecture, we said that each shard, each database, could have a standby, whether through GoldenGate or through Data Guard.
With native replication, you support high availability without relying on a secondary database. The shards back each other up by holding replicas, and the sharded database manages those replicas globally, ensures everything is preserved, and handles all failover operations.
Now, this is logical, consensus-based replication: the different components are all aware of each other and know which component is healthy, depending on load and failures. Behind the scenes, the sharded database decides which replica actually serves the data to the client. That is what provides sub-second failover with zero data loss.
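The zero-data-loss property comes from majority acknowledgment: a write is only committed once a majority of replicas have it, so no committed change can live on a single failed node. Here is a minimal illustration of that quorum rule; this is a sketch of the general consensus idea, not Oracle's implementation.

```python
# Minimal sketch of majority-acknowledged commits, the idea behind
# consensus-based (Raft-style) replication with zero data loss.
# Illustration only; not Oracle's actual implementation.

def is_committed(ack_count: int, replication_factor: int) -> bool:
    """A write is durable once a majority of replicas acknowledge it."""
    majority = replication_factor // 2 + 1
    return ack_count >= majority

# With replication factor 3, two acknowledgments form a majority:
print(is_committed(2, 3))  # True: committed, survives one failure
print(is_committed(1, 3))  # False: not yet durable
```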
Now, a major benefit of sharding native replication is that it is completely transparent to the application and to any of its structures. You simply enable this replication and specify the replication factor; the rest is managed by the Oracle sharded database behind the scenes.
It supports fast, sub-second failover with zero data loss. And depending on the number of replicas, it can even tolerate multiple failures, such as two server failures. When workloads are submitted, they are also load-balanced across the shards based on where the data and its replicas are located, which gives you somewhat better hardware utilization and load management.
So generally, it’s designed to help you keep your regular SQL-based databases without having to resort to a NoSQL environment or move to other databases like we were talking about earlier, such as MongoDB or MariaDB.
Now, some of the use cases are when you want zero data loss and sub-second failover within the sharded database. If you use a replication factor of 3, you can tolerate a single failure without any data loss, and the load will be managed completely behind the scenes. With a factor of 5, you can tolerate two concurrent failures, still with zero data loss, because the data is maintained behind the scenes.
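The factors above follow from majority-quorum arithmetic: a replication factor of N tolerates floor((N − 1) / 2) simultaneous failures, since a majority of replicas must survive. A quick sketch of that rule:

```python
# Hedged sketch: with majority-quorum replication, a replication factor
# of N tolerates floor((N - 1) / 2) simultaneous replica failures
# without data loss, matching the factors of 3 and 5 described above.

def tolerated_failures(replication_factor: int) -> int:
    return (replication_factor - 1) // 2

for rf in (3, 5):
    print(f"replication factor {rf}: tolerates {tolerated_failures(rf)} failure(s)")
```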
Hyperscale, fleet-level operations also apply to this type of native replication. Basically, when a node fails or develops data issues, the node and its data are rebuilt and replaced, reconstructed from the other available replicas.
And it allows you to better manage your capacity, data load, and the hardware available for the operation. Finally, no extra management is required: this is all maintained behind the scenes, and the Oracle sharded database handles all administration. It also supports commit-based data management and execution, with full support for transactional data management.
To use it, basically all you have to do is indicate that you are going to use native replication and then specify the replication factor. These two parameters were added to the shard catalog creation statement.
So when you create your shard catalog, you specify whether you are going to use native replication (the native option was added), and then you can specify the replication factor that will be used.
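As a hedged sketch, the GDSCTL shard catalog command can look like the following; the host and service names are hypothetical placeholders, and you should check your release's documentation for the exact option names (`-repl` and `-repfactor` are shown here as the documented parameters for enabling native replication and setting the replication factor):

```
# Hypothetical host/service names; -repl NATIVE requests Raft-based
# native replication, -repfactor sets the replication factor.
GDSCTL> create shardcatalog -database shardhost:1521/catalog_pdb -repl NATIVE -repfactor 3
```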
This concludes the review and discussion of native replication. And again, there isn’t much more to it; it’s just a matter of understanding that, behind the scenes, the databases support each other by holding the replicas.
Each database can be the master of a different chunk of data, and depending on the request, work can be distributed from the master to its replicas. That gives better performance, and better behavior in case of failures. So that concludes the topic of native replication.