
Monday, October 29, 2018

Oracle Linux on SPARC is dead? Oracle Linux at Risk?


Oracle made substantial changes in their strategy last year, perhaps on a "Wim"... and now they seem to have bet wrong.

Oracle has long pinned some of its engineered systems on a clone of Red Hat Linux. After purchasing Sun, they released storage servers based upon Solaris on Intel and left their other engineered systems on their knock-off Linux OS.

Oracle had to suffer through the successive Intel CPU fixes, with each successive patch release making systems slower or less secure. Now, the dominant Linux vendor [Red Hat] that Oracle had been copying is being purchased by IBM.

[SPARC logo, courtesy SPARC International]

SPARC Life


It appears that there is still a SPARC of life in the world's highest performing CPU architecture... and that life is in Solaris. There has been a recent roadmap release [i.e. 2018-08] which is substantially the same as its previous release some 5 months earlier.

[Oracle logo, courtesy Oracle Corporation]

Oracle SPARC

Oracle released a new roadmap with an M8+ chip coming [i.e. 2018-03] and continues to plan Oracle Solaris design for more than a decade into the future.

Oracle SPARC Solaris appears to be a steady ship, in turbulent seas.

[Fujitsu logo, courtesy Fujitsu Corporation]

Fujitsu SPARC

This seems to coincide with the Fujitsu roadmap [i.e. since last year!] It seems Fujitsu is designing the silicon for Oracle as Oracle advances the Solaris software layer. Fujitsu, a hardware provider which supplied SPARC chips for Sun when SPARC was first created, continues to talk about new product coming [in 2018-02-03, 2018-03-15] - which is good news!

Fujitsu leaked [in 2018-07-06] that it is getting closer to releasing its new supercomputer architecture, not based upon SPARC, which probably means Fujitsu's Linux for SPARC will soon have no future.

Oracle Linux

There has been some speculation about Linux on SPARC from NetMgt. The last update of Oracle Linux on SPARC looks like Summer 2017. It appears to have stalled, possibly killed when Wim returned to Oracle in November 2017.

Oracle's knock-off Linux is based upon Red Hat Linux... which is now being purchased [in 2018-10-28] by arch-enemy competitor IBM... who competes in all of Oracle's major spaces (i.e. Cloud, RISC servers, Intel servers, Database, etc.)

Conclusions

NetMgt has been tracking Oracle Linux for some time, but it appears Oracle Linux on SPARC stalled last year. Oracle Linux on SPARC now appears dead on arrival. Fujitsu SPARC no longer has a need for Linux. Oracle's Linux is now oddly in a risky place, tied to Intel, whose CPUs get slower with every defect fix. Oracle SPARC Solaris continues to be the highest performing vendor architecture and OS combination - the "sun" continues to shine in the darkness of competitors' declining performance.

Sunday, May 25, 2014

Solaris: Loopback Optimization and TCP_FUSION

Abstract:
Since the early days of computing, the slowest interconnects have always been between platforms, through input and output channels. The movement from serial ports to higher speed communications channels such as TCP/IP became the standard mechanism for applications to communicate not only between physical systems, but also on the same system! During Solaris 10 development, a capability called TCP_FUSION was introduced to increase the performance of the TCP/IP stack between applications on the same server. Some application vendors may be unaware of safeguards built into Solaris 10 to prevent denial of service attacks or starvation of applications caused by high performance TCP writers on the loopback interface.
Functionality:
Authors Brendan Gregg and Jim Mauro describe the functionality of TCP_FUSION in their book: DTrace: Dynamic Tracing in Oracle Solaris, Mac OS X, and FreeBSD.
Loopback TCP packets on Solaris may be processed by tcp fusion, a performance feature that bypasses the ip layer. These are packets over a fused connection, which will not be visible using the ip:::send and ip:::receive probes (but they can be seen using the tcp:::send and tcp:::receive probes). When TCP fusion is enabled (which it is by default), loopback connections become fused after a TCP handshake, and then all data packets take a shorter code path that bypasses the IP layer.
The modern application hosted under Solaris will demonstrate a significant benefit over being hosted under alternative operating systems.

Demonstrated Benefits:
TCP socket performance, under languages such as Java, may demonstrate a significant performance improvement, often shocking software developers!
While comparing Java TCP socket performance between RH Linux and Solaris, one of my tests is done by using a Java client sending strings and reading the replies from a Java echo server. I measure the time spent to send and receive the data (i.e. the loopback round trip).
The test is run 100,000 times (more occurrences give similar results). From my tests, Solaris is 25-30% faster on average than RH Linux, on the same computer with default system and network settings, same JVM arguments (if any), etc.
The answer seems clear, TCP_FUSION is the primary reason.
In Solaris that's called "TCP Fusion" which means two local TCP endpoints will be "fused". Thus they will bypass the TCP data path entirely. 
Testing will confirm this performance benefit of stock Solaris over Linux.
Nice! I've used the command
echo 'do_tcp_fusion/W 0' | mdb -kw

and managed to reproduce times close to what I've experienced on RH Linux. I switched back and re-enabled it using
echo 'do_tcp_fusion/W 1' | mdb -kw

Thanks both for your help.
Once people understand the benefits of TCP_FUSION, they will seldom go back.
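The loopback round-trip test quoted above can be sketched in ordinary sockets code. The sketch below is in Python rather than the poster's Java harness, so the absolute numbers will differ, but the shape of the measurement (one client, one echo server, timed round trips on the loopback interface) is the same; the message size and round count are arbitrary choices.

```python
import socket
import threading
import time

def echo_server(listener):
    # Accept one connection and echo everything back until the peer closes.
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)

# Listening socket on loopback with an OS-chosen ephemeral port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
msg = b"x" * 64
rounds = 10_000                      # the original test used 100,000
start = time.perf_counter()
for _ in range(rounds):
    cli.sendall(msg)
    buf = b""
    while len(buf) < len(msg):       # read the full echo back
        buf += cli.recv(4096)
elapsed = time.perf_counter() - start
print(f"{rounds} round trips: {elapsed:.3f}s "
      f"({elapsed / rounds * 1e6:.1f} us per round trip)")
cli.close()
srv.close()
```

Comparing the per-round-trip time for the same program on Solaris (fusion on versus off, via do_tcp_fusion) is how the posters above isolated TCP_FUSION as the cause.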

Old Issues:
The default nature of TCP_FUSION means any application hosted under Solaris 10 or above will, by default, receive the benefit of this huge performance boost. Some early releases of Solaris 10 without patches may experience a panic because of kernel memory usage. The situation, workaround, and resolution are described:

Solaris 10 systems may panic in the tcp_fuse_rcv_drain() TCP/IP function when using TCP loopback connections, where both ends of the connection are on the same system. This may allow a local unprivileged user to cause a Denial of Service (DoS) condition on the affected host.
To work around the described issue until patches can be installed, disable TCP Fusion by adding the following line to the "/etc/system" file and rebooting the system: set ip:do_tcp_fusion = 0x0.
This issue is addressed in the following releases: SPARC Platform Solaris 10 with patch 118833-23 or later and x86 Platform Solaris 10 with patch 118855-19 or later.
Disabling the TCP_FUSION feature is no longer needed for DoS protection.

Odd Application Behavior:
If an application running under Solaris does not experience a performance boost, but rather a performance degradation, it is possible your ISV does not completely understand TCP_FUSION or the symptoms of an odd code implementation. When developers expect the receiving application on a socket to respond slowly, this can result in bad behavior with TCP sockets accelerated by Solaris.

Instead of application developers optimizing the behavior of their receiving application to take advantage of a 25%-30% potential performance benefit, some of those application vendors chose to suggest disabling TCP_FUSION with their applications: Riverbed's Stingray Traffic Manager and Veritas NetBackup (4x slowdown). Those unoptimized TCP reading applications, which perform reads 8x slower than their TCP writing application counterparts, perform extremely poorly in the TCP_FUSION environment.

Possible bad TCP_FUSION interaction?
There is a better way to debug this issue rather than shutting off the beneficial behavior. Blogger Steffen Weiberle at Oracle wrote pretty extensively on this.

First, one may want to understand if it is being used. TCP_FUSION is often used, but not always:
There are some exceptions to this, including when using IPsec, IPQoS, raw-socket, kernel SSL, non-simple TCP/IP conditions, or the two end points are on different squeues. A fused connection will revert to unfused if an IP Filter rule will drop a packet. However, TCP fusion is done in the general case.
When TCP_FUSION is enabled for an application, there is a risk that the TCP data provider can provide data so fast over TCP that it can cause starvation of the receiving application! Solaris OS developers anticipated this in their acceleration design.
With TCP fusion enabled (which it is by default in Solaris 10 6/06 and later, and in OpenSolaris), when a TCP connection is created between processes on a system, the necessary things are set up to transfer data from the sender to the receiver without sending it down and back up the stack. The typical flow control of filling a send buffer (defaults to 48K or the value of tcp_xmit_hiwat, unless changed via a socket operation) still applies. With TCP Fusion on, there is a second check, which is the number of writes to the socket without a read. The reason for the counter is to allow the receiver to get CPU cycles, since the sender and receiver are on the same system and may be sharing one or more CPUs. The default value of this counter is eight (8), as determined by tcp_fusion_rcv_unread_min.
Some ISV developers may have coded their applications in such a way as to anticipate that TCP is slow, and coded their receiving application to be less efficient than the sending application. If the receiving application is 8x slower in servicing the reads from the TCP socket, the OS will slow down the provider. Some vendors call this a "bug" in the OS.

When doing large writes, or when the receiver is actively reading, the buffer flow control dominates. However, when doing smaller writes, it is easy for the sender to end up with a condition where the number of consecutive writes without a read is exceeded, and the writer blocks, or if using non-blocking I/O, will get an EAGAIN error.
So now, one may see the symptoms: errors with TCP applications where connections on the same system experience slowdowns and may even return EAGAIN errors.
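What an application actually sees in this situation can be reproduced with ordinary sockets. The sketch below is in Python and is generic POSIX behavior, not Solaris-specific: a non-blocking writer outruns a "reader" that never drains the socket, and the write eventually fails with EAGAIN (surfaced in Python as BlockingIOError). Under TCP_FUSION, the tcp_fusion_rcv_unread_min write counter can trigger the same error even before the buffers fill.

```python
import socket

# TCP pair on loopback; the accepted side deliberately never reads,
# playing the role of the slow receiving application.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
writer = socket.create_connection(srv.getsockname())
reader, _ = srv.accept()

writer.setblocking(False)   # non-blocking I/O, as in the scenario above
sent = 0
got_eagain = False
try:
    while True:
        sent += writer.send(b"x" * 4096)
except BlockingIOError:     # Python's wrapper around EAGAIN/EWOULDBLOCK
    got_eagain = True
finally:
    writer.close()
    reader.close()
    srv.close()

print(f"writer hit EAGAIN after queueing {sent} bytes unread")
```

A blocking writer in the same situation would simply stall instead of erroring, which matches the "writer blocks" symptom described above.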

Tuning Option: Increase Slow Reader Tolerance
If the TCP reading application is known to be 8x slower than the TCP writing application, one option is to increase the threshold at which the TCP writer becomes blocked, so that perhaps 32 writes can be issued [against a single read] before the OS blocks the writer, as a safety margin. Steffen Weiberle also suggested:
To test this I suggested the customer change the tcp_fusion_rcv_unread_min on their running system using mdb(1). I suggested they increase the counter by a factor of four (4), just to be safe.
# echo "tcp_fusion_rcv_unread_min/W 32" | mdb -kw
tcp_fusion_rcv_unread_min:      0x8            =       0x20

Here is how you check what the current value is.
# echo "tcp_fusion_rcv_unread_min/D" | mdb -k
tcp_fusion_rcv_unread_min:
tcp_fusion_rcv_unread_min:      32

After running several hours of tests, the EAGAIN error did not return.
Tuning Option: Removing Slow Reader Protections
If the reading application is just poorly written and will never keep up with the writing application, another option is to remove the write-to-read protection entirely. Steffen Weiberle wrote:
Since then I have suggested they set tcp_fusion_rcv_unread_min to 0, to turn the check off completely. This will allow the buffer size and total outstanding write data volume to determine whether the sender is blocked, as it is for remote connections. Since the mdb is only good until the next reboot, I suggested the customer change the setting in /etc/system.
* Set TCP fusion to allow unlimited outstanding writes up to the TCP send buffer set by default or the application.
* The default value is 8.
set ip:tcp_fusion_rcv_unread_min=0
There is still a buffer safety mechanism: the writing application will block if the kernel send buffer fills, so you will not crash Solaris if you turn this write-to-read ratio safety switch off.

Tuning Option: Disabling TCP_FUSION
This is the proverbial sledgehammer for inserting a tack into a cork board. Steffen Weiberle wrote:
To turn TCP Fusion off all together, something I have not tested with, the variable do_tcp_fusion can be set from its default 1 to 0.
...
And I would like to note that in OpenSolaris only the do_tcp_fusion setting is available. With the delivery of CR 6826274, the consecutive write counting has been removed.
Network Management has not investigated what the changes were in the final releases of OpenSolaris or more recent Solaris 11 releases from Oracle in regard to TCP_FUSION tuning.
Tuning Guidelines:
The assumption of Network Management is that the common systems administrator is working with well-designed applications under Solaris 10, where the application reader is keeping up with the application writer. If there are ill-behaved applications under Solaris 10, but one is interested in maintaining the 25%-30% performance improvement, the tuning suggestions below will provide much better help than the typical ISV-suggested final step of disabling TCP_FUSION.

Check for TCP_FUSION - 0=off, 1=on (default)
SUN9999/root#   echo "do_tcp_fusion/D" | mdb -k
do_tcp_fusion:
do_tcp_fusion: 1

Check for TCP_FUSION unread to written ratio - 0=off, 8=default
SUN9999/root# echo "tcp_fusion_rcv_unread_min/D" | mdb -k
tcp_fusion_rcv_unread_min:
tcp_fusion_rcv_unread_min:      8   
Quadruple the TCP_FUSION unread to write ratio and check the results:
SUN9999/root# echo "tcp_fusion_rcv_unread_min/W 32" | mdb -kw
tcp_fusion_rcv_unread_min:      0x8            =       0x20
SUN9999/root# echo "tcp_fusion_rcv_unread_min/D" | mdb -k
tcp_fusion_rcv_unread_min:
tcp_fusion_rcv_unread_min:      32
Disable the unread to write ratio and check the results:
SUN9999/root# echo "tcp_fusion_rcv_unread_min/W 0" | mdb -kw
SUN9999/root# echo "tcp_fusion_rcv_unread_min/D" | mdb -k
tcp_fusion_rcv_unread_min:
tcp_fusion_rcv_unread_min:      0
Finally, disable TCP_FUSION to lose all performance benefits of Solaris, but keep your ISV happy.
SUN9999/root# echo "do_tcp_fusion/W 0" | mdb -kw
May this be helpful for Solaris 10 platform administrators, especially with Network Management platforms!

Friday, August 20, 2010

Linux: 5 Year Old Root Exploit Finally Patched


Security Focus:
It has been over half a decade, but a Linux kernel root exploit has finally been patched. Yes, Oracle Enterprise Linux, RedHat, and others have been running around with this issue for a long time.


For the Novell fans, the SUSE distribution has been OK since 2004, but the fix had not trickled down to the other distributions since it had not been incorporated into the official kernel until now.

Network Management:
In a world of network management, where central or even distributed systems monitor or manage millions of potential devices across many thousands of networks, a root exploit in an operating system kernel dating back over half a decade is extremely high risk.

If it has to run and has to run securely - a generic Linux distribution may not fit the bill.

Look for Operating System vendors who have a strong record with understanding Data Centers and managing networks, not just OS vendors who can do it more cheaply.

Tuesday, September 8, 2009

Apache: Hack, Rollback, Recover, and Secure


Apache: Hack

It was a bad day for the Apache team - apache.org was hacked 2009-08-28 through their CentOS Linux derivative. CentOS is a Linux distribution bundled with the RedHat Package Manager.



Apache: Rollback

When your system has been hacked, what would be your first choice to recover?

Go to an on-line backup? (How do you know it was not also compromised?)
Go to a tape backup? (How do you know it was not also compromised?)
How far back do you go? (Do you only keep 3 backups?)
Do you re-build from scratch?
aurora.apache.org runs Solaris 10, and we were able to restore the box to a known-good configuration by cloning and promoting a ZFS snapshot from a day before the CGI scripts were synced over. Doing so enabled us to bring the EU server back online, and to rapidly restore our main websites.
The Apache team was very fortunate - they implemented Sun SPARC Solaris ZFS. They were able to roll back to a snapshot and recover.


Apache: Recover

It was mentioned that the ZFS snapshot was used to ultimately recover the web site. How does ZFS offer this type of capability?

ZFS is a 128-bit unified file system and volume manager. It offers virtually unlimited volume size... and virtually unlimited snapshots. Any production environment exposed to the internet should use ZFS as a best practice, to be able to quickly revert to a pre-corruption state.

For example, you can schedule a snapshot of your system every day, or every 15 minutes (if you want!), and you can hold these snapshots for a week, with virtually no overhead. At any point in time, you can drop back to a previous release, just as the apache.org foundation decided to do.

What exactly is a "snapshot"? The zfs manual page reads:
A read-only version of a file system or volume at a given point in time. It is specified as filesystem@name or volume@name.
The process of taking an old snapshot and making it writable is called a "clone". The manual page reads:
A clone is a writable volume or file system whose initial contents are the same as another dataset. As with snapshots, creating a clone is nearly instantaneous, and initially consumes no additional space.
This "clone" can be "promoted" so as to become the master version of the file system volume, erasing the old content. This is also described in the zfs manual page:
Clones can only be created from a snapshot. When a snapshot is cloned, it creates an implicit dependency between the parent and child. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be destroyed as long as a clone exists. The origin property exposes this dependency, and the destroy command lists any such dependencies, if they exist.
The clone parent-child dependency relationship can be reversed by using the promote subcommand. This causes the “origin” file system to become a clone of the specified file system, which makes it possible to destroy the file system that the clone was created from.
This is basically the process that Apache.org used to recover their web services.
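The snapshot, clone, and promote sequence from the manual page excerpts above can be sketched with zfs(1M) commands. The pool and dataset names below are hypothetical - the exact dataset layout apache.org used is not public - and the commands require root on a system with ZFS:

```shell
# Take a daily snapshot of the web root (e.g. via cron):
zfs snapshot rpool/www@2009-08-27

# After the compromise, clone the last known-good snapshot:
zfs clone rpool/www@2009-08-27 rpool/www-restore

# Promote the clone so the dependency is reversed and the
# compromised filesystem can be destroyed:
zfs promote rpool/www-restore
zfs destroy rpool/www
zfs rename rpool/www-restore rpool/www
```

Because snapshots are copy-on-write, the daily snapshot step is nearly instantaneous and initially consumes no additional space, which is what makes this recovery practice cheap to keep in place.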

Apache: Secure

Securing those web servers is a different story. SSH is no magic bullet - it was also compromised. There is no magic bullet in the open-source world. Different open-source communities have different certification processes. A Linux kernel may come up and be slowly accepted into a distribution, with patches made along the way by the original kernel team as well as a separate distribution company.

One of the sections was about positive lessons:
  • The use of ZFS snapshots enabled us to restore the EU production web server to a known-good state.
  • Redundant services in two locations allowed us to run services from an alternate location while continuing to work on the affected servers and services.
  • A non-uniform set of compromised machines (Linux/CentOS i386, FreeBSD-7 amd_64, and Solaris 10 on sparc) made it difficult for the attackers to escalate privileges on multiple machines.
This is the "gold standard".

While your front-end Linux boxes may get hacked, diversifying your infrastructure with an additional (secure) OS, and using a real unified file system & volume management system like ZFS under Solaris, makes the hackers struggle while providing options to every-day system administrators.

IBM recently acknowledged Sun Solaris as being best-in-class in security - something to keep in mind.

Be cautious of other vendors like Microsoft with long outstanding security holes in their IIS web serving software or security issues which they refuse to fix. These are not good candidates for customer facing systems - for obvious reasons. Imagine gaining access to the IIS server and just querying the usernames and passwords from the ebedded MSSQL server - oh, the humanity!