You can have the data safely on-prem, connected to computers that are connected to the internet, or safely in the cloud, connected to computers that are connected to the internet. The threats are not that different.
You'd be fooling yourself if you think any moderately complex company still hasn't moved to the cloud, or at least isn't thinking about it (with rare exceptions).
Yeah, not really sure how a globally distributed manufacturing operation with a complex supply chain and customers all over the world that need access to data for their operations is supposed to function effectively without it.
(and I say that as someone that used to sell commercial aviation data that came on CDs...)
I'm not sure what the 'critical' stuff is either, or what the details of Airbus' network hosting and knowledge-compartmentalization strategy are, but you're not going to run a globally distributed manufacturing business with complex supply and maintenance requirements without having technical specs, CAD files, diagnostic criteria, customer records, etc. sitting on computers connected to the internet.
It would be reasonably "secure" if it were encrypted on a physically private network using an in-house _modified_ _mainstream_ encryption algorithm; then, after an over-the-air transfer, you could store it on a third-party cloud under the control of foreign interests. Oh, and don't forget the file names have to be encrypted too.
Why would a company without cryptographic expertise, modifying an existing algorithm with no particular goal in mind other than being different, produce something more secure than the winning solution in an open cryptographic competition?
> directory names
And file structure too, preferably. Incremental sync could be done with XTS mode.
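Roughly something like this; a minimal python sketch assuming the "cryptography" package, AES-256-XTS, and the block index as the tweak (block size and key handling are made up for illustration, and XTS on its own gives no integrity protection):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    BLOCK_SIZE = 4096              # data unit; XTS needs at least 16 bytes per unit
    KEY = os.urandom(64)           # AES-256-XTS takes a 512-bit (double-length) key

    def encrypt_block(index: int, plaintext: bytes) -> bytes:
        tweak = index.to_bytes(16, "little")    # per-block tweak = block number
        enc = Cipher(algorithms.AES(KEY), modes.XTS(tweak)).encryptor()
        return enc.update(plaintext) + enc.finalize()

    def decrypt_block(index: int, ciphertext: bytes) -> bytes:
        tweak = index.to_bytes(16, "little")
        dec = Cipher(algorithms.AES(KEY), modes.XTS(tweak)).decryptor()
        return dec.update(ciphertext) + dec.finalize()

    # Editing one plaintext block only changes the matching ciphertext block,
    # so a sync tool only has to re-upload that block.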
You only need cryptographic common sense: it seems you have no idea how easy it is to modify mainstream cryptographic software to add basic but robust modifications...
I've been assessing systems that use cryptography for about 20 years as part of my work in information security, and I've never seen a customization that increased the security of a cryptographic algorithm over following the best practices.
Usually, non-specialists fiddling with cryptographic algorithms makes them much less secure. Developers who aren't cryptographic mathematicians should generally use a well-respected algorithm, follow current best practices, and treat that component as a magic box that's not to be tampered with.
Doing such "customizations" (which are actually crypto 101) will break any attack designed with a specific crypto algorithm in mind. Even better if you lie about which algorithm you use.
Of course, that data must be encrypted on systems which "cannot connect" (and you can go overkill on EM protection with a very good Faraday cage).
If you are making such a technical pain for attackers, they will switch to social engineering anyway.
Algorithms like AES-GCM are standards because - when used according to best practices - there are no known practical attacks against them.
If someone has an attack that would defeat the cryptographic protection in a particular piece of software, the software is likely doing one or more of the following:
* Not using a modern, well-tested algorithm (e.g. using DES, a hokey custom XOR stream cipher, AES-ECB, etc.).
* Not following general cryptographic best practices (e.g. hardcoded or predictable key/IV/nonce, insecure storage of keys).
* Not following best practices for the specific algorithm (e.g. using AES-GCM but reusing a key/nonce combination, as in the sketch after this list; using AES-CBC without applying an integrity-protection mechanism).
* The software is doing something that doesn't make sense, cryptographically (e.g. using symmetric encryption to encrypt sensitive data, but the data and the keys are necessarily accessible to the same set of users/service accounts, so there's no net change in security).
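To make that key/nonce-reuse bullet concrete, a minimal python sketch using the "cryptography" package (the messages are made up): GCM is CTR mode underneath, so two messages encrypted under the same (key, nonce) leak the XOR of their plaintexts, before you even get to forgery problems.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key, nonce = AESGCM.generate_key(bit_length=256), os.urandom(12)
    pt1, pt2 = b"attack at dawn!!", b"retreat at dusk!"
    ct1 = AESGCM(key).encrypt(nonce, pt1, None)[:-16]   # same nonce twice: the bug
    ct2 = AESGCM(key).encrypt(nonce, pt2, None)[:-16]   # [:-16] drops the GCM tag

    xor = bytes(a ^ b for a, b in zip(ct1, ct2))
    assert xor == bytes(a ^ b for a, b in zip(pt1, pt2))  # keystream cancels out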
If such an attack fails because a developer has made changes to the cryptographic algorithm, a motivated attacker is likely just going to look at the code in Ghidra, x64dbg, etc. and figure out how to account for the changes. It's not a strong security control. I've been decrypting content stored using that kind of software for something like 20 years.
The correct approach is to verify that the use of a particular type of cryptography makes sense in the first place, then use a well-tested modern algorithm and follow current best practices. Pasting in code from years-old forum posts, by contrast, will likely result in an insecure product.
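For what it's worth, the boring, by-the-book version is tiny. A minimal python sketch assuming the "cryptography" package and AES-256-GCM, with a fresh random 96-bit nonce per message stored alongside the ciphertext (key storage and rotation are hand-waved here):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # in real code, from a key store/KMS

    def encrypt(plaintext: bytes, associated_data: bytes = b"") -> bytes:
        nonce = os.urandom(12)                  # new random nonce for every message
        return nonce + AESGCM(key).encrypt(nonce, plaintext, associated_data)

    def decrypt(blob: bytes, associated_data: bytes = b"") -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, associated_data)  # raises if tampered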
What I know: the karma system is broken, because it is very hard to provide a controversial opinion/fact without being mass de-karma-ed.
For instance, there is serious hate here about web interop with classic noscript/basic (x)html browsers (namely basic HTML forms with at best <video>/<audio> elements, optional simple CSS, often a document which is a "semantic" 2D html table with proper ids for navigation; encrypted URL parameters are your friends).
On and off, I am currently coding a minimal wayland compositor for linux and AMD GPUs (no libdrm), in RISC-V assembly (64-bit), which I run on x86_64 with a small RISC-V machine code interpreter itself written in x86_64 assembly. I do not use ELF, but a file format of my own (excruciatingly simple, as such a file format should be on modern hardware architectures) with an ELF capsule (also written in x86_64 assembly).
I start with SHM memory and will add linux dma-buf once SHM is up and running well enough. Currently monothreaded, of course. The AMD GPU code for SHM is in; now I'm writing wayland protocol code to please the first wayland clients I would like to run (not using the C libraries provided by the wayland project, just the native wire format).
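For the curious, the wire format is basically: a 32-bit object id, then a 32-bit word with the message size in bytes in the upper 16 bits and the opcode in the lower 16 bits, then the arguments in native endianness (plain 32-bit ints/new_ids here; strings and arrays are length-prefixed and padded). A quick illustrative sketch (python here, not my assembly, and the object ids are just examples):

    import struct

    def wl_request(object_id: int, opcode: int, *args: int) -> bytes:
        size = 8 + 4 * len(args)       # header + 32-bit args
        return struct.pack(f"=II{len(args)}I", object_id, (size << 16) | opcode, *args)

    # wl_display (object id 1), request get_registry (opcode 1), new_id = 2
    msg = wl_request(1, 1, 2)
    assert msg == struct.pack("=III", 1, (12 << 16) | 1, 2)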
I want to move away from x11, and once I get something decent with this compositor, I will probably have to fork xwayland to make it work with this minimal compositor, for some level of legacy compatibility (steam client/some games).
In the end, I did design some kind of methodology and coded some SDK tools in order to write RISC-V machine code programs a bit more comfortably in a very simple file format (only the core ISA, not even compressed instructions, no pseudo-instructions, using only a simple C preprocessor).
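To give an idea of what hand-encoding core-ISA instructions looks like, an illustrative python sketch (not my actual C-preprocessor SDK):

    # I-type layout: imm[11:0] | rs1 | funct3 | rd | opcode
    def encode_i_type(opcode: int, rd: int, funct3: int, rs1: int, imm: int) -> int:
        return ((imm & 0xFFF) << 20) | (rs1 << 15) | (funct3 << 12) | (rd << 7) | opcode

    # addi a0, a0, 42  ->  opcode 0x13 (OP-IMM), funct3 0, rd = rs1 = x10
    word = encode_i_type(0x13, 10, 0, 10, 42)
    print(f"{word:08x}")   # 02a50513; the little-endian bytes go straight into the file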
Coding time does not matter for such software in light of its life cycle once it does "happen".
All that presuming not too much IRL interference... yeah, I know that is too much to expect...
The super hard part is not coding, it is motivation: energy, mood, cognitive bias, etc.
You should be able to create an API account with a classic noscript/basic (x)html browser (optionally with an email, and that could be an IPv[46] literal email address, you know, self-hosted & DNS free, which is stronger than SPF...).
Then, to pay, I should be able to redeem a code bought at a local, physical payment terminal (no credit card info entered on an internet-connected computer, even one running elf/linux and a lean classic noscript/basic (x)html web browser) to add credits to my account. Like Steam. In my country we even have codes for age verification only (with physical age verification, like when you buy alcohol from a [bottle] shop), which makes it much easier to crack down on abuse.
Another option could be a public "anonymous", severely rate-limited API key for 'testing purposes' or very rare usage, or a noscript/basic (x)html web site (namely a real and honest web site) with ads (text/image/video [<video>])... with solid handling of HTTP refresh?
My main usage for AI would be coding. I am itching to mass-port C++ to a plain and simple subset of C (it seems some people are getting reasonably good results, and it seems rust has a brain-damaged syntax on the scale of c++'s), plus assembly coding with very specialized code snippets.
Gogol is part of the whatng cartel with its blink web engine, but the case is more acute with gogol because they own very dominant IP-based services which apple and mozilla don't have.
There are 2 webs:
web apps, requiring gigantic, enormously complex (SDK included) [java|ecma]script web engines: only available from the whatng cartel, namely webkit (mostly apple), geeko (mozilla), and blink (gogol, fork of webkit).
web sites, classic noscript/basic (x)html (simple HTML forms with <video> and <audio> elements, often with optional simple CSS).
gogol: "You want to use my ultra-dominant online services? I will force you to use a web engine from our whatng cartel, even though those services worked more than fine for decades with alternative classic noscript/basic (x)html browsers."
Remember, not so long ago, gogol was paying apple billions to make gogol the default in their browser... I don't recall the amount mozilla was paid.
Don't let them fool you: they actually "work" hand in hand.
I've tried them all but find it useful to hear the preferred choices of humans with established opinions about search engines. You'd think we were all discussing eggs and I interrupted with "Who's your favorite Spice Girl?"
Don't forget the links2 classic web browser! (It is still missing <video> and <audio> element support on x11/wayland though.)
Server-side rendering will collect (steal) personal info; it is a no-go. The only right solution is for online services to provide a plain web site alongside their whatng cartel web app, where the online service fits that model of course. No other way, and hardcore regulation is very probably required.
I wish everyone knew the difference between patents and copyright.
You can download an open-source HEVC codec and, as far as its copyright is concerned, use it however you like. But! You also owe MPEG-LA 0.2 USD if you want to use it, not to mention undisclosed sums to actors like HEVC Advance and all the other patent holders I don't remember, because they have their own terms, and it's not their problem that you compiled an open-source implementation.
VP9 is more on the level of H265, really, and VVC/H266 is closer to AV1. It's not an exact comparison, but it is close. The licensing for VVC is just awful, similar to HEVC, and now that AV1 has proved itself, everyone is pivoting away from VVC/H266, especially on the consumer side. Pretty much all VVC adoption is entirely internal (studios, set-top boxes, etc.) and it is not used by any major consumer streaming service afaik.
I'm well aware of Dark Shikari. While there are obvious differences technologically and subjectively, on a generational level vp9 and HEVC are both positioned as h264 successors. We all know h264 is brilliant, flexible and very capable. Many companies were looking for an alternative with better licensing; On2's vp8 was Google's first push, but it still lagged behind h264, even though it shared many of the same concepts and limitations. Vp9 and HEVC were the next generation and really competed with each other directly. Among the larger consumer video services, many keep h264 for compatibility and reserve higher quality, resolutions or frame rates for the newer codecs. Tiktok eventually settled on HEVC as its preferred codec, while Instagram used HEVC for a short while before migrating entirely to vp9.
Other developers ran into a ton of issues licensing HEVC for their own software, which is still a complete pain.
Anyway, people are now looking at what's next. AV1 and VVC have both been out for a while now, but when people look for the current sota codec with at least some decent support, they realistically end up choosing between those two. And yeah, VVC has advantages over AV1, and the two are very different technically. But the market has pretty loudly spoken: VVC is a hassle no one wants to mess with, and AV1 is quickly becoming the ubiquitous codec, with its closest rival VVC offering little to offset the licensing troubles (and the lack of hardware support at this point as well).
Anyway, just saying. VVC is a huge pain. HEVC is still a huge pain, and though I prefer it to vp9 and it has much better quality and capabilities, the licensing issue makes it troublesome in so many ways. But the choice almost always comes down to vp9 or HEVC, then AV1 or VVC. Though at this point the order might as well be h264, vp9, HEVC, AV1, and no one really cares about VVC.