0:00 What are AI, machine learning, and deep learning, and how do they differ?
0:55 How are machine learning models built?
2:01 How is deep learning better?
4:13 How is deep learning better for cybersecurity?
I think the best metaphor one can use, and one that I use quite a lot, is to think of those three terms as if they were arranged in the shape of an onion, where AI is the outer layer, machine learning is the middle layer, and deep learning is the innermost layer. Machine learning, with its focus on learning, refers to the machine, the computer, or the algorithm having to learn and understand the data: its distribution and its characteristics. Deep learning is in fact a family of algorithms within machine learning that is characterized by the use of neural networks as the main data structure used to train on and, later on, infer on the data.
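To make the "neural networks as the main data structure" point concrete, here is a minimal sketch, not from the talk, of a tiny feed-forward network in NumPy; all layer sizes and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative network: 8 inputs, one hidden layer of 16 units,
# and a single output score (e.g., a "malicious" probability).
W1, b1 = rng.normal(size=(8, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)) * 0.1, np.zeros(1)

def forward(x):
    """One forward pass: affine map -> ReLU -> affine map -> sigmoid."""
    h = np.maximum(0.0, x @ W1 + b1)              # hidden-layer activations
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # score in (0, 1)

sample = rng.normal(size=8)   # one input vector
print(forward(sample))        # probability-like output
```

Training adjusts the weight matrices W1 and W2 so that outputs match labels; inference is just this forward pass.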
We need to have domain experts who understand the data and the problem domain: people who look at the data, inspect it, understand its distribution, and, most importantly, decide which features or characteristics within the data are most important and relevant to the problem domain.
Once those are identified, we need to extract those features from the data, quantize them, meaning digitize them somehow, and then arrange them into a vector of features. That vector of features is a representation of the most important features and characteristics of each sample of the data.
Once we've extracted the features from all the data samples we have, we can take the collection of vectors we've created and feed those vectors into the machine learning model of our choice.
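A hypothetical sketch of that classical pipeline, assuming made-up features (file size, byte entropy, printable-character ratio) and scikit-learn's RandomForestClassifier as the "model of our choice"; none of these specifics come from the talk:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(raw_bytes: bytes) -> np.ndarray:
    """Quantize expert-chosen characteristics into a fixed-length vector."""
    size = len(raw_bytes)
    counts = np.bincount(np.frombuffer(raw_bytes, dtype=np.uint8), minlength=256)
    probs = counts[counts > 0] / size
    entropy = -np.sum(probs * np.log2(probs))           # byte-level entropy
    printable = sum(32 <= b < 127 for b in raw_bytes) / size
    return np.array([size, entropy, printable])

# Toy corpus: (sample, label) pairs; the bytes and labels are made up.
samples = [(b"MZ\x90\x00" + bytes(60), 0), (b"\x13\x37" * 40, 1)]
X = np.stack([extract_features(s) for s, _ in samples])  # one vector per sample
y = np.array([label for _, label in samples])

model = RandomForestClassifier(n_estimators=10).fit(X, y)
```

The model never sees the raw bytes: it only sees whatever the hand-written extract_features chose to represent.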
The fact that deep learning allows us to train directly on the raw data lies at the heart of the differentiation between deep learning and machine learning. The deep learning model learns on its own the most important characteristics that lie within the data, and generalizes on its own to the most relevant and important features for the solution that we seek or the decision that we want the model to perform for us. In deep learning, we simply take the raw data and feed it into our deep learning model in order for it to infer on it and make a decision. In machine learning, on the other hand, just as we do in training, before we do inference we need to extract and compute features, organize them into a vector, and then feed that into our machine learning algorithm.
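Sketched side by side, the two inference paths differ only in the extra per-sample work classical ML requires; the function names are illustrative, and extract_features is the hypothetical helper from the previous sketch:

```python
def dl_predict(deep_model, raw_bytes: bytes) -> float:
    # Deep learning path: the network consumes the raw representation directly.
    return deep_model(raw_bytes)

def ml_predict(ml_model, raw_bytes: bytes) -> float:
    # Classical ML path: features must be extracted and vectorized first,
    # both at training time and again for every sample at inference time.
    features = extract_features(raw_bytes)
    return ml_model.predict([features])[0]
```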
That extraction, computation, and vectorization of features is something that takes up more time and resources in terms of computational complexity, CPU, RAM usage, and so on. Having to do manual feature extraction is extremely counterintuitive when we are talking about a problem domain that is innately adversarial.
Sometimes, even if we have the best cybersecurity experts and researchers, the best malware analysts and reverse engineers, we don't fully know and understand all the different features and characteristics found within the data regarding known attack techniques, mutation techniques, and evasion methods, let alone in an area like cybersecurity, where zero-day exploits and vulnerabilities are always being discovered out there. If we're not aware of those, we can't possibly, as humans, extrapolate and understand which relevant features for them may be found within the data. Deep learning algorithms allow us to train on all of the data that's available to us, rather than just a fraction of it. They allow us to truly and fully harness the power of the machine to generalize on features, and on the correlations between those features, in a way that we as humans simply cannot do or do not understand.
It allows us to create models that consume less time and fewer computational resources, and it allows us to react faster and better to a rapidly changing threat landscape. By using deep learning, we can deliver models of much higher quality. It means we can deliver models with a much higher detection rate and a considerably lower false positive rate. These are models that can be used for pre-execution prevention across our entire product offering and install base.