Title: A Multimodal Fusion and Ensemble Approach for Robust Fake News Detection through Deep Learning, Reinforcement Learning, and Using Blockchain to Enhance Security and Authorization
Volume: 18
Issue: 2
Author(s): Vivek Kumar*, Satveer, Waseem Ahmad and Satendra Kumar
Affiliation:
- Department of Computer Application, Quantum University, Roorkee, Uttarakhand, India
Keywords:
Reinforcement learning, fake media, blockchain, natural language processing, machine learning, security, authorization.
Abstract:
Background: Computer science significantly influences modern society, especially through the rapid advances in social media networking. In the current digital environment, characterized by massive data collection and transmission, social media platforms have become major channels for sharing and exchanging daily news and information on a wide range of issues. While this environment offers many benefits, it also carries a large volume of false reports and misinformation that deceive readers and users into believing they are receiving accurate information.
Objective: Most users now rely on social media to obtain news content, but malicious users sometimes tamper with real news and spread fake news, which can damage the credibility of social media platforms. Many existing models have been introduced to detect fake news, but they are based on traditional machine learning algorithms such as decision tree (DT), multilayer perceptron (MLP), and random forest (RF), and they lack adequate performance, security, and authorization. Our proposed model addresses these problems using reinforcement learning and blockchain technology.
Methods: In this research paper, we present a new approach to identifying fake news. Its key innovation is policy-based heuristic reinforcement learning (PHRL), in which the model dynamically adjusts its policy through iterative learning and gradually improves classification accuracy. It is complemented by a smart contract authorization method that ensures content is posted only by authorized users, enhancing the authenticity, transparency, and accountability of information.
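To make the PHRL idea concrete, the sketch below illustrates one possible policy-gradient loop with a heuristic reward; the feature extractor, policy class, and tiny corpus are hypothetical and assumed for illustration only, not the implementation reported in this paper.

# Minimal sketch (assumption, not the paper's implementation) of a
# policy-based heuristic reinforcement learning (PHRL) loop for
# fake-news classification. Names such as HeuristicPolicy and
# extract_features are hypothetical.
import numpy as np

def extract_features(article_text):
    """Toy feature vector: scaled length, exclamation count, uppercase ratio."""
    length = len(article_text)
    exclaims = article_text.count("!")
    upper_ratio = sum(c.isupper() for c in article_text) / max(length, 1)
    return np.array([length / 1000.0, exclaims / 10.0, upper_ratio])

class HeuristicPolicy:
    """Logistic policy over two actions: 0 = real, 1 = fake."""
    def __init__(self, n_features, lr=0.05):
        self.w = np.zeros(n_features)
        self.lr = lr

    def action_prob(self, x):
        p_fake = 1.0 / (1.0 + np.exp(-self.w @ x))
        return np.array([1.0 - p_fake, p_fake])

    def act(self, x, rng):
        return rng.choice(2, p=self.action_prob(x))

    def update(self, x, action, reward):
        # REINFORCE-style update: shift the probability of the taken
        # action in the direction of the heuristic reward signal.
        p_fake = self.action_prob(x)[1]
        grad = (action - p_fake) * x  # d log pi(a|x) / dw for a logistic policy
        self.w += self.lr * reward * grad

def heuristic_reward(action, label):
    """+1 for a correct classification, -1 for an incorrect one."""
    return 1.0 if action == label else -1.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Tiny hypothetical corpus: (text, label) with 1 = fake, 0 = real.
    corpus = [
        ("SHOCKING!!! Miracle cure found, doctors HATE it!!!", 1),
        ("The central bank kept interest rates unchanged on Tuesday.", 0),
        ("You WON'T BELIEVE what this celebrity did!!!", 1),
        ("Local council approves new budget for road repairs.", 0),
    ]
    policy = HeuristicPolicy(n_features=3)
    for epoch in range(200):  # iterative learning loop
        for text, label in corpus:
            x = extract_features(text)
            a = policy.act(x, rng)
            r = heuristic_reward(a, label)
            policy.update(x, a, r)  # policy gradually adjusts
    correct = sum(policy.act(extract_features(t), rng) == y for t, y in corpus)
    print(f"training-set accuracy after RL updates: {correct}/{len(corpus)}")

The heuristic reward here is a simple correct/incorrect signal; in practice any graded feedback on classification quality could be substituted without changing the structure of the loop.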
Results: Our model was tested on real-time information collected from various sources, achieving 70% accuracy with valid authentication.
Conclusion: Our proposed model produced better results, with a Mean Absolute Error (MAE) of 0.0811 and a Root Mean Squared Error (RMSE) of 0.2847, both significantly lower than those of the baselines. It outperforms the multilayer perceptron (MLP) and random forest (RF).
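For reference, the two reported error metrics are computed as MAE = mean(|y - y_hat|) and RMSE = sqrt(mean((y - y_hat)^2)); the short snippet below shows this calculation on placeholder values, not data from the study.

# Illustrative MAE/RMSE computation (placeholder values, not the study's data).
import numpy as np

y_true = np.array([0, 1, 1, 0, 1], dtype=float)  # hypothetical ground-truth labels
y_pred = np.array([0.1, 0.9, 0.7, 0.2, 0.8])     # hypothetical model scores

mae = np.mean(np.abs(y_true - y_pred))
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(f"MAE = {mae:.4f}, RMSE = {rmse:.4f}")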