
Intelligent profanity filter

The Intelligent profanity filter project provides a simple group chat application that uses a neural network to filter vulgarisms occurring in users' messages. The application follows a client-server architecture. The client program only sends and displays new messages; all the work is done on the server side, which is responsible for recognising vulgarisms with the neural network.

Neural network architecture

The feed-forward neural network with a backpropagation learning algorithm that runs on the server side is built from three layers:

  • First hidden layer with 18 neurons
  • Second hidden layer with 4 neurons
  • Output layer with one neuron

The input vector contains 35 values and the output vector contains a single value. For classification we use a bipolar sigmoid activation function.
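
A one-line sketch of the resulting topology, assuming a hypothetical NeuralNetwork constructor that takes layer sizes (the project's actual constructor may differ):

```java
// 35 inputs -> 18 neurons -> 4 neurons -> 1 output (hypothetical constructor).
NeuralNetwork network = new NeuralNetwork(35, 18, 4, 1);
```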

The neural network should classify a word token into one of three categories, based on the output value (see the sketch below):

  • vulgarism: [-1, -0.5]
  • unknown: (-0.5, 0.5)
  • normal: [0.5, 1]
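
A minimal sketch of the activation and the category mapping, assuming the standard bipolar sigmoid f(x) = 2 / (1 + e^(-x)) - 1 with values in (-1, 1) (the project's exact variant may differ); the class and method names below are hypothetical:

```java
public final class WordClassifier {

    enum Category { VULGARISM, UNKNOWN, NORMAL }

    /** Bipolar sigmoid: f(x) = 2 / (1 + e^(-x)) - 1, values in (-1, 1). */
    static double bipolarSigmoid(double x) {
        return 2.0 / (1.0 + Math.exp(-x)) - 1.0;
    }

    /** Maps the single output value to a category using the intervals above. */
    static Category classify(double output) {
        if (output <= -0.5) {
            return Category.VULGARISM;   // [-1, -0.5]
        } else if (output < 0.5) {
            return Category.UNKNOWN;     // (-0.5, 0.5)
        }
        return Category.NORMAL;          // [0.5, 1]
    }
}
```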

Neural network learning algorithm

We use a backpropagation algorithm implemented from scratch. After training completes, the neuron weights are saved for future use. As described above, there is an "unknown" category of words. Every word from this category has to be classified by a user as either a vulgarism or a normal word. After the user's action, the neural network uses incremental learning.

Incremental learning works as follows (see the sketch after this list):

  1. Create a new neural network from the old one.
  2. Add the new learning examples.
  3. Train the new network on the new examples (tuning the old weights).
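
A rough sketch of that flow; the class and method names below are hypothetical, not the project's actual API:

```java
// Hypothetical names throughout; the real API lives in the neuralnetwork module.
NeuralNetwork tuned = new NeuralNetwork(currentNetwork);   // 1. copy the old network and its weights
trainingData.add(new Example(encode(word), userLabel));    // 2. add the user-classified example
tuned.train(trainingData, learningRate, maxEpochs);        // 3. retrain, tuning the copied weights
currentNetwork = tuned;                                    // replace the old network when done
```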

Neural network training data

The training data was created from scratch and contains examples that we collected while working on the project.

The training data is divided into two groups: vulgarisms and normal words.

Notation (word:label pairs; a parsing sketch follows the list):

  • ala:positive
  • cholera:negative
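
A sketch of how such an entry could be parsed (hypothetical; the project's actual loader may differ):

```java
// Hypothetical parser for one training data entry, e.g. "cholera:negative".
static boolean isVulgarism(String entry) {
    String[] parts = entry.split(":");
    return parts[1].equals("negative");  // "negative" marks a vulgarism, "positive" a normal word
}
```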

Every word has a vector representation. The vector contains 35 values (the number of letters in the Polish alphabet), and each index in the vector corresponds to one letter of the Polish alphabet:

[a, ą, b, c, ć, d, e, ę, f, g, h, i, j, k, l, ł, m, n, ń, o, ó, p, q, r, s, ś, t, u, v, w, x, y, z, ź, ż]

Example words as vectors. The value at each index is the concatenation of the 1-based positions at which that letter occurs in the word, or 0 if the letter does not occur (e.g. in "ala" the letter a occurs at positions 1 and 3, giving 13, and l occurs at position 2, giving 2):

  • ala [13, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
  • samochód [2, 0, 0, 5, 0, 8, 0, 0, 0, 0, 6, 0, 0, 0, 0, 0, 3, 0, 0, 4, 7, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
  • fatamorgana [24911, 0, 0, 0, 0, 0, 0, 0, 1, 8, 0, 0, 0, 0, 0, 0, 5, 10, 0, 6, 0, 0, 0, 7, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0]

With this encoding, every word gets a unique vector representation. Users can therefore write messages in whatever language they want (the limitation is that only letters from the Polish alphabet are recognised; every other character is ignored). A sketch of the encoder is given below.
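
A self-contained sketch of the encoding rule described above; the class and method names are ours, not the project's:

```java
import java.util.Arrays;

public final class WordEncoder {

    // The 35 letters of the Polish alphabet, in the order listed above.
    private static final String ALPHABET = "aąbcćdeęfghijklłmnńoópqrsśtuvwxyzźż";

    /**
     * For every letter of the alphabet, the entry at its index is the
     * concatenation of the 1-based positions at which the letter occurs
     * in the word (0 if it does not occur), e.g. "ala" -> 13 for 'a', 2 for 'l'.
     */
    static long[] encode(String word) {
        StringBuilder[] positions = new StringBuilder[ALPHABET.length()];
        int position = 0;
        for (char c : word.toLowerCase().toCharArray()) {
            int index = ALPHABET.indexOf(c);
            if (index < 0) continue;  // non-alphabet characters are ignored
                                      // (assumption: they do not advance the position counter)
            position++;
            if (positions[index] == null) positions[index] = new StringBuilder();
            positions[index].append(position);
        }
        long[] vector = new long[ALPHABET.length()];
        for (int i = 0; i < vector.length; i++) {
            vector[i] = positions[i] == null ? 0 : Long.parseLong(positions[i].toString());
        }
        return vector;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(encode("ala")));       // 13 at 'a', 2 at 'l'
        System.out.println(Arrays.toString(encode("samochód")));  // matches the example above
    }
}
```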

Client-server architecture

The architecture is realised with the RMI protocol, which neatly solves both the synchronous chat behaviour and the background learning of the neural network.
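
A hedged sketch of what the remote interfaces could look like; the project's actual RMI contract may differ:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// ChatServer.java -- hypothetical remote interface exposed by the server.
public interface ChatServer extends Remote {
    void login(String nickname, ChatClient client) throws RemoteException;
    void sendMessage(String nickname, String message) throws RemoteException;
}

// ChatClient.java -- hypothetical callback interface the server uses to push filtered messages.
public interface ChatClient extends Remote {
    void receiveMessage(String filteredMessage) throws RemoteException;
}
```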

Getting started

The project package contains two .jar files. The most important one is ChatServer.jar. After launching it you will see two dialogs for choosing the training data model file and the file with weights for all neurons of the neural network. You have to point to a training data model file; the weights file is optional if you are starting the program for the first time (in that case the neural network automatically generates weights for all neurons). After that, the group chat server is ready to run. Clicking the start button opens a dialog asking whether the neural network should be trained. The question is asked because, as noted above, you can supply a weights file; if that file already gives good accuracy, training is not recommended because of the long training time. Now you are ready to go.

Let's open ChatClient.jar. First you have to log in to the server by providing the server IP address and a nickname (nicknames are unique). If you connect successfully, you will see a message confirming it. Now you can start chatting with everyone connected to the chat server.

The server receives each new message and processes it word by word. Every word is converted to its vector representation and classified by the neural network into one of three groups: positive, negative, or unknown. If a word is positive, it stays unchanged. If a word is negative, it is masked with "*" characters. If a word is unknown, user action is needed to classify it (the server is blocked until then), and the word goes to a word queue for future incremental learning. Incremental learning starts in the background right after the whole message is analysed, and the newly trained neural network replaces the old one once the learning process finishes. A sketch of this loop is shown below.
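
A sketch of the per-message loop, reusing the hypothetical WordClassifier and WordEncoder from the sketches above; network.predict, unknownQueue, askUserToClassify, and broadcast are also assumptions, not the actual server code:

```java
// Hypothetical sketch of the per-message filtering loop.
StringBuilder filtered = new StringBuilder();
for (String word : message.split("\\s+")) {
    double output = network.predict(WordEncoder.encode(word));
    switch (WordClassifier.classify(output)) {
        case NORMAL:                                         // positive: stays unchanged
            filtered.append(word);
            break;
        case VULGARISM:                                      // negative: masked with '*'
            filtered.append("*".repeat(word.length()));
            break;
        case UNKNOWN:                                        // user decides; server blocks
            unknownQueue.add(word);                          // queued for incremental learning
            filtered.append(askUserToClassify(word) ? word
                    : "*".repeat(word.length()));
            break;
    }
    filtered.append(' ');
}
broadcast(filtered.toString().trim());                       // incremental learning starts afterwards
```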

This goes on until you decide to end the program. Clients can be closed whenever you want. The server can also be closed at any time, but it saves the training data model and the weights of the main neural network only (a network still learning in the background is discarded). If you want to save the new training data model and the new weights, you have to wait for the background neural network to finish learning.

Every subsequent run of the server can reuse the previous training data model and the saved weights for all neurons, so over time the filtering precision gets better and better.

If you want to see the application's internal output, launch it from a terminal.

First summary of the project (16.05.2018 - 15.06.2018)

After the first phase of the project there is a working group chat server and client with a graphical user interface. The RMI protocol was used to solve the problem of synchronised message distribution and the background learning of the additional, incrementally learning neural network.

The neural network is built with the architecture given above, and the backpropagation algorithm is implemented.

The training data model contains 484 learning examples (422 normal words and 62 vulgarisms). The program loads the data alphabetically, so new examples end up in pseudo-random places.

The neural network learning rate is set to 0.1 and training runs for a maximum of 1000 epochs. Learning takes about 10 minutes (4.2 GHz CPU) and the network error settles near 70.0.

The neural network accuracy is not acceptable for now, but it works surprisingly well given the lack of a word stemmer (not implemented because of the complexity of the Polish language).

The next phase is to prepare a better training data model with more examples and more intense training.

Second [final] summary of the project (15.06.2018 - 26.06.2018)

The program consists of five modules:

  1. neuralnetwork - the feed-forward neural network, layers, neurons and necessary utilities
  2. server - group chat server using the RMI protocol
  3. client - group chat client using the RMI protocol
  4. logger - customized logger for the project
  5. test - program tests for each module separately

The neural network parameters are static and can be changed in the code:

Learning rate: 0.1

Max epochs: 500
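
For example (hypothetical names; the actual constants live somewhere in the neuralnetwork module):

```java
// Hypothetical constants illustrating where the parameters would sit.
public static final double LEARNING_RATE = 0.1;
public static final int MAX_EPOCHS = 500;
```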

A detailed description of the modules can be found in HTML form in the docs/ready/html folder.

The neural network is always tested after the learning process against its training data model. The project provides three training data models:

  1. first_model.data - 481 learning examples (419 positive and 62 negative words)
  2. second_model.data - 1262 learning examples (1184 positive and 78 negative words)
  3. third_model.data - 1706 learning examples (1552 positive and 154 negative words)

Since the last summary we have changed the number of epochs for the learning process (now 500), because the network error did not want to decrease any further; the learning rate stays the same as last time. A learning rate bigger than 0.1 gives a bigger network error, and lower values do not have much impact on the network error.

Learning for 500 epochs takes (on a 4.2 GHz CPU):

  • 3 minutes for first_model.data
  • 7 minutes for second_model.data
  • 11 minutes for third_model.data

The current neural network learns the biggest training data model in roughly the same time as the previous network (mentioned in the first summary) needed for the smallest one, without much difference in accuracy.

In the neural network learning process we started with first_model.data, then saved the weights for all neurons and started learning second_model.data with the weights from the previous run. Learning third_model.data started with the weights from the second run.

The results look as follows:

1: first_model.data

Accuracy for 481 training examples:

  • Correct predictions: 454/481
  • Incorrect predictions: 27/481
  • Neural network accuracy: 94.0%

<p align="center"> <img src="docs/charts/first_model_chart.jpeg"> </p>

2: second_model.data

Accuracy for 1262 training examples:

  • Correct predictions: 1212/1262
  • Incorrect predictions: 50/1262
  • Neural network accuracy: 96.0%

<p align="center"> <img src="docs/charts/second_model_chart.jpeg"> </p>

3: third_model.data

Accuracy for 1706 training examples:

  • Correct predictions: 1590/1706
  • Incorrect predictions: 116/1706
  • Neural network accuracy: 93.0%

<p align="center"> <img src="docs/charts/third_model_chart.jpeg"> </p>

As the numbers above show, the count of incorrect predictions grows with each bigger training data model. Accuracy stays above 90% for every model, which is in theory a brilliant result for such a primitive word-encoding idea used without a stemmer. In practice there is one really big problem: our training data models are not balanced. Every model contains far more positive words than negative ones, which makes the neural network very tolerant of vulgarisms. Language is complicated, because words can have different meanings in different contexts, so it is natural that there are more positive words than negative ones.

Our neural network does not meet our expectations; it is not precise enough to serve as an intelligent profanity filter for a group chat, but it was a great experiment to see how far our primitive approach could take the problem.

For presentation purposes, first_model.data gives the most interesting results to work with.

License

Copyright (C) 2018 Kacper Gąsior, Maciej Bedra, Sebastian Marut