
Semantic Analysis


There has been a lot of interest recently in making quality education more accessible to students worldwide using information technology [1]. Automated marking systems (AMS) and computer-based assessment (CBA) are rapidly growing areas of research that concern academics involved in teaching at all levels across a wide range of disciplines. Automated marking has the potential to lower teaching costs, aid distance learning, provide instant feedback to students, and increase the consistency (and therefore the academic integrity) of assessment.

Assessment is an essential part of the learning process, especially in formative learning settings. In the current context of massive open online courses (MOOCs), assessment is challenging: it must ensure consistency and reliability, and it must not favor one person over another. In formative assessment, the problem of workload and timely results is even greater, as the task is carried out more frequently and the interpretation of one human marker differs from another [2].

Essays are considered by many researchers the most useful tool to assess learning outcomes, since they require a) the ability to recall, organize and integrate ideas, b) the ability to express oneself in writing, and c) the ability to supply, rather than merely identify, interpretation and application of data [3]. However, evaluating essays and open-ended questions is a time-consuming and tiring process. We assume that machine learning can help teachers in this field through automated essay evaluation systems.

Automated essay evaluation (AEE) is the process of evaluating and scoring written essays via computer programs. For teachers and educational institutions, AEE is not only a tool to assess learning outcomes; it also saves time, effort and money without lowering quality. AEE systems can also be used in other application areas of text mining where the content of a text needs to be graded or prioritized, such as written applications, cover letters, scientific papers, e-mail classification, etc. [4]

Several automated essay grading (AEG) systems have been developed under academic and commercial initiatives using statistical [5], Natural Language Processing (NLP) [6], Bayesian text classification [7], and Information Retrieval (IR) [8] techniques, amongst many others. Latent Semantic Analysis (LSA) is a powerful IR technique that uses statistics and linear algebra to discover the underlying "latent" meaning of text and has been used successfully in English-language text evaluation and retrieval [8], [9], [10]. LSA applies Singular Value Decomposition (SVD) to a large term-by-context matrix created from a corpus and uses the results to construct a semantic space representing the topics contained in the corpus. Vectors representing text passages can then be transformed and placed within the semantic space, where their semantic similarity is determined by measuring how close they are to one another.
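
As a concrete illustration, the sketch below builds a small semantic space with scikit-learn (an assumed toolkit; the cited systems do not prescribe one) and folds a new passage into it. Note that scikit-learn arranges the matrix as documents-by-terms, the transpose of the term-by-context layout described above.

    # Minimal LSA sketch (toy corpus; scikit-learn assumed, not prescribed by the text).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import TruncatedSVD

    corpus = [
        "photosynthesis converts light energy into chemical energy",
        "plants absorb sunlight through chlorophyll in their leaves",
        "cellular respiration releases energy stored in glucose",
    ]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(corpus)      # document-by-term counts
    svd = TruncatedSVD(n_components=2)        # keep two latent dimensions
    doc_vectors = svd.fit_transform(X)        # documents placed in the semantic space

    # Fold a new passage into the same space for later similarity comparisons.
    passage_vector = svd.transform(vectorizer.transform(["leaves use sunlight to make energy"]))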

The main dimension for measuring the performance of AEG is how closely the system's scores agree with the human grades. The existing AEG techniques that use LSA do not consider the word sequence of sentences in the documents.

In existing LSA methods, the creation of the word-by-document matrix is somewhat arbitrary [8], and automated essay grading using these methods does not replicate the human grader [5].

Latent Semantic Analysis

The training set is prepared by choosing essays on a specific subject at any level. The essays are first reviewed by more than one human specialist in that subject; the number of human graders may be increased to keep the framework unbiased. The average of the human grades is treated as the training score of a particular training essay. Pre-processing is then applied to the training set: stopwords are removed from each essay and words are stemmed to their roots [11][12].
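
A minimal sketch of this pre-processing step is shown below, assuming NLTK's English stopword list and the Porter stemmer (the cited work does not name a specific toolkit).

    import re
    import nltk
    from nltk.corpus import stopwords
    from nltk.stem import PorterStemmer

    nltk.download("stopwords", quiet=True)    # fetch the stopword list once

    def preprocess(essay: str) -> list[str]:
        stop = set(stopwords.words("english"))
        stemmer = PorterStemmer()
        tokens = re.findall(r"[a-z]+", essay.lower())   # simple word tokenizer
        return [stemmer.stem(t) for t in tokens if t not in stop]

    print(preprocess("The plants are absorbing sunlight through their leaves."))
    # ['plant', 'absorb', 'sunlight', 'leav']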

N-grams (unigrams, bigrams, trigrams, ..., up to n-grams) are used to create the document matrix. Every cell of the matrix is filled with the frequency of an n-gram in a document. The n-gram-by-document matrix is then decomposed by singular value decomposition (SVD). Deerwester et al. [13] describe the process as follows:

Let t = the number of terms, or rows,

d = the number of documents, or columns,

X = a t by d matrix.

Then, after applying SVD, X = TSD, where

m = the number of dimensions, m ≤ min(t, d),

T = a t by m matrix,

S = an m by m diagonal matrix, i.e., only diagonal entries have non-zero values,

D = an m by d matrix.
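
The decomposition can be reproduced numerically as follows (illustrative values only), with shapes matching the definitions above:

    import numpy as np

    X = np.array([[2, 0, 1],
                  [1, 1, 0],
                  [0, 2, 1],
                  [1, 0, 2]])                 # t = 4 terms, d = 3 documents

    T, s, D = np.linalg.svd(X, full_matrices=False)
    S = np.diag(s)                            # m by m diagonal, m = min(t, d) = 3
    assert np.allclose(X, T @ S @ D)          # X = TSD as defined above

    # LSA keeps only the k largest singular values to obtain the semantic space.
    k = 2
    X_k = T[:, :k] @ S[:k, :k] @ D[:k, :]     # rank-k approximation of X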

Now both the student answer and the model answer are ready to be compared. Numerical representations of the essays are needed for the comparison, which is carried out using mathematical models [14].
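
One common mathematical model for this step, assumed here since the text does not name one, is cosine similarity between the two essay vectors in the semantic space:

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine of the angle between two essay vectors; 1.0 means identical direction.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    model_answer = np.array([0.8, 0.3, 0.1])      # hypothetical LSA vectors
    student_answer = np.array([0.7, 0.4, 0.2])
    print(cosine_similarity(model_answer, student_answer))   # close to 1.0 => similar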

In order to bring the system score in line with the human score, surface features should be taken into consideration, so a content penalty is applied to the final score [15][16]. This reduces the system's bias towards essays that are short compared to the model answer, as well as essays that are longer than the model answer. The last stage is system evaluation, carried out first on the training set and then on a new dataset [14]. Figure 1 [17] illustrates this process.
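
The exact penalty used in [15][16] is not reproduced here; a hypothetical length-ratio penalty of the kind described might look like this:

    def length_penalty(student_len: int, model_len: int) -> float:
        # 1.0 when lengths match, shrinking as they diverge in either direction.
        return min(student_len, model_len) / max(student_len, model_len)

    def final_score(similarity: float, student_len: int, model_len: int) -> float:
        return similarity * length_penalty(student_len, model_len)

    print(final_score(0.92, student_len=180, model_len=240))  # penalized to 0.69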

But Latent Semantic Analysis has a great disadvantage: it cannot distinguish the senses of a word that has several meanings. Every vector represents a word regardless of its meaning [18]. Any processing based on this partial view of documents will not be effective, as it does not use the whole content. As a result, the logical view of documents must include word semantics in order to convey the entire content [19]. To solve this problem, ontology is integrated with Latent Semantic Analysis to form the document corpus.

Ontology

Ontology has become of great interest to communities that deal with semantic similarity, because it provides a structured representation of knowledge in the form of concepts interconnected by semantic relationships [20].

In an ontology, concepts are arranged in a hierarchical structure, which is a directed acyclic graph with a root node (considered a taxonomic structure, or taxonomic tree). Concepts with lower depths, located closer to the root, have broader meanings; concepts with higher depths, located farther from the root, are hyponyms with more specific meanings [21] [22] [23]. Ontology unifies the representation of each concept, relating it to the appropriate terms as well as to other concepts with which it shares a semantic relation. This ontological component makes it possible to calculate the semantic similarity between concepts (a small sketch follows the component list below). Semantic similarity offers the possibility of building explanations that clarify a concept to the user based on similar concepts, thereby enhancing communicative effectiveness. The most beneficial gain is that we can substitute one concept for another [24]. Ontologies are always concerned with a specific domain of interest, for example, tourism, biology or law [25]. Figure 2 gives an example of an ontology [26]. An ontology consists of four main components to represent a domain [27]. They are:

• Concepts represent sets of entities within the domain.

• Relations specify the interactions among concepts.

• Instances are concrete examples of concepts within the domain.

• Axioms denote statements that are always true.
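
The depth-based similarity calculation mentioned above can be sketched on a toy taxonomy (a hypothetical domain; a Wu & Palmer style measure is assumed):

    PARENT = {                                # child -> parent; a tree for simplicity
        "organism": None,                     # root
        "plant": "organism",
        "animal": "organism",
        "tree": "plant",
        "flower": "plant",
    }

    def path_to_root(c: str) -> list[str]:
        path = [c]
        while PARENT[c] is not None:
            c = PARENT[c]
            path.append(c)
        return path

    def depth(c: str) -> int:
        return len(path_to_root(c)) - 1       # the root has depth 0

    def similarity(c1: str, c2: str) -> float:
        # Wu & Palmer: twice the depth of the lowest common subsumer,
        # normalized by the depths of the two concepts.
        ancestors = set(path_to_root(c1))
        lcs = next(a for a in path_to_root(c2) if a in ancestors)
        return 2 * depth(lcs) / (depth(c1) + depth(c2))

    print(similarity("tree", "flower"))       # 0.5: siblings under "plant"
    print(similarity("tree", "animal"))       # 0.0: only the root is shared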

Ontologies allow the semantics of a domain to be expressed in a language understood by computers, enabling automatic processing of the meaning of shared information [28]. Ontologies are a key element in the Semantic Web, an effort to make information on the Internet more accessible to agents and other software [29].

There are still some limitations in using ontologies. One is the difficulty of transferring knowledge from text into abstract concepts efficiently, which makes discovering the relationships between concepts and terms very difficult [30]. Second, the semantic relationships between concepts are sometimes vague, and a crisp ontology cannot deal with or understand them. This makes the use of fuzzy ontology very beneficial.

Fuzzy Ontology

A fuzzy ontology is an extension of a domain ontology with crisp concepts, and it is more suitable than a plain domain ontology for describing domain knowledge when solving uncertainty reasoning problems. A fuzzy ontology of terms is used when the relations between terms, such as "narrower than" and "broader than", may be fuzzy, i.e., have a membership degree automatically determined from data obtained directly from a corpus. A fuzzy ontology can be used to refine a user's query and can be incorporated into a domain-specific search engine [31].

Fuzzy ontology emphasizes giving meaning to the vagueness of the ontology's components. Its important characteristic is that it makes the ontology's imprecision explicit, which makes knowledge acquisition easier and more efficient. It also enables the definition of semantic measures that make information retrieval more efficient [32].
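
A minimal sketch of such a fuzzy fragment (hypothetical domain and degrees): each "broader than" relation carries a membership degree in [0, 1] instead of being crisply true or false.

    FUZZY_BROADER = {                         # (narrower, broader) -> membership degree
        ("laptop", "computer"): 1.0,
        ("tablet", "computer"): 0.8,
        ("smartphone", "computer"): 0.6,
        ("smartphone", "phone"): 1.0,
    }

    def broader_degree(narrower: str, broader: str) -> float:
        # Degree to which `broader` subsumes `narrower`; 0.0 if unrelated.
        return FUZZY_BROADER.get((narrower, broader), 0.0)

    print(broader_degree("smartphone", "computer"))   # 0.6: partially a computer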

Why Fuzzy Ontology?

Fuzzy ontology has the advantage of extending information queries, allowing a search to cover all related results. Results are then ranked by relatedness based on the modeled domain knowledge, instead of offering exact matches only. The search can also be extended to cover all related concepts, so precise wording is not needed to get a useful hit (the context of a document does not have to be exactly the same as the user's for the user to benefit from it) [33]. This results in more effective retrieval. Another advantage of fuzzy ontologies is their fuzzy semantics, which are more flexible for mapping between different ontologies [33] [34].
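
Query expansion over such membership degrees can be sketched as follows (hypothetical degrees and threshold): every related concept whose membership clears the threshold is added to the query, so documents about related concepts are still retrieved.

    RELATED = {                               # term -> {related concept: degree}
        "computer": {"laptop": 1.0, "tablet": 0.8, "smartphone": 0.6, "toaster": 0.1},
    }

    def expand_query(term: str, threshold: float = 0.5) -> list[str]:
        related = RELATED.get(term, {})
        return [term] + [c for c, degree in related.items() if degree >= threshold]

    print(expand_query("computer"))           # ['computer', 'laptop', 'tablet', 'smartphone']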

Problem Statement:

Existing automated essay evaluation systems have one main weakness: they take text semantics into consideration only vaguely and focus on syntax. We can still conclude that they mostly perform syntactic and shallow content measurements (calculating the similarity between texts) and ignore the semantics; however, the details of the majority of the systems have never been announced publicly. State-of-the-art systems analyze semantics in a number of ways, among them latent semantic analysis (LSA) [35], content vector analysis (CVA) [35], and latent Dirichlet allocation (LDA) [36]. To measure the coherence of essays' content, LSA [35, 37], random indexing [38], and an entity-based approach [39] have been used. But only two systems [40, 41] use approaches that check the consistency of the statements in the essays. Despite considerable effort, the latter systems are still not automatic: they need manual intervention from the user, at least in the earlier steps.

Evaluation System Challenges:

Some challenges have to be considered when working in the field of essay evaluation; they are discussed as follows:

1 – Language ambiguities and the lack of one "correct" answer to any given essay question make the evaluation process challenging [42].

2 – Communication infrastructures differ between e-learning content objects and e-learning platforms. To be successful, an essay evaluation system has to obtain information about a learner's knowledge [43].

3 – Many word-based and statistical approaches have supported information retrieval, data mining, and natural language processing systems, but a deeper understanding of text is still an urgent challenge: concepts, the semantic relationships among them, and the contextual information needed for concept disambiguation require further progress in textual information management.

4 – The system has to be as reliable as human teachers, and it has to be usable.

The proposed system includes new attributes to measure coherence (semantic development) and consistency of facts (checked against common-sense knowledge and other facts in the essay). The coherence attributes are spatial patterns, measured distances, and spatial autocorrelation between an essay's parts. The consistency attributes are obtained by detecting the number of semantic errors in a student essay using information extraction, representing the essay with a fuzzy ontology, and then passing it to a logical reasoner. The proposed system also provides feedback about the essay. As discussed in many papers [44] [45], the aim of AEE systems is no longer to accurately reproduce the human grader's scores; it is to validate scores and give accurate, informative feedback.
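
As one illustration of a coherence attribute, not the proposed system's exact measure, adjacent essay parts can be compared in the semantic space; abrupt topic shifts then lower the average similarity:

    import numpy as np

    def coherence(part_vectors: list[np.ndarray]) -> float:
        # Mean cosine similarity between each pair of adjacent essay parts.
        sims = [a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
                for a, b in zip(part_vectors, part_vectors[1:])]
        return float(np.mean(sims))

    parts = [np.array([0.9, 0.1]), np.array([0.8, 0.3]), np.array([0.2, 0.9])]
    print(coherence(parts))                   # the final topic shift pulls the score down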
